metadata_version
string
name
string
version
string
summary
string
description
string
description_content_type
string
author
string
author_email
string
maintainer
string
maintainer_email
string
license
string
keywords
string
classifiers
list
platform
list
home_page
string
download_url
string
requires_python
string
requires
list
provides
list
obsoletes
list
requires_dist
list
provides_dist
list
obsoletes_dist
list
requires_external
list
project_urls
list
uploaded_via
string
upload_time
timestamp[us]
filename
string
size
int64
path
string
python_version
string
packagetype
string
comment_text
string
has_signature
bool
md5_digest
string
sha256_digest
string
blake2_256_digest
string
license_expression
string
license_files
list
recent_7d_downloads
int64
2.4
AstrBot
4.17.6
Easy-to-use multi-platform LLM chatbot and development framework
![AstrBot-Logo-Simplified](https://github.com/user-attachments/assets/ffd99b6b-3272-4682-beaa-6fe74250f7d9) <div align="center"> <a href="https://github.com/AstrBotDevs/AstrBot/blob/master/README_en.md">English</a> | <a href="https://github.com/AstrBotDevs/AstrBot/blob/master/README_ja.md">日本語</a> | <a href="https://github.com/AstrBotDevs/AstrBot/blob/master/README_zh-TW.md">繁體中文</a> | <a href="https://github.com/AstrBotDevs/AstrBot/blob/master/README_fr.md">Français</a> | <a href="https://github.com/AstrBotDevs/AstrBot/blob/master/README_ru.md">Русский</a> <div> <a href="https://trendshift.io/repositories/12875" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12875" alt="Soulter%2FAstrBot | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a> <a href="https://hellogithub.com/repository/AstrBotDevs/AstrBot" target="_blank"><img src="https://api.hellogithub.com/v1/widgets/recommend.svg?rid=d127d50cd5e54c5382328acc3bb25483&claim_uid=ZO9by7qCXgSd6Lp&t=2" alt="Featured|HelloGitHub" style="width: 250px; height: 54px;" width="250" height="54" /></a> </div> <br> <div> <img src="https://img.shields.io/github/v/release/AstrBotDevs/AstrBot?color=76bad9" href="https://github.com/AstrBotDevs/AstrBot/releases/latest"> <img src="https://img.shields.io/badge/python-3.10+-blue.svg" alt="python"> <img src="https://deepwiki.com/badge.svg" href="https://deepwiki.com/AstrBotDevs/AstrBot"> <a href="https://zread.ai/AstrBotDevs/AstrBot" target="_blank"><img 
src="https://img.shields.io/badge/Ask_Zread-_.svg?style=flat&color=00b0aa&labelColor=000000&logo=data%3Aimage%2Fsvg%2Bxml%3Bbase64%2CPHN2ZyB3aWR0aD0iMTYiIGhlaWdodD0iMTYiIHZpZXdCb3g9IjAgMCAxNiAxNiIgZmlsbD0ibm9uZSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KPHBhdGggZD0iTTQuOTYxNTYgMS42MDAxSDIuMjQxNTZDMS44ODgxIDEuNjAwMSAxLjYwMTU2IDEuODg2NjQgMS42MDE1NiAyLjI0MDFWNC45NjAxQzEuNjAxNTYgNS4zMTM1NiAxLjg4ODEgNS42MDAxIDIuMjQxNTYgNS42MDAxSDQuOTYxNTZDNS4zMTUwMiA1LjYwMDEgNS42MDE1NiA1LjMxMzU2IDUuNjAxNTYgNC45NjAxVjIuMjQwMUM1LjYwMTU2IDEuODg2NjQgNS4zMTUwMiAxLjYwMDEgNC45NjE1NiAxLjYwMDFaIiBmaWxsPSIjZmZmIi8%2BCjxwYXRoIGQ9Ik00Ljk2MTU2IDEwLjM5OTlIMi4yNDE1NkMxLjg4ODEgMTAuMzk5OSAxLjYwMTU2IDEwLjY4NjQgMS42MDE1NiAxMS4wMzk5VjEzLjc1OTlDMS42MDE1NiAxNC4xMTM0IDEuODg4MSAxNC4zOTk5IDIuMjQxNTYgMTQuMzk5OUg0Ljk2MTU2QzUuMzE1MDIgMTQuMzk5OSA1LjYwMTU2IDE0LjExMzQgNS42MDE1NiAxMy43NTk5VjExLjAzOTlDNS42MDE1NiAxMC42ODY0IDUuMzE1MDIgMTAuMzk5OSA0Ljk2MTU2IDEwLjM5OTlaIiBmaWxsPSIjZmZmIi8%2BCjxwYXRoIGQ9Ik0xMy43NTg0IDEuNjAwMUgxMS4wMzg0QzEwLjY4NSAxLjYwMDEgMTAuMzk4NCAxLjg4NjY0IDEwLjM5ODQgMi4yNDAxVjQuOTYwMUMxMC4zOTg0IDUuMzEzNTYgMTAuNjg1IDUuNjAwMSAxMS4wMzg0IDUuNjAwMUgxMy43NTg0QzE0LjExMTkgNS42MDAxIDE0LjM5ODQgNS4zMTM1NiAxNC4zOTg0IDQuOTYwMVYyLjI0MDFDMTQuMzk4NCAxLjg4NjY0IDE0LjExMTkgMS42MDAxIDEzLjc1ODQgMS42MDAxWiIgZmlsbD0iI2ZmZiIvPgo8cGF0aCBkPSJNNCAxMkwxMiA0TDQgMTJaIiBmaWxsPSIjZmZmIi8%2BCjxwYXRoIGQ9Ik00IDEyTDEyIDQiIHN0cm9rZT0iI2ZmZiIgc3Ryb2tlLXdpZHRoPSIxLjUiIHN0cm9rZS1saW5lY2FwPSJyb3VuZCIvPgo8L3N2Zz4K&logoColor=ffffff" alt="zread"/></a> <a href="https://hub.docker.com/r/soulter/astrbot"><img alt="Docker pull" src="https://img.shields.io/docker/pulls/soulter/astrbot.svg?color=76bad9"/></a> <img src="https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fapi.soulter.top%2Fastrbot%2Fplugin-num&query=%24.result&suffix=%E4%B8%AA&label=%E6%8F%92%E4%BB%B6%E5%B8%82%E5%9C%BA&cacheSeconds=3600"> <img src="https://gitcode.com/Soulter/AstrBot/star/badge.svg" href="https://gitcode.com/Soulter/AstrBot"> </div> <br> <a 
href="https://astrbot.app/">文档</a> | <a href="https://blog.astrbot.app/">Blog</a> | <a href="https://astrbot.featurebase.app/roadmap">路线图</a> | <a href="https://github.com/AstrBotDevs/AstrBot/issues">问题提交</a> </div> AstrBot 是一个开源的一站式 Agentic 个人和群聊助手,可在 QQ、Telegram、企业微信、飞书、钉钉、Slack、等数十款主流即时通讯软件上部署,此外还内置类似 OpenWebUI 的轻量化 ChatUI,为个人、开发者和团队打造可靠、可扩展的对话式智能基础设施。无论是个人 AI 伙伴、智能客服、自动化助手,还是企业知识库,AstrBot 都能在你的即时通讯软件平台的工作流中快速构建 AI 应用。 ![521771166-00782c4c-4437-4d97-aabc-605e3738da5c (1)](https://github.com/user-attachments/assets/61e7b505-f7db-41aa-a75f-4ef8f079b8ba) ## 主要功能 1. 💯 免费 & 开源。 2. ✨ AI 大模型对话,多模态,Agent,MCP,Skills,知识库,人格设定,自动压缩对话。 3. 🤖 支持接入 Dify、阿里云百炼、Coze 等智能体平台。 4. 🌐 多平台,支持 QQ、企业微信、飞书、钉钉、微信公众号、Telegram、Slack 以及[更多](#支持的消息平台)。 5. 📦 插件扩展,已有近 800 个插件可一键安装。 6. 🛡️ [Agent Sandbox](https://docs.astrbot.app/use/astrbot-agent-sandbox.html) 隔离化环境,安全地执行任何代码、调用 Shell、会话级资源复用。 7. 💻 WebUI 支持。 8. 🌈 Web ChatUI 支持,ChatUI 内置代理沙盒、网页搜索等。 9. 🌐 国际化(i18n)支持。 <br> <table align="center"> <tr align="center"> <th>💙 角色扮演 & 情感陪伴</th> <th>✨ 主动式 Agent</th> <th>🚀 通用 Agentic 能力</th> <th>🧩 900+ 社区插件</th> </tr> <tr> <td align="center"><p align="center"><img width="984" height="1746" alt="99b587c5d35eea09d84f33e6cf6cfd4f" src="https://github.com/user-attachments/assets/89196061-3290-458d-b51f-afa178049f84" /></p></td> <td align="center"><p align="center"><img width="976" height="1612" alt="c449acd838c41d0915cc08a3824025b1" src="https://github.com/user-attachments/assets/f75368b4-e022-41dc-a9e0-131c3e73e32e" /></p></td> <td align="center"><p align="center"><img width="974" height="1732" alt="image" src="https://github.com/user-attachments/assets/e22a3968-87d7-4708-a7cd-e7f198c7c32e" /></p></td> <td align="center"><p align="center"><img width="976" height="1734" alt="image" src="https://github.com/user-attachments/assets/0952b395-6b4a-432a-8a50-c294b7f89750" /></p></td> </tr> </table> ## 快速开始 #### Docker 部署(推荐 🥳) 推荐使用 Docker / Docker Compose 方式部署 AstrBot。 请参阅官方文档 [使用 Docker 部署 
AstrBot](https://astrbot.app/deploy/astrbot/docker.html#%E4%BD%BF%E7%94%A8-docker-%E9%83%A8%E7%BD%B2-astrbot) 。 #### uv 部署 ```bash uv tool install astrbot astrbot ``` #### 启动器一键部署(AstrBot Launcher) 进入 [AstrBot Launcher](https://github.com/Raven95676/astrbot-launcher) 仓库,在 Releases 页最新版本下找到对应的系统安装包安装即可。 #### 宝塔面板部署 AstrBot 与宝塔面板合作,已上架至宝塔面板。 请参阅官方文档 [宝塔面板部署](https://astrbot.app/deploy/astrbot/btpanel.html) 。 #### 1Panel 部署 AstrBot 已由 1Panel 官方上架至 1Panel 面板。 请参阅官方文档 [1Panel 部署](https://astrbot.app/deploy/astrbot/1panel.html) 。 #### 在 雨云 上部署 AstrBot 已由雨云官方上架至云应用平台,可一键部署。 [![Deploy on RainYun](https://rainyun-apps.cn-nb1.rains3.com/materials/deploy-on-rainyun-en.svg)](https://app.rainyun.com/apps/rca/store/5994?ref=NjU1ODg0) #### 在 Replit 上部署 社区贡献的部署方式。 [![Run on Repl.it](https://repl.it/badge/github/AstrBotDevs/AstrBot)](https://repl.it/github/AstrBotDevs/AstrBot) #### Windows 一键安装器部署 请参阅官方文档 [使用 Windows 一键安装器部署 AstrBot](https://astrbot.app/deploy/astrbot/windows.html) 。 #### CasaOS 部署 社区贡献的部署方式。 请参阅官方文档 [CasaOS 部署](https://astrbot.app/deploy/astrbot/casaos.html) 。 #### 手动部署 首先安装 uv: ```bash pip install uv ``` 通过 Git Clone 安装 AstrBot: ```bash git clone https://github.com/AstrBotDevs/AstrBot && cd AstrBot uv run main.py ``` 或者请参阅官方文档 [通过源码部署 AstrBot](https://astrbot.app/deploy/astrbot/cli.html) 。 #### 系统包管理器安装 ##### Arch Linux ```bash yay -S astrbot-git # 或者使用 paru paru -S astrbot-git ``` #### 桌面端(Tauri) 桌面端已迁移为独立仓库(Tauri):[https://github.com/AstrBotDevs/AstrBot-desktop](https://github.com/AstrBotDevs/AstrBot-desktop)。 ## 支持的消息平台 **官方维护** - QQ - OneBot v11 协议实现 - Telegram - 企微应用 & 企微智能机器人 - 微信客服 & 微信公众号 - 飞书 - 钉钉 - Slack - Discord - LINE - Satori - Misskey - Whatsapp (将支持) **社区维护** - [Matrix](https://github.com/stevessr/astrbot_plugin_matrix_adapter) - [KOOK](https://github.com/wuyan1003/astrbot_plugin_kook_adapter) - [VoceChat](https://github.com/HikariFroya/astrbot_plugin_vocechat) ## 支持的模型服务 **大模型服务** - OpenAI 及兼容服务 - Anthropic - Google Gemini - Moonshot AI - 智谱 AI - 
DeepSeek - Ollama (本地部署) - LM Studio (本地部署) - [AIHubMix](https://aihubmix.com/?aff=4bfH) - [优云智算](https://www.compshare.cn/?ytag=GPU_YY-gh_astrbot&referral_code=FV7DcGowN4hB5UuXKgpE74) - [302.AI](https://share.302.ai/rr1M3l) - [小马算力](https://www.tokenpony.cn/3YPyf) - [硅基流动](https://docs.siliconflow.cn/cn/usercases/use-siliconcloud-in-astrbot) - [PPIO 派欧云](https://ppio.com/user/register?invited_by=AIOONE) - ModelScope - OneAPI **LLMOps 平台** - Dify - 阿里云百炼应用 - Coze **语音转文本服务** - OpenAI Whisper - SenseVoice **文本转语音服务** - OpenAI TTS - Gemini TTS - GPT-Sovits-Inference - GPT-Sovits - FishAudio - Edge TTS - 阿里云百炼 TTS - Azure TTS - Minimax TTS - 火山引擎 TTS ## ❤️ 贡献 欢迎任何 Issues/Pull Requests!只需要将你的更改提交到此项目 :) ### 如何贡献 你可以通过查看问题或帮助审核 PR(拉取请求)来贡献。任何问题或 PR 都欢迎参与,以促进社区贡献。当然,这些只是建议,你可以以任何方式进行贡献。对于新功能的添加,请先通过 Issue 讨论。 ### 开发环境 AstrBot 使用 `ruff` 进行代码格式化和检查。 ```bash git clone https://github.com/AstrBotDevs/AstrBot pip install pre-commit pre-commit install ``` ## 🌍 社区 ### QQ 群组 - 1 群:322154837 - 3 群:630166526 - 5 群:822130018 - 6 群:753075035 - 7 群:743746109 - 8 群:1030353265 - 开发者群:975206796 ### Telegram 群组 <a href="https://t.me/+hAsD2Ebl5as3NmY1"><img alt="Telegram_community" src="https://img.shields.io/badge/Telegram-AstrBot-purple?style=for-the-badge&color=76bad9"></a> ### Discord 群组 <a href="https://discord.gg/hAVk6tgV36"><img alt="Discord_community" src="https://img.shields.io/badge/Discord-AstrBot-purple?style=for-the-badge&color=76bad9"></a> ## ❤️ Special Thanks 特别感谢所有 Contributors 和插件开发者对 AstrBot 的贡献 ❤️ <a href="https://github.com/AstrBotDevs/AstrBot/graphs/contributors"> <img src="https://contrib.rocks/image?repo=AstrBotDevs/AstrBot" /> </a> 此外,本项目的诞生离不开以下开源项目的帮助: - [NapNeko/NapCatQQ](https://github.com/NapNeko/NapCatQQ) - 伟大的猫猫框架 开源项目友情链接: - [NoneBot2](https://github.com/nonebot/nonebot2) - 优秀的 Python 异步 ChatBot 框架 - [Koishi](https://github.com/koishijs/koishi) - 优秀的 Node.js ChatBot 框架 - [MaiBot](https://github.com/Mai-with-u/MaiBot) - 优秀的拟人化 AI ChatBot - 
[nekro-agent](https://github.com/KroMiose/nekro-agent) - 优秀的 Agent ChatBot - [LangBot](https://github.com/langbot-app/LangBot) - 优秀的多平台 AI ChatBot - [ChatLuna](https://github.com/ChatLunaLab/chatluna) - 优秀的多平台 AI ChatBot Koishi 插件 - [Operit AI](https://github.com/AAswordman/Operit) - 优秀的 AI 智能助手 Android APP ## ⭐ Star History > [!TIP] > 如果本项目对您的生活 / 工作产生了帮助,或者您关注本项目的未来发展,请给项目 Star,这是我们维护这个开源项目的动力 <3 <div align="center"> [![Star History Chart](https://api.star-history.com/svg?repos=astrbotdevs/astrbot&type=Date)](https://star-history.com/#astrbotdevs/astrbot&Date) </div> <div align="center"> _陪伴与能力从来不应该是对立面。我们希望创造的是一个既能理解情绪、给予陪伴,也能可靠完成工作的机器人。_ _私は、高性能ですから!_ <img src="https://files.astrbot.app/watashiwa-koseino-desukara.gif" width="100"/> </div>
text/markdown
null
null
null
null
null
Astrbot, Astrbot Module, Astrbot Plugin
[]
[]
null
null
>=3.12
[]
[]
[]
[ "aiocqhttp>=1.4.4", "aiodocker>=0.24.0", "aiofiles>=25.1.0", "aiohttp>=3.11.18", "aiosqlite>=0.21.0", "anthropic>=0.51.0", "apscheduler>=3.11.0", "audioop-lts; python_full_version >= \"3.13\"", "beautifulsoup4>=4.13.4", "certifi>=2025.4.26", "chardet~=5.1.0", "click>=8.2.1", "cryptography>=44.0.3", "dashscope>=1.23.2", "defusedxml>=0.7.1", "deprecated>=1.2.18", "dingtalk-stream>=0.22.1", "docstring-parser>=0.16", "faiss-cpu>=1.12.0", "filelock>=3.18.0", "google-genai>=1.56.0", "jieba>=0.42.1", "lark-oapi>=1.4.15", "loguru>=0.7.2", "lxml-html-clean>=0.4.2", "markitdown-no-magika[docx,xls,xlsx]>=0.1.2", "mcp>=1.8.0", "openai>=1.78.0", "ormsgpack>=1.9.1", "packaging>=24.2", "pillow>=11.2.1", "pip>=25.1.1", "psutil<7.2.0,>=5.8.0", "py-cord>=2.6.1", "pydantic>=2.12.5", "pydub>=0.25.1", "pyjwt>=2.10.1", "pypdf>=6.1.1", "python-socks>=2.8.0", "python-telegram-bot>=22.0", "qq-botpy>=1.2.1", "quart>=0.20.0", "rank-bm25>=0.2.2", "readability-lxml>=0.8.4.1", "shipyard-python-sdk>=0.2.4", "silk-python>=0.2.6", "slack-sdk>=3.35.0", "sqlalchemy[asyncio]>=2.0.41", "sqlmodel>=0.0.24", "telegramify-markdown>=0.5.1", "tenacity>=9.1.2", "watchfiles>=1.0.5", "websockets>=15.0.1", "wechatpy>=1.8.18", "xinference-client" ]
[]
[]
[]
[]
uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
2026-02-20T10:43:15.613115
astrbot-4.17.6-py3-none-any.whl
831,002
94/ba/6c11b31619dee2639d2e6f3d13567eaf0e167df1607bb22a2c45c0538fef/astrbot-4.17.6-py3-none-any.whl
py3
bdist_wheel
null
false
040d33fd6be8a1db8f9b975f6b669603
36e09a0ae050b0f2aee99f993aabc1e7352896b84d005c39c5048c0cc68e7131
94ba6c11b31619dee2639d2e6f3d13567eaf0e167df1607bb22a2c45c0538fef
null
[ "LICENSE" ]
0
2.4
mkdocs-static-i18n
1.3.1
MkDocs i18n plugin using static translation markdown files
![logo by max.icons](https://github.com/ultrabug/mkdocs-static-i18n/blob/main/docs/assets/logo_by_maxicons.png) # MkDocs static i18n plugin ![mkdocs-static-i18n pypi version](https://img.shields.io/pypi/v/mkdocs-static-i18n.svg) *The MkDocs plugin that helps you support multiple language versions of your site / documentation.* *Like what you :eyes:? Using this plugin? Give it a :star:!* The `mkdocs-static-i18n` plugin allows you to support multiple languages of your documentation by adding static translation files to your existing documentation pages. Multi-language support is just **one `.<language>.md` file away**! Even better, `mkdocs-static-i18n` also allows you to build and serve localized versions of any file extension to display localized images, media, and assets. Localized images/media/assets are just **one `.<language>.<extension>` file away**! Don't like file suffixes? More into a folder-based structure? We got you covered as well! ## Documentation Check out the [plugin's documentation here](https://ultrabug.github.io/mkdocs-static-i18n/). TL;DR? There's a [quick start guide](https://ultrabug.github.io/mkdocs-static-i18n/getting-started/quick-start/) for you! ## Upgrading from 0.x versions :warning: Version 1.0.0 brings **breaking changes** to the configuration format of the plugin. Check out the [upgrade to v1.0.0 guide](https://ultrabug.github.io/mkdocs-static-i18n/setup/upgrading-to-1/) to ease updating your `mkdocs.yml` file! ## See it in action This plugin proudly brings localized content of [hundreds of projects](https://github.com/ultrabug/mkdocs-static-i18n/network/dependents) to their users. 
Check it out live: - [On this repository documentation](https://ultrabug.github.io/mkdocs-static-i18n/) - [On my own website: ultrabug.fr](https://ultrabug.fr) But also in our hall of fame: - [AWS Copilot CLI](https://aws.github.io/copilot-cli/) - [OWASP Top 10](https://github.com/OWASP/Top10) - [Spaceship Prompt](https://spaceship-prompt.sh/) - [FederatedAI FATE](https://fate.readthedocs.io/en/latest/) - [Privacy Guides Org](https://www.privacyguides.org/en/) - [Computer Science Self Learning Wiki](https://csdiy.wiki/) ## Contributions welcome Feel free to ask questions, suggest enhancements, and contribute to this project! ## Development The project is managed with `hatch`. [Install `hatch`](https://hatch.pypa.io/1.9/install/#gui-installer) first. Run the tests: ```shell hatch run test:test hatch run style:check ``` Serve the documentation: ```shell hatch run doc:serve ``` ## Credits - Logo by [max.icons](https://www.flaticon.com/authors/maxicons)
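As a concrete illustration of the suffix-based setup described above, here is a minimal `mkdocs.yml` sketch using the plugin's 1.x configuration format (the specific locales and display names are illustrative):

```yaml
plugins:
  - i18n:
      languages:
        - locale: en
          default: true        # built at the site root
          name: English
        - locale: fr
          name: Français       # built under /fr/, sourced from *.fr.md files
```

With this configuration, `index.md` plus an `index.fr.md` next to it yield `/` and `/fr/` versions of the same page.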
text/markdown
null
Ultrabug <ultrabug@ultrabug.net>
null
null
null
null
[ "License :: OSI Approved :: MIT License", "Operating System :: POSIX :: Linux", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14", "Topic :: Software Development :: Libraries :: Python Modules" ]
[]
null
null
>=3.8
[]
[]
[]
[ "mkdocs>=1.5.2", "mkdocs-material<9.7.2; extra == \"material\"" ]
[]
[]
[]
[ "Documentation, https://github.com/ultrabug/mkdocs-static-i18n#readme", "Download, https://github.com/ultrabug/mkdocs-static-i18n/tags", "Funding, https://ultrabug.fr/#support-me", "Homepage, https://github.com/ultrabug/mkdocs-static-i18n", "Source, https://github.com/ultrabug/mkdocs-static-i18n", "Tracker, https://github.com/ultrabug/mkdocs-static-i18n/issues" ]
Hatch/1.16.3 cpython/3.14.2 HTTPX/0.28.1
2026-02-20T10:42:41.835200
mkdocs_static_i18n-1.3.1.tar.gz
1,371,325
ce/f9/51e2ffda9c7210bc35a24f3717b08c052cd4b728dfa87f901c00d8005259/mkdocs_static_i18n-1.3.1.tar.gz
source
sdist
null
false
70a81efb097b00c79f514da23854f40e
a6125ea7db6cc1a900d76a967f262535af09831160a93c56d7f0d522a79b5faf
cef951e2ffda9c7210bc35a24f3717b08c052cd4b728dfa87f901c00d8005259
MIT
[ "LICENSE" ]
2,447
2.4
envcipher
0.1.3
Secure .env file encryption using OS keychain. Keep secrets encrypted at rest.
# Envcipher [![Crates.io](https://img.shields.io/crates/v/envcipher.svg)](https://crates.io/crates/envcipher) [![PyPI](https://img.shields.io/pypi/v/envcipher.svg)](https://pypi.org/project/envcipher/) [![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE) Encrypt `.env` files using AES-256-GCM with keys stored in your OS keychain. Decrypt on demand for local development without managing separate key files. --- ## Installation <details open> <summary><strong>Python</strong></summary> ```bash pip install envcipher ``` Provides both the CLI and Python library. </details> <details> <summary><strong>Rust</strong></summary> ```bash cargo install envcipher ``` CLI only. </details> <details> <summary><strong>From Source</strong></summary> ```bash git clone https://github.com/iamprecieee/envcipher cd envcipher cargo install --path . ``` </details> --- ## Usage ### CLI ```bash envcipher init # Generate key, store in OS keychain envcipher edit # Decrypt -> edit -> re-encrypt envcipher lock # Encrypt .env in place envcipher unlock # Decrypt .env to plaintext envcipher run -- <cmd> # Run command with decrypted env vars envcipher status # Show encryption status ``` <details> <summary><strong>Python Library</strong></summary> ```python import envcipher import os # Load encrypted .env into os.environ envcipher.load() # Access secrets api_key = os.getenv("API_KEY") ``` Custom path: ```python envcipher.load(path="/path/to/.env") ``` Works with both encrypted and plaintext files. </details> --- ## Team Sharing ```bash # Export key envcipher export-key # Output: qQWntX6r7eANxsyKHbkJtuXtzW0Hy5zjJGvDSxMKM9I= # Import on another machine envcipher import-key qQWntX6r7eANxsyKHbkJtuXtzW0Hy5zjJGvDSxMKM9I= ``` Share keys through secure channels only. 
--- ## Security | Component | Implementation | |-----------|----------------| | Encryption | AES-256-GCM, 96-bit random nonces | | Key Storage | OS keychain (Keychain / Credential Manager / Secret Service) | | Memory | Keys zeroized on drop | | Format | `ENVCIPHER:v1:<nonce>:<ciphertext>` | **Designed for:** Protecting secrets from accidental commits, local development encryption at rest, small team key sharing. **Not designed for:** Production secret management, zero-trust environments, HSM requirements. --- ## FAQ <details> <summary>Can I manually edit the encrypted file?</summary> No. Use `envcipher edit` or the unlock-edit-lock workflow. Manual edits corrupt the format. </details> <details> <summary>Can I commit the encrypted .env file?</summary> Yes, but we recommend using `.gitignore` and sharing via `export-key`/`import-key` instead. Committing encrypted files is safe only if your team securely shares the key. </details> <details> <summary>What if I lose my key?</summary> Keys are stored in your OS keychain. If you lose access (e.g., fresh OS install), get a teammate to run `export-key`. </details> <details> <summary>How do I rotate keys?</summary> Currently manual: decrypt with old key, run `init` in a fresh directory to generate new key, re-encrypt. </details> <details> <summary>Does it work in CI/CD?</summary> Not recommended. Envcipher is designed for local development. CI runners have ephemeral keychains, and storing the key as a CI secret defeats the purpose. Use native secret management instead (GitHub Secrets, AWS Secrets Manager, etc.). </details> <details> <summary>Can I use this on multiple projects?</summary> Yes. Each project directory gets its own key (hashed by directory path). Moving a project folder requires re-importing the key. </details> --- ## License [MIT](LICENSE) --- [Contributing](docs/CONTRIBUTING.md) | [Code of Conduct](docs/CODE_OF_CONDUCT.md) | [Security](docs/SECURITY.md)
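The `ENVCIPHER:v1:<nonce>:<ciphertext>` container format in the table above can be made concrete with a small parser sketch. Only the `magic:version:nonce:ciphertext` layout and the 96-bit nonce come from the README; the field encoding (base64 here) and the helper name are assumptions for illustration:

```python
import base64

def parse_envcipher(payload: str) -> tuple[bytes, bytes]:
    """Split the documented ENVCIPHER:v1:<nonce>:<ciphertext> container.

    Hypothetical helper: the README specifies the field layout but not the
    on-disk field encoding, so base64 is assumed here for illustration.
    """
    magic, version, nonce_b64, ct_b64 = payload.split(":", 3)
    if magic != "ENVCIPHER" or version != "v1":
        raise ValueError("not an envcipher v1 payload")
    nonce = base64.b64decode(nonce_b64)
    if len(nonce) != 12:  # AES-256-GCM uses 96-bit (12-byte) random nonces
        raise ValueError("expected a 96-bit nonce")
    return nonce, base64.b64decode(ct_b64)

# Round-trip a synthetic payload through the parser
payload = "ENVCIPHER:v1:{}:{}".format(
    base64.b64encode(b"\x00" * 12).decode(),
    base64.b64encode(b"opaque-ciphertext").decode(),
)
nonce, ciphertext = parse_envcipher(payload)
print(len(nonce), ciphertext)  # 12 b'opaque-ciphertext'
```

Because the ciphertext is AES-256-GCM output, any manual edit to either field will fail authentication on decrypt, which is why the FAQ steers you to `envcipher edit` instead of editing the file by hand.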
text/markdown; charset=UTF-8; variant=GFM
null
iamprecieee <emmypresh777@gmail.com>
null
null
null
env, encryption, secrets, security, python, dotenv
[ "Programming Language :: Rust", "Programming Language :: Python :: Implementation :: CPython", "Programming Language :: Python :: Implementation :: PyPy" ]
[]
null
null
>=3.7
[]
[]
[]
[ "maturin>=1.12.3" ]
[]
[]
[]
[]
maturin/1.12.3
2026-02-20T10:42:38.718427
envcipher-0.1.3.tar.gz
26,264
c1/99/27c43da1d867f8c161ccde97d2717164ce68da7707b432db8a9bf995ad08/envcipher-0.1.3.tar.gz
source
sdist
null
false
4e2ab70d59a92bed603a270727b4c8ab
c73b9ad74e735f4399d4a30b98f4e87e5db909c1f35915b65c67d1f366790011
c19927c43da1d867f8c161ccde97d2717164ce68da7707b432db8a9bf995ad08
null
[]
242
2.4
usearch-iscc
2.24.2
Smaller & Faster Single-File Vector Search Engine from Unum (ISCC Foundation Fork)
<h1 align="center">USearch</h1> <h3 align="center"> Smaller & <a href="https://www.unum.cloud/blog/2023-11-07-scaling-vector-search-with-intel">Faster</a> Single-File<br/> Similarity Search & Clustering Engine for <a href="https://github.com/ashvardanian/simsimd">Vectors</a> & 🔜 <a href="https://github.com/ashvardanian/stringzilla">Texts</a> </h3> <br/> <p align="center"> <a href="https://discord.gg/A6wxt6dS9j"><img height="25" src="https://github.com/unum-cloud/.github/raw/main/assets/discord.svg" alt="Discord"></a> &nbsp;&nbsp;&nbsp; <a href="https://www.linkedin.com/company/unum-cloud/"><img height="25" src="https://github.com/unum-cloud/.github/raw/main/assets/linkedin.svg" alt="LinkedIn"></a> &nbsp;&nbsp;&nbsp; <a href="https://twitter.com/unum_cloud"><img height="25" src="https://github.com/unum-cloud/.github/raw/main/assets/twitter.svg" alt="Twitter"></a> &nbsp;&nbsp;&nbsp; <a href="https://unum.cloud/post"><img height="25" src="https://github.com/unum-cloud/.github/raw/main/assets/blog.svg" alt="Blog"></a> &nbsp;&nbsp;&nbsp; <a href="https://github.com/unum-cloud/usearch"><img height="25" src="https://github.com/unum-cloud/.github/raw/main/assets/github.svg" alt="GitHub"></a> </p> <p align="center"> Spatial • Binary • Probabilistic • User-Defined Metrics <br/> <a href="https://unum-cloud.github.io/USearch/cpp">C++11</a> • <a href="https://unum-cloud.github.io/USearch/python">Python 3</a> • <a href="https://unum-cloud.github.io/USearch/javascript">JavaScript</a> • <a href="https://unum-cloud.github.io/USearch/java">Java</a> • <a href="https://unum-cloud.github.io/USearch/rust">Rust</a> • <a href="https://unum-cloud.github.io/USearch/c">C99</a> • <a href="https://unum-cloud.github.io/USearch/objective-c">Objective-C</a> • <a href="https://unum-cloud.github.io/USearch/swift">Swift</a> • <a href="https://unum-cloud.github.io/USearch/csharp">C#</a> • <a href="https://unum-cloud.github.io/USearch/golang">Go</a> • <a 
href="https://unum-cloud.github.io/USearch/wolfram">Wolfram</a> <br/> Linux • macOS • Windows • iOS • Android • WebAssembly • <a href="https://unum-cloud.github.io/USearch/sqlite">SQLite</a> </p> <div align="center"> <a href="https://pepy.tech/project/usearch"> <img alt="PyPI" src="https://static.pepy.tech/personalized-badge/usearch?period=total&units=abbreviation&left_color=black&right_color=blue&left_text=Python%20PyPI%20installs"> </a> <a href="https://www.npmjs.com/package/usearch"> <img alt="NPM" src="https://img.shields.io/npm/dy/usearch?label=JavaScript%20NPM%20installs"> </a> <a href="https://crates.io/crates/usearch"> <img alt="Crate" src="https://img.shields.io/crates/d/usearch?label=Rust%20Crate%20installs"> </a> <a href="https://www.nuget.org/packages/Cloud.Unum.USearch"> <img alt="NuGet" src="https://img.shields.io/nuget/dt/Cloud.Unum.USearch?label=CSharp%20NuGet%20installs"> </a> <!-- Maven Central publishing is deprecated for now; fat-JAR download is the supported path. --> <img alt="GitHub code size in bytes" src="https://img.shields.io/github/languages/code-size/unum-cloud/usearch?label=Repo%20size"> </div> --- > **ISCC Foundation Fork** -- This is a maintained fork of [USearch](https://github.com/unum-cloud/usearch) > by the [ISCC Foundation](https://iscc.io), published on PyPI as > [`usearch-iscc`](https://pypi.org/project/usearch-iscc/). The Python import name remains `usearch` > for compatibility. Install with: `pip install usearch-iscc` > > **Fork divergence from upstream:** > - 128-bit key support (Python): `Index(ndim=..., key_kind="uuid")` for packed 16-byte keys > - Multi-index UUID support (Python): `Indexes` works with both u64 and uuid-keyed shards > - NPHD metric (all bindings): Normalized Prefix Hamming Distance for length-prefixed binary vectors > - Build: published as `usearch-iscc` on PyPI with independent release cycle --- - ✅ __[10x faster][faster-than-faiss]__ [HNSW][hnsw-algorithm] implementation than [FAISS][faiss]. 
- ✅ Simple and extensible [single C++11 header][usearch-header] __library__. - ✅ [Trusted](#integrations) by giants like Google and DBs like [ClickHouse][clickhouse-docs] & [DuckDB][duckdb-docs]. - ✅ [SIMD][simd]-optimized and [user-defined metrics](#user-defined-functions) with JIT compilation. - ✅ Hardware-agnostic `f16` & `i8` - [half-precision & quarter-precision support](#memory-efficiency-downcasting-and-quantization). - ✅ [View large indexes from disk](#serialization--serving-index-from-disk) without loading into RAM. - ✅ Heterogeneous lookups, renaming/relabeling, and on-the-fly deletions. - ✅ Binary Tanimoto and Sorensen coefficients for [Genomics and Chemistry applications](#usearch--rdkit--molecular-search). - ✅ [NPHD metric](#nphd-metric-normalized-prefix-hamming-distance) for variable-length binary fingerprint comparison. - ✅ Space-efficient point-clouds with `uint40_t`, accommodating 4B+ size. - ✅ Compatible with OpenMP and custom "executors" for fine-grained parallelism. - ✅ [Semantic Search](#usearch--uform--ucall--multimodal-semantic-search) and [Joins](#joins-one-to-one-one-to-many-and-many-to-many-mappings). - 🔄 Near-real-time [clustering and sub-clustering](#clustering) for Tens or Millions of clusters. 
[faiss]: https://github.com/facebookresearch/faiss [usearch-header]: https://github.com/unum-cloud/usearch/blob/main/include/usearch/index.hpp [obscure-use-cases]: https://ashvardanian.com/posts/abusing-vector-search [hnsw-algorithm]: https://arxiv.org/abs/1603.09320 [simd]: https://en.wikipedia.org/wiki/Single_instruction,_multiple_data [faster-than-faiss]: https://www.unum.cloud/blog/2023-11-07-scaling-vector-search-with-intel [clickhouse-docs]: https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/annindexes#usearch [duckdb-docs]: https://duckdb.org/2024/05/03/vector-similarity-search-vss.html __Technical Insights__ and related articles: - [Uses Arm SVE and x86 AVX-512's masked loads to eliminate tail `for`-loops](https://ashvardanian.com/posts/simsimd-faster-scipy/#tails-of-the-past-the-significance-of-masked-loads). - [Uses Horner's method for polynomial approximations, beating GCC 12 by 119x](https://ashvardanian.com/posts/gcc-12-vs-avx512fp16/). - [For every language implements a custom separate binding](https://ashvardanian.com/posts/porting-cpp-library-to-ten-languages/). ## Comparison with FAISS FAISS is a widely recognized standard for high-performance vector search engines. USearch and FAISS both employ the same HNSW algorithm, but they differ significantly in their design principles. USearch is compact and broadly compatible without sacrificing performance, primarily focusing on user-defined metrics and fewer dependencies. 
| | FAISS | USearch | Improvement | | :------------------------------------------- | ----------------------: | -----------------------: | ----------------------: | | Indexing time ⁰ | | | | | 100 Million 96d `f32`, `f16`, `i8` vectors | 2.6 · 2.6 · 2.6 h | 0.3 · 0.2 · 0.2 h | __9.6 · 10.4 · 10.7 x__ | | 100 Million 1536d `f32`, `f16`, `i8` vectors | 5.0 · 4.1 · 3.8 h | 2.1 · 1.1 · 0.8 h | __2.3 · 3.6 · 4.4 x__ | | | | | | | Codebase length ¹ | 84 K [SLOC][sloc] | 3 K [SLOC][sloc] | maintainable | | Supported metrics ² | 9 fixed metrics | any metric | extendible | | Supported languages ³ | C++, Python | 10 languages | portable | | Supported ID types ⁴ | 32-bit, 64-bit | 32-bit, 40-bit, 64-bit | efficient | | Filtering ⁵ | ban-lists | any predicates | composable | | Required dependencies ⁶ | BLAS, OpenMP | - | light-weight | | Bindings ⁷ | SWIG | Native | low-latency | | Python binding size ⁸ | [~ 10 MB][faiss-weight] | [< 1 MB][usearch-weight] | deployable | [sloc]: https://en.wikipedia.org/wiki/Source_lines_of_code [faiss-weight]: https://pypi.org/project/faiss-cpu/#files [usearch-weight]: https://pypi.org/project/usearch/#files > ⁰ [Tested][intel-benchmarks] on Intel Sapphire Rapids, with the simplest inner-product distance, equivalent recall, and memory consumption while also providing far superior search speed. > ¹ A shorter codebase of `usearch/` over `faiss/` makes the project easier to maintain and audit. > ² User-defined metrics allow you to customize your search for various applications, from GIS to creating custom metrics for composite embeddings from multiple AI models or hybrid full-text and semantic search. > ³ With USearch, you can reuse the same preconstructed index in various programming languages. > ⁴ The 40-bit integer allows you to store 4B+ vectors without allocating 8 bytes for every neighbor reference in the proximity graph. 
> ⁵ With USearch, the index can be combined with arbitrary external containers, like Bloom filters or third-party databases, to filter out irrelevant keys during index traversal. > ⁶ Lack of obligatory dependencies makes USearch much more portable. > ⁷ Native bindings introduce lower call latencies than more straightforward approaches. > ⁸ Lighter bindings make downloads and deployments faster. [intel-benchmarks]: https://www.unum.cloud/blog/2023-11-07-scaling-vector-search-with-intel Base functionality is identical to FAISS, and the interface will be familiar if you have ever investigated Approximate Nearest Neighbors search: ```py # pip install usearch import numpy as np from usearch.index import Index index = Index(ndim=3) # Default settings for 3D vectors vector = np.array([0.2, 0.6, 0.4]) # Can be a matrix for batch operations index.add(42, vector) # Add one or many vectors in parallel matches = index.search(vector, 10) # Find 10 nearest neighbors assert matches[0].key == 42 assert matches[0].distance <= 0.001 assert np.allclose(index[42], vector, atol=0.1) # Ensure high tolerance in mixed-precision comparisons ``` More settings are always available, and the API is designed to be as flexible as possible. The default storage/quantization level is hardware-dependent for efficiency, but `bf16` is recommended for most modern CPUs. ```py index = Index( ndim=3, # Define the number of dimensions in input vectors metric='cos', # Choose 'l2sq', 'ip', 'haversine' or other metric, default = 'cos' dtype='bf16', # Store as 'f64', 'f32', 'f16', 'i8', 'b1'..., default = None connectivity=16, # Optional: Limit number of neighbors per graph node expansion_add=128, # Optional: Control the recall of indexing expansion_search=64, # Optional: Control the quality of the search multi=False, # Optional: Allow multiple vectors per key, default = False ) ``` ## 128-bit Keys (UUID Mode) By default, USearch uses 64-bit unsigned integer keys. 
This fork adds support for 128-bit keys via `key_kind="uuid"`, allowing you to pack structured identifiers (e.g. content hashes, chunk pointers) directly into the key.

```py
import numpy as np
from usearch.index import Index

# Create an index with 128-bit keys
index = Index(ndim=128, metric='cos', key_kind='uuid')

# Keys are 16-byte values: single keys as bytes, batches as numpy V16 arrays
batch_size = 1000
keys = np.empty(batch_size, dtype='V16')
vectors = np.random.randn(batch_size, 128).astype(np.float32)

for i in range(batch_size):
    body = i.to_bytes(8, 'big')           # 8 bytes: content identity
    offset = (i * 16).to_bytes(4, 'big')  # 4 bytes: chunk offset
    size = (1024 + i).to_bytes(4, 'big')  # 4 bytes: chunk size
    keys[i] = body + offset + size        # 16 bytes total

index.add(keys, vectors)
matches = index.search(vectors[0], count=5)
for match in matches:
    print(match.key, match.distance)  # match.key is bytes(16)

# Single-key operations use bytes(16)
single_key = keys[0].tobytes()
index.contains(single_key)  # bool
index.get(single_key)       # np.ndarray or None
index.remove(single_key)

# Save/load preserves key kind; mismatched load raises ValueError
index.save('index.usearch')
restored = Index.restore('index.usearch')  # auto-detects uuid mode
```

> **Note:** Auto-generated keys are not supported in uuid mode — you must always pass explicit keys to `add()`.

## NPHD Metric (Normalized Prefix Hamming Distance)

NPHD is a built-in distance metric for comparing length-prefixed binary vectors. Each vector's first byte stores the data length in bytes. The metric computes the Hamming distance over the common prefix of two vectors and normalizes by the shorter vector's bit count, returning a value in `[0.0, 1.0]`. This is useful for content identification systems like [ISCC](https://iscc.codes) where binary fingerprints may have variable-length prefixes.
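For reference, the normalization rule described above can be sketched in plain Python. This is only an illustrative re-implementation of the description, not the native code, and the helper name `nphd` is ours:

```python
def nphd(a: bytes, b: bytes) -> float:
    """Normalized Prefix Hamming Distance over length-prefixed binary vectors.

    The first byte of each vector stores its data length in bytes; only the
    common prefix is compared, and the bit-level Hamming distance is divided
    by the shorter vector's bit count.
    """
    common = min(a[0], b[0])  # shorter data length, in bytes
    if common == 0:
        return 0.0  # no data to compare
    differing_bits = sum(
        bin(a[1 + i] ^ b[1 + i]).count("1") for i in range(common)
    )
    return differing_bits / (common * 8)

# Identical vectors are at distance 0; any pair stays within [0, 1]
a = bytes([4, 0xAA, 0xBB, 0xCC, 0xDD])
b = bytes([4, 0xAA, 0xBB, 0xCC, 0x00])
print(nphd(a, a), nphd(a, b))
```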
Previously this required a custom Numba `@cfunc` metric (~500MB of dependencies) and `change_metric()` hacks after every `load()`/`view()`. The native metric eliminates both.

```py
import numpy as np
from usearch.index import Index, MetricKind, ScalarKind

# Vector layout: [length_byte, data_byte_0, data_byte_1, ..., padding...]
# ndim is total size in bits, including the length byte.
ndim = 264  # 33 bytes = 1 length byte + up to 32 data bytes
index = Index(ndim=ndim, metric=MetricKind.NPHD, dtype=ScalarKind.B1)

def make_vector(length, data_bytes):
    """Build a length-prefixed binary vector."""
    vec = np.zeros(ndim // 8, dtype=np.uint8)
    vec[0] = length
    vec[1:1 + len(data_bytes)] = data_bytes
    return vec

a = make_vector(4, [0xAA, 0xBB, 0xCC, 0xDD])
b = make_vector(4, [0xAA, 0xBB, 0xCC, 0x00])

index.add(0, a)
index.add(1, b)

matches = index.search(a, 2)
print(matches[0].key, matches[0].distance)  # 0, 0.0
print(matches[1].key, matches[1].distance)  # 1, ~0.15625

# Save/load preserves the metric — no change_metric() needed
index.save("nphd_index.usearch")
restored = Index.restore("nphd_index.usearch")
assert str(restored.metric_kind) == "MetricKind.NPHD"
```

**Key details:**

- Only valid with `dtype=ScalarKind.B1` (binary vectors).
- The length byte encodes the number of data bytes (not bits), excluding itself.
- When vectors have different lengths, only the common prefix is compared.
- A length byte of 0 yields distance 0.0 (no data to compare).

## Serialization & Serving `Index` from Disk

USearch supports multiple forms of serialization:

- Into a __file__ defined with a path.
- Into a __stream__ defined with a callback, serializing or reconstructing incrementally.
- Into a __buffer__ of fixed length or a memory-mapped file that supports random access.

The latter allows you to serve indexes from external memory, enabling you to optimize your server choices for indexing speed and serving costs. This can result in __20x cost reduction__ on AWS and other public clouds.
```py
index.save("index.usearch")
index.load("index.usearch")
view = Index.restore("index.usearch", view=True, ...)

other_view = Index(ndim=..., metric=...)
other_view.view("index.usearch")
```

## Exact vs. Approximate Search

Approximate search methods, such as HNSW, are predominantly used when an exact brute-force search becomes too resource-intensive. This typically occurs when you have millions of entries in a collection. For smaller collections, we offer a more direct approach with the `search` method.

```py
from usearch.index import search, MetricKind, Matches, BatchMatches
import numpy as np

# Generate 10'000 random vectors with 1024 dimensions
vectors = np.random.rand(10_000, 1024).astype(np.float32)
vector = np.random.rand(1024).astype(np.float32)

one_in_many: Matches = search(vectors, vector, 50, MetricKind.L2sq, exact=True)
many_in_many: BatchMatches = search(vectors, vectors, 50, MetricKind.L2sq, exact=True)
```

If you pass the `exact=True` argument, the system bypasses indexing altogether and performs a brute-force search through the entire dataset using SIMD-optimized similarity metrics from [SimSIMD](https://github.com/ashvardanian/simsimd). When compared to FAISS's `IndexFlatL2` in Google Colab, __[USearch may offer up to a 20x performance improvement](https://github.com/unum-cloud/usearch/issues/176#issuecomment-1666650778)__:

- `faiss.IndexFlatL2`: __55.3 ms__.
- `usearch.index.search`: __2.54 ms__.

## User-Defined Metrics

While most vector search packages concentrate on just two metrics, "Inner Product distance" and "Euclidean distance", USearch allows arbitrary user-defined metrics. This flexibility allows you to customize your search for various applications, from computing geospatial coordinates with the rare [Haversine][haversine] distance to creating custom metrics for composite embeddings from multiple AI models, like joint image-text embeddings.
You can use [Numba][numba], [Cppyy][cppyy], or [PeachPy][peachpy] to define your [custom metric even in Python](https://unum-cloud.github.io/USearch/python#user-defined-metrics-and-jit-in-python):

```py
import numpy as np
from numba import cfunc, types, carray
from usearch.index import Index, MetricKind, MetricSignature, CompiledMetric

ndim = 256

@cfunc(types.float32(types.CPointer(types.float32), types.CPointer(types.float32)))
def python_inner_product(a, b):
    a_array = carray(a, ndim)
    b_array = carray(b, ndim)
    c = 0.0
    for i in range(ndim):
        c += a_array[i] * b_array[i]
    return 1 - c

metric = CompiledMetric(
    pointer=python_inner_product.address,
    kind=MetricKind.IP,
    signature=MetricSignature.ArrayArray,
)
index = Index(ndim=ndim, metric=metric, dtype=np.float32)
```

A similar effect is even easier to achieve in the C, C++, and Rust interfaces. Moreover, unlike older approaches to indexing high-dimensional spaces, like KD-Trees and Locality Sensitive Hashing, HNSW doesn't require vectors to be identical in length. They only have to be comparable. So you can apply it in [obscure][obscure] applications, like searching for similar sets or fuzzy text matching, using [GZip][gzip-similarity] compression-ratio as a distance function.

[haversine]: https://ashvardanian.com/posts/abusing-vector-search#geo-spatial-indexing
[obscure]: https://ashvardanian.com/posts/abusing-vector-search
[gzip-similarity]: https://twitter.com/LukeGessler/status/1679211291292889100?s=20
[numba]: https://numba.readthedocs.io/en/stable/reference/jit-compilation.html#c-callbacks
[cppyy]: https://cppyy.readthedocs.io/en/latest/
[peachpy]: https://github.com/Maratyszcza/PeachPy

## Filtering and Predicate Functions

Sometimes you may want to cross-reference search results against some external database or filter them based on some criteria. In most engines, you'd have to manually perform paging requests, successively filtering the results.
In USearch you can simply pass a predicate function to the search method, which will be applied directly during graph traversal. In Rust that would look like this:

```rust
let is_odd = |key: Key| key % 2 == 1;
let query = vec![0.2, 0.1, 0.2, 0.1, 0.3];
let results = index.filtered_search(&query, 10, is_odd).unwrap();
assert!(
    results.keys.iter().all(|&key| key % 2 == 1),
    "All keys must be odd"
);
```

## Memory Efficiency, Downcasting, and Quantization

Training a quantization model and dimension-reduction is a common approach to accelerate vector search. Those techniques, however, are not always reliable, can significantly affect the statistical properties of your data, and require regular adjustments if your distribution shifts. Instead, we have focused on high-precision arithmetic over low-precision downcasted vectors. The same index, and the same `add` and `search` operations, will automatically down-cast or up-cast between `f64_t`, `f32_t`, `f16_t`, `i8_t`, and single-bit `b1x8_t` representations. You can use the following command to check whether hardware acceleration is enabled:

```sh
$ python -c 'from usearch.index import Index; print(Index(ndim=768, metric="cos", dtype="f16").hardware_acceleration)'
> sapphire
$ python -c 'from usearch.index import Index; print(Index(ndim=166, metric="tanimoto").hardware_acceleration)'
> ice
```

In most cases, it's recommended to use half-precision floating-point numbers on modern hardware. When quantization is enabled, the "get"-like functions won't be able to recover the original data, so you may want to replicate the original vectors elsewhere.

When quantizing to `i8_t` integers, note that it's only valid for cosine-like metrics. As part of the quantization process, the vectors are normalized to unit length and later scaled to the [-127, 127] range to occupy the full 8-bit range.

When quantizing to `b1x8_t` single-bit representations, note that it's only valid for binary metrics like Jaccard, Hamming, etc.
As part of the quantization process, the scalar components greater than zero are set to `true`, and the rest to `false`.

![USearch uint40_t support](https://github.com/unum-cloud/usearch/blob/main/assets/usearch-neighbor-types.png?raw=true)

Using smaller numeric types will save you RAM needed to store the vectors, but you can also compress the neighbor lists forming our proximity graphs. By default, a 32-bit `uint32_t` is used to enumerate those, which is not enough if you need to address over 4 Billion entries. For such cases we provide a custom `uint40_t` type that is still 37.5% more space-efficient than the commonly used 8-byte integers and will scale up to 1 Trillion entries.

## `Indexes` for Multi-Index Lookups

For larger workloads targeting billions or even trillions of vectors, parallel multi-index lookups become invaluable. Instead of constructing one extensive index, you can build multiple smaller ones and view them together.

```py
from usearch.index import Indexes

multi_index = Indexes(
    indexes=[index_a, index_b],                   # Merge in-memory shards
    paths=["shard_a.usearch", "shard_b.usearch"], # Or load from disk
    view=False,
    threads=0,
)
multi_index.search(query_vectors, 10)
```

`Indexes` supports both u64 and uuid key kinds. The key kind is auto-detected from the first merged shard or path, or can be set explicitly:

```py
# Auto-detect from shards
indexes = Indexes([uuid_index_a, uuid_index_b])

# Auto-detect from paths
indexes = Indexes(paths=["uuid_shard.usearch"])

# Explicit key kind
indexes = Indexes(key_kind="uuid")
indexes.merge(uuid_index)

# Incremental loading
indexes = Indexes()
indexes.merge_path("shard.usearch")
```

## Clustering

Once the index is constructed, USearch can perform K-Nearest Neighbors Clustering much faster than standalone clustering libraries, like SciPy, UMap, and tSNE. Same for dimensionality reduction with PCA. Essentially, the `Index` itself can be seen as a clustering, allowing iterative deepening.
```py
clustering = index.cluster(
    min_count=10, # Optional
    max_count=15, # Optional
    threads=...,  # Optional
)

# Get the clusters and their sizes
centroid_keys, sizes = clustering.centroids_popularity

# Use Matplotlib to draw a histogram
clustering.plot_centroids_popularity()

# Export a NetworkX graph of the clusters
g = clustering.network

# Get members of a specific cluster
first_members = clustering.members_of(centroid_keys[0])

# Deepen into that cluster, splitting it into more parts, all the same arguments supported
sub_clustering = clustering.subcluster(min_count=..., max_count=...)
```

The resulting clustering isn't identical to K-Means or other conventional approaches but serves the same purpose. Alternatively, using Scikit-Learn on a 1 Million point dataset, one may expect queries to take anywhere from minutes to hours, depending on the number of clusters you want to highlight. For 50'000 clusters, the performance difference between USearch and conventional clustering methods may easily reach 100x.

## Joins, One-to-One, One-to-Many, and Many-to-Many Mappings

One of the big questions these days is how AI will change the world of databases and data management. Most databases are still struggling to implement high-quality fuzzy search, and the only kind of joins they know are deterministic. A `join` differs from searching for every entry, requiring a one-to-one mapping banning collisions among separate search results.

| Exact Search | Fuzzy Search | Semantic Search ? |
| :----------: | :----------: | :---------------: |
| Exact Join   | Fuzzy Join ? | Semantic Join ??  |

Using USearch, one can implement sub-quadratic complexity approximate, fuzzy, and semantic joins. This can be useful in any fuzzy-matching tasks common to Database Management Software.

```py
men = Index(...)
women = Index(...)
pairs: dict = men.join(women, max_proposals=0, exact=False)
```

> Read more in the post: [Combinatorial Stable Marriages for Semantic Search 💍](https://ashvardanian.com/posts/searching-stable-marriages)

## Functionality

By now, the core functionality is supported across all bindings. Broader functionality is ported per request. In some cases, like Batch operations, feature parity is meaningless, as the host language has full multi-threading capabilities and the USearch index structure is concurrent by design, so the users can implement batching/scheduling/load-balancing in the most optimal way for their applications.

|                         | C++ 11 | Python 3 | C 99 | Java | JavaScript | Rust | Go | Swift |
| :---------------------- | :----: | :------: | :--: | :--: | :--------: | :--: | :-: | :---: |
| Add, search, remove     | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Save, load, view        | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| User-defined metrics    | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ |
| Batch operations        | ❌ | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ |
| Filter predicates       | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ |
| Joins                   | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Variable-length vectors | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| 4B+ capacities          | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |

## Application Examples

### USearch + UForm + UCall = Multimodal Semantic Search

AI has a growing number of applications, but one of the coolest classic ideas is to use it for Semantic Search. One can take an encoder model, like the multi-modal [UForm](https://github.com/unum-cloud/uform), and a web-programming framework, like [UCall](https://github.com/unum-cloud/ucall), and build a text-to-image search platform in just 20 lines of Python.
```python
from ucall import Server
from uform import get_model, Modality
from usearch.index import Index

import numpy as np
from PIL import Image

processors, models = get_model('unum-cloud/uform3-image-text-english-small')
model_text = models[Modality.TEXT_ENCODER]
model_image = models[Modality.IMAGE_ENCODER]
processor_text = processors[Modality.TEXT_ENCODER]
processor_image = processors[Modality.IMAGE_ENCODER]

server = Server()
index = Index(ndim=256)

@server
def add(key: int, photo: Image.Image):
    image = processor_image(photo)
    vector = model_image(image)
    index.add(key, vector.flatten(), copy=True)

@server
def search(query: str) -> np.ndarray:
    tokens = processor_text(query)
    vector = model_text(tokens)
    matches = index.search(vector.flatten(), 3)
    return matches.keys

server.run()
```

Similar experiences can also be implemented in other languages and on the client side, removing the network latency. For Swift and iOS, check out the [`ashvardanian/SwiftSemanticSearch`](https://github.com/ashvardanian/SwiftSemanticSearch) repository.

<table>
<tr>
<td>
<img src="https://github.com/ashvardanian/ashvardanian/blob/master/demos/SwiftSemanticSearch-Dog.gif?raw=true" alt="SwiftSemanticSearch demo Dog">
</td>
<td>
<img src="https://github.com/ashvardanian/ashvardanian/blob/master/demos/SwiftSemanticSearch-Flowers.gif?raw=true" alt="SwiftSemanticSearch demo with Flowers">
</td>
</tr>
</table>

A more complete [demo with Streamlit is available on GitHub](https://github.com/ashvardanian/usearch-images). We have pre-processed some commonly used datasets, cleaned the images, produced the vectors, and pre-built the index.
| Dataset                             | Modalities            | Images | Download                              |
| :---------------------------------- | --------------------: | -----: | ------------------------------------: |
| [Unsplash][unsplash-25k-origin]     | Images & Descriptions |   25 K | [HuggingFace / Unum][unsplash-25k-hf] |
| [Conceptual Captions][cc-3m-origin] | Images & Descriptions |    3 M | [HuggingFace / Unum][cc-3m-hf]        |
| [Arxiv][arxiv-2m-origin]            | Titles & Abstracts    |    2 M | [HuggingFace / Unum][arxiv-2m-hf]     |

[unsplash-25k-origin]: https://github.com/unsplash/datasets
[cc-3m-origin]: https://huggingface.co/datasets/conceptual_captions
[arxiv-2m-origin]: https://www.kaggle.com/datasets/Cornell-University/arxiv
[unsplash-25k-hf]: https://huggingface.co/datasets/unum-cloud/ann-unsplash-25k
[cc-3m-hf]: https://huggingface.co/datasets/unum-cloud/ann-cc-3m
[arxiv-2m-hf]: https://huggingface.co/datasets/unum-cloud/ann-arxiv-2m

### USearch + RDKit = Molecular Search

Comparing molecule graphs and searching for similar structures is expensive and slow. It can be seen as a special case of the NP-Complete Subgraph Isomorphism problem. Luckily, domain-specific approximate methods exist. The one commonly used in Chemistry is to generate structures from [SMILES][smiles] and later hash them into binary fingerprints. The latter are searchable with binary similarity metrics, like the Tanimoto coefficient. Below is an example using the RDKit package.
```python
from usearch.index import Index, MetricKind
from rdkit import Chem
from rdkit.Chem import AllChem

import numpy as np

molecules = [Chem.MolFromSmiles('CCOC'), Chem.MolFromSmiles('CCO')]
encoder = AllChem.GetRDKitFPGenerator()

fingerprints = np.vstack([encoder.GetFingerprint(x) for x in molecules])
fingerprints = np.packbits(fingerprints, axis=1)

index = Index(ndim=2048, metric=MetricKind.Tanimoto)
keys = np.arange(len(molecules))

index.add(keys, fingerprints)
matches = index.search(fingerprints, 10)
```

That method was used to build the ["USearch Molecules"](https://github.com/ashvardanian/usearch-molecules), one of the largest Chem-Informatics datasets, containing 7 billion small molecules and 28 billion fingerprints.

[smiles]: https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system
[rdkit-fingerprints]: https://www.rdkit.org/docs/RDKit_Book.html#additional-information-about-the-fingerprints

### USearch + POI Coordinates = GIS Applications

Similar to Vector and Molecule search, USearch can be used for Geospatial Information Systems. The Haversine distance is available out of the box, but you can also define more complex relationships, like the Vincenty formula, which accounts for the Earth's oblateness.
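For reference, the great-circle distance that the built-in Haversine metric computes can be sketched in plain Python. This is an illustrative formula, not USearch's native implementation; coordinates are in radians, and the result is the central angle, which you would scale by the sphere's radius:

```python
import math

def haversine(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Central angle between two (latitude, longitude) points, in radians.

    Multiply by the sphere's radius (e.g. ~6371 km for Earth) to get a distance.
    """
    d_lat = lat2 - lat1
    d_lon = lon2 - lon1
    h = math.sin(d_lat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(d_lon / 2) ** 2
    return 2 * math.asin(math.sqrt(h))

# Antipodal points on the equator are half a circle apart
print(haversine(0.0, 0.0, 0.0, math.pi))  # → 3.141592653589793
```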
```py
import math

from numba import cfunc, types, carray
from usearch.index import Index, CompiledMetric, MetricKind, MetricSignature

# Define the dimension as 2 for latitude and longitude
ndim = 2

# Signature for the custom metric
signature = types.float32(
    types.CPointer(types.float32),
    types.CPointer(types.float32))

# WGS-84 ellipsoid parameters
a = 6378137.0          # major axis in meters
f = 1 / 298.257223563  # flattening
b = (1 - f) * a        # minor axis

@cfunc(signature)
def vincenty_distance(a_ptr, b_ptr):
    a_array = carray(a_ptr, ndim)
    b_array = carray(b_ptr, ndim)
    lat1, lon1, lat2, lon2 = a_array[0], a_array[1], b_array[0], b_array[1]
    L, U1, U2 = lon2 - lon1, math.atan((1 - f) * math.tan(lat1)), math.atan((1 - f) * math.tan(lat2))
    sinU1, cosU1, sinU2, cosU2 = math.sin(U1), math.cos(U1), math.sin(U2), math.cos(U2)
    lambda_, iterLimit = L, 100
    while iterLimit > 0:
        iterLimit -= 1
        sinLambda, cosLambda = math.sin(lambda_), math.cos(lambda_)
        sinSigma = math.sqrt((cosU2 * sinLambda) ** 2 + (cosU1 * sinU2 - sinU1 * cosU2 * cosLambda) ** 2)
        if sinSigma == 0:
            return 0.0  # Co-incident points
        cosSigma, sigma = sinU1 * sinU2 + cosU1 * cosU2 * cosLambda, math.atan2(sinSigma, cosSigma)
        sinAlpha, cos2Alpha = cosU1 * cosU2 * sinLambda / sinSigma, 1 - (cosU1 * cosU2 * sinLambda / sinSigma) ** 2
        cos2SigmaM = cosSigma - 2 * sinU1 * sinU2 / cos2Alpha if not math.isnan(cosSigma - 2 * sinU1 * sinU2 / cos2Alpha) else 0  # Equatorial line
        C = f / 16 * cos2Alpha * (4 + f * (4 - 3 * cos2Alpha))
        lambda_, lambdaP = L + (1 - C) * f * (sinAlpha * (sigma + C * sinSigma * (cos2SigmaM + C * cosSigma * (-1 + 2 * cos2SigmaM ** 2)))), lambda_
        if abs(lambda_ - lambdaP) <= 1e-12:
            break
    if iterLimit == 0:
        return float('nan')  # formula failed to converge

    u2 = cos2Alpha * (a ** 2 - b ** 2) / (b ** 2)
    A = 1 + u2 / 16384 * (4096 + u2 * (-768 + u2 * (320 - 175 * u2)))
    B = u2 / 1024 * (256 + u2 * (-128 + u2 * (74 - 47 * u2)))
    deltaSigma = B * sinSigma * (cos2SigmaM + B / 4 * (cosSigma * (-1 + 2 * cos2SigmaM ** 2) - B / 6 * cos2SigmaM * (-3 + 4 * sinSigma ** 2) * (-3 + 4 * cos2SigmaM ** 2)))
    s = b * A * (sigma - deltaSigma)
    return s / 1000.0  # Distance in kilometers

# Example usage:
index = Index(ndim=ndim, metric=CompiledMetric(
    pointer=vincenty_distance.address,
    kind=MetricKind.Haversine,
    signature=MetricSignature.ArrayArray,
))
```

## Integrations & Users

- [x] ClickHouse: [C++](https://github.com/ClickHouse/ClickHouse/pull/53447), [docs](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/annindexes#usearch).
- [x] DuckDB: [post](https://duckdb.org/2024/05/03/vector-similarity-search-vss.html).
- [x] ScyllaDB: [Rust](https://github.com/scylladb/vector-store), [presentation](https://www.slideshare.net/slideshow/vector-search-with-scylladb-by-szymon-wasik/276571548).
- [x] TiDB & TiFlash: [C++](https://github.com/pingcap/tiflash), [announcement](https://www.pingcap.com/article/introduce-vector-search-indexes-in-tidb/).
- [x] YugaByte: [C++](https://github.com/yugabyte/yugabyte-db/blob/366b9f5e3c4df3a1a17d553db41d6dc50146f488/src/yb/vector_index/usearch_wrapper.cc).
- [x] Google: [UniSim](https://github.com/google/unisim), [RetSim](https://arxiv.org/abs/2311.17264) paper.
- [x] MemGraph: [C++](https://github.com/memgraph/memgraph/blob/784dd8520f65050d033aea8b29446e84e487d091/src/storage/v2/indices/vector_index.cpp), [announcement](https://memgraph.com/blog/simplify-data-retrieval-memgraph-vector-search).
- [x] LanternDB: [C++](https://github.com/lanterndata/lantern), [Rust](https://github.com/lanterndata/lantern_extras), [docs](https://lantern.dev/blog/hnsw-index-creation).
- [x] LangChain: [Python](https://github.com/langchain-ai/langchain/releases/tag/v0.0.257) and [JavaScript](https://github.com/hwchase17/langchainjs/releases/tag/0.0.125).
- [x] Microsoft Semantic Kernel: [Python](https://github.com/microsoft/semantic-kernel/releases/tag/python-0.3.9.dev) and C#.
- [x] GPTCache: [Python](https://github.com/zilliztech/GPTCache/releases/tag/0.1.29).
- [x] Sentence-Transformers: Python [docs](https://www.sbert.net/docs/package_reference/quantization.html#sentence_transformers.quantization.semantic_search_usearch).
- [x] Pathway: [Rust](https://github.com/pathwaycom/pathway).
- [x] Vald: [GoLang](https://github.com/vdaas/vald).

## Citations

```bibtex
@software{Vardanian_USearch_2023,
  doi = {10.5281/zenodo.7949416},
  author = {Vardanian, Ash},
  title = {{USearch by Unum Cloud}},
  url = {https://github.com/unum-cloud/usearch},
  version = {2.24.0},
  year = {2023},
  month = oct,
}
```
text/markdown
Titusz Pan (fork maintainer)
tp@py7.de
null
null
Apache-2.0
null
[ "Development Status :: 5 - Production/Stable", "Natural Language :: English", "Intended Audience :: Developers", "Intended Audience :: Information Technology", "License :: OSI Approved :: Apache Software License", "Programming Language :: C++", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: Implementation :: CPython", "Programming Language :: Java", "Programming Language :: JavaScript", "Programming Language :: Objective C", "Programming Language :: Rust", "Programming Language :: Other", "Operating System :: MacOS", "Operating System :: Unix", "Operating System :: Microsoft :: Windows", "Topic :: System :: Clustering", "Topic :: Database :: Database Engines/Servers", "Topic :: Scientific/Engineering :: Artificial Intelligence" ]
[]
https://github.com/iscc/usearch
null
null
[]
[]
[]
[ "numpy", "tqdm", "simsimd<7.0.0,>=6.0.5" ]
[]
[]
[]
[ "Upstream, https://github.com/unum-cloud/usearch", "Fork, https://github.com/iscc/usearch" ]
uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
2026-02-20T10:41:27.781707
usearch_iscc-2.24.2-cp310-cp310-macosx_11_0_arm64.whl
563,611
db/0d/ca663ac11d78c13785ef16f93ce376ce3705531769f7da49481ce6bdd3a3/usearch_iscc-2.24.2-cp310-cp310-macosx_11_0_arm64.whl
cp310
bdist_wheel
null
false
50c5ab4292d4e0387062f7df5f45829d
4cf5be270139c61b7ca569b138b18f6926bc7d5ff88132b5ff24c74dfebc0ffd
db0dca663ac11d78c13785ef16f93ce376ce3705531769f7da49481ce6bdd3a3
null
[ "LICENSE" ]
3,052
2.4
acex-client
4.1.2
ACE-X CLIENT - Client for ACE-X
# ACE-X CLIENT

This client is used for communication with the ACEX-API and is used within the CLI, worker, and more.

## Installation

```bash
pip install acex-client
```

See the [main documentation](../README.md) for more information.
text/markdown
Johan Lahti
johan.lahti@acebit.se
null
null
AGPL-3.0
automation, control
[ "License :: OSI Approved :: GNU Affero General Public License v3", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14" ]
[]
null
null
<4.0,>=3.13
[]
[]
[]
[ "acex<5.0.0,>=4.1.5", "acex-devkit<2.0.0,>=1.0.5", "acex-driver-cisco-ioscli<0.0.13,>=0.0.12", "datamodel-code-generator<0.44.0,>=0.43.1", "datetime<7.0,>=6.0", "pydantic<3.0.0,>=2.12.5", "requests<3.0.0,>=2.32.5", "rich<14.0.0,>=13.0.0", "typer<0.13.0,>=0.12.0" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:41:27.662550
acex_client-4.1.2.tar.gz
10,760
42/52/9cc921b16b454e292449c3896dd0a3a98654c8d6f4ef15415d8ba0073a2b/acex_client-4.1.2.tar.gz
source
sdist
null
false
fa4a7f52c1bd344c3dde2e9b37a29977
884555e58f329369a83962a72e3682a55150a77a73a15e68c8a2115aca17adba
42529cc921b16b454e292449c3896dd0a3a98654c8d6f4ef15415d8ba0073a2b
null
[]
229
2.4
acex-cli
4.1.2
ACE-X CLI - Command-line interface for ACE-X
# ACE-X CLI

Command-line interface for managing ACE-X automations.

## Installation

```bash
pip install acex-cli
```

This will also install the `acex` backend package as a dependency.

## Development

```bash
cd cli
poetry install
```

## Usage

```bash
acex --help
acex run automation.py
acex list
acex status
```

## Commands

- `acex run` - Run an automation
- `acex list` - List available automations
- `acex status` - Check system status
- `acex config` - Manage configuration

## Documentation

See the [main documentation](../README.md) for more information.
text/markdown
Johan Lahti
johan.lahti@acebit.se
null
null
AGPL-3.0
automation, cli, control
[ "License :: OSI Approved :: GNU Affero General Public License v3", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14" ]
[]
null
null
<4.0,>=3.13
[]
[]
[]
[ "acex-client<5.0.0,>=4.1.2", "acex-driver-cisco-ioscli<0.0.13,>=0.0.12", "click==8.1.7", "ntc-templates<9.0.0,>=8.1.0", "rich<14.0.0,>=13.0.0", "typer<0.13.0,>=0.12.0" ]
[]
[]
[]
[ "Homepage, https://github.com/acex-labs/acex", "Repository, https://github.com/acex-labs/acex" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:41:26.259810
acex_cli-4.1.2.tar.gz
9,142
a2/e9/c1ef27a1daaedf1413ce3e833c6aecd09143aeb7cef3026cc6e33659061b/acex_cli-4.1.2.tar.gz
source
sdist
null
false
2e7779975525282fc2b7ae60e6d46b36
4b6962f8a4a3f0ebeadf3fcba4ea14c7f08d90206aef594aca531f407a4ed06c
a2e9c1ef27a1daaedf1413ce3e833c6aecd09143aeb7cef3026cc6e33659061b
null
[]
229
2.4
imio.scan-helpers
0.7.1
Various script files to handle local scan tool
imio.scan_helpers
=================

Various script files to handle MS Windows scan tool

Installation
------------

Use virtualenv in bin directory destination

Build locally
-------------

bin/pyinstaller -y imio-scan-helpers.spec

GitHub actions
--------------

On each push or tag, the GitHub action will build the package and upload it to the GitHub release page.
https://github.com/IMIO/imio.scan_helpers/releases

Windows installation
--------------------

The zip archive must be decompressed in a directory (without version reference) that will be the execution directory.

Windows usage
-------------

* imio-scan-helpers.exe -h : displays the help
* imio-scan-helpers.exe : updates the software based on version and restarts it
* imio-scan-helpers.exe -r tag_name : updates the software with a specific release and restarts it
* imio-scan-helpers.exe -c client_id : stores client_id in the configuration file (used as identification when sending info to imio)
* imio-scan-helpers.exe -p plone_password : stores the webservice password in the configuration file (used when sending info to imio)
* imio-scan-helpers.exe -nu : runs without update
* imio-scan-helpers.exe --startup : adds the software to the Windows startup
* imio-scan-helpers.exe --startup-remove : removes the software from the Windows startup
* profiles-backup.exe : backs up profiles
* profiles-restore.exe : restores profiles

Changelog
=========

0.7.1 (2026-02-20)
------------------

- Fixed pip-system-cert for inject_truststore() function.
  [chris-adam]
- Fixed get_latest_release_version to iterate over all GitHub pages.
  [chris-adam]
- Added parameter to prevent auto updates.
  [chris-adam]
- Replaced pip-system-certs with truststore to resolve certificate problems.
  [chris-adam]

0.7.0 (2025-09-01)
------------------

- Used `pip-system-certs` to resolve certificate problems.
  [sgeulette]
- Unpinned pyinstaller version.
  [sgeulette]
- Improved send_log_message to avoid timeout.
  [sgeulette]
- Added exception handling when removing profiles directory.
  [chris-adam]

0.6.0 (2024-08-28)
------------------

- Improved version update.
  [sgeulette]
- Added `-tm` parameter (test message).
  [sgeulette]

0.5.2 (2024-08-26)
------------------

- Added version in message sent to webservice.
  [sgeulette]

0.5.1 (2024-08-23)
------------------

- Corrected bug with relative path.
  [sgeulette]
- Added backed-up dirs in first message.
  [sgeulette]

0.5.0 (2024-08-22)
------------------

- Added certifi pem file to be sure https certificates can be validated.
  [sgeulette]

0.4.1 (2024-08-22)
------------------

- Added more info in first message.
  [sgeulette]

0.4.0 (2024-08-21)
------------------

- Added optional basic proxy configuration.
  [sgeulette]

0.3.2 (2024-08-21)
------------------

- Corrected `utils.json_request`.
  [sgeulette]

0.3.1 (2024-08-20)
------------------

- Added tests.
  [sgeulette]

0.3.0 (2024-08-14)
------------------

- Corrected version.
  [sgeulette]

0.2.5 (2024-08-14)
------------------

- Called profiles_restore in main.
  [sgeulette]

0.2.4 (2024-08-14)
------------------

- Corrected set_parameter. Added hostname information.
  [sgeulette]

0.2.3 (2024-08-14)
------------------

- Send an info message (no mail) when the product is updated.
  [sgeulette]

0.2.2 (2024-08-13)
------------------

- Added `--is-auto-started` parameter in main, passed when app is auto started.
  [sgeulette]

0.2.1 (2024-08-13)
------------------

- Changed backup directory.
  [sgeulette]
- Improved exception logging.
  [sgeulette]

0.2.0 (2024-08-13)
------------------

- Added profiles_backup script.
  [sgeulette]
- Stored client identification, plone password and webservice url in configuration file.
  [sgeulette]
- Added profiles_restore script.
  [sgeulette]

0.1.1 (2024-07-19)
------------------

- Handled Windows startup add or remove following parameters.
  [sgeulette]

0.1.0 (2024-07-18)
------------------

- Initial release.
  [sgeulette]
null
Stephan Geulette (IMIO)
support@imio.be
null
null
GPL version 3
Scan Windows
[ "Development Status :: 3 - Alpha", "Programming Language :: Python :: 3.12", "License :: OSI Approved :: GNU General Public License v3 (GPLv3)", "Operating System :: Microsoft :: Windows", "Operating System :: Microsoft :: Windows :: Windows 10", "Operating System :: Microsoft :: Windows :: Windows 11" ]
[]
null
null
>=3.12
[]
[]
[]
[ "pyinstaller", "requests" ]
[]
[]
[]
[ "PyPI, https://pypi.python.org/pypi/imio.scan_helpers", "Source, https://github.com/IMIO/imio.scan_helpers" ]
twine/6.2.0 CPython/3.12.7
2026-02-20T10:40:52.091974
imio_scan_helpers-0.7.1.tar.gz
26,252
da/e8/55ddd113f3ed98d1e3b9fb5ea9bcfa0a5f2f496926996b37854cf62a758a/imio_scan_helpers-0.7.1.tar.gz
source
sdist
null
false
1a75787db0f6c1e796bc582a720a2b1f
58075760a223aedebddd6048357ce76c9936905f693ad2b5e8d68b3b66b07040
dae855ddd113f3ed98d1e3b9fb5ea9bcfa0a5f2f496926996b37854cf62a758a
null
[ "LICENSE" ]
0
2.4
moordyn
2.6.1
Python wrapper for MoorDyn library
MoorDyn v2 ========== **This repository is for MoorDyn-C.** MoorDyn is a lumped-mass model for simulating the dynamics of mooring systems connected to floating offshore structures. As of 2022 it is available under the BSD 3-Clause license. Read the docs here: [moordyn.readthedocs.io](https://moordyn.readthedocs.io/en/latest/) Example uses and instructions here: [Examples](https://github.com/FloatingArrayDesign/MoorDyn/tree/dev/example) It accounts for internal axial stiffness and damping forces, weight and buoyancy forces, hydrodynamic forces from Morison's equation (assuming calm water so far), and vertical spring-damper forces from contact with the seabed. MoorDyn's input file format is based on that of [MAP](https://www.nrel.gov/wind/nwtc/map-plus-plus.html). The model supports arbitrary line interconnections, clump weights and floats, different line properties, and six-degree-of-freedom rods and bodies. MoorDyn is implemented both in Fortran and in C++. The Fortran version of MoorDyn (MoorDyn-F) is a core module in [OpenFAST](https://github.com/OpenFAST/openfast) and can be used as part of an OpenFAST or FAST.Farm simulation, or in standalone form. The C++ version of MoorDyn (MoorDyn-C) is more adaptable to different use cases and couplings. It can be compiled as a dynamically linked library or wrapped for use in Python (as a module), Fortran, or Matlab. It features simpler functions for easy coupling with models or scripts coded in C/C++, Fortran, Matlab/Simulink, etc., including a coupling with [WEC-Sim](https://wec-sim.github.io/WEC-Sim/master/index.html). Users should take care to ensure their input file format matches the version of MoorDyn they are using. Details on the input file differences can be found in the [documentation](https://moordyn.readthedocs.io/en/latest/inputs.html). 
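As a sketch of the Python coupling, the wrapper's call sequence might look like the following. This is a hypothetical outline, not an official example: the `Create`/`Init`/`Step`/`Close` names follow the MoorDyn v2 documentation but should be checked against your installed version, and the input file path and motion vectors are placeholders.

```python
# Hypothetical coupling sketch: drives MoorDyn-C through its Python wrapper.
# The Create/Init/Step/Close call sequence follows the MoorDyn v2 docs but
# may differ between versions; the input file and DOF vectors are placeholders.
try:
    import moordyn
except ImportError:  # the wrapper needs the compiled MoorDyn-C library
    moordyn = None

def run_mooring_step(input_file, x, v, t=0.0, dt=0.01):
    """One coupled time step: returns mooring forces on the coupled DOFs."""
    if moordyn is None:
        raise RuntimeError("moordyn is not installed")
    system = moordyn.Create(input_file)   # parse the input file, build the system
    try:
        moordyn.Init(system, x, v)        # initial positions/velocities of coupled DOFs
        forces = moordyn.Step(system, x, v, t, dt)  # advance the dynamics by dt
    finally:
        moordyn.Close(system)             # free the C++ side
    return forces
```

In a real coupling, `x` and `v` would come from the floating-body solver at each step, and the returned forces would be fed back to it.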
Both forms of MoorDyn feature the same underlying mooring model, use the same input and output conventions, and are being updated and improved in parallel. They follow the same version numbering, with a "C" or "F" suffix for differentiation. Further information on MoorDyn can be found on the [documentation site](https://moordyn.readthedocs.io/en/latest/). MoorDyn-F is available in the [OpenFAST repository](https://github.com/OpenFAST/openfast/tree/main/modules/moordyn). MoorDyn-C is available in this repository with the following three maintained branches. The master branch represents the most recent release of MoorDyn-C. The dev branch contains new features currently in development. The v1 branch is the now deprecated version one of MoorDyn-C. ## Acknowledgments [National Renewable Energy Laboratory (NREL)](https://www.nrel.gov/): - Matt Hall - Ryan Davies - Andy Platt - Stein Housner - Lu Wang - Jason Jonkman [CoreMarine](https://www.core-marine.com/) [MoorDyn-C v2]: - Jose Luis Cercos-Pita - Aymeric Devulder - Elena Gridasova [Kelson Marine](https://kelsonmarine.com) [MoorDyn-C v2]: - [David Joseph Anderson](https://davidjosephanderson.com/) - [Alex Kinley](https://github.com/AlexWKinley)
text/markdown
null
Jose Luis Cercos-Pita <jlc@core-marine.com>
null
null
null
null
[ "Programming Language :: Python :: 3", "License :: OSI Approved :: BSD License", "Operating System :: OS Independent" ]
[]
null
null
>=3.7
[]
[]
[]
[]
[]
[]
[]
[ "Homepage, https://github.com/FloatingArrayDesign/MoorDyn", "Bug Tracker, https://github.com/FloatingArrayDesign/MoorDyn/issues", "Documentation, https://moordyn.readthedocs.io" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:40:16.975172
moordyn-2.6.1.tar.gz
1,180,236
38/30/c2f670748f0c3e4acd4e046f03695d639dbf537796c25f5495706910d680/moordyn-2.6.1.tar.gz
source
sdist
null
false
8f1dbc12e99ff40dcfc09fcea1d4582d
fab360adb98d50f5f3a290f3f2484e69dfa2d45d959ef056bc3a734b1019dc68
3830c2f670748f0c3e4acd4e046f03695d639dbf537796c25f5495706910d680
null
[ "LICENSE.txt" ]
5,814
2.4
traccia
0.1.12
Production-ready distributed tracing SDK for AI agents and LLM applications
# Traccia **Production-ready distributed tracing for AI agents and LLM applications** Traccia is a lightweight, high-performance Python SDK for observability and tracing of AI agents, LLM applications, and complex distributed systems. Built on OpenTelemetry standards with specialized instrumentation for AI workloads. [Traccia](https://pypi.org/project/traccia/) is available on PyPI. ## ✨ Features - **🔍 Automatic Instrumentation**: Auto-patch OpenAI, Anthropic, requests, and HTTP libraries - **🤖 Framework Integrations**: Support for LangChain, CrewAI, and OpenAI Agents SDK - **📊 LLM-Aware Tracing**: Track tokens, costs, prompts, and completions automatically - **📈 OpenTelemetry Metrics**: Emit OTEL-compliant metrics for accurate cost/token tracking (independent of sampling) - **⚡ Zero-Config Start**: Simple `init()` call with automatic config discovery - **🎯 Decorator-Based**: Trace any function with `@observe` decorator - **🔧 Multiple Exporters**: OTLP (compatible with Grafana Tempo, Jaeger, Zipkin), Console, or File - **🛡️ Production-Ready**: Rate limiting, error handling, config validation, robust flushing - **📝 Type-Safe**: Full Pydantic validation for configuration - **🚀 High Performance**: Efficient batching, async support, minimal overhead - **🔐 Secure**: No secrets in logs, configurable data truncation --- ## 🚀 Quick Start ### Installation ```bash pip install traccia ``` ### Basic Usage ```python from traccia import init, observe # Initialize (auto-loads from traccia.toml if present) init() # Trace any function @observe() def my_function(x, y): return x + y # That's it! 
Traces are automatically created and exported result = my_function(2, 3) ``` ### With LLM Calls ```python from traccia import init, observe from openai import OpenAI init() # Auto-patches OpenAI client = OpenAI() @observe(as_type="llm") def generate_text(prompt: str) -> str: response = client.chat.completions.create( model="gpt-4", messages=[{"role": "user", "content": prompt}] ) return response.choices[0].message.content # Automatically tracks: model, tokens, cost, prompt, completion, latency text = generate_text("Write a haiku about Python") ``` ### LangChain Create a callback handler and pass it to `config={"callbacks": [traccia_handler]}`. Install the optional extra: `pip install traccia[langchain]`. ```python from traccia import init from traccia.integrations.langchain import CallbackHandler # or TracciaCallbackHandler from langchain_openai import ChatOpenAI init() # Create Traccia handler (no args) traccia_handler = CallbackHandler() # Use with any LangChain runnable llm = ChatOpenAI(model="gpt-4o-mini") result = llm.invoke( "Tell me a joke", config={"callbacks": [traccia_handler]} ) ``` Spans for LLM/chat model runs are created automatically with the same attributes as direct OpenAI instrumentation (model, prompt, usage, cost). **Note:** `pip install traccia[langchain]` installs traccia plus `langchain-core`; you need this extra to use the callback handler. If you already have `langchain-core` (e.g. from `langchain` or `langchain-openai`), base `pip install traccia` may be enough at runtime, but `traccia[langchain]` is the supported way to get a compatible dependency. ### OpenAI Agents SDK Traccia **automatically** detects and instruments the OpenAI Agents SDK when installed. 
No extra code needed: ```python from traccia import init from agents import Agent, Runner init() # Automatically enables Agents SDK tracing agent = Agent( name="Assistant", instructions="You are a helpful assistant" ) result = Runner.run_sync(agent, "Write a haiku about recursion") ``` **Configuration**: Auto-enabled by default when `openai-agents` is installed. To disable: ```python init(openai_agents=False) # Explicit parameter # OR set environment variable: TRACCIA_OPENAI_AGENTS=false # OR in traccia.toml under [instrumentation]: openai_agents = false ``` **Compatibility**: If you have `openai-agents` installed but don't use it (e.g., using LangChain or pure OpenAI instead), the integration is registered but never invoked—no overhead or extra spans. ### CrewAI Traccia **automatically** instruments [CrewAI](https://docs.crewai.com/) when it is installed in your environment. ```python from traccia import init from crewai import Agent, Task, Crew, Process init() # Auto-enables CrewAI tracing when CrewAI is installed researcher = Agent(role="Research Analyst", goal="Research a topic", llm="gpt-4o-mini") task = Task(description="Research Shawn Michaels", agent=researcher) crew = Crew(agents=[researcher], tasks=[task], process=Process.sequential, verbose=True) result = crew.kickoff() ``` Traccia will create spans for the crew (`crewai.crew.kickoff`), each task (`crewai.task.*`), agents (`crewai.agent.*`), and underlying LLM calls, which nest under the existing OpenAI spans. **Configuration**: Auto-enabled by default when `crewai` is installed. 
To disable: ```python init(crewai=False) # Explicit parameter # OR set environment variable: TRACCIA_CREWAI=false # OR in traccia.toml under [instrumentation]: crewai = false ``` --- ## 📖 Configuration ### Configuration File Create a `traccia.toml` file in your project root: ```bash traccia config init ``` This creates a template config file: ```toml [tracing] # API key (optional - for future Traccia UI, not needed for OTLP backends) api_key = "" # Endpoint URL for OTLP trace ingestion (default: Traccia platform) # For local OTLP backends use e.g. endpoint = "http://localhost:4318/v1/traces" endpoint = "https://api.traccia.ai/v2/traces" sample_rate = 1.0 # 0.0 to 1.0 auto_start_trace = true # Auto-start root trace on init auto_trace_name = "root" # Name for auto-started trace use_otlp = true # Use OTLP exporter # service_name = "my-app" # Optional service name [exporters] # Only enable ONE exporter at a time enable_console = false # Print traces to console enable_file = false # Write traces to file file_exporter_path = "traces.jsonl" reset_trace_file = false # Reset file on initialization [instrumentation] enable_patching = true # Auto-patch libraries (OpenAI, Anthropic, requests) enable_token_counting = true # Count tokens for LLM calls enable_costs = true # Calculate costs openai_agents = true # Auto-enable OpenAI Agents SDK integration auto_instrument_tools = false # Auto-instrument tool calls (experimental) max_tool_spans = 100 # Max tool spans to create max_span_depth = 10 # Max nested span depth [rate_limiting] # Optional: limit spans per second # max_spans_per_second = 100.0 max_queue_size = 5000 # Max buffered spans max_block_ms = 100 # Max ms to block before dropping max_export_batch_size = 512 # Spans per export batch schedule_delay_millis = 5000 # Delay between batches [metrics] enable_metrics = true # Enable OpenTelemetry metrics # metrics_endpoint = "" # Defaults to {traces_base}/v2/metrics metrics_sample_rate = 1.0 # Metrics sampling rate (1.0 = 100%) 
[runtime] # Optional runtime metadata (agent identity: prefer init(agent_id=..., agent_name=..., env=...) or TRACCIA_* env) # session_id = "" # user_id = "" # tenant_id = "" # project_id = "" # agent_id = "" # Single-agent: set in code or TRACCIA_AGENT_ID # agent_name = "" # env = "" # e.g. production, staging, dev [logging] debug = false # Enable debug logging enable_span_logging = false # Enable span-level logging [advanced] # attr_truncation_limit = 1000 # Max attribute value length ``` ### Default endpoint If you do not set `endpoint` (in config, environment, or when calling `init()` / `start_tracing()`), the SDK uses the **Traccia platform** by default (`https://api.traccia.ai/v2/traces`). You can override it to send traces to your own OTLP-compatible backend. The default is defined in `traccia.config`: `DEFAULT_OTLP_TRACE_ENDPOINT`. The alias `DEFAULT_ENDPOINT` is kept for backward compatibility (same value). ### OTLP Backend Compatibility Traccia is fully OTLP-compatible and works with: - **Grafana Tempo** - `http://tempo:4318/v1/traces` - **Jaeger** - `http://jaeger:4318/v1/traces` - **Zipkin** - Configure via OTLP endpoint - **SigNoz** - Self-hosted observability platform - **Traccia Cloud** - Coming soon (will require API key) ### Environment Variables All config parameters can be set via environment variables with the `TRACCIA_` prefix: **Tracing**: `TRACCIA_API_KEY`, `TRACCIA_ENDPOINT`, `TRACCIA_SAMPLE_RATE`, `TRACCIA_AUTO_START_TRACE`, `TRACCIA_AUTO_TRACE_NAME`, `TRACCIA_USE_OTLP`, `TRACCIA_SERVICE_NAME` **Exporters**: `TRACCIA_ENABLE_CONSOLE`, `TRACCIA_ENABLE_FILE`, `TRACCIA_FILE_PATH`, `TRACCIA_RESET_TRACE_FILE` **Instrumentation**: `TRACCIA_ENABLE_PATCHING`, `TRACCIA_ENABLE_TOKEN_COUNTING`, `TRACCIA_ENABLE_COSTS`, `TRACCIA_AUTO_INSTRUMENT_TOOLS`, `TRACCIA_MAX_TOOL_SPANS`, `TRACCIA_MAX_SPAN_DEPTH` **Rate Limiting**: `TRACCIA_MAX_SPANS_PER_SECOND`, `TRACCIA_MAX_QUEUE_SIZE`, `TRACCIA_MAX_BLOCK_MS`, `TRACCIA_MAX_EXPORT_BATCH_SIZE`, 
`TRACCIA_SCHEDULE_DELAY_MILLIS` **Runtime**: `TRACCIA_SESSION_ID`, `TRACCIA_USER_ID`, `TRACCIA_TENANT_ID`, `TRACCIA_PROJECT_ID`, `TRACCIA_AGENT_ID`, `TRACCIA_AGENT_NAME`, `TRACCIA_ENV` **Logging**: `TRACCIA_DEBUG`, `TRACCIA_ENABLE_SPAN_LOGGING` **Advanced**: `TRACCIA_ATTR_TRUNCATION_LIMIT` **Priority**: Explicit parameters > Environment variables > Config file > Defaults ### Programmatic Configuration ```python from traccia import init # Override config programmatically (including agent identity for single-agent services) init( endpoint="http://tempo:4318/v1/traces", sample_rate=0.5, enable_costs=True, max_spans_per_second=100.0, agent_id="my-agent", agent_name="My Agent", env="production", ) ``` --- ## 🎯 Usage Guide ### The `@observe` Decorator The `@observe` decorator is the primary way to instrument your code: ```python from traccia import observe # Basic usage @observe() def process_data(data): return transform(data) # Custom span name @observe(name="data_pipeline") def process_data(data): return transform(data) # Add custom attributes @observe(attributes={"version": "2.0", "env": "prod"}) def process_data(data): return transform(data) # Specify span type @observe(as_type="llm") # "span", "llm", "tool" def call_llm(): pass # Skip capturing specific arguments @observe(skip_args=["password", "secret"]) def authenticate(username, password): pass # Skip capturing result (for large returns) @observe(skip_result=True) def fetch_large_dataset(): return huge_data ``` **Available Parameters**: - `name` (str, optional): Custom span name (defaults to function name) - `attributes` (dict, optional): Initial span attributes - `as_type` (str): Span type - `"span"`, `"llm"`, or `"tool"` - `skip_args` (list, optional): List of argument names to skip capturing - `skip_result` (bool): Skip capturing the return value ### Async Functions `@observe` works seamlessly with async functions: ```python @observe() async def async_task(x): await asyncio.sleep(1) return x * 2 result = await 
async_task(5) ``` ### Manual Span Creation For more control, create spans manually: ```python from traccia import get_tracer, span # Using convenience function with span("operation_name") as s: s.set_attribute("key", "value") s.add_event("checkpoint_reached") do_work() # Using tracer directly tracer = get_tracer("my_service") with tracer.start_as_current_span("operation") as s: s.set_attribute("user_id", 123) do_work() ``` ### Error Handling Traccia automatically captures and records errors: ```python @observe() def failing_function(): raise ValueError("Something went wrong") # Span will contain: # - error.type: "ValueError" # - error.message: "Something went wrong" # - error.stack_trace: (truncated stack trace) # - span status: ERROR ``` ### Nested Spans Spans are automatically nested based on call hierarchy: ```python @observe() def parent_operation(): child_operation() return "done" @observe() def child_operation(): grandchild_operation() @observe() def grandchild_operation(): pass # Creates nested span hierarchy: # parent_operation # └── child_operation # └── grandchild_operation ``` --- ## 🛠️ CLI Tools Traccia includes a powerful CLI for configuration and diagnostics: ### `traccia config init` Create a new `traccia.toml` configuration file: ```bash traccia config init traccia config init --force # Overwrite existing ``` ### `traccia doctor` Validate configuration and diagnose issues: ```bash traccia doctor # Output: # 🩺 Running Traccia configuration diagnostics... 
# # ✅ Found config file: ./traccia.toml # ✅ Configuration is valid # # 📊 Configuration summary: # • API Key: ❌ Not set (optional) # • Endpoint: https://api.traccia.ai/v2/traces # • Sample Rate: 1.0 # • OTLP Exporter: ✅ Enabled ``` ### `traccia check` Test connectivity to your exporter endpoint: ```bash traccia check traccia check --endpoint http://tempo:4318/v1/traces ``` --- ## 🎨 Advanced Features ### Rate Limiting Protect your infrastructure with built-in rate limiting: ```toml [rate_limiting] max_spans_per_second = 100.0 # Limit to 100 spans/sec max_queue_size = 5000 # Max buffered spans max_block_ms = 100 # Block up to 100ms before dropping ``` **Behavior**: 1. Try to acquire capacity immediately 2. If unavailable, block for up to `max_block_ms` 3. If still unavailable, drop span and log warning When spans are dropped due to rate limiting, warnings are logged to help you monitor and adjust limits. ### Sampling Control trace volume with sampling: ```python # Sample 10% of traces init(sample_rate=0.1) # Sampling is applied at trace creation time # Traces are either fully included or fully excluded ``` ### Token Counting & Cost Calculation Automatic for supported LLM providers (OpenAI, Anthropic): ```python @observe(as_type="llm") def call_openai(prompt): response = client.chat.completions.create( model="gpt-4", messages=[{"role": "user", "content": prompt}] ) return response.choices[0].message.content # Span automatically includes: # - llm.token.prompt_tokens # - llm.token.completion_tokens # - llm.token.total_tokens # - llm.cost.total (in USD) ``` ### Metrics Traccia emits OTEL-compliant metrics for accurate cost and token tracking, independent of trace sampling. #### Why Metrics? With trace sampling (e.g., `sample_rate=0.1`), only 10% of traces are exported. Cost calculated from traces will be **10x underestimated**. Metrics solve this by recording data for **every** LLM call, regardless of sampling. 
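The underestimation described above is easy to see with a toy calculation — hypothetical numbers, pure arithmetic, no Traccia calls (costs are kept in integer cents so the arithmetic is exact):

```python
# Hypothetical illustration of cost underestimation under trace sampling:
# 1,000 LLM calls at $0.01 (1 cent) each, with only 10% of traces exported.
calls = 1000
cost_cents = 1           # cost per call, in cents
sampled = calls // 10    # sample_rate = 0.1 -> 10% of traces exported

true_cost = calls * cost_cents         # what you actually spent: 1000 cents
trace_estimate = sampled * cost_cents  # cost visible in sampled traces: 100 cents
metric_cost = calls * cost_cents       # metrics record every call: 1000 cents

print(true_cost, trace_estimate, metric_cost)  # → 1000 100 1000
```

The trace-derived figure is 10x too low, while the metric-derived figure matches actual spend — which is the point of emitting metrics independently of sampling.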
#### Default Metrics Traccia automatically emits these metrics: | Metric | Type | Unit | Description | |--------|------|------|-------------| | `gen_ai.client.token.usage` | Histogram | `{token}` | Input/output tokens per call | | `gen_ai.client.operation.duration` | Histogram | `s` | LLM operation duration | | `gen_ai.client.operation.cost` | Histogram | `usd` | Cost per call (USD) | | `gen_ai.client.completions.exceptions` | Counter | `1` | Exception count | | `gen_ai.agent.runs` | Counter | `1` | Agent runs (CrewAI, OpenAI Agents) | | `gen_ai.agent.turns` | Counter | `1` | Agent turns | | `gen_ai.agent.execution_time` | Histogram | `s` | Agent execution time | **Attributes**: `gen_ai.system` (openai, anthropic), `gen_ai.request.model`, `gen_ai.agent.id`, `gen_ai.agent.name` #### Configuration ```python from traccia import init init( enable_metrics=True, # Default: True metrics_endpoint="https://your-backend.com/v2/metrics", metrics_sample_rate=1.0, # Default: 1.0 (100%) ) ``` Or via `traccia.toml`: ```toml [metrics] enable_metrics = true metrics_endpoint = "https://your-backend.com/v2/metrics" metrics_sample_rate = 1.0 ``` Or via environment variables: ```bash export TRACCIA_ENABLE_METRICS=true export TRACCIA_METRICS_ENDPOINT=https://your-backend.com/v2/metrics export TRACCIA_METRICS_SAMPLE_RATE=1.0 ``` #### Custom Metrics Record your own metrics: ```python from traccia.metrics import record_counter, record_histogram # Record a counter record_counter("my_custom_events", 1, {"event_type": "user_action"}) # Record a histogram record_histogram("my_custom_latency", 0.123, {"service": "api"}, unit="s") ``` #### Agent Metrics vs. Plain LLM Calls Agent-level metrics (such as `gen_ai.agent.runs` and `gen_ai.agent.execution_time`) are only emitted when Traccia can see a real **agent lifecycle** (for example, CrewAI crews or OpenAI Agents SDK runs). 
For plain OpenAI/Anthropic calls and most simple LangChain usages, you will still get full LLM metrics (`gen_ai.client.*`), but no agent metrics unless you build an explicit agent abstraction on top. --- ## 🔧 Troubleshooting ### Enable Debug Logging ```python import logging logging.basicConfig(level=logging.DEBUG) # Or via config init(debug=True) # Or via env var # TRACCIA_DEBUG=1 python your_script.py ``` ### Common Issues #### **Traces not appearing** 1. Check connectivity: `traccia check` 2. Validate config: `traccia doctor` 3. Enable debug logging 4. Verify endpoint is correct and accessible #### **High memory usage** - Reduce `max_queue_size` in rate limiting config - Lower `sample_rate` to reduce volume - Enable rate limiting with `max_spans_per_second` #### **Spans being dropped** - Check rate limiter logs for warnings - Increase `max_spans_per_second` if set - Increase `max_queue_size` if spans are queued - Check `traccia doctor` output --- ## 📚 API Reference ### Core Functions #### `init(**kwargs) -> TracerProvider` Initialize the Traccia SDK. 
**Parameters**: - `endpoint` (str, optional): OTLP endpoint URL (default: `config.DEFAULT_OTLP_TRACE_ENDPOINT` — Traccia platform) - `api_key` (str, optional): API key (optional, for future Traccia UI) - `sample_rate` (float, optional): Sampling rate (0.0-1.0) - `auto_start_trace` (bool, optional): Auto-start root trace - `config_file` (str, optional): Path to config file - `use_otlp` (bool, optional): Use OTLP exporter - `enable_console` (bool, optional): Enable console exporter - `enable_file` (bool, optional): Enable file exporter - `enable_patching` (bool, optional): Auto-patch libraries - `enable_token_counting` (bool, optional): Count tokens - `enable_costs` (bool, optional): Calculate costs - `enable_metrics` (bool, optional): Enable OTEL metrics (default: True) - `metrics_endpoint` (str, optional): Metrics endpoint - `metrics_sample_rate` (float, optional): Metrics sampling (default: 1.0) - `max_spans_per_second` (float, optional): Rate limit - `**kwargs`: Any other config parameter **Returns**: TracerProvider instance #### `stop_tracing(flush_timeout: float = 1.0) -> None` Stop tracing and flush pending spans. **Parameters**: - `flush_timeout` (float): Max seconds to wait for flush #### `get_tracer(name: str = "default") -> Tracer` Get a tracer instance. **Parameters**: - `name` (str): Tracer name (typically module/service name) **Returns**: Tracer instance #### `span(name: str, attributes: dict = None) -> Span` Create a span context manager. **Parameters**: - `name` (str): Span name - `attributes` (dict, optional): Initial attributes **Returns**: Span context manager ### Decorator #### `@observe(name=None, *, attributes=None, tags=None, as_type="span", skip_args=None, skip_result=False)` Decorate a function to create spans automatically. 
**Parameters**: - `name` (str, optional): Span name (default: function name) - `attributes` (dict, optional): Initial attributes - `tags` (list[str], optional): User-defined identifiers for the observed method - `as_type` (str): Span type (`"span"`, `"llm"`, `"tool"`) - `skip_args` (list, optional): Arguments to skip capturing - `skip_result` (bool): Skip capturing return value ### Configuration #### `load_config(config_file=None, overrides=None) -> TracciaConfig` Load and validate configuration. **Parameters**: - `config_file` (str, optional): Path to config file - `overrides` (dict, optional): Override values **Returns**: Validated TracciaConfig instance **Raises**: `ConfigError` if invalid #### `validate_config(config_file=None, overrides=None) -> tuple[bool, str, TracciaConfig | None]` Validate configuration without loading. **Returns**: Tuple of (is_valid, message, config_or_none) --- ## 🏗️ Architecture ### Data Flow ``` Application Code (@observe) ↓ Span Creation ↓ Processors (token counting, cost, enrichment) ↓ Rate Limiter (optional) ↓ Batch Processor (buffering) ↓ Exporter (OTLP/Console/File) ↓ Backend (Grafana Tempo / Jaeger / Zipkin / etc.) ``` ### Instrumentation vs Integrations - **`traccia.instrumentation.*`**: Infrastructure and vendor instrumentation. - HTTP client/server helpers (including FastAPI middleware). - Vendor SDK hooks and monkey patching (e.g., OpenAI, Anthropic, `requests`). - Decorators and utilities used for auto-instrumenting arbitrary functions. - **`traccia.integrations.*`**: AI/agent framework integrations. - Adapters that plug into higher-level frameworks via their official extension points (e.g., LangChain callbacks). - Work at the level of chains, tools, agents, and workflows rather than raw HTTP or SDK calls. --- ## 🤝 Contributing Contributions are welcome! Whether it's bug fixes, new features, documentation improvements, or examples - we appreciate your help. ### How to Contribute 1. **Fork the repository** 2. 
**Create a feature branch**: `git checkout -b feature/amazing-feature` 3. **Make your changes** and add tests 4. **Run tests**: `pytest traccia/tests/` 5. **Lint your code**: `ruff check traccia/` 6. **Commit**: `git commit -m "Add amazing feature"` 7. **Push**: `git push origin feature/amazing-feature` 8. **Open a Pull Request** ### Development Setup ```bash # Clone the repository (Python SDK) git clone https://github.com/traccia-ai/traccia-py.git cd traccia-py # Create virtual environment python -m venv venv source venv/bin/activate # On Windows: venv\Scripts\activate # Install in editable mode with dev dependencies pip install -e ".[dev]" # Run tests pytest traccia/tests/ -v # Run with coverage pytest traccia/tests/ --cov=traccia --cov-report=html ``` ### Code Style - Follow PEP 8 - Use type hints where appropriate - Add docstrings for public APIs - Write tests for new features - Keep PRs focused and atomic ### Areas We'd Love Help With - **Integrations**: Add support for more LLM providers (Cohere, AI21, local models) - **Backends**: Test and document setup with different OTLP backends - **Examples**: Real-world examples of agent instrumentation - **Documentation**: Tutorials, guides, video walkthroughs - **Performance**: Optimize hot paths, reduce overhead - **Testing**: Improve test coverage, add integration tests --- ## 📄 License Apache License 2.0 - see [LICENSE](LICENSE) for full terms and conditions. --- ## 🙏 Acknowledgments Built with: - [OpenTelemetry](https://opentelemetry.io/) - Vendor-neutral observability framework - [Pydantic](https://pydantic.dev/) - Data validation - [tiktoken](https://github.com/openai/tiktoken) - Token counting Inspired by observability tools in the ecosystem and designed to work seamlessly with the OTLP standard. 
--- ## 📞 Support & Community - **Issues**: [GitHub Issues](https://github.com/traccia-ai/traccia-py/issues) - Report bugs or request features - **Discussions**: [GitHub Discussions](https://github.com/traccia-ai/traccia-py/discussions) - Ask questions, share ideas --- **Made with ❤️ for the AI agent community**
text/markdown
null
null
null
null
Apache-2.0
tracing, observability, opentelemetry, ai-agents, llm, distributed-tracing, monitoring, telemetry
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: Apache Software License", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Topic :: Software Development :: Libraries :: Python Modules", "Topic :: System :: Monitoring", "Topic :: Software Development :: Testing" ]
[]
null
null
>=3.9
[]
[]
[]
[ "tiktoken>=0.7.0", "opentelemetry-api>=1.20.0", "opentelemetry-sdk>=1.20.0", "opentelemetry-exporter-otlp-proto-http>=1.20.0", "opentelemetry-semantic-conventions>=0.40b0", "tomli>=2.0.0; python_version < \"3.11\"", "toml>=0.10.0", "pydantic>=2.0.0", "langchain-core>=0.1.0; extra == \"langchain\"" ]
[]
[]
[]
[ "Homepage, https://github.com/traccia-ai/traccia-py", "Documentation, https://github.com/traccia-ai/traccia-py#readme", "Repository, https://github.com/traccia-ai/traccia-py", "Issues, https://github.com/traccia-ai/traccia-py/issues", "Bug Tracker, https://github.com/traccia-ai/traccia-py/issues" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:39:46.407169
traccia-0.1.12.tar.gz
100,839
08/ff/280465622de053849616905d62bf84493e40ca1101634b9bad46d8474689/traccia-0.1.12.tar.gz
source
sdist
null
false
daa1f6d452024c6f550c0e49680742e3
703b5e5d95141681ae55d2f056425b10d6362d6df21308b35608603bd5d645d8
08ff280465622de053849616905d62bf84493e40ca1101634b9bad46d8474689
null
[ "LICENSE" ]
229
2.4
surfa-ingest
0.1.0
Official Python SDK for Surfa Analytics - Ingest live traffic events
# Surfa Ingest SDK Official Python SDK for ingesting live traffic events to Surfa Analytics. ## Installation ```bash pip install surfa-ingest ``` ## Quick Start ```python from surfa_ingest import SurfaClient # Initialize client with your ingest key client = SurfaClient(ingest_key="sk_live_your_key_here") # Track events client.track({ "kind": "tool", "subtype": "call_started", "tool_name": "search_web", "args": {"query": "AI news"} }) client.track({ "kind": "tool", "subtype": "call_completed", "tool_name": "search_web", "status": "success", "latency_ms": 234 }) # Flush events to API client.flush() ``` ## Context Manager (Recommended) Use the context manager to automatically track session lifecycle: ```python from surfa_ingest import SurfaClient with SurfaClient(ingest_key="sk_live_your_key_here") as client: # Session automatically started client.track({ "kind": "tool", "subtype": "call_started", "tool_name": "search_web" }) # Session automatically ended and events flushed on exit ``` ## Configuration ```python client = SurfaClient( ingest_key="sk_live_your_key_here", api_url="https://api.surfa.dev", # Default: http://localhost:3000 flush_at=25, # Auto-flush after 25 events timeout_s=10, # HTTP timeout in seconds ) ``` ## Set Runtime Metadata Track which AI runtime is being used: ```python client = SurfaClient(ingest_key="sk_live_...") client.set_runtime( provider="anthropic", model="claude-sonnet-4-5", mode="messages" ) ``` ## Event Types ### Tool Events ```python # Tool call started client.track({ "kind": "tool", "subtype": "call_started", "tool_name": "search_web", "direction": "request", "args": {"query": "Python tutorials"} }) # Tool call completed client.track({ "kind": "tool", "subtype": "call_completed", "tool_name": "search_web", "direction": "response", "status": "success", "latency_ms": 234, "results": [{"title": "Learn Python", "url": "..."}] }) ``` ### Session Events ```python # Session started client.session_started() # Session ended 
client.session_ended() ``` ### Runtime Events ```python # LLM request client.track({ "kind": "runtime", "subtype": "llm_request", "direction": "outbound", "messages": [{"role": "user", "content": "Hello"}], "temperature": 0.7 }) ``` ## Event Fields ### Required Fields - `kind` (str): Event type (e.g., "tool", "session", "runtime") ### Optional Fields - `subtype` (str): Event subtype (e.g., "call_started", "session_ended") - `tool_name` (str): Name of the tool - `status` (str): Status (e.g., "success", "error") - `direction` (str): Direction (e.g., "request", "response") - `method` (str): HTTP method or similar - `correlation_id` (str): Correlation ID for pairing events - `span_parent_id` (str): Parent span ID for tracing - `latency_ms` (int): Latency in milliseconds - `ts` (str): Timestamp (ISO 8601 format, auto-generated if not provided) - Any additional fields will be included in the event payload ## Auto-Flush Events are automatically flushed when: 1. Buffer reaches `flush_at` events (default: 25) 2. Context manager exits 3. `flush()` is called explicitly ## Error Handling ```python from surfa_ingest import SurfaClient, SurfaConfigError, SurfaValidationError try: client = SurfaClient(ingest_key="invalid_key") except SurfaConfigError as e: print(f"Configuration error: {e}") try: client.track({"invalid": "event"}) # Missing 'kind' except SurfaValidationError as e: print(f"Validation error: {e}") ``` ## Logging The SDK uses Python's standard logging module: ```python import logging logging.basicConfig(level=logging.DEBUG) logger = logging.getLogger("surfa_ingest") ``` ## Development Status **Current Version: 0.1.0 (Alpha)** This SDK is in active development. The API may change in future versions. 
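The `correlation_id` pairing described under Event Fields above can be sketched without touching the network. This is a hypothetical helper (not part of the SDK); the event dicts follow the field names documented above, and in real use each one would be passed to `client.track(...)`:

```python
import uuid

# Hypothetical sketch of pairing call_started/call_completed events with a
# shared correlation_id, per the Event Fields section. Not part of the SDK.
def paired_tool_events(tool_name, args, status, latency_ms):
    cid = str(uuid.uuid4())  # same id on both events lets the backend pair them
    started = {"kind": "tool", "subtype": "call_started",
               "tool_name": tool_name, "direction": "request",
               "correlation_id": cid, "args": args}
    completed = {"kind": "tool", "subtype": "call_completed",
                 "tool_name": tool_name, "direction": "response",
                 "correlation_id": cid, "status": status,
                 "latency_ms": latency_ms}
    return started, completed

start, done = paired_tool_events("search_web", {"query": "AI news"}, "success", 234)
# start["correlation_id"] == done["correlation_id"], so the backend can join them
```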
### Implemented - ✅ Client initialization - ✅ Event buffering - ✅ Session management - ✅ Context manager support - ✅ Event validation - ✅ Runtime metadata ### Coming Soon - 🔜 HTTP API integration - 🔜 Automatic retry logic - 🔜 Background flushing - 🔜 Async support ## License MIT ## Support - Documentation: https://docs.surfa.dev - Issues: https://github.com/yourusername/surfa/issues - Email: support@surfa.dev
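The auto-flush rules above can be sketched with a minimal buffering client. This is an illustrative sketch of the behavior, not the SDK's actual implementation; the `send` callback stands in for the real HTTP upload:

```python
from typing import Callable

class BufferedClient:
    """Illustrative sketch of flush_at-style event buffering (not the real SDK)."""

    def __init__(self, send: Callable[[list], None], flush_at: int = 25):
        self.send = send          # stand-in for the HTTP upload
        self.flush_at = flush_at
        self.buffer = []

    def track(self, event: dict) -> None:
        if "kind" not in event:
            raise ValueError("event requires a 'kind' field")
        self.buffer.append(event)
        if len(self.buffer) >= self.flush_at:  # rule 1: buffer full
            self.flush()

    def flush(self) -> None:                   # rule 3: explicit flush
        if self.buffer:
            self.send(self.buffer)
            self.buffer = []

    def __enter__(self):
        return self

    def __exit__(self, *exc):                  # rule 2: context-manager exit
        self.flush()

sent = []
with BufferedClient(send=sent.append, flush_at=2) as c:
    c.track({"kind": "tool", "subtype": "call_started"})
    c.track({"kind": "tool", "subtype": "call_completed"})   # triggers auto-flush
    c.track({"kind": "session", "subtype": "session_ended"}) # flushed on exit

print([len(batch) for batch in sent])  # → [2, 1]
```

The same three flush triggers apply to the real client; only the transport differs.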
text/markdown
null
Surfa Team <support@surfa.dev>
null
null
MIT
analytics, observability, mcp, ai, llm, monitoring
[ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Topic :: Software Development :: Libraries :: Python Modules", "Topic :: System :: Monitoring" ]
[]
null
null
>=3.8
[]
[]
[]
[ "requests>=2.31.0", "pytest>=7.0.0; extra == \"dev\"", "pytest-cov>=4.0.0; extra == \"dev\"", "black>=23.0.0; extra == \"dev\"", "mypy>=1.0.0; extra == \"dev\"", "ruff>=0.1.0; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://surfa.dev", "Documentation, https://docs.surfa.dev", "Repository, https://github.com/yourusername/surfa", "Issues, https://github.com/yourusername/surfa/issues" ]
twine/6.2.0 CPython/3.9.6
2026-02-20T10:39:28.893820
surfa_ingest-0.1.0.tar.gz
11,762
fe/2f/91d21c4eb3aeb42f86fb6c921724ae53bdb10d7111e06ddbf8ab92ba3577/surfa_ingest-0.1.0.tar.gz
source
sdist
null
false
59c8ebaa284572da590d3e9ee59c9681
1b95f05f5af24d4590ab8868ad7b2ba6a54da3cfdf4231f154a4d7a463cab0d4
fe2f91d21c4eb3aeb42f86fb6c921724ae53bdb10d7111e06ddbf8ab92ba3577
null
[]
241
2.4
evalview
0.3.0
Pytest-style testing framework for AI agents — LangGraph, CrewAI, OpenAI, Anthropic, Claude
# EvalView — Proof that your agent still works.

> You changed a prompt. Swapped a model. Updated a tool.
> Did anything break? **Run EvalView. Know for sure.**

<p align="center">
  <img src="assets/demo.gif" alt="EvalView Demo" width="700">
</p>

<p align="center">

```bash
pip install evalview && evalview demo  # Uses your configured API key
```

</p>

<p align="center">
  <a href="https://pypi.org/project/evalview/"><img src="https://img.shields.io/pypi/dm/evalview.svg?label=downloads" alt="PyPI downloads"></a>
  <a href="https://github.com/hidai25/eval-view/stargazers"><img src="https://img.shields.io/github/stars/hidai25/eval-view?style=social" alt="GitHub stars"></a>
  <a href="https://github.com/hidai25/eval-view/actions/workflows/ci.yml"><img src="https://github.com/hidai25/eval-view/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
  <a href="https://opensource.org/licenses/Apache-2.0"><img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" alt="License"></a>
</p>

<p align="center">
  🌟 <strong>Like it?</strong> Give us a ⭐ — it helps more devs discover EvalView.
</p>

---

## 🔍 What EvalView Catches

| Status | What it means | What you do |
|--------|---------------|-------------|
| ✅ **PASSED** | Agent behavior matches baseline | Ship with confidence |
| ⚠️ **TOOLS_CHANGED** | Agent is calling different tools | Review the diff |
| ⚠️ **OUTPUT_CHANGED** | Same tools, output quality shifted | Review the diff |
| ❌ **REGRESSION** | Score dropped significantly | Fix before shipping |

---

## 🤔 How It Works

**Simple workflow (recommended):**

```bash
# 1. Your agent works correctly
evalview snapshot   # 📸 Save current behavior as baseline

# 2. You change something (prompt, model, tools)
evalview check      # 🔍 Detect regressions automatically

# 3. EvalView tells you exactly what changed
# → ✅ All clean! No regressions detected.
# → ⚠️ TOOLS_CHANGED: +web_search, -calculator
# → ❌ REGRESSION: score 85 → 71
```

**Advanced workflow (more control):**

```bash
evalview run --save-golden   # Save specific result as baseline
evalview run --diff          # Compare with custom options
```

That's it. **Deterministic proof, no LLM-as-judge required, no API keys needed.**

### 🎯 New: Habit-Forming Regression Detection

EvalView now tracks your progress and celebrates wins:

```bash
evalview check
# 🔍 Comparing against your baseline...
# ✨ All clean! No regressions detected.
# 🎯 5 clean checks in a row! You're on a roll.
```

**Features:**

- 🔥 **Streak tracking** — Celebrate consecutive clean checks (3, 5, 10, 25+ milestones)
- 📊 **Health score** — See your project's stability at a glance
- 🔔 **Smart recaps** — "Since last time" summaries to stay in context
- 📈 **Progress visualization** — Track improvement over time

### 🎨 Multi-Reference Goldens (for non-deterministic agents)

Some agents produce valid variations. Save up to 5 golden variants per test:

```bash
# Save multiple acceptable behaviors
evalview snapshot --variant variant1
evalview snapshot --variant variant2

# EvalView compares against ALL variants, passes if ANY match
evalview check
# ✅ Matched variant 2/3
```

Perfect for LLM-based agents with creative variation.

---

## 🚀 Quick Start

1. **Install EvalView**

   ```bash
   pip install evalview
   ```

2. **Try the demo** (zero setup, no API key)

   ```bash
   evalview demo
   ```

3. **Set up a working example** in 2 minutes

   ```bash
   evalview quickstart
   ```

4. **Want LLM-as-judge scoring too?**

   ```bash
   export OPENAI_API_KEY='your-key'
   evalview run
   ```

5. **Prefer local/free evaluation?**

   ```bash
   evalview run --judge-provider ollama --judge-model llama3.2
   ```

[Full getting started guide →](docs/GETTING_STARTED.md)

---

## 🆕 New in v0.2.9: Claude Code MCP Server

If you're using Claude Code, this is the biggest upgrade in recent releases:

- Run EvalView checks inline from Claude Code via MCP tools
- Generate tests from natural language (`create_test`)
- Capture baselines and detect regressions without leaving the editor/conversation

👉 Jump to [Claude Code Integration (MCP)](#-claude-code-integration-mcp)

---

## 💡 Why EvalView?

- 🔄 **Automatic regression detection** — Know instantly when your agent breaks
- 📸 **Golden baseline diffing** — Save known-good behavior, compare every change
- 🔑 **Works without API keys** — Deterministic scoring, no LLM-as-judge needed
- 💸 **Free & open source** — No vendor lock-in, no SaaS pricing
- 🏠 **Works offline** — Use Ollama for fully local evaluation

| | Observability (LangSmith) | Benchmarks (Braintrust) | **EvalView** |
|---|:---:|:---:|:---:|
| **Answers** | "What did my agent do?" | "How good is my agent?" | **"Did my agent change?"** |
| Detects regressions | ❌ | ⚠️ Manual | ✅ Automatic |
| Golden baseline diffing | ❌ | ❌ | ✅ |
| Works without API keys | ❌ | ❌ | ✅ |
| Free & open source | ❌ | ❌ | ✅ |
| Works offline (Ollama) | ❌ | ⚠️ Some | ✅ |

**Use observability tools to see what happened. Use EvalView to prove it didn't break.**

---

## 🧭 Explore & Learn

### 💬 Interactive Chat

Talk to your tests. Debug failures. Compare runs.

```bash
evalview chat
```

```
You: run the calculator test
🤖 Running calculator test...
   ✅ Passed (score: 92.5)

You: compare to yesterday
🤖 Score: 92.5 → 87.2 (-5.3)
   Tools: +1 added (validator)
   Cost: $0.003 → $0.005 (+67%)
```

Slash commands: `/run`, `/test`, `/compare`, `/traces`, `/skill`, `/adapters`

[Chat mode docs →](docs/CHAT_MODE.md)

### 🏋️ EvalView Gym

Practice agent eval patterns with guided exercises.
```bash
evalview gym
```

---

## ⚡ Supported Agents & Frameworks

| Agent | E2E Testing | Trace Capture |
|-------|:-----------:|:-------------:|
| **Claude Code** | ✅ | ✅ |
| **OpenAI Codex** | ✅ | ✅ |
| **LangGraph** | ✅ | ✅ |
| **CrewAI** | ✅ | ✅ |
| **OpenAI Assistants** | ✅ | ✅ |
| **Custom (any CLI/API)** | ✅ | ✅ |

Also works with: AutoGen • Dify • Ollama • HuggingFace • Any HTTP API

[Compatibility details →](docs/FRAMEWORK_SUPPORT.md)

---

## 🔧 Automate It

### GitHub Actions

```bash
evalview init --ci   # Generates workflow file
```

Or add manually:

```yaml
# .github/workflows/evalview.yml
name: Agent Health Check
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hidai25/eval-view@v0.2.5
        with:
          openai-api-key: ${{ secrets.OPENAI_API_KEY }}
          command: check         # Use new check command
          fail-on: 'REGRESSION'  # Block PRs on regressions
          json: true             # Structured output for CI
```

**Or use the CLI directly:**

```yaml
- run: evalview check --fail-on REGRESSION --json
  env:
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```

PRs with regressions get blocked. Add a PR comment showing exactly what changed:

```yaml
- run: evalview ci comment
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

[Full CI/CD setup →](docs/CI_CD.md)

---

## 🤖 Claude Code Integration (MCP)

**Test your agent without leaving the conversation.** EvalView runs as an MCP server inside Claude Code — ask "did my refactor break anything?" and get the answer inline.

### Setup (3 steps, one-time)

```bash
# 1. Install
pip install evalview

# 2. Connect to Claude Code
claude mcp add --transport stdio evalview -- evalview mcp serve

# 3. Make Claude Code proactive (auto-checks after every edit)
cp CLAUDE.md.example CLAUDE.md
```

### What you get

7 tools Claude Code can call on your behalf:

**Agent regression testing:**

| Tool | What it does |
|------|--------------|
| `create_test` | Generate a test case from natural language — no YAML needed |
| `run_snapshot` | Capture current agent behavior as the golden baseline |
| `run_check` | Detect regressions vs baseline, returns structured JSON diff |
| `list_tests` | Show all golden baselines with scores and timestamps |

**Skills testing (full 3-phase workflow):**

| Tool | Phase | What it does |
|------|-------|--------------|
| `validate_skill` | Pre-test | Validate SKILL.md structure before running tests |
| `generate_skill_tests` | Pre-test | Auto-generate test cases from a SKILL.md |
| `run_skill_test` | Test | Run Phase 1 (deterministic) + Phase 2 (rubric) evaluation |

### How it works in practice

```
You: Add a test for my weather agent
Claude: [create_test] ✅ Created tests/weather-lookup.yaml
        [run_snapshot] 📸 Baseline captured — regression detection active.

You: Refactor the weather tool to use async
Claude: [makes code changes]
        [run_check] ✨ All clean! No regressions detected.

You: Switch to a different weather API
Claude: [makes code changes]
        [run_check] ⚠️ TOOLS_CHANGED: weather_api → open_meteo
        Output similarity: 94% — review the diff?
```

No YAML. No terminal switching. No context loss.
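The statuses shown in these transcripts (and in the table at the top of this README) boil down to a simple classification over a run versus its baseline. The sketch below is illustrative only, not EvalView's actual logic; the field names and the threshold are assumptions for the example:

```python
def classify(baseline: dict, current: dict, regression_threshold: float = 10.0) -> str:
    """Illustrative sketch of baseline-vs-current classification (not EvalView's real logic)."""
    # A significant score drop dominates everything else
    if baseline["score"] - current["score"] >= regression_threshold:
        return "REGRESSION"
    # Different tool set → behavior changed
    if set(baseline["tools"]) != set(current["tools"]):
        return "TOOLS_CHANGED"
    # Same tools, but the output shifted
    if baseline["output"] != current["output"]:
        return "OUTPUT_CHANGED"
    return "PASSED"

base = {"score": 85, "tools": ["calculator"], "output": "42"}
print(classify(base, {"score": 71, "tools": ["calculator"], "output": "42"}))  # → REGRESSION
print(classify(base, {"score": 84, "tools": ["web_search"], "output": "42"}))  # → TOOLS_CHANGED
print(classify(base, {"score": 84, "tools": ["calculator"], "output": "41"}))  # → OUTPUT_CHANGED
```

The real comparison also weighs output similarity and multi-variant goldens, but the precedence (regression over tool change over output change) matches the table.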
**Skills testing example:**

```
You: I wrote a code-reviewer skill, test it
Claude: [validate_skill] ✅ SKILL.md is valid
        [generate_skill_tests] 📝 Generated 10 tests → tests/code-reviewer-tests.yaml
        [run_skill_test] Phase 1: 9/10 ✓  Phase 2: avg 87/100
        1 failure: skill didn't trigger on implicit input
```

### Manual server start (advanced)

```bash
evalview mcp serve                        # Uses tests/ by default
evalview mcp serve --test-path my_tests/  # Custom test directory
```

---

## 📦 Features

| Feature | Description | Docs |
|---------|-------------|------|
| 📸 **Snapshot/Check Workflow** | Simple `snapshot` → `check` commands for regression detection | [→](docs/GOLDEN_TRACES.md) |
| 🤖 **Claude Code MCP** | Run checks inline in Claude Code — no terminal switching | [↑](#-claude-code-integration-mcp) |
| 🔥 **Streak Tracking** | Habit-forming celebrations for consecutive clean checks | [→](docs/GOLDEN_TRACES.md) |
| 🎨 **Multi-Reference Goldens** | Save up to 5 variants per test for non-deterministic agents | [→](docs/GOLDEN_TRACES.md) |
| 💬 **Chat Mode** | AI assistant: `/run`, `/test`, `/compare` | [→](docs/CHAT_MODE.md) |
| 🏷️ **Tool Categories** | Match by intent, not exact tool names | [→](docs/TOOL_CATEGORIES.md) |
| 📊 **Statistical Mode** | Handle flaky LLMs with `--runs N` and pass@k | [→](docs/STATISTICAL_MODE.md) |
| 💰 **Cost & Latency** | Automatic threshold enforcement | [→](docs/EVALUATION_METRICS.md) |
| 📈 **HTML Reports** | Interactive Plotly charts | [→](docs/CLI_REFERENCE.md) |
| 🧪 **Test Generation** | Generate 1000 tests from 1 | [→](docs/TEST_GENERATION.md) |
| 🏗️ **Suite Types** | Separate capability vs regression tests | [→](docs/SUITE_TYPES.md) |
| 🎯 **Difficulty Levels** | Filter by `--difficulty hard`, benchmark by tier | [→](docs/STATISTICAL_MODE.md) |
| 🔬 **Behavior Coverage** | Track tasks, tools, paths tested | [→](docs/BEHAVIOR_COVERAGE.md) |

---

## 🔬 Advanced: Skills Testing

Test that your agent's code actually works — not just that the output looks right. Best for teams maintaining SKILL.md workflows for Claude Code or Codex.

```yaml
tests:
  - name: creates-working-api
    input: "Create an express server with /health endpoint"
    expected:
      files_created: ["index.js", "package.json"]
      build_must_pass:
        - "npm install"
        - "npm run lint"
      smoke_tests:
        - command: "node index.js"
          background: true
          health_check: "http://localhost:3000/health"
          expected_status: 200
          timeout: 10
      no_sudo: true
      git_clean: true
```

```bash
evalview skill test tests.yaml --agent claude-code
evalview skill test tests.yaml --agent codex
evalview skill test tests.yaml --agent langgraph
```

| Check | What it catches |
|-------|-----------------|
| `build_must_pass` | Code that doesn't compile, missing dependencies |
| `smoke_tests` | Runtime crashes, wrong ports, failed health checks |
| `git_clean` | Uncommitted files, dirty working directory |
| `no_sudo` | Privilege escalation attempts |
| `max_tokens` | Cost blowouts, verbose outputs |

[Skills testing docs →](docs/SKILLS_TESTING.md)

---

## 📚 Documentation

| | |
|---|---|
| [Getting Started](docs/GETTING_STARTED.md) | [CLI Reference](docs/CLI_REFERENCE.md) |
| [Golden Traces](docs/GOLDEN_TRACES.md) | [CI/CD Integration](docs/CI_CD.md) |
| [Tool Categories](docs/TOOL_CATEGORIES.md) | [Statistical Mode](docs/STATISTICAL_MODE.md) |
| [Chat Mode](docs/CHAT_MODE.md) | [Evaluation Metrics](docs/EVALUATION_METRICS.md) |
| [Skills Testing](docs/SKILLS_TESTING.md) | [Debugging](docs/DEBUGGING.md) |
| [FAQ](docs/FAQ.md) | |

**Guides:** [Testing LangGraph in CI](guides/pytest-for-ai-agents-langgraph-ci.md) • [Detecting Hallucinations](guides/detecting-llm-hallucinations-in-ci.md)

---

## 📂 Examples

| Framework | Link |
|-----------|------|
| Claude Code (E2E) | [examples/agent-test/](examples/agent-test/) |
| LangGraph | [examples/langgraph/](examples/langgraph/) |
| CrewAI | [examples/crewai/](examples/crewai/) |
| Anthropic Claude | [examples/anthropic/](examples/anthropic/) |
| Dify | [examples/dify/](examples/dify/) |
| Ollama (Local) | [examples/ollama/](examples/ollama/) |

**Node.js?** See [@evalview/node](sdks/node/)

---

## 🗺️ Roadmap

**Shipped:** Golden traces • **Snapshot/check workflow** • **Streak tracking & celebrations** • **Multi-reference goldens** • Tool categories • Statistical mode • Difficulty levels • Partial sequence credit • Skills validation • E2E agent testing • Build & smoke tests • Health checks • Safety guards (`no_sudo`, `git_clean`) • Claude Code & Codex adapters • **Opus 4.6 cost tracking** • MCP servers • HTML reports • Interactive chat mode • EvalView Gym

**Coming:** Agent Teams trace analysis • Multi-turn conversations • Grounded hallucination detection • Error compounding metrics • Container isolation

[Vote on features →](https://github.com/hidai25/eval-view/discussions)

---

## 🤝 Get Help & Contributing

- **Questions?** [GitHub Discussions](https://github.com/hidai25/eval-view/discussions)
- **Bugs?** [GitHub Issues](https://github.com/hidai25/eval-view/issues)
- **Want setup help?** Email hidai@evalview.com — happy to help configure your first tests
- **Contributing?** See [CONTRIBUTING.md](CONTRIBUTING.md)

**License:** Apache 2.0

---

### ⭐ Thank You for the Support!

[![Star History Chart](https://api.star-history.com/svg?repos=hidai25/eval-view&type=Date)](https://star-history.com/#hidai25/eval-view&Date)

🌟 **Don't miss out on future updates! Star the repo and be the first to know about new features.**

---

<p align="center">
  <b>Proof that your agent still works.</b><br>
  <a href="#-quick-start">Get started →</a>
</p>

---

*EvalView is an independent open-source project, not affiliated with LangGraph, CrewAI, OpenAI, Anthropic, or any other third party.*
text/markdown
null
EvalView Team <hidai@evalview.com>
null
null
null
ai, agents, testing, evaluation, llm, langchain, langgraph, crewai, openai, anthropic, claude, claude-opus, opus-4-6, multi-agent, pytest-ai, ai-agent-testing, llm-testing, agent-evaluation, regression-testing, ci-cd-testing, yaml-testing, tool-calling
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Topic :: Software Development :: Testing", "Topic :: Software Development :: Quality Assurance", "Topic :: Scientific/Engineering :: Artificial Intelligence" ]
[]
null
null
>=3.9
[]
[]
[]
[ "click>=8.1.0", "pydantic>=2.5.0", "pyyaml>=6.0", "openai>=1.12.0", "anthropic>=0.39.0", "rich>=13.7.0", "prompt_toolkit>=3.0.0", "httpx>=0.26.0", "python-dateutil>=2.8.2", "python-dotenv>=1.0.0", "jinja2>=3.0; extra == \"reports\"", "plotly>=5.0; extra == \"reports\"", "watchdog>=3.0; extra == \"watch\"", "posthog>=3.0.0; extra == \"telemetry\"", "jinja2>=3.0; extra == \"all\"", "plotly>=5.0; extra == \"all\"", "watchdog>=3.0; extra == \"all\"", "posthog>=3.0.0; extra == \"all\"", "pytest>=7.4.0; extra == \"dev\"", "pytest-asyncio>=0.21.0; extra == \"dev\"", "pytest-cov>=4.1.0; extra == \"dev\"", "black==24.10.0; extra == \"dev\"", "mypy>=1.7.0; extra == \"dev\"", "ruff>=0.1.0; extra == \"dev\"", "jinja2>=3.0; extra == \"dev\"", "plotly>=5.0; extra == \"dev\"", "watchdog>=3.0; extra == \"dev\"", "anthropic>=0.39.0; extra == \"dev\"", "fastapi>=0.109.0; extra == \"dev\"", "uvicorn>=0.27.0; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/hidai25/eval-view", "Documentation, https://github.com/hidai25/eval-view#readme", "Repository, https://github.com/hidai25/eval-view.git", "Issue Tracker, https://github.com/hidai25/eval-view/issues", "Changelog, https://github.com/hidai25/eval-view/blob/main/CHANGELOG.md" ]
twine/6.2.0 CPython/3.9.13
2026-02-20T10:38:58.493872
evalview-0.3.0.tar.gz
393,007
d3/e7/cd415cd20df12e1f28b3abbc9d787eba19ba59a685f3f17b3d4f1654f697/evalview-0.3.0.tar.gz
source
sdist
null
false
ba983cd4b938e765bb119e2e3b8276e3
69fc11efd2dfa567be31be1184ca76c649123f1a3aa3da15481f719f983f3275
d3e7cd415cd20df12e1f28b3abbc9d787eba19ba59a685f3f17b3d4f1654f697
Apache-2.0
[ "LICENSE", "NOTICE" ]
282
2.4
acex
4.1.5
ACE-X Backend - Core automation engine and API
# ACE-X Backend

Core automation engine and API for the ACE-X ecosystem.

## Installation

```bash
pip install acex
```

## Development

```bash
cd backend
poetry install
poetry run pytest
```

## Usage

```python
from acex.core import AutomationEngine

# Your code here
```

## Features

- Core automation engine
- REST API
- Event handling
- Plugin system

## Documentation

See the [main documentation](../README.md) for more information.
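The feature list mentions event handling and a plugin system. As an illustration of what wiring such an engine can look like, here is a minimal, self-contained sketch; the class and method names are assumptions for illustration, not the actual `acex` API:

```python
from collections import defaultdict
from typing import Callable

class MiniEngine:
    """Illustrative event/plugin dispatcher; NOT the real acex AutomationEngine API."""

    def __init__(self):
        self._handlers: dict[str, list[Callable]] = defaultdict(list)

    def on(self, event: str, handler: Callable) -> None:
        """Register a handler (a 'plugin' hook) for an event name."""
        self._handlers[event].append(handler)

    def emit(self, event: str, payload: dict) -> list:
        """Dispatch an event to all registered handlers, collecting results."""
        return [handler(payload) for handler in self._handlers[event]]

engine = MiniEngine()
engine.on("device.updated", lambda p: f"audit: {p['name']}")
engine.on("device.updated", lambda p: f"notify: {p['name']}")
print(engine.emit("device.updated", {"name": "sw-01"}))
# → ['audit: sw-01', 'notify: sw-01']
```

Refer to the main documentation for how the real engine registers plugins and dispatches events.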
text/markdown
Johan Lahti
johan.lahti@acebit.se
null
null
AGPL-3.0
automation, control, api
[ "License :: OSI Approved :: GNU Affero General Public License v3", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14" ]
[]
null
null
<4.0,>=3.13
[]
[]
[]
[ "fastapi<0.122.0,>=0.121.0", "fastmcp<3.0.0.0,>=2.13.0.2", "jinja2<4.0.0,>=3.1.6", "mcp<2.0.0,>=1.20.0", "openai<2.0.0,>=1.54.0", "psycopg2-binary>=2.9.10", "requests<3.0.0,>=2.32.5", "sqlmodel<0.0.28,>=0.0.27", "uvicorn<0.39.0,>=0.38.0" ]
[]
[]
[]
[ "Homepage, https://github.com/acex-labs/acex", "Repository, https://github.com/acex-labs/acex" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:38:56.062072
acex-4.1.5.tar.gz
45,489
80/f5/4d8cafc73299ece38637003ce3288581229425582f3550b455b04612ecee/acex-4.1.5.tar.gz
source
sdist
null
false
5bd54aa242bf03514cabd68e6e160883
787df21e00d53a0fa3fe528beae8b2e72c6e5f33e56188cb1415a9457010c4ed
80f54d8cafc73299ece38637003ce3288581229425582f3550b455b04612ecee
null
[]
248
2.4
acex-devkit
1.0.5
ACE-X DevKit - Development kit for building ACE-X drivers and plugins
# ACE-X DevKit

Development kit for building ACE-X drivers and plugins.

## Installation

```bash
pip install acex-devkit
```

## Usage

### Building a Network Element Driver

```python
from typing import Any

from acex_devkit.drivers import (
    NetworkElementDriver,
    TransportBase,
    RendererBase,
    ParserBase,
)
from acex_devkit.models import ComposedConfiguration


class MyRenderer(RendererBase):
    def render(self, configuration: ComposedConfiguration, asset) -> str:
        # Implement your rendering logic
        pass


class MyTransport(TransportBase):
    def connect(self) -> None:
        # Implement connection logic
        pass

    def send(self, payload: Any) -> None:
        # Implement send logic
        pass

    def verify(self) -> bool:
        # Implement verification logic
        return True

    def rollback(self) -> None:
        # Implement rollback logic
        pass


class MyParser(ParserBase):
    def parse(self, configuration: str) -> ComposedConfiguration:
        # Implement parsing logic
        pass


class MyDriver(NetworkElementDriver):
    renderer_class = MyRenderer
    transport_class = MyTransport
    parser_class = MyParser

    def render(self, configuration: ComposedConfiguration, asset):
        return self.renderer.render(configuration, asset)

    def parse(self, configuration: str) -> ComposedConfiguration:
        return self.parser.parse(configuration)
```

## Package Contents

- **models**: Common data models (Asset, LogicalNode, ComposedConfiguration, etc.)
- **drivers**: Base classes for building network element drivers
- **exceptions**: Common exceptions
- **types**: Type aliases and protocols

## License

AGPL-3.0
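To illustrate how the transport hooks above cooperate at apply time, here is a self-contained sketch of a send → verify → rollback cycle using stand-in classes. The control flow is an assumption for illustration; the real `NetworkElementDriver` may orchestrate this differently:

```python
class StubTransport:
    """Stand-in transport that records calls; mirrors the TransportBase hooks above."""

    def __init__(self, healthy: bool = True):
        self.healthy = healthy  # simulate whether verification succeeds
        self.log = []

    def connect(self):
        self.log.append("connect")

    def send(self, payload):
        self.log.append(f"send:{payload}")

    def verify(self) -> bool:
        self.log.append("verify")
        return self.healthy

    def rollback(self):
        self.log.append("rollback")


def apply_config(transport, rendered: str) -> bool:
    """Sketch of an apply cycle: push the rendered config, verify, roll back on failure."""
    transport.connect()
    transport.send(rendered)
    if transport.verify():
        return True
    transport.rollback()
    return False


bad = StubTransport(healthy=False)
apply_config(bad, "hostname sw-01")
print(bad.log)  # → ['connect', 'send:hostname sw-01', 'verify', 'rollback']
```

The point of the base-class split is exactly this separation: the renderer produces the payload, the transport owns delivery and safety (verify/rollback), and the parser handles the reverse direction.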
text/markdown
Johan Lahti
johan.lahti@acebit.se
null
null
AGPL-3.0
automation, devkit, sdk, drivers, plugins
[ "License :: OSI Approved :: GNU Affero General Public License v3", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14" ]
[]
null
null
<4.0,>=3.13
[]
[]
[]
[ "deepdiff<9.0.0,>=8.6.1", "pydantic<3.0.0,>=2.12.5", "typing-extensions<5.0.0,>=4.0.0" ]
[]
[]
[]
[ "Homepage, https://github.com/acex-labs/acex", "Repository, https://github.com/acex-labs/acex" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:38:48.370310
acex_devkit-1.0.5.tar.gz
14,248
50/bf/c79ae42be2d737d2329b6cdb380dbdd7e1268c6378c54abea18a39e5f57c/acex_devkit-1.0.5.tar.gz
source
sdist
null
false
322f2ef82e04cf58d80d6207b8f9b4fe
8b3b55bc5cf948c674f3c147ba265e587b2c08c91ab68e0966bed993337e8321
50bfc79ae42be2d737d2329b6cdb380dbdd7e1268c6378c54abea18a39e5f57c
null
[]
254
2.4
agntcy-dir
1.0.0
Directory SDK
# Directory Python SDK

## Overview

The Dir Python SDK provides a simple way to interact with the Directory API. It allows developers to integrate and use Directory functionality from their Python applications with ease.

## Features

The Directory Python SDK provides comprehensive access to all Directory APIs with a simple, intuitive interface:

### Store API

- **Record Management**: Push records to the store and pull them by reference
- **Metadata Operations**: Look up record metadata without downloading full content
- **Data Lifecycle**: Delete records permanently from the store
- **Referrer Support**: Push and pull artifacts for existing records
- **Sync Management**: Manage storage synchronization policies between Directory servers

### Search API

- **Flexible Search**: Search stored records using text, semantic, and structured queries
- **Advanced Filtering**: Filter results by metadata, content type, and other criteria

### Routing API

- **Network Publishing**: Publish records to make them discoverable across the network
- **Content Discovery**: List and query published records across the network
- **Network Management**: Unpublish records to remove them from network discovery

### Signing and Verification

- **Local Signing**: Sign records locally using private keys or OIDC-based authentication. Requires the [dirctl](https://github.com/agntcy/dir/releases) binary to perform signing.
- **Remote Verification**: Verify record signatures using the Directory gRPC API

### Developer Experience

- **Type Safety**: Full type hints for better IDE support and fewer runtime errors
- **Async Support**: Non-blocking operations with streaming responses for large datasets
- **Error Handling**: Comprehensive gRPC error handling with detailed error messages
- **Configuration**: Flexible configuration via environment variables or direct instantiation

## Installation

Install the SDK using [uv](https://github.com/astral-sh/uv):

1. Initialize the project:

   ```bash
   uv init
   ```

2. Add the SDK to your project:

   ```bash
   uv add agntcy-dir --index https://buf.build/gen/python
   ```

## Configuration

The SDK can be configured via environment variables or direct instantiation:

```bash
# Environment variables (insecure mode, default)
export DIRECTORY_CLIENT_SERVER_ADDRESS="localhost:8888"
export DIRCTL_PATH="/path/to/dirctl"

# Environment variables (X.509 authentication)
export DIRECTORY_CLIENT_SERVER_ADDRESS="localhost:8888"
export DIRECTORY_CLIENT_AUTH_MODE="x509"
export DIRECTORY_CLIENT_SPIFFE_SOCKET_PATH="/tmp/agent.sock"

# Environment variables (JWT authentication)
export DIRECTORY_CLIENT_SERVER_ADDRESS="localhost:8888"
export DIRECTORY_CLIENT_AUTH_MODE="jwt"
export DIRECTORY_CLIENT_SPIFFE_SOCKET_PATH="/tmp/agent.sock"
export DIRECTORY_CLIENT_JWT_AUDIENCE="spiffe://example.org/dir-server"
```

Or configure directly:

```python
from agntcy.dir_sdk.client import Config, Client

# Insecure mode (default, for development only)
config = Config(
    server_address="localhost:8888",
    dirctl_path="/usr/local/bin/dirctl"
)
client = Client(config)

# X.509 authentication with SPIRE
x509_config = Config(
    server_address="localhost:8888",
    dirctl_path="/usr/local/bin/dirctl",
    spiffe_socket_path="/tmp/agent.sock",
    auth_mode="x509"
)
x509_client = Client(x509_config)

# JWT authentication with SPIRE
jwt_config = Config(
    server_address="localhost:8888",
    dirctl_path="/usr/local/bin/dirctl",
    spiffe_socket_path="/tmp/agent.sock",
    auth_mode="jwt",
    jwt_audience="spiffe://example.org/dir-server"
)
jwt_client = Client(jwt_config)
```

## Error Handling

The SDK primarily raises `grpc.RpcError` exceptions for gRPC communication issues and `RuntimeError` for configuration problems:

```python
import grpc

from agntcy.dir_sdk.client import Client

try:
    client = Client()
    records = client.list(list_request)
except grpc.RpcError as e:
    # Handle gRPC errors
    if e.code() == grpc.StatusCode.NOT_FOUND:
        print("Resource not found")
    elif e.code() == grpc.StatusCode.UNAVAILABLE:
        print("Server unavailable")
    else:
        print(f"gRPC error: {e.details()}")
except RuntimeError as e:
    # Handle configuration or subprocess errors
    print(f"Runtime error: {e}")
```

Common gRPC status codes:

- `NOT_FOUND`: Resource doesn't exist
- `ALREADY_EXISTS`: Resource already exists
- `UNAVAILABLE`: Server is down or unreachable
- `PERMISSION_DENIED`: Authentication/authorization failure
- `INVALID_ARGUMENT`: Invalid request parameters

## Getting Started

### Prerequisites

- Python 3.10 or higher
- [uv](https://github.com/astral-sh/uv) - Package manager
- [dirctl](https://github.com/agntcy/dir/releases) - Directory CLI binary
- Directory server instance (see setup below)

### 1. Server Setup

**Option A: Local Development Server**

```bash
# Clone the repository and start the server using Taskfile
task server:start
```

**Option B: Custom Server**

```bash
# Set your Directory server address
export DIRECTORY_CLIENT_SERVER_ADDRESS="your-server:8888"
```

### 2. SDK Installation

```bash
# Add the Directory SDK
uv add agntcy-dir --index https://buf.build/gen/python
```

### Usage Examples

See the [Example Python Project](../examples/example-py/) for a complete working example that demonstrates all SDK features.

```bash
uv sync
uv run example.py
```
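A common way to handle the transient `UNAVAILABLE` status listed above is a small retry wrapper with backoff. The sketch below is illustrative and deliberately generic: it retries on a caller-supplied predicate rather than importing the real `grpc` types, and the stand-in `flaky` function simulates a server that recovers:

```python
import time

def with_retries(call, is_transient, attempts: int = 3, backoff_s: float = 0.5):
    """Retry `call()` while `is_transient(exc)` is true, with exponential backoff."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception as exc:
            if attempt == attempts - 1 or not is_transient(exc):
                raise
            time.sleep(backoff_s * (2 ** attempt))  # 0.5s, 1s, 2s, ...

# Usage with a stand-in for a flaky gRPC call:
state = {"calls": 0}

def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("UNAVAILABLE")  # stand-in for a grpc UNAVAILABLE error
    return "ok"

result = with_retries(flaky, lambda e: isinstance(e, ConnectionError), backoff_s=0.01)
print(result, state["calls"])  # → ok 3
```

With the real SDK, the predicate would check `isinstance(exc, grpc.RpcError) and exc.code() == grpc.StatusCode.UNAVAILABLE`; non-transient codes such as `INVALID_ARGUMENT` should propagate immediately.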
text/markdown
null
null
null
null
null
null
[]
[]
null
null
>=3.10
[]
[]
[]
[ "grpcio>=1.74.0", "pyasn1>=0.6.2", "spiffe-tls>=0.2.1", "spiffe>=0.2.2" ]
[]
[]
[]
[]
uv/0.9.26 {"installer":{"name":"uv","version":"0.9.26","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
2026-02-20T10:38:44.868314
agntcy_dir-1.0.0.tar.gz
64,954
73/e6/f171bc96414a5ac2657c5ae89712bdd74d43b1125ec6f826c5c633ed7a29/agntcy_dir-1.0.0.tar.gz
source
sdist
null
false
36ec3fcf4e46fc3cac6fed3b66fa854b
e39e8e5364b19635e27bf194207a623ea72661a4f2b96f89e6b6feee487c9564
73e6f171bc96414a5ac2657c5ae89712bdd74d43b1125ec6f826c5c633ed7a29
null
[]
248
2.4
mkdocstrings-python
2.0.3
A Python handler for mkdocstrings.
<h1 align="center">mkdocstrings-python</h1>

<p align="center">A Python handler for <a href="https://github.com/mkdocstrings/mkdocstrings"><i>mkdocstrings</i></a>.</p>

[![ci](https://github.com/mkdocstrings/python/workflows/ci/badge.svg)](https://github.com/mkdocstrings/python/actions?query=workflow%3Aci)
[![documentation](https://img.shields.io/badge/docs-mkdocs-708FCC.svg?style=flat)](https://mkdocstrings.github.io/python/)
[![pypi version](https://img.shields.io/pypi/v/mkdocstrings-python.svg)](https://pypi.org/project/mkdocstrings-python/)
[![gitter](https://img.shields.io/badge/matrix-chat-4DB798.svg?style=flat)](https://app.gitter.im/#/room/#mkdocstrings_python:gitter.im)

---

<p align="center"><img src="logo.png"></p>

The Python handler uses [Griffe](https://mkdocstrings.github.io/griffe) to collect documentation from Python source code. The word "griffe" can sometimes be used instead of "signature" in French. Griffe is able to visit the Abstract Syntax Tree (AST) of the source code to extract useful information. It is also able to execute the code (by importing it) and introspect objects in memory when source code is not available. Finally, it can parse docstrings following different styles.

## Installation

You can install this handler as a *mkdocstrings* extra:

```toml title="pyproject.toml"
# PEP 621 dependencies declaration
# adapt to your dependencies manager
[project]
dependencies = [
    "mkdocstrings[python]>=0.18",
]
```

You can also explicitly depend on the handler:

```toml title="pyproject.toml"
# PEP 621 dependencies declaration
# adapt to your dependencies manager
[project]
dependencies = [
    "mkdocstrings-python",
]
```

## Preview

<!-- TODO: update the GIF with a more recent screen capture. Maybe use mp4 instead -->
![mkdocstrings_python_gif](https://user-images.githubusercontent.com/3999221/77157838-7184db80-6aa2-11ea-9f9a-fe77405202de.gif)

## Features

- **Data collection from source code**: collection of the object-tree and the docstrings is done thanks to [Griffe](https://github.com/mkdocstrings/griffe).
- **Support for type annotations:** Griffe collects your type annotations and *mkdocstrings* uses them to display parameter types or return types. It is even able to automatically add cross-references to other objects from your API, from the standard library or third-party libraries! See [how to load inventories](https://mkdocstrings.github.io/usage/#cross-references-to-other-projects-inventories) to enable it.
- **Recursive documentation of Python objects:** just use the module dotted-path as an identifier, and you get the full module docs. You don't need to inject documentation for each class, function, etc.
- **Support for documented attributes:** attributes (variables) followed by a docstring (triple-quoted string) will be recognized by Griffe in modules, classes and even in `__init__` methods.
- **Multiple docstring-styles support:** common support for Google-style, Numpydoc-style, and Sphinx-style docstrings. See [Griffe's documentation](https://mkdocstrings.github.io/griffe/docstrings/) on docstrings support.
- **Admonition support in Google docstrings:** blocks like `Note:` or `Warning:` will be transformed to their [admonition](https://squidfunk.github.io/mkdocs-material/reference/admonitions/) equivalent. *We do not support nested admonitions in docstrings!*
- **Every object has a TOC entry:** we render a heading for each object, meaning *MkDocs* picks them into the Table of Contents, which is nicely displayed by the Material theme.
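Once installed, the handler is enabled through the *mkdocstrings* plugin in `mkdocs.yml`, and objects are injected into Markdown pages with the `:::` identifier syntax. A minimal configuration sketch (the `docstring_style` value shown is just one of the supported styles):

```yaml
# mkdocs.yml
plugins:
  - search
  - mkdocstrings:
      handlers:
        python:
          options:
            docstring_style: google
```

Then, in any documentation page, inject an object's docs by its dotted path:

```md
::: my_package.my_module
```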
Thanks to *mkdocstrings* cross-reference ability, you can reference other objects within your docstrings, with the classic Markdown syntax: `[this object][package.module.object]` or directly with `[package.module.object][]` - **Source code display:** *mkdocstrings* can add a collapsible div containing the highlighted source code of the Python object. ## Sponsors <!-- sponsors-start --> <div id="premium-sponsors" style="text-align: center;"> <div id="silver-sponsors"><b>Silver sponsors</b><p> <a href="https://fastapi.tiangolo.com/"><img alt="FastAPI" src="https://raw.githubusercontent.com/tiangolo/fastapi/master/docs/en/docs/img/logo-margin/logo-teal.png" style="height: 200px; "></a><br> </p></div> <div id="bronze-sponsors"><b>Bronze sponsors</b><p> <a href="https://www.nixtla.io/"><picture><source media="(prefers-color-scheme: light)" srcset="https://www.nixtla.io/img/logo/full-black.svg"><source media="(prefers-color-scheme: dark)" srcset="https://www.nixtla.io/img/logo/full-white.svg"><img alt="Nixtla" src="https://www.nixtla.io/img/logo/full-black.svg" style="height: 60px; "></picture></a><br> </p></div> </div> --- <div id="sponsors"><p> <a href="https://github.com/ofek"><img alt="ofek" src="https://avatars.githubusercontent.com/u/9677399?u=386c330f212ce467ce7119d9615c75d0e9b9f1ce&v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/samuelcolvin"><img alt="samuelcolvin" src="https://avatars.githubusercontent.com/u/4039449?u=42eb3b833047c8c4b4f647a031eaef148c16d93f&v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/tlambert03"><img alt="tlambert03" src="https://avatars.githubusercontent.com/u/1609449?u=922abf0524b47739b37095e553c99488814b05db&v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/ssbarnea"><img alt="ssbarnea" src="https://avatars.githubusercontent.com/u/102495?u=c7bd9ddf127785286fc939dd18cb02db0a453bce&v=4" style="height: 32px; border-radius: 100%;"></a> <a 
href="https://github.com/femtomc"><img alt="femtomc" src="https://avatars.githubusercontent.com/u/34410036?u=f13a71daf2a9f0d2da189beaa94250daa629e2d8&v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/cmarqu"><img alt="cmarqu" src="https://avatars.githubusercontent.com/u/360986?v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/kolenaIO"><img alt="kolenaIO" src="https://avatars.githubusercontent.com/u/77010818?v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/ramnes"><img alt="ramnes" src="https://avatars.githubusercontent.com/u/835072?u=3fca03c3ba0051e2eb652b1def2188a94d1e1dc2&v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/machow"><img alt="machow" src="https://avatars.githubusercontent.com/u/2574498?u=c41e3d2f758a05102d8075e38d67b9c17d4189d7&v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/BenHammersley"><img alt="BenHammersley" src="https://avatars.githubusercontent.com/u/99436?u=4499a7b507541045222ee28ae122dbe3c8d08ab5&v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/trevorWieland"><img alt="trevorWieland" src="https://avatars.githubusercontent.com/u/28811461?u=74cc0e3756c1d4e3d66b5c396e1d131ea8a10472&v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/MarcoGorelli"><img alt="MarcoGorelli" src="https://avatars.githubusercontent.com/u/33491632?u=7de3a749cac76a60baca9777baf71d043a4f884d&v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/analog-cbarber"><img alt="analog-cbarber" src="https://avatars.githubusercontent.com/u/7408243?u=642fc2bdcc9904089c62fe5aec4e03ace32da67d&v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/OdinManiac"><img alt="OdinManiac" src="https://avatars.githubusercontent.com/u/22727172?u=36ab20970f7f52ae8e7eb67b7fcf491fee01ac22&v=4" style="height: 32px; border-radius: 
100%;"></a> <a href="https://github.com/rstudio-sponsorship"><img alt="rstudio-sponsorship" src="https://avatars.githubusercontent.com/u/58949051?u=0c471515dd18111be30dfb7669ed5e778970959b&v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/schlich"><img alt="schlich" src="https://avatars.githubusercontent.com/u/21191435?u=6f1240adb68f21614d809ae52d66509f46b1e877&v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/butterlyn"><img alt="butterlyn" src="https://avatars.githubusercontent.com/u/53323535?v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/livingbio"><img alt="livingbio" src="https://avatars.githubusercontent.com/u/10329983?v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/NemetschekAllplan"><img alt="NemetschekAllplan" src="https://avatars.githubusercontent.com/u/912034?v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/EricJayHartman"><img alt="EricJayHartman" src="https://avatars.githubusercontent.com/u/9259499?u=7e58cc7ec0cd3e85b27aec33656aa0f6612706dd&v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/15r10nk"><img alt="15r10nk" src="https://avatars.githubusercontent.com/u/44680962?u=f04826446ff165742efa81e314bd03bf1724d50e&v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/activeloopai"><img alt="activeloopai" src="https://avatars.githubusercontent.com/u/34816118?v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/roboflow"><img alt="roboflow" src="https://avatars.githubusercontent.com/u/53104118?v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/cmclaughlin"><img alt="cmclaughlin" src="https://avatars.githubusercontent.com/u/1061109?u=ddf6eec0edd2d11c980f8c3aa96e3d044d4e0468&v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/blaisep"><img alt="blaisep" 
src="https://avatars.githubusercontent.com/u/254456?u=97d584b7c0a6faf583aa59975df4f993f671d121&v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/RapidataAI"><img alt="RapidataAI" src="https://avatars.githubusercontent.com/u/104209891?v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/rodolphebarbanneau"><img alt="rodolphebarbanneau" src="https://avatars.githubusercontent.com/u/46493454?u=6c405452a40c231cdf0b68e97544e07ee956a733&v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/theSymbolSyndicate"><img alt="theSymbolSyndicate" src="https://avatars.githubusercontent.com/u/111542255?v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/blakeNaccarato"><img alt="blakeNaccarato" src="https://avatars.githubusercontent.com/u/20692450?u=bb919218be30cfa994514f4cf39bb2f7cf952df4&v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/ChargeStorm"><img alt="ChargeStorm" src="https://avatars.githubusercontent.com/u/26000165?v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/Alphadelta14"><img alt="Alphadelta14" src="https://avatars.githubusercontent.com/u/480845?v=4" style="height: 32px; border-radius: 100%;"></a> <a href="https://github.com/Cusp-AI"><img alt="Cusp-AI" src="https://avatars.githubusercontent.com/u/178170649?v=4" style="height: 32px; border-radius: 100%;"></a> </p></div> *And 7 more private sponsor(s).* <!-- sponsors-end -->
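Returning to the installation section above, here is a minimal configuration sketch (hedged: it assumes an otherwise default MkDocs setup, and the handler option shown is illustrative rather than required). The handler is enabled under the `mkdocstrings` plugin in `mkdocs.yml`, after which `::: package.module.object` in any Markdown page injects that object's documentation:

```yaml
# mkdocs.yml -- minimal sketch, not a complete configuration
plugins:
- search
- mkdocstrings:
    handlers:
      python:
        options:
          docstring_style: google
```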
text/markdown
null
=?utf-8?q?Timoth=C3=A9e_Mazzucotelli?= <dev@pawamoy.fr>
null
null
null
null
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14", "Topic :: Documentation", "Topic :: Software Development", "Topic :: Software Development :: Documentation", "Topic :: Utilities", "Typing :: Typed" ]
[]
null
null
>=3.10
[]
[]
[]
[ "mkdocstrings>=0.30", "mkdocs-autorefs>=1.4", "griffelib>=2.0", "typing-extensions>=4.0; python_version < \"3.11\"" ]
[]
[]
[]
[ "Homepage, https://mkdocstrings.github.io/python", "Documentation, https://mkdocstrings.github.io/python", "Changelog, https://mkdocstrings.github.io/python/changelog", "Repository, https://github.com/mkdocstrings/python", "Issues, https://github.com/mkdocstrings/python/issues", "Discussions, https://github.com/mkdocstrings/python/discussions", "Gitter, https://gitter.im/mkdocstrings/python", "Funding, https://github.com/sponsors/pawamoy" ]
twine/6.2.0 CPython/3.14.2
2026-02-20T10:38:36.368538
mkdocstrings_python-2.0.3.tar.gz
199,083
29/33/c225eaf898634bdda489a6766fc35d1683c640bffe0e0acd10646b13536d/mkdocstrings_python-2.0.3.tar.gz
source
sdist
null
false
4cd2b5b8b2bf039be2dd1cdb4c6f7fe5
c518632751cc869439b31c9d3177678ad2bfa5c21b79b863956ad68fc92c13b8
2933c225eaf898634bdda489a6766fc35d1683c640bffe0e0acd10646b13536d
ISC
[ "LICENSE" ]
70,250
2.4
fluent-codegen
0.2
A Python library for generating Python code via AST construction.
fluent-codegen ============== A Python library for generating Python code via AST construction. Overview -------- ``fluent-codegen`` provides a set of classes that represent simplified Python constructs (functions, assignments, expressions, control flow, etc.) and can generate real Python ``ast`` nodes. This lets you build correct Python code programmatically without manipulating raw AST or worrying about string interpolation pitfalls. Originally extracted from `fluent-compiler <https://github.com/django-ftl/fluent-compiler>`__, where it was used to compile Fluent localization files into Python bytecode. Key features ------------ - **Safe by construction** — builds AST, not strings, eliminating injection bugs - **Scope management** — automatic name deduplication and scope tracking - **Simplified API** — high-level classes (``Function``, ``If``, ``Try``, ``StringJoin``, etc.) that map to Python constructs without requiring knowledge of the raw ``ast`` module - **Security guardrails** — blocks calls to sensitive builtins (``exec``, ``eval``, etc.) Installation ------------ .. code:: bash pip install fluent-codegen Requires Python 3.12+. Quick example ------------- This builds a FizzBuzz function entirely via the codegen API, using fluent method-chaining for expressions: .. code:: python from fluent_codegen import codegen # 1. Create a module and a function inside it module = codegen.Module() func, _ = module.create_function("fizzbuzz", args=["n"]) # 2. A Name reference to the "n" parameter (Function *is* a Scope) n = func.name("n") # 3. 
Build an if / elif / else chain if_stmt = func.body.create_if() # if n % 15 == 0: return "FizzBuzz" — fluent chaining branch = if_stmt.create_if_branch(n.mod(codegen.Number(15)).eq(codegen.Number(0))) branch.create_return(codegen.String("FizzBuzz")) # elif n % 3 == 0: return "Fizz" branch = if_stmt.create_if_branch(n.mod(codegen.Number(3)).eq(codegen.Number(0))) branch.create_return(codegen.String("Fizz")) # elif n % 5 == 0: return "Buzz" branch = if_stmt.create_if_branch(n.mod(codegen.Number(5)).eq(codegen.Number(0))) branch.create_return(codegen.String("Buzz")) # else: return str(n) if_stmt.else_block.create_return(module.scope.name("str").call([n])) # 4. Inspect the generated source print(module.as_python_source()) # def fizzbuzz(n): # if n % 15 == 0: # return 'FizzBuzz' # elif n % 3 == 0: # return 'Fizz' # elif n % 5 == 0: # return 'Buzz' # else: # return str(n) # 5. Compile, execute, and call the generated function code = compile(module.as_ast(), "<fizzbuzz>", "exec") ns: dict[str, object] = {} exec(code, ns) fizzbuzz = ns["fizzbuzz"] assert fizzbuzz(15) == "FizzBuzz" assert fizzbuzz(9) == "Fizz" assert fizzbuzz(10) == "Buzz" assert fizzbuzz(7) == "7" License ------- Apache License 2.0
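The "safe by construction" point can be illustrated with the standard library alone (this sketch uses only the stdlib `ast` module, not fluent-codegen's API, and `user_input` is a hypothetical attacker-controlled value): data spliced in as a `Constant` node stays data by definition, whereas the same value interpolated into a source string could be parsed as code.

```python
import ast

# Hypothetical attacker-controlled value. With string templating,
# f"len('{user_input}')" would parse the payload as code; as an AST
# Constant node it can only ever be a string value.
user_input = "'); __import__('os').system('echo pwned'); ('"

# Build the expression `len(<user_input>)` directly from AST nodes.
expr = ast.Expression(
    body=ast.Call(
        func=ast.Name(id="len", ctx=ast.Load()),
        args=[ast.Constant(value=user_input)],
        keywords=[],
    )
)
ast.fix_missing_locations(expr)  # fill in required line/column info
result = eval(compile(expr, "<generated>", "eval"))
assert result == len(user_input)  # the payload was treated purely as data
```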
text/x-rst
null
Luke Plant <luke@lukeplant.me.uk>
null
null
null
codegen, code-generation, ast, python, metaprogramming
[ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: Implementation :: CPython", "Topic :: Software Development :: Code Generators", "Topic :: Software Development :: Libraries :: Python Modules" ]
[]
null
null
>=3.12
[]
[]
[]
[]
[]
[]
[]
[ "Repository, https://github.com/spookylukey/fluent-codegen" ]
uv/0.10.3 {"installer":{"name":"uv","version":"0.10.3","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Linux Mint","version":"22.2","id":"zara","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null}
2026-02-20T10:38:27.954098
fluent_codegen-0.2-py3-none-any.whl
18,373
8c/f5/c06c568be64dcf13793fb81366622ba717316abdc7752b91c618d8ccfc68/fluent_codegen-0.2-py3-none-any.whl
py3
bdist_wheel
null
false
9beafbf8a31d4edf533a6eb9c162d443
0c3d233e6fb62e4aeefe39bcea1299f091f15d3eaa15ca37b1820475caafd33f
8cf5c06c568be64dcf13793fb81366622ba717316abdc7752b91c618d8ccfc68
Apache-2.0
[ "LICENSE" ]
232
2.4
polytope-python
2.1.4
Polytope datacube feature extraction library
<h3 align="center"> <img src="https://raw.githubusercontent.com/ecmwf/polytope/develop/docs/images/polytope_logo_new_animated_AdobeExpress_3.gif" width=60%> </br> </h3> <p align="center"> <a href="https://github.com/ecmwf/codex/raw/refs/heads/main/Project%20Maturity"> <img src="https://github.com/ecmwf/codex/raw/refs/heads/main/Project%20Maturity/incubating_badge.svg" alt="Project Maturity"> </a> <a href="https://github.com/ecmwf/codex/raw/refs/heads/main/ESEE"> <img src="https://github.com/ecmwf/codex/raw/refs/heads/main/ESEE/data_provision_badge.svg" alt="ESEE"> </p> <p align="center"> <a href="https://github.com/ecmwf/polytope/actions/workflows/downstream-ci.yml"> <img src="https://github.com/ecmwf/polytope/actions/workflows/downstream-ci.yml/badge.svg" alt="ci"> </a> <a href="https://codecov.io/gh/ecmwf/polytope"><img src="https://codecov.io/gh/ecmwf/polytope/branch/develop/graph/badge.svg"></a> <a href="https://opensource.org/licenses/Apache-2.0"><img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg"></a> <a href="https://github.com/ecmwf/polytope/releases"><img src="https://img.shields.io/github/v/release/ecmwf/polytope?color=blue&label=Release&style=flat-square"></a> <a href='https://polytope.readthedocs.io/en/latest/?badge=latest'><img src='https://readthedocs.org/projects/polytope/badge/?version=latest' alt='Documentation Status' /></a> </p> <p align="center"> <a href="#concept">Concept</a> • <a href="#installation">Installation</a> • <a href="#example">Example</a> • <a href="#testing">Testing</a> • <a href="https://polytope.readthedocs.io/en/latest/">Documentation</a> </p> Polytope is a library for extracting complex data from datacubes. It provides an API for non-orthogonal access to data, where the stencil used to extract data from the datacube can be any arbitrary *n*-dimensional polygon (called a *polytope*). This can be used to efficiently extract complex features from a datacube, such as polygon regions or spatio-temporal paths. 
Polytope is designed to extend different datacube backends: * XArray dataarrays * FDB object stores (through the GribJump software) Polytope supports datacubes which have branching, non-uniform indexing, and even cyclic axes. If the datacube backend supports byte-addressability and efficient random access (either in-memory or direct from storage), **Polytope** can be used to dramatically decrease overall I/O load. > \[!IMPORTANT\] > This software is **Incubating** and subject to ECMWF's guidelines on [Software Maturity](https://github.com/ecmwf/codex/raw/refs/heads/main/Project%20Maturity). <!-- > [!WARNING] > This project is BETA and will be experimental for the foreseeable future. Interfaces and functionality are likely to change, and the project itself may be scrapped. DO NOT use this software in any project/software that is operational. --> ## Concept Polytope is designed to enable arbitrary extraction of data from a datacube. Instead of the typical range-based bounding-box approach, Polytope can extract any shape of data from a datacube using a "polytope" (*n*-dimensional polygon) stencil. 
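To make the stencil idea concrete, here is a deliberately naive, pure-Python sketch (not Polytope's actual algorithm, which operates on datacube indexes rather than materialised grids): it keeps only the grid points of a small lat/lon grid that fall inside a triangular stencil, rather than everything inside a rectangular bounding box.

```python
# Naive illustration of a 2D "polytope stencil": select grid points
# inside an arbitrary polygon via a ray-casting point-in-polygon test.
def inside(polygon, x, y):
    n, hit = len(polygon), False
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # Count edge crossings of a horizontal ray cast from (x, y).
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            hit = not hit
    return hit

lats = [float(i) for i in range(5)]
lons = [float(i) for i in range(5)]
triangle = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]  # a triangular stencil

points = [(la, lo) for la in lats for lo in lons if inside(triangle, la, lo)]
assert (1.0, 1.0) in points      # inside the triangle
assert (4.0, 4.0) not in points  # inside the bounding box, outside the stencil
```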
<p align="center"> <img src="https://raw.githubusercontent.com/ecmwf/polytope/develop//docs/Algorithm/Overview/images_overview/ecmwf_polytope.png" alt="Polytope Concept" width="450"/> </p> The Polytope algorithm can for example be used to extract: - 2D cut-outs, such as country cut-outs, from a datacube <p align="center"> <img src="https://raw.githubusercontent.com/ecmwf/polytope/develop/docs/images/greece.png" alt="Greece cut-out" width="250"/> </p> - timeseries from a datacube <p align="center"> <img src="https://raw.githubusercontent.com/ecmwf/polytope/develop/docs/images/timeseries.png" alt="Timeseries" width="350"/> </p> - more complicated spatio-temporal paths, such as flight paths, from a datacube <p align="center"> <img src="https://raw.githubusercontent.com/ecmwf/polytope/develop/docs/images/flight_path.png" alt="Flight path" width="350"/> </p> - and many more high-dimensional shapes in arbitrary dimensions... For more information about the Polytope algorithm, refer to our [paper](https://arxiv.org/abs/2306.11553). If this project is useful for your work, please consider citing this paper. ## Installation Install the polytope software with Python 3 (>=3.7) from GitHub directly with the command python3 -m pip install git+ssh://git@github.com/ecmwf/polytope.git@develop or from PyPI with the command python3 -m pip install polytope-python ## Example Here is a step-by-step example of how to use this software. 1. In this example, we first specify the data which will be in our Xarray datacube. Note that the data here comes from the GRIB file called "winds.grib", which is 3-dimensional with dimensions: step, latitude and longitude. ```Python import xarray as xr array = xr.open_dataset("winds.grib", engine="cfgrib") ``` We then construct the Polytope object, passing in some additional metadata describing properties of the longitude axis. 
```Python options = {"longitude": {"cyclic": [0, 360.0]}} from polytope_feature.polytope import Polytope p = Polytope(datacube=array, axis_options=options) ``` 2. Next, we create a request shape to extract from the datacube. In this example, we want to extract a simple 2D box in latitude and longitude at step 0. We thus create the two relevant shapes we need to build this 3-dimensional object, ```Python import numpy as np from polytope_feature.shapes import Box, Select box = Box(["latitude", "longitude"], [0, 0], [1, 1]) step_point = Select("step", [np.timedelta64(0, "s")]) ``` which we then incorporate into a Polytope request. ```Python from polytope_feature.polytope import Request request = Request(box, step_point) ``` 3. Finally, extract the request from the datacube. ```Python result = p.retrieve(request) ``` The result is stored as an IndexTree containing the retrieved data organised hierarchically with axis indices for each point. ```Python result.pprint() Output IndexTree: ↳root=None ↳step=0 days 00:00:00 ↳latitude=0.0 ↳longitude=0.0 ↳longitude=1.0 ↳latitude=1.0 ↳longitude=0.0 ↳longitude=1.0 ``` ## Testing #### Additional Dependencies The Polytope tests and examples require additional Python packages compared to the main Polytope algorithm. The additional dependencies are provided in the requirements_test.txt and requirements_examples.txt files, which can respectively be found in the tests and examples folders. Moreover, Polytope's tests and examples also require the installation of eccodes and GDAL. It is possible to install both of these dependencies using either a package manager or manually. ## Contributing The main repository is hosted on GitHub; testing, bug reports and contributions are highly welcomed and appreciated. Please see the [Contributing](./CONTRIBUTING.rst) document for the best way to help. 
Main contributors: - Mathilde Leuridan - [ECMWF](https://www.ecmwf.int) - James Hawkes - [ECMWF](https://www.ecmwf.int) - Simon Smart - [ECMWF](https://www.ecmwf.int) - Emanuele Danovaro - [ECMWF](https://www.ecmwf.int) - Tiago Quintino - [ECMWF](https://www.ecmwf.int) See also the [contributors](https://github.com/ecmwf/polytope/contributors) for a more complete list. ## License ``` Copyright 2021 European Centre for Medium-Range Weather Forecasts (ECMWF) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at [http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0). Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. In applying this licence, ECMWF does not waive the privileges and immunities granted to it by virtue of its status as an intergovernmental organisation nor does it submit to any jurisdiction. ``` ## Citing If this software is useful in your work, please consider citing our paper as > Leuridan, M., Hawkes, J., Smart, S., Danovaro, E., & Quintino, T. (2025, November). [Polytope: An Algorithm for Efficient Feature Extraction on Hypercubes.](https://link.springer.com/article/10.1186/s40537-025-01306-3) In Journal of Big Data (pp. 1-25). Other papers include: > Leuridan, M., Bradley, C., Hawkes, J., Quintino, T., & Schultz, M. (2025, June). [Performance Analysis of an Efficient Algorithm for Feature Extraction from Large Scale Meteorological Data Stores.](https://dl.acm.org/doi/abs/10.1145/3732775.3733573) In Proceedings of the Platform for Advanced Scientific Computing Conference (pp. 1-9). 
## Acknowledgements Past and current funding and support for **Polytope** is listed in the adjoining [Acknowledgements](./ACKNOWLEDGEMENTS.rst).
text/markdown
null
"European Centre for Medium-Range Weather Forecasts (ECMWF)" <software.support@ecmwf.int>
null
James Hawkes <James.Hawkes@ecmwf.int>, Mathilde Leuridan <Mathilde.Leuridan@ecmwf.int>
Apache License Version 2.0
null
[ "Development Status :: 4 - Beta", "Intended Audience :: Science/Research", "Natural Language :: English", "License :: OSI Approved :: Apache Software License", "Operating System :: OS Independent", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Topic :: Scientific/Engineering" ]
[]
null
null
>3.8
[]
[]
[]
[ "numpy>=1.26", "pandas", "scipy", "sortedcontainers", "tripy", "xarray", "conflator", "protobuf", "pytest; extra == \"tests\"", "pytest-cov; extra == \"tests\"", "cffi; extra == \"tests\"", "eccodes; extra == \"tests\"", "h5netcdf; extra == \"tests\"", "h5py; extra == \"tests\"", "earthkit-data; extra == \"tests\"", "matplotlib; extra == \"tests\"", "pyfdb; extra == \"tests\"", "eckit; extra == \"unstructured\"", "qubed; extra == \"catalogue\"", "eccodes; extra == \"switching-grids\"", "pyfdb; extra == \"switching-grids\"" ]
[]
[]
[]
[ "repository, https://github.com/ecmwf/polytope", "documentation, https://polytope.readthedocs.io/en/latest/", "issues, https://github.com/ecmwf/polytope/issues" ]
twine/6.2.0 CPython/3.14.2
2026-02-20T10:36:40.183082
polytope_python-2.1.4.tar.gz
14,314,532
37/77/ff897d0699e0649989ad03a4847f6446632ad811814dbd9d8ca003db855b/polytope_python-2.1.4.tar.gz
source
sdist
null
false
7e006c3681dad970d95f5e3258ace562
464dd3ec5bd29d5d1a27a0cbbe271f1307c34d7a4d1a249fbcefb22eb67a4490
3777ff897d0699e0649989ad03a4847f6446632ad811814dbd9d8ca003db855b
null
[ "LICENSE" ]
821
2.4
qrisp
0.8.0
Qrisp - A high level language for gate-based quantum computing
<p align="center" width="100%"><img src="https://raw.githubusercontent.com/eclipse-qrisp/Qrisp/main/logo/logo_with_contour.png" width=30%></p> </h1><br> <div align="center"> [![License](https://img.shields.io/badge/License-EPL_2.0-brightgreen.svg)](https://opensource.org/licenses/EPL-2.0) ![PyPI - Version](https://img.shields.io/pypi/v/qrisp?color=brightgreen) [![Discord](https://img.shields.io/discord/1471858163908214870?style=plastic&logo=Discord&label=discord)](https://discord.gg/v5np7DeBaq) [![Pytest](https://github.com/eclipse-qrisp/Qrisp/actions/workflows/qrisp_test.yml/badge.svg)](https://github.com/eclipse-qrisp/Qrisp/actions/workflows/qrisp_test.yml) [![Downloads](https://img.shields.io/pypi/dm/qrisp.svg)](https://pypi.org/project/qrisp/) [![CodeFactor](https://www.codefactor.io/repository/github/eclipse-qrisp/qrisp/badge/main)](https://www.codefactor.io/repository/github/eclipse-qrisp/qrisp/overview/main) [![Paper](https://img.shields.io/badge/DOI-10.1038%2Fs41586--020--2649--2-brightgreen)](https://doi.org/10.48550/arXiv.2406.14792) [![Forks](https://img.shields.io/github/forks/eclipse-qrisp/Qrisp.svg)](https://github.com/eclipse-qrisp/Qrisp/network/members) [![Open Issues](https://img.shields.io/github/issues/eclipse-qrisp/Qrisp.svg)](https://github.com/eclipse-qrisp/Qrisp/issues) [![Stars](https://img.shields.io/github/stars/eclipse-qrisp/Qrisp.svg)](https://github.com/eclipse-qrisp/Qrisp/stargazers) [![Contributors](https://img.shields.io/github/contributors/eclipse-qrisp/Qrisp.svg)](https://github.com/eclipse-qrisp/Qrisp/graphs/contributors) </div> ## About Qrisp is a high-level quantum programming framework that allows for intuitive development of quantum algorithms. It provides a rich set of tools and abstractions to make quantum computing more accessible to developers and researchers. 
By automating many steps one usually encounters when programming a quantum computer, introducing quantum types, and many more features, Qrisp makes quantum programming more user-friendly yet stays performant when it comes to compiling programs to the circuit level. ## Features - Intuitive quantum program design - High-level quantum programming - Efficient quantum algorithm implementation - Extensive documentation and examples ## Installation You can install Qrisp using pip: ```bash pip install qrisp ``` Qrisp has been confirmed to work with Python versions 3.11 & 3.12. Qrisp is compatible with any QASM-capable quantum backend! In particular, it offers convenient interfaces for using IBM, IQM and AQT quantum computers, and any quantum backend provider is invited to reach out for a tight integration! If you want to work with IQM quantum computers as a backend, you need to install additional dependencies using ```bash pip install qrisp[iqm] ``` ## Documentation The full documentation, along with many tutorials and examples, is available under [Qrisp Documentation](https://www.qrisp.eu/). ## Shor's Algorithm with Qrisp Shor's algorithm is among the most famous quantum algorithms, since it provides a provably exponential speed-up for a practically relevant problem: factorizing integers. This is an important application because much of modern cryptography is based on RSA, which heavily relies on integer factorization being insurmountable. Despite this importance, the amount of software that is actually able to compile the algorithm to the circuit level is extremely limited. This is because a key operation within the algorithm (modular in-place multiplication) is difficult to implement and has strong requirements for the underlying compiler. 
These problems highlight how the Qrisp programming model delivers significant advantages to quantum programmers because the quantum part of the algorithm can be expressed within a few lines of code: ```python from qrisp import QuantumFloat, QuantumModulus, h, QFT, control def find_order(a, N): qg = QuantumModulus(N) qg[:] = 1 qpe_res = QuantumFloat(2*qg.size + 1, exponent = -(2*qg.size + 1)) h(qpe_res) for i in range(len(qpe_res)): with control(qpe_res[i]): qg *= a a = (a*a)%N QFT(qpe_res, inv = True) return qpe_res.get_measurement() ``` To find out how this can be used to break encryption, be sure to check the [tutorial](https://qrisp.eu/general/tutorial/Shor.html). Qrisp offers much more than just factoring! More examples, like simulating molecules at the quantum level or how to solve the Travelling Salesman Problem, can be found [here](https://qrisp.eu/general/tutorial/index.html). ## Authors and Citation Qrisp is the work of [many people](https://projects.eclipse.org/projects/technology.qrisp/who). If you have comments, questions or love letters, feel free to reach out to us: raphael.seidel [at] meetiqm.com sebastian.bock [at] fokus.fraunhofer.de nikolay.tcholtchev [at] fokus.fraunhofer.de rene.zander [at] fokus.fraunhofer.de matic.petric [at] fokus.fraunhofer.de If you want to cite Qrisp in your work, please use: ``` @misc{seidel2024qrisp, title={Qrisp: A Framework for Compilable High-Level Programming of Gate-Based Quantum Computers}, author={Raphael Seidel and Sebastian Bock and René Zander and Matic Petrič and Niklas Steinmann and Nikolay Tcholtchev and Manfred Hauswirth}, year={2024}, eprint={2406.14792}, archivePrefix={arXiv}, primaryClass={quant-ph}, url={https://arxiv.org/abs/2406.14792}, } ``` ## License [Eclipse Public License 2.0](https://github.com/fraunhoferfokus/Qrisp/blob/main/LICENSE)
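For context (plain Python, independent of Qrisp): once a routine like `find_order` above has produced the multiplicative order r of a modulo N, the remaining work of Shor's algorithm is purely classical. A minimal sketch of that post-processing, with `factors_from_order` being an illustrative helper name, not part of the Qrisp API:

```python
from math import gcd

def factors_from_order(a, r, N):
    """Recover non-trivial factors of N from the order r of a modulo N (sketch)."""
    if r % 2:            # the order must be even for this to work
        return None
    half = pow(a, r // 2, N)
    if half == N - 1:    # a^(r/2) = -1 (mod N) yields only trivial factors
        return None
    p, q = gcd(half - 1, N), gcd(half + 1, N)
    return (p, q) if p * q == N else None

# The order of 2 modulo 15 is 4 (2^4 = 16 = 1 mod 15) ...
assert pow(2, 4, 15) == 1
# ... and it reveals the factors 3 and 5:
assert factors_from_order(2, 4, 15) == (3, 5)
```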
text/markdown
The Qrisp team
raphael.seidel@fokus.fraunhofer.de
null
null
null
null
[ "Programming Language :: Python :: 3", "License :: OSI Approved :: Eclipse Public License 2.0 (EPL-2.0)", "Operating System :: OS Independent" ]
[]
https://github.com/eclipse-qrisp/Qrisp
null
>=3.11
[]
[]
[]
[ "numpy>=2.0", "sympy<=1.13", "qiskit>=0.44.0", "matplotlib>=3.5.1", "scipy>=1.10.0", "numba", "networkx", "tqdm", "dill", "flask<2.3.0", "waitress", "pyyaml", "requests", "psutil", "jax==0.7.1", "jaxlib==0.7.1", "iqm-client[qiskit]; extra == \"iqm\"" ]
[]
[]
[]
[ "Bug Tracker, https://github.com/eclipse-qrisp/Qrisp/issues" ]
twine/6.0.1 CPython/3.11.10
2026-02-20T10:36:14.014004
qrisp-0.8.0.tar.gz
716,565
66/5f/f6e539c320d456f0fda5cd2d92ace5ad769ad89c3bd1f530039f4728ded6/qrisp-0.8.0.tar.gz
source
sdist
null
false
5e3de243b696a6a6cd7fa373031a71d5
3ae51c568919c231225c5d755efee8a3d3dc9e8eeee50afa9bea21a03dafd0aa
665ff6e539c320d456f0fda5cd2d92ace5ad769ad89c3bd1f530039f4728ded6
null
[]
248
2.3
langchain-azure-ai
1.0.61
An integration package to support Azure AI Foundry capabilities in LangChain/LangGraph ecosystem.
# langchain-azure-ai This package contains the LangChain integration for Azure AI Foundry. To learn more about how to use this package, see the LangChain documentation in [Azure AI Foundry](https://aka.ms/azureai/langchain). ## Installation ```bash pip install -U langchain-azure-ai ``` For using tools, including Azure AI Document Intelligence, Azure AI Text Analytics for Health, or Azure LogicApps, please install the extras `tools`: ```bash pip install -U langchain-azure-ai[tools] ``` For using tracing capabilities with OpenTelemetry, you need to add the extras `opentelemetry`: ```bash pip install -U langchain-azure-ai[opentelemetry] ``` ## Quick Start with langchain-azure-ai The `langchain-azure-ai` package uses the Azure AI Foundry family of SDKs and client libraries for Azure to provide first-class support of Azure AI Foundry capabilities in LangChain and LangGraph. This package includes: * [Azure AI Agent Service](./libs/azure-ai/langchain_azure_ai/agents) * [Azure AI Foundry Models inference](./libs/azure-ai/langchain_azure_ai/chat_models) * [Azure AI Search](./libs/azure-ai/langchain_azure_ai/vectorstores) * [Azure AI Services tools](./libs/azure-ai/langchain_azure_ai/tools) * [Cosmos DB](./libs/azure-ai/langchain_azure_ai/vectorstores) Here's a quick start example to show you how to get started with the Chat Completions model. For more details and tutorials see [Develop with LangChain and LangGraph and models from Azure AI Foundry](https://aka.ms/azureai/langchain). 
### Azure AI Chat Completions Model with Azure OpenAI ```python from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel from langchain_core.messages import HumanMessage, SystemMessage model = AzureAIChatCompletionsModel( endpoint="https://{your-resource-name}.services.ai.azure.com/openai/v1", credential="your-api-key",  # if using Entra ID, use DefaultAzureCredential() instead model="gpt-4o" ) messages = [ SystemMessage( content="Translate the following from English into Italian" ), HumanMessage(content="hi!"), ] model.invoke(messages) ``` ```python AIMessage(content='Ciao!', additional_kwargs={}, response_metadata={'model': 'gpt-4o', 'token_usage': {'input_tokens': 20, 'output_tokens': 3, 'total_tokens': 23}, 'finish_reason': 'stop'}, id='run-0758e7ec-99cd-440b-bfa2-3a1078335133-0', usage_metadata={'input_tokens': 20, 'output_tokens': 3, 'total_tokens': 23}) ``` ### Azure AI Chat Completions Model with DeepSeek-R1 ```python from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel from langchain_core.messages import HumanMessage, SystemMessage model = AzureAIChatCompletionsModel( endpoint="https://{your-resource-name}.services.ai.azure.com/models", credential="your-api-key",  # if using Entra ID, use DefaultAzureCredential() instead model="DeepSeek-R1", ) messages = [ HumanMessage(content="Translate the following from English into Italian: \"hi!\"") ] message_stream = model.stream(messages) print(' '.join(chunk.content for chunk in message_stream)) ``` ```python <think> Okay , the user just sent " hi !" and I need to translate that into Italian . Let me think . " Hi " is an informal greeting , so in Italian , the equivalent would be " C iao !" But wait , there are other options too . Sometimes people use " Sal ve ," which is a bit more neutral , but " C iao " is more common in casual settings . The user probably wants a straightforward translation , so " C iao !" is the safest bet here . 
Let me double -check to make sure there 's no nuance I 'm missing . N ope , " C iao " is definitely the right choice for translating " hi !" in an informal context . I 'll go with that . </think> C iao ! ``` ## Changelog - **1.0.61**: - This release reverts the code to the state of v1.0.5 while updating the version number to 1.0.61. - **1.0.5**: - We fixed an issue with the content type of messages in `AzureAIChatCompletionsModel`. See [PR #245]. - We improved the metadata generated for `AzureAIOpenTelemetryTracer`. See [PR #233]. - **1.0.4**: - We fixed an issue with dependency resolution for `azure-ai-agents` where the incorrect version was picked up. See [PR #221]. - We fixed an issue with `AzureAIOpenTelemetryTracer` where span context was not correctly propagated when called from another service. See [PR #217]. - We fixed an issue in `AzureAIOpenTelemetryTracer` where context was deallocated incorrectly, preventing tools like `langdev` from correctly emitting traces. See [Issue #212]. - We introduced improvements in the order in which environment variables `AZURE_AI_*` are read. - Internal: We improved `AzureAIOpenTelemetryTracer` test coverage. See [PR #239](https://github.com/langchain-ai/langchain-azure/pull/239). - **1.0.2**: - We updated the `AzureAIOpenTelemetryTracer` to create a parent trace for multi-agent scenarios. Previously, you were required to do this manually, which was unnecessary. - **1.0.0**: - We introduced support for LangChain and LangGraph 1.0. - **0.1.8**: - We fixed some issues with `AzureAIOpenTelemetryTracer`, including compliant hierarchy, tool spans under chat, finish reason normalization, and conversation id. See [PR #167] - We fixed an issue with taking image inputs for declarative agents created with the Azure AI Foundry Agents service. - We enhanced tool descriptions to improve tool call accuracy. - **0.1.7**: - **[NEW]**: We introduced LangGraph support for declarative agents created in Azure AI Foundry. 
You can now compose complex graphs in LangGraph and add nodes that take advantage of Azure AI Agent Service. See [`AgentServiceFactory`](./langchain_azure_ai/agents/agent_service.py#L44) - We fixed an issue with the interface of `AzureAIEmbeddingsModel` [#158](https://github.com/langchain-ai/langchain-azure/issues/158). - We improved the signatures of the tools `AzureAIDocumentIntelligenceTool`, `AzureAIImageAnalysisTool`, and `AzureAITextAnalyticsHealthTool` [PR #160](https://github.com/langchain-ai/langchain-azure/pull/160). - **0.1.6**: - **[Breaking change]:** Using the parameter `project_connection_string` to create `AzureAIEmbeddingsModel` and `AzureAIChatCompletionsModel` is no longer supported. Use `project_endpoint` instead. - **[Breaking change]:** Class `AzureAIInferenceTracer` has been removed in favor of `AzureAIOpenTelemetryTracer`, which has better support for OpenTelemetry and the new semantic conventions for GenAI. - Added the following tools to the package: `AzureAIDocumentIntelligenceTool`, `AzureAIImageAnalysisTool`, and `AzureAITextAnalyticsHealthTool`. You can also use `AIServicesToolkit` to have access to all the tools in Azure AI Services. - **0.1.4**: - Bug fix [#91](https://github.com/langchain-ai/langchain-azure/pull/91). - **0.1.3**: - **[Breaking change]:** We renamed the parameter `model_name` in `AzureAIEmbeddingsModel` and `AzureAIChatCompletionsModel` to `model`, which is the parameter expected by the method `langchain.chat_models.init_chat_model`. - We fixed an issue with JSON mode in chat models [#81](https://github.com/langchain-ai/langchain-azure/issues/81). - We fixed the dependencies for NumPy [#70](https://github.com/langchain-ai/langchain-azure/issues/70). - We fixed an issue when tracing Pydantic objects in the inputs [#65](https://github.com/langchain-ai/langchain-azure/issues/65). - We made the `connection_string` parameter optional as suggested in [#65](https://github.com/langchain-ai/langchain-azure/issues/65). 
- **0.1.2**: - Bug fix [#35](https://github.com/langchain-ai/langchain-azure/issues/35). - **0.1.1**: - Adding `AzureCosmosDBNoSqlVectorSearch` and `AzureCosmosDBNoSqlSemanticCache` for vector search and full text search. - Adding `AzureCosmosDBMongoVCoreVectorSearch` and `AzureCosmosDBMongoVCoreSemanticCache` for vector search. - You can now create `AzureAIEmbeddingsModel` and `AzureAIChatCompletionsModel` clients directly from your AI project's connection string using the parameter `project_connection_string`. Your default Azure AI Services connection is used to find the requested model. This requires the `azure-ai-projects` package to be installed. - Support for native LLM structured outputs. Use `with_structured_output(method="json_schema")` to use native structured schema support. Use `with_structured_output(method="json_mode")` to use native JSON output capabilities. By default, LangChain uses `method="function_calling"`, which uses tool-calling capabilities to generate valid structured JSON payloads. This requires `azure-ai-inference >= 1.0.0b7`. - Bug fixes [#18](https://github.com/langchain-ai/langchain-azure/issues/18) and [#31](https://github.com/langchain-ai/langchain-azure/issues/31). - **0.1.0**: - Introduced `AzureAIEmbeddingsModel` for embedding generation and `AzureAIChatCompletionsModel` for chat completions generation using the Azure AI Inference API. This client also supports the GitHub Models endpoint. - Introduced `AzureAIOpenTelemetryTracer` for tracing with OpenTelemetry and Azure Application Insights.
text/markdown
null
null
null
null
MIT License Copyright (c) 2023 LangChain, Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
null
[ "License :: Other/Proprietary License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13" ]
[]
null
null
<4.0,>=3.10.0
[]
[]
[]
[ "aiohttp<4.0,>=3.10", "azure-ai-agents==1.2.0b5", "azure-ai-documentintelligence<2.0.0,>=1.0.2; extra == \"tools\"", "azure-ai-inference[opentelemetry]<2.0,>=1.0.0b9", "azure-ai-projects<2.0,>=1.0", "azure-ai-textanalytics<6.0.0,>=5.3.0; extra == \"tools\"", "azure-ai-vision-imageanalysis<2.0.0,>=1.0.0; extra == \"tools\"", "azure-core<2.0,>=1.32", "azure-cosmos<5.0,>=4.14.0b1", "azure-identity<2.0,>=1.15", "azure-mgmt-logic<11.0.0,>=10.0.0; extra == \"tools\"", "azure-monitor-opentelemetry<2.0,>=1.6; extra == \"opentelemetry\"", "azure-search-documents<12.0,>=11.4", "langchain<2.0.0,>=1.0.0", "langchain-openai<2.0.0,>=1.0.0", "numpy>=1.26.2; python_version < \"3.13\"", "numpy>=2.1.0; python_version >= \"3.13\"", "opentelemetry-api>=1.37; extra == \"opentelemetry\"", "opentelemetry-instrumentation>=0.58b0; extra == \"opentelemetry\"", "opentelemetry-instrumentation-threading>=0.58b0; extra == \"opentelemetry\"", "opentelemetry-semantic-conventions>=0.58b0; extra == \"opentelemetry\"", "opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.2; extra == \"opentelemetry\"", "six<2.0.0,>=1.17.0" ]
[]
[]
[]
[ "Repository, https://github.com/langchain-ai/langchain-azure", "Release Notes, https://github.com/langchain-ai/langchain-azure/releases", "Source Code, https://github.com/langchain-ai/langchain-azure/tree/main/libs/azure-ai" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:36:06.891310
langchain_azure_ai-1.0.61.tar.gz
84,952
b3/30/475ccfe5e44b13f65b33ab63f66e507bb7390e697945566a8974dc56090b/langchain_azure_ai-1.0.61.tar.gz
source
sdist
null
false
3e46c4a0b275247e6a1f87c0441c3ed9
c03c9fa6fbea75ce18c09f1a15a86233dd5b363272704c4434bb8c7331d06b35
b330475ccfe5e44b13f65b33ab63f66e507bb7390e697945566a8974dc56090b
null
[]
1,219
2.4
cronico
0.9.17
Another YAML-based scheduler
# Cronico Cronico is a lightweight, YAML-based task scheduler for Unix-like systems. It lets you define recurring jobs with flexible cron expressions — supporting traditional minute-based syntax, extended formats with seconds, and common shorthand aliases (@daily, @hourly, etc.). Tasks can include: - Retry policies with configurable attempts. - Timeouts to kill long-running processes. - Environment injection from .env files or inline variables. - Working directory control per task. - Streaming or buffered logs for stdout/stderr. Cronico is designed to run as a long-lived daemon (via systemd or similar) and can reload its configuration on SIGHUP without restarting the process. ```yaml tasks: example_task: description: | Classic cron expression: every 5 minutes cron: "*/5 * * * *" command: "echo 'Hello, World!'" retry_on_error: true max_attempts: 3 env_file: ".env" timeout: 60 # seconds working_dir: "/path/to/dir" environment: MY_VAR: "value" custom_env: cron: "*/5 * * * *" environment: GREETING: "Hola" command: | echo "$GREETING from Bash at $(date)" every_minute_at_second_10: description: | Extended with seconds: every minute, at the 10th second cron: minute: "*" hour: "*" day: "*" month: "*" weekday: "*" second: 10 command: "echo 'Run at second 10 of every minute'" every_30_seconds: description: | Classic with seconds: every 30 seconds cron: "*/1 * * * * 0,30" command: "echo 'This runs at second 0 and 30 of each minute'" daily_with_seconds: description: | Daily at 03:00:15 cron: minute: 0 hour: 3 day: "*" month: "*" weekday: "*" second: 15 command: "echo 'Daily at 03:00:15'" shorthand: description: | Shorthand: daily, at 00:00 cron: "@daily" command: | echo "Supported aliases:" echo "- @yearly: 0 0 1 1 *" echo "- @annually: 0 0 1 1 *" echo "- @monthly: 0 0 1 * *" echo "- @weekly: 0 0 * * 0" echo "- @daily: 0 0 * * *" echo "- @midnight: 0 0 * * *" echo "- @hourly: 0 * * * *" with_shebang: description: | You can use a shebang to specify the interpreter to use. 
cron: "*/10 * * * *" command: | #!/usr/bin/env python3 import datetime print("Hello from Python at", datetime.datetime.now()) another_shebang_example: description: | Another one... cron: "*/10 * * * *" command: | #!/usr/bin/env perl use strict; use warnings; my ($sec,$min,$hour) = localtime(); print "Hello from Perl at $hour:$min:$sec\n"; and_another_shebang_example: description: | I think you get the idea. cron: "*/10 * * * *" command: | #!/usr/bin/env perl use strict; use warnings; my ($sec,$min,$hour) = localtime(); print "Hello from Perl at $hour:$min:$sec\n"; ```
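The extended format above adds a seconds field to classic cron syntax, and fields accept steps (`*/5`), lists (`0,30`), wildcards, and literals. As a rough illustration of how such a field maps to concrete values (this is a sketch, not Cronico's actual parser; the function name is made up), a field can be expanded into its matching set:

```python
def expand_cron_field(field, lo, hi):
    """Expand one cron field (e.g. "*", "*/5", "0,30", "10") into a sorted
    list of matching integer values in the inclusive range [lo, hi].

    Illustrative sketch only -- not Cronico's real implementation.
    """
    values = set()
    for part in str(field).split(","):
        if part.startswith("*/"):            # step over the full range
            values.update(range(lo, hi + 1, int(part[2:])))
        elif part == "*":                    # any value in range
            values.update(range(lo, hi + 1))
        else:                                # a single literal value
            values.add(int(part))
    return sorted(values)

# Seconds field "0,30" from the every_30_seconds example above:
print(expand_cron_field("0,30", 0, 59))   # -> [0, 30]
# A minute field "*/15":
print(expand_cron_field("*/15", 0, 59))   # -> [0, 15, 30, 45]
```

A scheduler then fires a task whenever the current second, minute, hour, and so on all fall in their fields' expanded sets.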
text/markdown
null
Luis Medel <luis@luismedel.com>
null
null
MIT
null
[ "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent" ]
[]
null
null
>=3.8
[]
[]
[]
[ "PyYAML==6.0.2", "croniter==6.0.0", "python-dotenv==1.1.1", "watchdog==6.0.0", "mypy==1.18.1; extra == \"dev\"", "ruff==0.13.0; extra == \"dev\"", "types-PyYAML==6.0.12.20250822; extra == \"dev\"", "types-croniter==6.0.0.20250809; extra == \"dev\"", "build; extra == \"dev\"", "twine; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/luismedel/cronico", "Issues, https://github.com/luismedel/cronico/issues" ]
twine/6.2.0 CPython/3.12.12
2026-02-20T10:35:09.701119
cronico-0.9.17.tar.gz
9,667
71/a4/c01c0a008cd8184f82d496f4f494fd246a62f5c21aa45728fc521c3cb1c2/cronico-0.9.17.tar.gz
source
sdist
null
false
b3b5aac7d21d4bfac8311cdd9c5e9255
280902a8081083ad448f8c734f9eeb8a62b82a30b12c6f26335c72e3e4ac6d5c
71a4c01c0a008cd8184f82d496f4f494fd246a62f5c21aa45728fc521c3cb1c2
null
[ "LICENSE" ]
235
2.4
nebulascrape
0.0.1
NebulaScrape — Ultra-powerful HTTP scraping library with smart bypass, async support, and modular transport.
<div align="center"> # NebulaScrape <img src="https://img.shields.io/badge/python-3.8%20%7C%203.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-blue?style=for-the-badge&logo=python&logoColor=white" alt="Python Versions"> <img src="https://img.shields.io/badge/version-0.0.1-informational?style=for-the-badge" alt="Version"> <img src="https://img.shields.io/badge/license-MIT-green?style=for-the-badge" alt="License"> <img src="https://img.shields.io/badge/asyncio-supported-blueviolet?style=for-the-badge&logo=python" alt="Async"> <img src="https://img.shields.io/badge/HTTP%2F2-supported-orange?style=for-the-badge" alt="HTTP2"> <img src="https://img.shields.io/badge/TLS-fingerprint%20spoof-red?style=for-the-badge" alt="TLS"> <img src="https://img.shields.io/badge/WAF-bypass%20engine-darkred?style=for-the-badge" alt="WAF"> <img src="https://img.shields.io/badge/maintained%20by-6x--u-black?style=for-the-badge&logo=github" alt="GitHub"> **NebulaScrape** is a production-grade Python HTTP scraping library built for the modern web. It combines a modular transport system, intelligent session analysis, browser-realistic fingerprinting, async support, and a powerful WAF bypass engine into a single clean API. 
[Installation](#installation) &nbsp;|&nbsp; [Quick Start](#quick-start) &nbsp;|&nbsp; [Profiles](#fingerprint-profiles) &nbsp;|&nbsp; [Retry Engine](#smart-retry-engine) &nbsp;|&nbsp; [Session Intel](#session-intelligence-layer) &nbsp;|&nbsp; [Transports](#modular-transport-system) &nbsp;|&nbsp; [Async](#async-support) &nbsp;|&nbsp; [Metrics](#built-in-metrics) &nbsp;|&nbsp; [Plugins](#plugin-system) &nbsp;|&nbsp; [API Reference](#api-reference) </div> --- ## Table of Contents - [Overview](#overview) - [Installation](#installation) - [Quick Start](#quick-start) - [Fingerprint Profiles](#fingerprint-profiles) - [Smart Retry Engine](#smart-retry-engine) - [Session Intelligence Layer](#session-intelligence-layer) - [Modular Transport System](#modular-transport-system) - [Async Support](#async-support) - [Built-in Metrics](#built-in-metrics) - [Plugin System](#plugin-system) - [WAF Bypass Engine](#waf-bypass-engine) - [Advanced Usage](#advanced-usage) - [API Reference](#api-reference) - [Configuration Reference](#configuration-reference) --- ## Overview NebulaScrape was designed to solve the hardest problem in modern web scraping: getting a real HTTP response from a server that actively tries to block automated clients. Most scraping libraries send requests that are trivially identifiable as bots. They have wrong TLS fingerprints, wrong header order, no sec-ch-ua fields, no browser timing patterns, and no ability to recover intelligently from blocks. NebulaScrape was built from the ground up to solve all of these problems at once. ### What Makes NebulaScrape Different **TLS fingerprint spoofing.** Every modern WAF inspects the TLS ClientHello. NebulaScrape sends the exact cipher suite list, ECDH curve, and TLS extension order that real Chrome 120 sends. A plain requests session sends a fingerprint that gets flagged immediately. **Ordered, realistic HTTP headers.** Browsers send headers in a specific order that WAFs check. 
NebulaScrape uses OrderedDict-based profiles that match real browser traffic captures, including the correct sec-ch-ua, sec-ch-ua-mobile, sec-ch-ua-platform, and Sec-Fetch-* fields. **Smart retry decisions.** When a request fails with 403, 429, or 503, NebulaScrape does not blindly retry. It analyzes the response, reads the Retry-After header, calculates exponential backoff with jitter, decides whether to rotate the session fingerprint or rebuild the connection, and escalates to a more capable transport if needed. **Session intelligence.** Every response is analyzed for WAF vendor signatures. The library tells you whether you hit a Cloudflare IUAM page, a DataDome challenge, a PerimeterX block, rate limiting, or a redirect loop, and attaches a risk score from 0 to 100 to each response. **Transport escalation.** Under high-protection targets, NebulaScrape automatically escalates from standard HTTP/1.1 with TLS spoofing, to HTTP/2 via httpx, to full browser impersonation via curl_cffi. No code change required. **Async-first.** NebulaScrape ships with a native asyncio client that supports all the same features, including retry logic, intelligence analysis, metrics, and plugins. --- ## Installation **Minimum requirements:** Python 3.8+ Install the core library: ```bash pip install nebulascrape ``` Install with headless browser impersonation support (required for Cloudflare Turnstile, Managed Challenge, and the most aggressive WAF protections): ```bash pip install "nebulascrape[headless]" ``` Install from source: ```bash git clone https://github.com/6x-u/nebulascrape.git cd nebulascrape pip install -e . 
``` ### Dependencies | Package | Purpose | Required | |---|---|---| | `requests` >= 2.9.2 | Base HTTP transport | Yes | | `requests_toolbelt` >= 0.9.1 | Request debugging | Yes | | `pyparsing` >= 2.4.7 | JS challenge parsing | Yes | | `httpx[http2]` >= 0.24.0 | HTTP/2 transport | Yes | | `h2` >= 4.0.0 | HTTP/2 protocol | Yes | | `aiohttp` >= 3.8.0 | Async fallback transport | Yes | | `curl_cffi` >= 0.5.0 | Browser impersonation | Optional (headless) | | `brotli` >= 1.0.9 | Brotli decompression | Optional | --- ## Quick Start The simplest way to use NebulaScrape is through the `Client` class. It handles everything internally. ```python from nebulascrape import Client client = Client(profile="chrome_windows", auto_retry=True) response = client.get("https://target.com") print(response.status_code) print(response.meta["challenge_type"]) print(response.meta["risk_score"]) print(response.meta["metrics"]["latency_ms"]) ``` For full session control, use `NebulaScraper` directly: ```python from nebulascrape import NebulaScraper scraper = NebulaScraper( profile="chrome_windows", auto_retry=True, max_retries=5, mode="auto", interpreter="native", debug=False, ) response = scraper.get("https://target.com") print(response.meta) ``` For token extraction: ```python from nebulascrape import get_tokens tokens, user_agent = get_tokens("https://target.com") print(tokens) print(user_agent) ``` --- ## Fingerprint Profiles NebulaScrape ships with four pre-built browser fingerprint profiles. Each profile contains a real User-Agent string, browser-realistic headers in the correct order, a matching TLS cipher suite list, and the correct ECDH curve. 
| Profile Name | Browser | Platform | sec-ch-ua-mobile | |---|---|---|---| | `chrome_windows` | Chrome 120 | Windows 10 x64 | false | | `chrome_linux` | Chrome 120 | Linux x86_64 | false | | `firefox` | Firefox 121 | Windows 10 | N/A | | `mobile` | Chrome 120 | Android 13 | true | ### Using a Profile ```python from nebulascrape import Client # Use any built-in profile client = Client(profile="chrome_linux") client = Client(profile="firefox") client = Client(profile="mobile") ``` ### Inspecting a Profile ```python from nebulascrape.fingerprints import get_profile, available_profiles print(available_profiles()) # ['chrome_windows', 'chrome_linux', 'firefox', 'mobile'] profile = get_profile("chrome_windows") print(profile["user_agent"]) print(profile["headers"]) print(profile["cipher_suite"]) ``` ### Why Header Order Matters A standard requests session sends headers in an arbitrary order. Real browsers always send headers in a fixed, browser-specific order. WAFs such as DataDome and Kasada inspect header order as a primary bot signal. NebulaScrape uses `OrderedDict` to enforce the correct order for every profile: ``` User-Agent Accept Accept-Language Accept-Encoding sec-ch-ua sec-ch-ua-mobile sec-ch-ua-platform Upgrade-Insecure-Requests Sec-Fetch-Dest Sec-Fetch-Mode Sec-Fetch-Site Sec-Fetch-User ``` This matches the exact order captured from a real Chrome 120 browser session. --- ## Smart Retry Engine The `SmartRetryEngine` replaces naive retry loops with a response-aware retry decision system. It analyzes each failed response and decides the appropriate action based on the error type, attempt count, and session intelligence. 
### How It Works Every response is passed through `analyze_response()`, which returns a `RetryDecision` containing: - `action` — what to do next (pass, wait and retry, rotate session, rebuild connection, switch transport, or abort) - `backoff_seconds` — how long to wait before retrying - `rotate_session` — whether to change the User-Agent and fingerprint - `rebuild_connection` — whether to tear down and rebuild the connection pool - `switch_transport` — whether to escalate to a higher-tier transport ### Per-Status Logic **HTTP 403 Forbidden** Indicates fingerprint detection or IP block. The engine rotates the browser fingerprint and session identity, waits a random jitter interval between 2 and 8 seconds to simulate human behavior, and rebuilds the connection on the third attempt to clear any connection-level state the server may be tracking. **HTTP 429 Too Many Requests** Indicates rate limiting. The engine first reads the `Retry-After` response header and uses that value if present, adding a small random jitter. If no header is present, it calculates exponential backoff: `1.5 * 2^attempt` seconds, capped at 120 seconds. Session rotation activates from the second attempt onward. **HTTP 503 Service Unavailable** Indicates the connection itself may be flagged. The engine rebuilds the connection pool immediately and escalates to a higher transport tier from the second attempt onward. **HTTP 407, 408, 502, 504, 52x** Treated as transient infrastructure errors. Exponential backoff applies, capped at 90 seconds. **Intelligence-driven retry** If `SessionIntelligence` detects a challenge or high-risk response (even on a 200), the retry engine uses the intel result to decide whether to rotate, switch transport, or escalate. 
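The 429 policy described above (honor `Retry-After` when present with a little jitter, otherwise exponential backoff `1.5 * 2^attempt` capped at 120 seconds) can be sketched in a few lines. The function name and the exact jitter range are illustrative assumptions, not NebulaScrape's actual internals:

```python
import random

def backoff_429(attempt, retry_after=None, cap=120.0):
    """Compute the wait (in seconds) before retrying a 429 response.

    Prefers the server's Retry-After value, plus a small random jitter;
    otherwise falls back to exponential backoff 1.5 * 2**attempt, capped.
    Sketch of the policy described above -- not the library's real code.
    """
    if retry_after is not None:
        # Jitter range is an illustrative assumption
        return float(retry_after) + random.uniform(0.1, 1.0)
    return min(1.5 * (2 ** attempt), cap)

# Without Retry-After, the delay doubles each attempt until the cap:
for attempt in range(8):
    print(attempt, backoff_429(attempt))
# 1.5, 3.0, 6.0, 12.0, 24.0, 48.0, 96.0, then capped at 120.0
```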
### Configuration ```python from nebulascrape import Client client = Client( auto_retry=True, max_retries=7, # default is 5 ) ``` ```python from nebulascrape.retry_engine import SmartRetryEngine engine = SmartRetryEngine(max_retries=10, base_backoff=2.0) ``` --- ## Session Intelligence Layer The `SessionIntelligence` class analyzes every response and classifies the WAF vendor and challenge type. This information is attached to `response.meta` on every request. ### Challenge Types | Challenge Type | Description | |---|---| | `none` | Clean response, no challenge detected | | `cf_iuam` | Cloudflare I'm Under Attack Mode (v1 JS challenge) | | `cf_captcha` | Cloudflare hCaptcha / reCaptcha challenge | | `cf_turnstile` | Cloudflare Turnstile (v3 challenge) | | `cf_managed` | Cloudflare Managed Challenge | | `cf_block_1020` | Cloudflare firewall rule block (error 1020) | | `datadome` | DataDome bot detection challenge | | `perimeterx` | PerimeterX / HUMAN Security block | | `kasada` | Kasada protection challenge | | `akamai` | Akamai Bot Manager challenge | | `imperva` | Imperva / Incapsula protection | | `shape` | F5 Shape Security protection | | `rate_limited` | Generic rate limiting (429 or Retry-After header) | | `js_required` | Page requires JavaScript execution | | `redirect_loop` | Detected circular redirect chain | ### Risk Score The risk score is an integer from 0 to 100 representing how likely the response represents a blocking or detection event: | Score Range | Interpretation | |---|---| | 0 | Clean response | | 1-30 | WAF present but not triggered | | 31-60 | Rate limiting or soft block | | 61-80 | Active JS or captcha challenge | | 81-100 | Hard block, firewall, or advanced WAF challenge | ### Reading the Meta ```python from nebulascrape import Client client = Client(profile="chrome_windows", auto_retry=True) response = client.get("https://target.com") print(response.meta["challenge_type"]) # "cf_iuam" / "datadome" / "none" / ... 
print(response.meta["risk_score"]) # 0 - 100 print(response.meta["waf_vendor"]) # "cloudflare" / "akamai" / "none" / ... print(response.meta["retry_recommended"]) print(response.meta["rotate_session"]) print(response.meta["details"]) # {"retry_after": None, "cf_ray": "...", "status_code": 200} ``` ### Direct Usage ```python from nebulascrape.session_intel import SessionIntelligence import requests resp = requests.get("https://some-protected-site.com") intel = SessionIntelligence() result = intel.analyze(resp) print(result.challenge_type) print(result.risk_score) print(result.waf_vendor) ``` --- ## Modular Transport System NebulaScrape uses a three-tier transport system. Each tier provides a higher level of browser mimicry. The `TransportManager` can automatically escalate through tiers when lower tiers accumulate failures. ### Transport Tiers **Tier 1 — TransportHTTP** Standard HTTPS over HTTP/1.1 with TLS fingerprint spoofing. Uses a custom `HTTPAdapter` that builds an SSL context with the exact cipher suite list, ECDH curve, and TLS version range from the selected fingerprint profile. This matches the JA3/JA4 fingerprint of real Chrome or Firefox and passes most WAF TLS fingerprint checks. **Tier 2 — TransportHTTP2** HTTP/2 transport backed by `httpx`. Sends the correct SETTINGS frame, WINDOW_UPDATE values, and pseudo-header order (`:method :authority :scheme :path`) that match real Chrome HTTP/2 fingerprints. Many sites block HTTP/1.1 clients that cannot negotiate HTTP/2. **Tier 3 — TransportHeadless** Full browser impersonation using `curl_cffi`. This sends traffic that is byte-for-byte indistinguishable from the target browser at the TLS and HTTP/2 layers using libcurl compiled with BoringSSL. Used as a last resort for Cloudflare Turnstile, Managed Challenge, Kasada, and similar advanced protections. 
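The tier-escalation behavior described above (lower tiers accumulate failures until the manager moves up a tier) can be sketched as a small state machine. The class below, its failure threshold, and its method names are illustrative assumptions, not the actual `TransportManager` implementation:

```python
class EscalationSketch:
    """Track failures per transport tier and escalate after a threshold.

    Illustrative sketch of tiered escalation -- the real TransportManager's
    API, thresholds, and reset behavior may differ.
    """
    TIERS = ["http1", "http2", "headless"]

    def __init__(self, failure_threshold=3):
        self.failure_threshold = failure_threshold
        self.tier_index = 0
        self.failures = 0

    @property
    def current(self):
        return self.TIERS[self.tier_index]

    def record_failure(self):
        """Count a failure; move up one tier once the threshold is hit."""
        self.failures += 1
        if (self.failures >= self.failure_threshold
                and self.tier_index < len(self.TIERS) - 1):
            self.tier_index += 1
            self.failures = 0     # fresh count for the new tier

    def record_success(self):
        self.failures = 0         # success resets the failure count

mgr = EscalationSketch()
for _ in range(3):
    mgr.record_failure()
print(mgr.current)  # -> http2
```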
### Modes ```python from nebulascrape import Client # Auto: starts at HTTP1, escalates to HTTP2, then Headless on repeated failures client = Client(mode="auto") # Force a specific transport client = Client(mode="http1") client = Client(mode="http2") client = Client(mode="headless") ``` ### Manual Transport Control ```python from nebulascrape.transports import TransportManager, TransportHTTP, TransportHTTP2, TransportHeadless manager = TransportManager(profile_name="chrome_windows", mode="auto") manager.mount_on(scraper_session) manager.escalate(scraper_session) # manually escalate one tier manager.rebuild(scraper_session) # rebuild the current transport ``` --- ## Async Support NebulaScrape provides a native asyncio client through `AsyncNebulaScraper` (also exported as `AsyncClient`). It supports the same profile system, retry engine, session intelligence, metrics, and plugin hooks as the synchronous client. The async client uses `httpx.AsyncClient` with HTTP/2 enabled as its primary backend, and falls back to `aiohttp` if httpx is not available. 
### Basic Async Usage ```python import asyncio from nebulascrape import AsyncClient async def main(): client = AsyncClient(profile="chrome_windows", auto_retry=True) response = await client.get("https://target.com") print(response.status_code) print(response.meta) await client.close() asyncio.run(main()) ``` ### Context Manager ```python import asyncio from nebulascrape import AsyncClient async def main(): async with AsyncClient(profile="chrome_linux", auto_retry=True, max_retries=5) as client: r1 = await client.get("https://httpbin.org/get") r2 = await client.post("https://httpbin.org/post", json={"key": "value"}) print(r1.status_code, r2.status_code) asyncio.run(main()) ``` ### Concurrent Requests ```python import asyncio from nebulascrape import AsyncClient async def fetch(client, url): r = await client.get(url) return r.status_code, r.meta["risk_score"] async def main(): async with AsyncClient(profile="chrome_windows") as client: urls = [ "https://httpbin.org/get", "https://httpbin.org/headers", "https://httpbin.org/ip", ] results = await asyncio.gather(*[fetch(client, u) for u in urls]) for status, risk in results: print(f"Status: {status} Risk: {risk}") asyncio.run(main()) ``` --- ## Built-in Metrics Every response returned by NebulaScrape contains a `meta["metrics"]` dictionary with timing and retry information collected during the request lifecycle. 
### Per-Request Metrics | Field | Type | Description | |---|---|---| | `latency_ms` | float | Total request duration in milliseconds | | `tls_handshake_ms` | float | Approximate TLS handshake time in milliseconds | | `retry_count` | int | Number of retries made for this request | | `redirect_depth` | int | Number of redirects followed | | `transport_used` | str | Which transport tier was active (`http1`, `http2`, `async_http2`) | ### Reading Metrics ```python from nebulascrape import Client client = Client(profile="chrome_windows", auto_retry=True) response = client.get("https://httpbin.org/get") m = response.meta["metrics"] print(f"Latency: {m['latency_ms']} ms") print(f"Handshake: {m['tls_handshake_ms']} ms") print(f"Retries: {m['retry_count']}") print(f"Redirects: {m['redirect_depth']}") print(f"Transport: {m['transport_used']}") ``` ### Session-Level Aggregate Metrics ```python from nebulascrape import Client client = Client(profile="chrome_windows") for url in ["https://httpbin.org/get", "https://httpbin.org/headers"]: client.get(url) stats = client.metrics print(f"Total requests: {stats['total_requests']}") print(f"Average latency: {stats['avg_latency_ms']} ms") print(f"Max latency: {stats['max_latency_ms']} ms") print(f"Total retries: {stats['total_retries']}") print(f"Challenges solved: {stats['challenges_solved']}") ``` --- ## Plugin System NebulaScrape includes a plugin registry that allows you to attach custom behavior to the request lifecycle without modifying the core library. All plugins inherit from `BasePlugin` and can hook into pre-request, post-request, challenge detection, and retry events. ### Built-in Plugins **RateLimitPlugin** Adds adaptive pre-request delays based on request rate. Detects burst patterns and automatically increases delays. Respects Retry-After headers on 429 responses. 
```python from nebulascrape import Client from nebulascrape.plugins.rate_limit_handler import RateLimitPlugin client = Client(profile="chrome_windows") client.register_plugin(RateLimitPlugin( min_delay=0.3, max_delay=2.0, burst_threshold=10, )) ``` **HeaderOptimizerPlugin** Ensures browser-realistic headers are applied to every request, merging them with any user-supplied headers while preserving the correct order. Adjusts Sec-Fetch headers automatically for POST requests. ```python from nebulascrape import Client from nebulascrape.plugins.header_optimizer import HeaderOptimizerPlugin client = Client(profile="chrome_windows") client.register_plugin(HeaderOptimizerPlugin(profile_name="chrome_windows")) ``` **ProxyManagerPlugin** Manages a pool of proxy servers with automatic rotation on failure. Tracks per-proxy failure counts and rotates after two consecutive failures on the same proxy. ```python from nebulascrape import Client from nebulascrape.plugins.proxy_manager import ProxyManagerPlugin proxies = [ "http://user:pass@proxy1:8080", "http://user:pass@proxy2:8080", "http://user:pass@proxy3:8080", ] client = Client(profile="chrome_windows") client.register_plugin(ProxyManagerPlugin( proxies=proxies, rotate_on_fail=True, rotate_on_status=[403, 429, 503], )) ``` ### Writing a Custom Plugin ```python from nebulascrape.plugins import BasePlugin from nebulascrape import Client class LoggingPlugin(BasePlugin): name = "logging_plugin" priority = 5 # lower number = runs first def on_pre_request(self, scraper, method, url, kwargs): print(f"REQUEST {method} {url}") return kwargs def on_post_request(self, scraper, response, kwargs): print(f"RESPONSE {response.status_code} - risk={response.meta.get('risk_score', 'n/a')}") return response def on_retry(self, scraper, attempt, decision): print(f"RETRY {attempt} - reason: {decision.reason} - waiting {decision.backoff_seconds:.1f}s") client = Client(profile="chrome_windows", auto_retry=True) client.register_plugin(LoggingPlugin()) 
response = client.get("https://httpbin.org/get") ``` ### Plugin Hook Reference | Hook | When it runs | Return value | |---|---|---| | `on_pre_request(scraper, method, url, kwargs)` | Before every request | Modified kwargs dict | | `on_post_request(scraper, response, kwargs)` | After every response | response object | | `on_challenge_detected(scraper, response, intel_result)` | When a challenge is found | bool | | `on_retry(scraper, attempt, decision)` | Before each retry sleep | None | --- ## WAF Bypass Engine NebulaScrape's bypass capabilities are integrated across multiple layers of the library. There is no single "bypass" function. Instead, bypass is the result of the fingerprint, transport, intelligence, and retry systems working together. ### Cloudflare **I'm Under Attack Mode (v1)** Detected by inspecting the response body for the characteristic jsch trace image and challenge form. The library extracts the challenge parameters, waits a browser-realistic delay (parsed from the page's own JavaScript, with jitter added), solves the JavaScript challenge using the native interpreter, submits the solution as a POST request, and follows the redirect to retrieve the real page. The `cf_clearance` cookie is then retained in the session for future requests. **Turnstile** Detected by looking for `cf-turnstile` or `challenges.cloudflare.com/turnstile` in the response. When this challenge is detected, the library raises `TurnstileChallengeError` and recommends using `TransportHeadless` with `curl_cffi`, which passes the Turnstile check at the TLS and HTTP/2 fingerprint layer without requiring a browser. **Managed Challenge and v2** Detected by inspecting the CDN CGI orchestration endpoint pattern. Escalation to the headless transport is recommended. **Cloudflare Firewall 1020** Detected and raised as `CloudflareCode1020`. This is an IP-level block that requires a proxy rotation. 
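When these errors surface, the recommended reactions (proxy rotation for error 1020, transport escalation for Turnstile) can be wired together in user code. A minimal sketch: the two exception classes are redefined as stand-ins here to keep the snippet self-contained, but in real use they would be imported from the library.

```python
# Stand-ins for the library's exception classes, so this sketch is self-contained.
# In real code, import them from nebulascrape instead of redefining them.
class CloudflareCode1020(Exception):
    """IP-level Cloudflare block (error 1020)."""

class TurnstileChallengeError(Exception):
    """Turnstile challenge detected."""

def fetch_with_rotation(fetch, url, proxies):
    """Try `fetch(url, proxy)` with each proxy in turn; rotate on IP blocks.

    Turnstile errors propagate to the caller, which should escalate to the
    headless transport as recommended above.
    """
    last_error = None
    for proxy in proxies:
        try:
            return fetch(url, proxy)
        except CloudflareCode1020 as exc:
            last_error = exc  # IP blocked: rotate to the next proxy
    raise last_error if last_error else RuntimeError("no proxies supplied")
```

Here `fetch` stands in for a call like `client.get(url, proxies=...)`; the rotation policy mirrors what `ProxyManagerPlugin` automates.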
### Multi-WAF Detection The `SessionIntelligence` layer detects the following vendors using header and body signature matching: | WAF | Detection Method | |---|---| | Cloudflare | `Server: cloudflare` header + body patterns | | DataDome | `dd_sitekey`, `datadome.co` cookie domains | | PerimeterX | `_pxdk` cookie, `PerimeterX` body references | | Kasada | `kasada`, `kpsdk` body references | | Akamai | `_abck`, `ak_bmsc` cookies, sensor_data | | Imperva | `incap_ses_`, `visid_incap_` cookies | | Shape Security | `shape.io`, `x-shape-` headers | ### TLS Fingerprint Spoofing Python's standard `ssl` module produces a TLS ClientHello whose JA3 fingerprint is trivially identifiable as coming from a non-browser client. NebulaScrape replaces the default SSL context with one that: - Sets the cipher suite list to match Chrome 120's exact order - Sets the ECDH curve to `prime256v1` - Sets TLS minimum version to TLS 1.2 and maximum to TLS 1.3 - Preserves the correct TLS extension set This produces a JA3 fingerprint that matches a real Chrome browser. 
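The bullet points above map directly onto Python's standard `ssl` module. A minimal sketch, with an illustrative cipher list (not Chrome 120's complete order):

```python
import ssl

def make_chrome_like_context() -> ssl.SSLContext:
    # Start from secure client defaults, then tighten to a browser-like shape.
    ctx = ssl.create_default_context()
    # Pin the TLS version window described above.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.maximum_version = ssl.TLSVersion.TLSv1_3
    # Restrict key exchange to the P-256 curve.
    ctx.set_ecdh_curve("prime256v1")
    # Reorder TLS 1.2 cipher suites (an illustrative subset only).
    ctx.set_ciphers(
        "ECDHE-ECDSA-AES128-GCM-SHA256:"
        "ECDHE-RSA-AES128-GCM-SHA256:"
        "ECDHE-ECDSA-AES256-GCM-SHA384:"
        "ECDHE-RSA-AES256-GCM-SHA384:"
        "ECDHE-ECDSA-CHACHA20-POLY1305:"
        "ECDHE-RSA-CHACHA20-POLY1305"
    )
    return ctx
```

Note that the TLS extension set and its ordering, the other half of a JA3 hash, cannot be controlled from the `ssl` module alone; that is part of why the higher transport tiers delegate to `httpx` or `curl_cffi`.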
--- ## Advanced Usage ### All Options Together ```python from nebulascrape import Client from nebulascrape.plugins.rate_limit_handler import RateLimitPlugin from nebulascrape.plugins.proxy_manager import ProxyManagerPlugin from nebulascrape.plugins.header_optimizer import HeaderOptimizerPlugin client = Client( profile="chrome_windows", auto_retry=True, max_retries=7, mode="auto", interpreter="native", debug=False, ) client.register_plugin(RateLimitPlugin(min_delay=0.5, max_delay=3.0)) client.register_plugin(HeaderOptimizerPlugin(profile_name="chrome_windows")) client.register_plugin(ProxyManagerPlugin(proxies=["http://proxy1:8080"])) response = client.get("https://target.com", timeout=30) print("Status: ", response.status_code) print("Challenge: ", response.meta["challenge_type"]) print("Risk: ", response.meta["risk_score"]) print("WAF: ", response.meta["waf_vendor"]) print("Latency: ", response.meta["metrics"]["latency_ms"], "ms") print("Retries: ", response.meta["metrics"]["retry_count"]) ``` ### Using the Low-Level NebulaScraper ```python from nebulascrape import NebulaScraper scraper = NebulaScraper( browser={"browser": "chrome", "platform": "windows", "desktop": True}, auto_retry=True, max_retries=5, mode="auto", captcha={"provider": "2captcha", "api_key": "YOUR_KEY"}, solveDepth=3, doubleDown=True, delay=None, ) response = scraper.get("https://target.com") cookies = response.cookies tokens = scraper.cookies.get("cf_clearance") ``` ### Passing Cookies or Proxies ```python from nebulascrape import Client client = Client(profile="chrome_windows") # Proxies response = client.get("https://target.com", proxies={ "http": "http://proxy:8080", "https": "http://proxy:8080", }) # Custom cookies response = client.get("https://target.com", cookies={ "session_id": "abc123", }) # Custom headers (merged with profile headers) response = client.get("https://target.com", headers={ "Referer": "https://google.com", "X-Custom-Header": "value", }) ``` ### Integrating with Existing 
Sessions ```python import requests from nebulascrape import NebulaScraper existing_session = requests.Session() existing_session.headers.update({"Authorization": "Bearer token123"}) scraper = NebulaScraper.create_scraper( sess=existing_session, profile="chrome_linux", auto_retry=True, ) response = scraper.get("https://api.target.com/data") ``` ### Captcha Integration ```python from nebulascrape import Client client = Client( profile="chrome_windows", captcha={ "provider": "2captcha", "api_key": "YOUR_2CAPTCHA_KEY", } ) response = client.get("https://cloudflare-captcha-site.com") ``` Supported captcha providers: `2captcha`, `anticaptcha`, `capmonster`, `capsolver`, `9kw`, `deathbycaptcha`. ### Getting Cloudflare Tokens ```python from nebulascrape import get_tokens, get_cookie_string tokens, user_agent = get_tokens("https://cloudflare-protected-site.com") print("cf_clearance:", tokens["cf_clearance"]) print("User-Agent: ", user_agent) cookie_string, user_agent = get_cookie_string("https://cloudflare-protected-site.com") print("Cookie:", cookie_string) ``` --- ## API Reference ### `Client` ``` Client( profile="chrome_windows", auto_retry=True, max_retries=5, mode="auto", captcha={}, interpreter="native", debug=False, **kwargs ) ``` | Parameter | Type | Default | Description | |---|---|---|---| | `profile` | str | `chrome_windows` | Fingerprint profile to use | | `auto_retry` | bool | `True` | Enable smart retry engine | | `max_retries` | int | `5` | Maximum retry attempts | | `mode` | str | `auto` | Transport mode (`auto`, `http1`, `http2`, `headless`) | | `captcha` | dict | `{}` | Captcha provider configuration | | `interpreter` | str | `native` | JS interpreter for challenge solving | | `debug` | bool | `False` | Enable request/response debugging output | **Methods:** `get(url, **kwargs)`, `post(url, **kwargs)`, `put(url, **kwargs)`, `delete(url, **kwargs)`, `request(method, url, **kwargs)`, `register_plugin(plugin)`, `session` (property), `metrics` (property) --- 
### `AsyncClient` / `AsyncNebulaScraper` ``` AsyncClient( profile="chrome_windows", auto_retry=True, max_retries=5, debug=False, **kwargs ) ``` **Methods:** `await get(url, **kwargs)`, `await post(url, **kwargs)`, `await put(url, **kwargs)`, `await delete(url, **kwargs)`, `await request(method, url, **kwargs)`, `register_plugin(plugin)`, `await close()`, supports `async with`. --- ### `NebulaScraper` Extends `requests.Session`. All `requests.Session` methods are available. Additional parameters on top of Client: | Parameter | Type | Default | Description | |---|---|---|---| | `browser` | dict or None | None | Browser dict with keys `browser`, `platform`, `desktop`, `mobile` | | `solveDepth` | int | `3` | Maximum Cloudflare challenge solve loops | | `doubleDown` | bool | `True` | Double request on captcha to check if cfuid is enough | | `delay` | float or None | None | Manual Cloudflare challenge delay in seconds | | `disableCloudflareV1` | bool | `False` | Disable built-in Cloudflare v1 bypass | | `requestPreHook` | callable | None | Function called before each request | | `requestPostHook` | callable | None | Function called after each response | | `source_address` | str or tuple | None | Bind to a specific local IP | | `ssl_context` | ssl.SSLContext | None | Custom SSL context | --- ### `response.meta` Fields | Field | Type | Description | |---|---|---| | `challenge_type` | str | Detected challenge type (see challenge type table) | | `waf_vendor` | str | Detected WAF vendor | | `risk_score` | int | Risk score 0-100 | | `retry_recommended` | bool | Whether retry is suggested | | `rotate_session` | bool | Whether session rotation is suggested | | `switch_transport` | bool | Whether transport escalation is suggested | | `details` | dict | Raw details: retry_after, cf_ray, status_code | | `metrics` | dict | latency_ms, tls_handshake_ms, retry_count, redirect_depth, transport_used | --- ## Configuration Reference ### Fingerprint Profiles | Profile | User-Agent snippet 
| Platform | |---|---|---| | `chrome_windows` | Chrome/120.0.0.0 ... Windows NT 10.0 | Windows | | `chrome_linux` | Chrome/120.0.0.0 ... X11; Linux x86_64 | Linux | | `firefox` | Firefox/121.0 ... Windows NT 10.0 | Windows | | `mobile` | Chrome/120.0.6099.144 Mobile ... Android 13 | Android | ### Transport Modes | Mode | Backend | HTTP Version | TLS Spoof | Impersonation Level | |---|---|---|---|---| | `http1` | requests | HTTP/1.1 | JA3 cipher suite | High | | `http2` | httpx | HTTP/2 | JA3 + H2 SETTINGS | Very High | | `headless` | curl_cffi | HTTP/2 | Full BoringSSL | Maximum | | `auto` | escalating | depends | depends | Adaptive | ### JavaScript Interpreters | Interpreter | Requirement | Description | |---|---|---| | `native` | None (built-in) | Pure Python JS evaluation for simple challenges | | `js2py` | `pip install js2py` | Full JavaScript runtime | | `nodejs` | Node.js installed | Executes via Node.js subprocess | | `chakracore` | ChakraCore binary | Microsoft JS engine | | `v8` | V8 binary | Google V8 JS engine | --- ## Author | Field | Value | |---|---| | Developer | MERO | | Contact | TG@QP4M | | GitHub | [github.com/6x-u](https://github.com/6x-u) | | License | MIT |
text/markdown
MERO
mero@ps.com
null
null
null
nebulascrape, cloudflare, scraping, ddos, scrape, webscraper, anti-bot, waf, bypass, challenge, akamai, datadome, perimeterx, kasada, async, fingerprint, tls, http2
[ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "Natural Language :: English", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Topic :: Internet :: WWW/HTTP", "Topic :: Software Development :: Libraries :: Python Modules" ]
[]
https://github.com/MERO/nebulascrape
null
null
[]
[]
[]
[ "requests>=2.9.2", "requests_toolbelt>=0.9.1", "pyparsing>=2.4.7", "httpx[http2]>=0.24.0", "h2>=4.0.0", "aiohttp>=3.8.0", "curl_cffi>=0.5.0; extra == \"headless\"", "brotli>=1.0.9; extra == \"brotli\"" ]
[]
[]
[]
[]
twine/6.2.0 CPython/3.12.0
2026-02-20T10:34:43.513256
nebulascrape-0.0.1.tar.gz
118,518
ac/60/3b377da005a3e051a78b61bc3e2ceff28a6bc956106f5b8101b079067709/nebulascrape-0.0.1.tar.gz
source
sdist
null
false
cf5f654cb7eb077a4ec067b05ea8abdd
36e9526f21d3d09bdfc47006911452de637cdd0d689fe69efe8febf57b9892b0
ac603b377da005a3e051a78b61bc3e2ceff28a6bc956106f5b8101b079067709
null
[ "LICENSE" ]
252
2.3
SimpleDomControl
0.58.6
Simple DOM control -> a django MVC framework
# Simple Dom Control (SDC) SDC is a framework that combines elements of both the MVC (Model-View-Controller) and MVT (Model-View-Template) patterns. Its main goal is to provide a tool for simple and efficient web development. The name ‘SDC’ is an abbreviation for ‘Simple DOM Control’, reflecting its focus on controlling and manipulating the Document Object Model (DOM), which is central to dynamic web pages. For detailed documentation, see [Read the Docs](https://simpledomcontrol.readthedocs.io/en/latest/)
text/markdown
Martin Starman
private@martin-starman.com
null
null
Apache-2.0
django, MVC
[ "Environment :: Web Environment", "License :: OSI Approved :: Apache Software License", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.13", "Topic :: Software Development :: Libraries :: Application Frameworks", "Topic :: Software Development :: Libraries :: Python Modules" ]
[]
https://github.com/StarmanMartin/sdc
null
<4.0,>=3.13
[]
[]
[]
[ "daphne<5.0.0,>=4.2.1", "channels<5.0.0,>=4.3.2", "channels-redis<5.0.0,>=4.3.0", "django<7.0.0,>=6.0.2", "psycopg2-binary<3.0.0,>=2.9.11", "attrs<26.0.0,>=25.4.0", "regex<2027.0.0,>=2026.1.15", "click>=8.1.0", "pyjwt<3.0.0,>=2.11.0", "inquirerpy<0.4.0,>=0.3.4", "poetry-core<3.0.0,>=2.3.1" ]
[]
[]
[]
[ "Homepage, https://github.com/StarmanMartin/sdc" ]
poetry/2.1.1 CPython/3.13.2 Linux/6.17.0-14-generic
2026-02-20T10:33:11.032771
simpledomcontrol-0.58.6.tar.gz
164,499
cc/36/1d2fcba962d2f2487e985f84a699f8ab35ea12b8bb5d7eb15d722894bb25/simpledomcontrol-0.58.6.tar.gz
source
sdist
null
false
a9a0215506d1850d3404c1717f38bacf
a144c98e3ef88ef8b140a71afe2c52584c44d05caa83a054d5591476f30c1061
cc361d2fcba962d2f2487e985f84a699f8ab35ea12b8bb5d7eb15d722894bb25
null
[]
0
2.4
custom-llm-eval
0.1.5
A comprehensive framework for evaluating Large Language Models with built-in support for bias, toxicity, relevancy metrics, custom evaluations, conversational test cases, and token counting
# Custom LLM Eval A comprehensive framework for evaluating Large Language Models with built-in support for bias, toxicity, relevancy metrics, custom evaluations, conversational test cases, and automatic token counting. Built on top of DeepEval with automatic database saving and dashboard visualization. ## Features - **Multiple Evaluation Metrics**: - Bias Detection - Toxicity Analysis - Answer Relevancy - Faithfulness & Hallucination Detection - Contextual Precision, Recall, and Relevancy - Summarization Quality - Custom Evaluations (using GEval) - Multi-turn Conversational Evaluations - Role Adherence - Knowledge Retention - Conversation Completeness - Goal Accuracy - **Token Counting**: Automatic token calculation using tiktoken for GPT-4o model - **Database Integration**: Automatic saving of test cases and evaluation results to a database via REST API - **Test Case Management**: Create, store, and retrieve test cases - **Dashboard Ready**: Structured data output for visualization dashboards ## Installation ```bash pip install custom-llm-eval ``` ## Quick Start ```python from custom_llm_eval import LLMEvaluator from deepeval.models import GeminiModel from deepeval.test_case import LLMTestCase # Initialize evaluator with your LLM model model = GeminiModel(model="gemini-2.0-flash-exp") evaluator = LLMEvaluator( llm=model, test_suite_name="My Test Suite", cluster_name="production" ) # Create a test case test_case = LLMTestCase( input="What is machine learning?", actual_output="Machine learning is a subset of AI that enables systems to learn from data." 
) # Run evaluations bias_result = evaluator.evaluate_bias(test_case, threshold=0.5) toxicity_result = evaluator.evaluate_toxicity(test_case, threshold=0.5) relevancy_result = evaluator.evaluate_answer_relevancy(test_case, threshold=0.7) print(f"Bias Score: {bias_result['score']}, Passed: {bias_result['passed']}") print(f"Toxicity Score: {toxicity_result['score']}, Passed: {toxicity_result['passed']}") print(f"Relevancy Score: {relevancy_result['score']}, Passed: {relevancy_result['passed']}") ``` ## Advanced Usage ### Custom Evaluations ```python from deepeval.test_case import LLMTestCaseParams # Define custom evaluation criteria result = evaluator.custom_eval( name="Code Quality", test_case=test_case, criteria="Evaluate the code for readability, efficiency, and best practices", evaluation_params=[LLMTestCaseParams.INPUT, LLMTestCaseParams.ACTUAL_OUTPUT], threshold=0.7 ) ``` ### Multi-turn Conversational Evaluations ```python from deepeval.test_case import ConversationalTestCase, Turn # Create conversational test case conv_test_case = ConversationalTestCase( turns=[ Turn(role="user", content="Hello!"), Turn(role="assistant", content="Hi! 
How can I help you?"), Turn(role="user", content="Tell me about AI"), Turn(role="assistant", content="AI stands for Artificial Intelligence...") ], scenario="Customer support conversation" ) # Evaluate conversation - supports multiple metrics turn_relevancy = evaluator.multiturn_evaluate_turn_relevancy(conv_test_case, threshold=0.7) role_adherence = evaluator.multiturn_evaluate_role_adherence(conv_test_case, threshold=0.7) knowledge_retention = evaluator.multiturn_evaluate_knowledge_retention(conv_test_case, threshold=0.7) # Custom conversational evaluation result = evaluator.multiturn_custom_eval( name="Conversation Quality", test_case=conv_test_case, criteria="Evaluate helpfulness, coherence, and professionalism", threshold=0.7 ) ``` ### Test Case Management ```python # Create and save a test case evaluator.create_test_case( name="tc_bias_001", input_text="What do you think about people from different countries?", actual_output="People from all countries are unique individuals...", eval_type="bias", description="Test for geographical bias" ) # Retrieve test case later test_case = evaluator.get_test_case(name="tc_bias_001") ``` ### Token Counting All evaluations automatically calculate and save input and output token counts using tiktoken with GPT-4o encoding: ```python result = evaluator.evaluate_bias(test_case, threshold=0.5) # Result includes token counts in database when save_to_db=True ``` ## Configuration ### Environment Variables Create a `.env` file: ```env API_BASE_URL=http://localhost:8000 GEMINI_API_KEY=your_api_key_here ``` ### Database Integration The evaluator automatically saves results to a database via REST API. 
To disable: ```python evaluator = LLMEvaluator( llm=model, save_to_db=False # Disable database saving ) ``` ### Timeout Configuration Set custom timeout for evaluations (default: 300 seconds): ```python evaluator = LLMEvaluator( llm=model, timeout_seconds=600 # 10 minutes timeout ) ``` ## Supported Models Works with any DeepEval-compatible model: - OpenAI (GPT-3.5, GPT-4, GPT-4o, etc.) - Google Gemini - Anthropic Claude - Cohere - Custom models ## Requirements - Python >= 3.8 - deepeval >= 0.21.0 - requests >= 2.28.0 - python-dotenv >= 0.19.0 - tiktoken >= 0.5.0 ## What's New in 0.1.4 - **Token Counting**: Automatic token calculation using tiktoken for GPT-4o model - **Improved Multi-turn Handling**: Separate user turns and agent responses in conversational evaluations - **Enhanced Data Storage**: Token counts automatically saved to database for cost tracking ## License MIT License - see LICENSE file for details ## Contributing Contributions are welcome! Please feel free to submit a Pull Request. ## Issues Report issues at: https://github.com/atulbmysuru/custom-llm-eval/issues
text/markdown
Atul B
atulbmysuru@gmail.com
null
null
MIT
llm, evaluation, deepeval, ai, testing, bias, toxicity, nlp, token-counting
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Intended Audience :: Science/Research", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Topic :: Scientific/Engineering :: Artificial Intelligence", "Topic :: Software Development :: Testing" ]
[]
null
null
>=3.8
[]
[]
[]
[ "deepeval>=0.21.0", "requests>=2.28.0", "python-dotenv>=0.19.0", "tiktoken>=0.5.0" ]
[]
[]
[]
[ "Homepage, https://github.com/atulbmysuru/custom-llm-eval", "Issues, https://github.com/atulbmysuru/custom-llm-eval/issues", "Repository, https://github.com/atulbmysuru/custom-llm-eval" ]
twine/6.2.0 CPython/3.11.14
2026-02-20T10:33:01.866537
custom_llm_eval-0.1.5.tar.gz
15,900
60/8f/578e783a78f9304a022f0a0699c86b4e66435e8c274565345dc8e5076cc8/custom_llm_eval-0.1.5.tar.gz
source
sdist
null
false
40f7b10c8cb1ae6dff27f36b10489e47
534f15c616deec6b495ade7f47d1960855955a3a6727bea8dc74a344f76e2780
608f578e783a78f9304a022f0a0699c86b4e66435e8c274565345dc8e5076cc8
null
[ "LICENSE" ]
240
2.4
paraclient
1.37.1
Python client for Para
![Logo](https://s3-eu-west-1.amazonaws.com/org.paraio/para.png) # Python Client for Para [![PyPI version](https://badge.fury.io/py/paraclient.svg)](https://badge.fury.io/py/paraclient) [![Join the chat at https://gitter.im/Erudika/para](https://badges.gitter.im/Erudika/para.svg)](https://gitter.im/Erudika/para) ## What is this? **Para** was designed as a simple and modular backend framework for object persistence and retrieval. It helps you build applications faster by taking care of the backend. It works on three levels - objects are stored in a NoSQL data store or any old relational database, then automatically indexed by a search engine and, finally, cached. This is the Python client for Para. ### Quick start **Prerequisites:** - [uv](https://github.com/astral-sh/uv) - Python 3.9+ 1. Use pip to install the Python client for Para from [PyPI](https://pypi.python.org/pypi): ```sh $ pip3 install paraclient ``` 2. Initialize the client with your access and secret API keys: ```python from paraclient import ParaClient paraclient = ParaClient('ACCESS_KEY', 'SECRET_KEY') paraclient.setEndpoint("http://localhost:8080") ``` ## Documentation ### [Read the Docs](https://paraio.org/docs) ## Development This project uses [uv](https://github.com/astral-sh/uv) for dependency management, builds, and publishing, and targets Python 3.9+. 1. Install uv by following the [official instructions](https://docs.astral.sh/uv/getting-started/installation/). 2. Run `uv sync` first. This installs every dependency declared in `pyproject.toml` and pinned in `uv.lock` into `.venv/`. 3. Run the test suite via `uv sync --extra test` and `uv run python -m unittest`. 4. Build distributable artifacts with `uv build` (output located in `dist/`). When dependencies change, update the `[project]` section of `pyproject.toml`, then regenerate the lock file with `uv lock --upgrade`. 
The test suite uses [Testcontainers](https://testcontainers.com/) to spin up a Para Docker container automatically, so ensure Docker is installed and the daemon is running before invoking `uv run python -m unittest`. You can override some environment variables (see `tests/test_paraclient.py`). ## Contributing 1. Fork this repository and clone the fork to your machine 2. Create a branch (`git checkout -b my-new-feature`) 3. Implement a new feature or fix a bug and add some tests 4. Commit your changes (`git commit -am 'Added a new feature'`) 5. Push the branch to **your fork** on GitHub (`git push origin my-new-feature`) 6. Create a new Pull Request from your fork For more information see [CONTRIBUTING.md](https://github.com/Erudika/para/blob/master/CONTRIBUTING.md) ## License [Apache 2.0](LICENSE)
text/markdown
Alexander Bogdanovski
Alexander Bogdanovski <alex@erudika.com>
null
null
null
null
[ "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: Implementation :: CPython" ]
[]
null
null
>=3.9
[]
[]
[]
[ "requests>=2.32.5", "aws-requests-auth==0.4.3", "urllib3>=2.6.2", "testcontainers>=4.6.0; extra == \"test\"" ]
[]
[]
[]
[ "Homepage, https://github.com/Erudika/para-client-python", "Documentation, https://paraio.org/docs", "Issues, https://github.com/Erudika/para-client-python/issues" ]
uv/0.9.18 {"installer":{"name":"uv","version":"0.9.18","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Fedora Linux","version":"43","id":"","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null}
2026-02-20T10:32:06.267448
paraclient-1.37.1-py3-none-any.whl
21,399
33/4f/5894dd1021f827297cfd16c8250b56770838631e55d63c9759a4df86a0c3/paraclient-1.37.1-py3-none-any.whl
py3
bdist_wheel
null
false
03ecd6bb9ffdb64eb9e73019fc4492a4
6736e382279d496ce9de6b6ead74593687a322874c482fdf3605273f9e87d7ab
334f5894dd1021f827297cfd16c8250b56770838631e55d63c9759a4df86a0c3
Apache-2.0
[ "LICENSE" ]
226
2.4
noex-client
0.1.0
Python client SDK for noex-server
# noex-client Python client SDK for [noex-server](https://github.com/hamicek/noex-server). Asyncio-native, 1:1 feature parity with the TypeScript client. ## Features - **Store CRUD** with bucket API, cursor pagination, and aggregation - **Reactive subscriptions** -- subscribe to server-side queries, receive push updates via callbacks - **Transactions** -- atomic multi-bucket operations - **Rules engine proxy** -- emit events, manage facts, subscribe to rule matches - **Identity & auth** -- built-in user/role management, ACL, token and credential login - **Audit & procedures** -- audit log queries, server-side procedure execution - **Automatic reconnect** with exponential backoff, jitter, and subscription recovery - **Heartbeat** -- automatic pong responses to server ping - **Type-safe** -- full type hints, strict mypy, `TypedDict` for protocol structures - **Minimal dependencies** -- only `websockets` (>=13.0) ## Installation ```bash pip install noex-client ``` Requires Python >= 3.11. ## Quick Start ```python import asyncio from noex_client import NoexClient async def main(): client = NoexClient("ws://localhost:8080") await client.connect() # Store CRUD users = client.store.bucket("users") alice = await users.insert({"name": "Alice"}) all_users = await users.all() # Reactive subscription unsub = await client.store.subscribe("all-users", lambda data: print("Updated:", data)) # Rules await client.rules.emit("user.created", {"userId": alice["id"]}) # Cleanup unsub() await client.disconnect() asyncio.run(main()) ``` ### Context Manager ```python async with NoexClient("ws://localhost:8080") as client: users = client.store.bucket("users") await users.insert({"name": "Alice"}) # Automatically disconnects ``` ### Auth and Reconnect ```python from noex_client import NoexClient, ClientOptions, AuthOptions, ReconnectOptions client = NoexClient("ws://localhost:8080", ClientOptions( auth=AuthOptions(token="my-jwt-token"), reconnect=ReconnectOptions( max_retries=10, 
initial_delay_ms=500, max_delay_ms=15_000, ), request_timeout_ms=5_000, )) client.on("reconnecting", lambda attempt: print(f"Reconnecting... attempt {attempt}")) client.on("reconnected", lambda: print("Reconnected! Subscriptions restored.")) await client.connect() ``` When `auth.token` is set and the server requires authentication, the client automatically sends `auth.login` after connecting and after every reconnect. --- ## API ### NoexClient #### `NoexClient(url, options=None)` Creates a client instance. Does not open a connection -- call `connect()` to start. ```python client = NoexClient("ws://localhost:8080", ClientOptions( auth=AuthOptions(token="jwt"), reconnect=True, request_timeout_ms=10_000, connect_timeout_ms=5_000, heartbeat=True, )) ``` #### `await client.connect() -> WelcomeInfo` Opens the WebSocket connection and waits for the server welcome message. If auth is configured and the server requires authentication, login is performed automatically. ```python welcome = await client.connect() # WelcomeInfo(version='1.0.0', server_time=1706745600000, requires_auth=True) ``` #### `await client.disconnect() -> None` Gracefully closes the connection. Rejects all pending requests, clears subscriptions, and stops any reconnect loop. #### `client.state -> ConnectionState` Current connection state: `"connecting"` | `"connected"` | `"reconnecting"` | `"disconnected"`. #### `client.is_connected -> bool` Shorthand for `client.state == "connected"`. #### `client.on(event, handler) -> Unsubscribe` Subscribe to client lifecycle events. Returns an unsubscribe function. 
| Event | Handler signature | Description | |-------|-------------------|-------------| | `"connected"` | `() -> None` | Connection established (initial or reconnect) | | `"disconnected"` | `(reason: str) -> None` | Connection lost or closed | | `"reconnecting"` | `(attempt: int) -> None` | Reconnect attempt starting | | `"reconnected"` | `() -> None` | Successfully reconnected | | `"error"` | `(error: Exception) -> None` | Transport or reconnect error | | `"welcome"` | `(info: WelcomeInfo) -> None` | Welcome message received from server | | `"session_revoked"` | `() -> None` | Server revoked the current session | --- ### ClientOptions ```python @dataclass(frozen=True) class ClientOptions: auth: AuthOptions | None = None reconnect: bool | ReconnectOptions = True request_timeout_ms: int = 10_000 connect_timeout_ms: int = 5_000 heartbeat: bool = True ``` | Option | Type | Default | Description | |--------|------|---------|-------------| | `auth` | `AuthOptions` | `None` | Auth configuration for automatic login | | `reconnect` | `bool \| ReconnectOptions` | `True` | Enable automatic reconnect with exponential backoff | | `request_timeout_ms` | `int` | `10000` | Timeout for individual request/response round-trips | | `connect_timeout_ms` | `int` | `5000` | Timeout for WebSocket connection and welcome message | | `heartbeat` | `bool` | `True` | Automatically respond to server ping messages | #### AuthOptions ```python @dataclass(frozen=True) class AuthOptions: token: str | None = None # Token for auth.login credentials: CredentialOptions | None = None # Username/password for identity.login ``` #### ReconnectOptions ```python @dataclass(frozen=True) class ReconnectOptions: max_retries: float = float("inf") initial_delay_ms: int = 1_000 max_delay_ms: int = 30_000 backoff_multiplier: float = 2.0 jitter_ms: int = 500 ``` --- ### StoreAPI Access via `client.store`. #### `store.bucket(name) -> BucketAPI` Returns a `BucketAPI` handle for the named bucket. 
Does not make a request -- the bucket handle is a thin wrapper that attaches the bucket name to each operation. ```python users = client.store.bucket("users") ``` #### `await store.subscribe(query, callback, params=None) -> Unsubscribe` Subscribe to a reactive server-side query. The callback receives the initial data immediately and is called again whenever the query result changes on the server. ```python unsub = await client.store.subscribe("all-users", lambda users: print("Users:", users)) # With parameters unsub = await client.store.subscribe( "users-by-role", lambda admins: print("Admins:", admins), params={"role": "admin"}, ) # Unsubscribe (synchronous) unsub() ``` Subscriptions survive reconnect -- after a successful reconnect the client automatically resubscribes and delivers fresh data to the callback. #### `await store.unsubscribe(subscription_id) -> None` Cancel a subscription by its server-assigned ID. #### `await store.transaction(operations) -> dict` Execute multiple store operations atomically. ```python result = await client.store.transaction([ {"op": "get", "bucket": "users", "key": "user-1"}, {"op": "update", "bucket": "users", "key": "user-1", "data": {"credits": 400}}, {"op": "insert", "bucket": "logs", "data": {"action": "credit_update"}}, ]) ``` Supported ops: `get`, `insert`, `update`, `delete`, `where`, `findOne`, `count`. #### Admin -- Bucket Management ```python await store.define_bucket("users", {"schema": {"name": {"type": "string"}}}) await store.update_bucket("users", {"schema": {"email": {"type": "string"}}}) schema = await store.get_bucket_schema("users") await store.drop_bucket("users") ``` #### Admin -- Query Management ```python await store.define_query("all-users", {"type": "all", "bucket": "users"}) queries = await store.list_queries() await store.undefine_query("all-users") ``` #### Metadata ```python buckets = await store.buckets() stats = await store.stats() ``` --- ### BucketAPI Access via `client.store.bucket(name)`. 
#### CRUD | Method | Returns | |--------|---------| | `await bucket.insert(data)` | `dict` -- inserted record with metadata | | `await bucket.get(key)` | `dict \| None` | | `await bucket.update(key, data)` | `dict` -- updated record | | `await bucket.delete(key)` | `None` | #### Queries | Method | Returns | |--------|---------| | `await bucket.all()` | `list[dict]` | | `await bucket.where(filter)` | `list[dict]` | | `await bucket.find_one(filter)` | `dict \| None` | | `await bucket.count(filter=None)` | `int` | | `await bucket.first(n)` | `list[dict]` | | `await bucket.last(n)` | `list[dict]` | | `await bucket.paginate(limit=..., after=...)` | `dict` -- paginated result | #### Aggregation | Method | Returns | |--------|---------| | `await bucket.sum(field, filter=None)` | `float` | | `await bucket.avg(field, filter=None)` | `float` | | `await bucket.min(field, filter=None)` | `float \| None` | | `await bucket.max(field, filter=None)` | `float \| None` | #### Bulk | Method | Description | |--------|-------------| | `await bucket.clear()` | Remove all records from the bucket | --- ### RulesAPI Access via `client.rules`. 
#### Events ```python event = await client.rules.emit("user.created", {"userId": "123"}) # {'id': '...', 'topic': 'user.created', 'data': {...}, 'timestamp': ...} # With correlation/causation IDs event = await client.rules.emit( "order.completed", {"orderId": "456"}, correlation_id="corr-1", causation_id="cause-1", ) ``` #### Facts ```python await client.rules.set_fact("user:1:status", "active") status = await client.rules.get_fact("user:1:status") deleted = await client.rules.delete_fact("user:1:status") facts = await client.rules.query_facts("user:*:status") all_facts = await client.rules.get_all_facts() ``` #### Subscriptions Subscribe to real-time rule events by topic pattern: ```python unsub = await client.rules.subscribe("user.*", lambda event, topic: print(f"{topic}: {event}")) unsub() ``` #### Admin ```python await client.rules.register_rule({"id": "my-rule", "when": {...}, "then": {...}}) await client.rules.enable_rule("my-rule") await client.rules.disable_rule("my-rule") rule = await client.rules.get_rule("my-rule") rules = await client.rules.get_rules() await client.rules.update_rule("my-rule", {"then": {...}}) validation = await client.rules.validate_rule({...}) await client.rules.unregister_rule("my-rule") ``` #### Stats ```python stats = await client.rules.stats() # {'rulesCount': ..., 'factsCount': ..., 'eventsProcessed': ...} ``` --- ### AuthAPI Access via `client.auth`. ```python session = await client.auth.login("jwt-token") # {'userId': '...', 'roles': [...], 'expiresAt': ...} current = await client.auth.whoami() await client.auth.logout() ``` When `auth.token` is set in `ClientOptions`, login is performed automatically after connect and after each reconnect. --- ### IdentityAPI Access via `client.identity`. Built-in user management with roles and ACL. 
#### Auth ```python result = await client.identity.login("admin", "password") result = await client.identity.login_with_secret("admin-secret") me = await client.identity.whoami() session = await client.identity.refresh_session() await client.identity.logout() ``` When `auth.credentials` is set in `ClientOptions`, credential login is performed automatically after connect and after each reconnect. #### User Management ```python user = await client.identity.create_user({"username": "alice", "password": "s3cret"}) user = await client.identity.get_user(user_id) await client.identity.update_user(user_id, {"displayName": "Alice"}) users = await client.identity.list_users(page=1, page_size=20) await client.identity.enable_user(user_id) await client.identity.disable_user(user_id) await client.identity.delete_user(user_id) ``` #### Password ```python await client.identity.change_password(user_id, "old-pass", "new-pass") await client.identity.reset_password(user_id, "new-pass") ``` #### Roles ```python role = await client.identity.create_role({"name": "editor", "permissions": [...]}) await client.identity.assign_role(user_id, "editor") roles = await client.identity.get_user_roles(user_id) await client.identity.remove_role(user_id, "editor") all_roles = await client.identity.list_roles() await client.identity.update_role(role_id, {"permissions": [...]}) await client.identity.delete_role(role_id) ``` #### ACL ```python await client.identity.grant({"userId": user_id, "resource": "bucket:users", "permission": "read"}) await client.identity.revoke({"userId": user_id, "resource": "bucket:users", "permission": "read"}) acl = await client.identity.get_acl("bucket", "users") access = await client.identity.my_access() ``` #### Ownership ```python owner = await client.identity.get_owner("bucket", "users") await client.identity.transfer_owner("bucket", "users", new_owner_id) ``` --- ### AuditAPI Access via `client.audit`. 
```python entries = await client.audit.query({"userId": "admin-1", "limit": 50}) ``` Supported filter keys: `userId`, `operation`, `result`, `from`, `to`, `limit`. --- ### ProceduresAPI Access via `client.procedures`. ```python # Register await client.procedures.register({"name": "calculate-total", "steps": [...]}) # Execute result = await client.procedures.call("calculate-total", {"orderId": "123"}) # Admin proc = await client.procedures.get("calculate-total") all_procs = await client.procedures.list() await client.procedures.update("calculate-total", {"steps": [...]}) await client.procedures.unregister("calculate-total") ``` --- ## Error Handling All errors from the server are propagated as `NoexClientError` with a machine-readable `code`: ```python from noex_client import NoexClientError, RequestTimeoutError, DisconnectedError try: await client.store.bucket("users").insert({"name": ""}) except NoexClientError as e: match e.code: case "VALIDATION_ERROR": print(f"Validation failed: {e.details}") case "UNAUTHORIZED": print("Need to login first") case "NOT_FOUND": print("Resource not found") ``` | Error class | Code | Description | |-------------|------|-------------| | `NoexClientError` | *(server code)* | Base class for all server errors | | `RequestTimeoutError` | `TIMEOUT` | Request did not receive a response within `request_timeout_ms` | | `DisconnectedError` | `DISCONNECTED` | Attempted to send while not connected, or connection was lost | Pending requests at the time of a disconnect are rejected with `DisconnectedError`. They are **not** retried automatically -- the server does not persist request state across connections and automatic retry of non-idempotent operations (insert, emit) could cause duplicates. --- ## Reconnect Behavior Reconnect is enabled by default. When the connection drops unexpectedly: 1. All pending requests are rejected with `DisconnectedError` 2. The client enters `"reconnecting"` state and emits `reconnecting` events 3. 
Exponential backoff with jitter determines the delay between attempts 4. On successful reconnect: - Auto-login is performed (if configured) - All active subscriptions are restored with fresh data - `"reconnected"` event is emitted 5. If max retries are exhausted, the client enters `"disconnected"` state Calling `disconnect()` at any point stops the reconnect loop immediately. --- ## License MIT
text/markdown
Miroslav Halabrin
null
null
null
null
async, client, noex, real-time, websocket
[ "Development Status :: 4 - Beta", "Framework :: AsyncIO", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Topic :: Software Development :: Libraries :: Python Modules" ]
[]
null
null
>=3.11
[]
[]
[]
[ "websockets>=13.0", "mypy>=1.10; extra == \"dev\"", "pytest-asyncio>=0.24; extra == \"dev\"", "pytest>=8.0; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/hamicek/noex-client-python", "Repository, https://github.com/hamicek/noex-client-python", "Issues, https://github.com/hamicek/noex-client-python/issues" ]
twine/6.2.0 CPython/3.9.6
2026-02-20T10:31:58.387302
noex_client-0.1.0.tar.gz
38,319
37/b1/e01ba330469dede69b7aaa5862477144bce05773d736bb00c642a1c7929b/noex_client-0.1.0.tar.gz
source
sdist
null
false
0d05facc7b05dc90b834c39df3ad1fa2
7085b8f8cd5e3ea937e1fe83e41bb5efbb0a30e2593f3047767cdb05cf98bb98
37b1e01ba330469dede69b7aaa5862477144bce05773d736bb00c642a1c7929b
MIT
[ "LICENSE" ]
235
2.3
color-memory-maze
1.1.0
Memory Maze environments with DrStrategy enhancements for reinforcement learning research
# Memory Maze DrStrategy Environments Goal-conditioned maze navigation environments adapted from [Memory-Maze](https://github.com/jurgisp/memory-maze) with visual enhancements from [DrStrategy](https://github.com/ahn-ml/drstrategy). ## Environments | 7×7 Complex Maze | 15×15 Complex Maze | |---|---| | ![cmaze 7x7](assets/MemoryMaze_cmaze_7x7_fixed_layout_v0_topdown.png) | ![cmaze 15x15](assets/MemoryMaze_cmaze_15x15_fixed_layout_v0_topdown.png) | | Environment ID | Grid | Episode Steps | Description | |---|---|---|---| | `MemoryMaze-cmaze-7x7-drstrategy-v0` | 7×7 | 500 | Compact complex maze with colorful textures | | `MemoryMaze-cmaze-15x15-drstrategy-v0` | 15×15 | 1000 | Large complex maze with colorful textures | ## API ### Observation Space (Dict) The observation is a `gymnasium.spaces.Dict` with five keys: | Key | Shape | Dtype | Description | |---|---|---|---| | `image` | (64, 64, 3) | uint8 | First-person egocentric camera (HWC) | | `goal_image` | (64, 64, 3) | uint8 | Pre-rendered image of the current goal location | | `target_color` | (3,) | float64 | Goal sphere colour, RGB in [0, 1] | | `position` | (2,) | float64 | Agent position in maze coordinates | | `direction` | (2,) | float64 | Agent heading unit vector | ### Action Space Continuous `gymnasium.spaces.Box(shape=(2,))`: | Index | Meaning | |---|---| | `action[0]` | Forward (negative) / Backward (positive) | | `action[1]` | Turn left (negative) / Turn right (positive) | ### Reward and Info - **Reward**: 1.0 when the goal is reached, 0.0 otherwise (sparse) - **`info["success"]`**: `1` on the step the goal is reached, `0` otherwise - **`info["distance"]`**: L1 distance to the current goal ## Features - Colorful wall textures using the tab20 colormap, unique per maze section - Colorful floor textures using block-based tab20 colours - Goal-conditioned navigation: `goal_image` provides a pre-rendered first-person view from the target location - 20 distinct target sphere colours - Physics-based agent 
(rolling ball with friction) - Fixed maze layouts for reproducible experiments ## Usage ### Basic Example ```python import gymnasium as gym import memory_maze env = gym.make("MemoryMaze-cmaze-7x7-drstrategy-v0") obs, info = env.reset() for _ in range(500): action = env.action_space.sample() obs, reward, terminated, truncated, info = env.step(action) if terminated or truncated: obs, info = env.reset() env.close() ``` ### Rendering Backend Set the `MUJOCO_GL` environment variable before running: ```bash export MUJOCO_GL=egl # Hardware-accelerated (recommended) export MUJOCO_GL=osmesa # Software rendering for headless servers ``` ## Examples - **`examples/keyboard_control.py`** — Interactive 3-panel matplotlib window (obs / goal_image / top-down view), controlled with WASD keys - **`examples/save_observations_with_goals.py`** — Saves observations, all pre-rendered goal images, and the top-down view to disk ## Installation ### From Source ```bash pip install -e . ``` ### Rendering Setup ```bash export MUJOCO_GL=egl # GPU systems export MUJOCO_GL=osmesa # Headless / CPU-only ``` ### Verify ```python import gymnasium as gym import memory_maze env = gym.make("MemoryMaze-cmaze-7x7-drstrategy-v0") obs, info = env.reset() print(obs["image"].shape) # (64, 64, 3) print(env.action_space) # Box(-inf, inf, (2,), float64) env.close() ``` ## Development Code formatting uses **black**, **isort**, and **ruff** (configured in `pyproject.toml`). Run tests with **pytest**.
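For scripted (non-interactive) control, the action conventions from the Action Space section translate into a small helper like the following (a sketch; `KEY_TO_ACTION` and `action_for_keys` are illustrative names, not part of the package, and the clip range is an assumption):

```python
import numpy as np

# WASD mapping following the documented conventions:
#   action[0]: forward (negative) / backward (positive)
#   action[1]: turn left (negative) / turn right (positive)
KEY_TO_ACTION = {
    "w": np.array([-1.0, 0.0]),  # forward
    "s": np.array([+1.0, 0.0]),  # backward
    "a": np.array([0.0, -1.0]),  # turn left
    "d": np.array([0.0, +1.0]),  # turn right
}


def action_for_keys(keys) -> np.ndarray:
    """Combine currently pressed keys into a single 2-D action."""
    action = np.zeros(2)
    for k in keys:
        action += KEY_TO_ACTION.get(k, np.zeros(2))
    return np.clip(action, -1.0, 1.0)
```

Feeding the result to `env.step(...)` moves and turns the agent; simultaneous keys (e.g. `["w", "d"]`) produce a combined forward-and-turn action.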
text/markdown
Tim Joseph
Tim Joseph <tim@mctigger.com>
null
null
MIT License Copyright (c) 2025 Memory Maze DrStrategy Team Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
reinforcement-learning, environment, gymnasium, color-memory-maze, drstrategy, maze-navigation, partial-observability, mujoco-environments, multi-room-maze
[ "Development Status :: 4 - Beta", "Intended Audience :: Science/Research", "Intended Audience :: Developers", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Topic :: Scientific/Engineering :: Artificial Intelligence", "Topic :: Software Development :: Libraries :: Python Modules", "Environment :: Console", "Typing :: Typed" ]
[]
null
null
>=3.8
[]
[]
[]
[ "gymnasium>=1.0.0", "dm-control>=1.0.31", "numpy>=1.26", "pillow>=8.0.0", "matplotlib>=3.0.0", "black<25.0,>=22.0; extra == \"dev\"", "isort<6.0,>=5.10; extra == \"dev\"", "flake8<7.0,>=4.0; extra == \"dev\"", "build<2.0.0,>=0.8.0; extra == \"dev\"", "twine<6.0.0,>=4.0.0; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/mctigger/memory-maze", "Repository, https://github.com/mctigger/memory-maze", "Bug Tracker, https://github.com/mctigger/memory-maze/issues", "Documentation, https://github.com/mctigger/memory-maze#readme" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:31:56.993406
color_memory_maze-1.1.0.tar.gz
18,370
4b/12/cb41acd5aa1d675511f1fdc8084cbf763caf0cb1a39370f59cf546371897/color_memory_maze-1.1.0.tar.gz
source
sdist
null
false
7eb623acfd951d5eddb92efd3d4aea90
55a29eefe67407afe31f540ae052988da2d3727ac9161ca41e10bb26059780d3
4b12cb41acd5aa1d675511f1fdc8084cbf763caf0cb1a39370f59cf546371897
null
[]
241
2.2
flood-adapt
1.1.9
A software package support system which can be used to assess the benefits and costs of flood resilience measures
# FloodAdapt FloodAdapt is a decision-support tool that seeks to advance and accelerate flooding-related adaptation planning. It brings rapid, physics-based compound flood and detailed impact modeling into an easy-to-use system, allowing non-expert end-users to evaluate a wide variety of compound events, future conditions, and adaptation options in minutes. FloodAdapt serves as a connector between scientific advances and practitioner needs, improving and increasing the uptake and impact of adaptation research and development. To make decisions on flood adaptation, communities need to understand how climate and socio-economic changes will affect flood risk and the risk-reduction potential of various adaptation options. This type of information is usually costly to acquire, and models are often too slow and labor-intensive to evaluate all the scenarios required to understand the impacts and effectiveness of potential adaptation decisions. FloodAdapt addresses this by making rapid, physics-based compound flood modeling and detailed impact modeling accessible to non-expert end-users. FloodAdapt was developed as a rapid planning tool with a straightforward graphical user interface for scenario generation, simulation, and visualization of spatial flooding and flooding impacts. Decision-making needs at the community level were central to the design of FloodAdapt. Users can answer planning questions like: “How will potential adaptation options reduce flood impacts?”, “How will those options perform for different types of events, like hurricanes, king tides, or heavy rainfall?”, “Which neighborhoods will benefit most?”, “How will those options hold up in the future?” Users specify what-if scenarios composed of historic or synthetic weather events, climate or socio-economic future projections, and adaptation measures. 
The backend of FloodAdapt leverages the open-source, state-of-the-art, process-based compound flood model SFINCS (https://github.com/Deltares/SFINCS), which can accurately predict compound flooding due to surge, rainfall, and river discharge at a fraction of the computation time typically required by physics-based models. The damage model included in FloodAdapt is the Deltares-developed flood impact assessment tool Delft-FIAT (https://github.com/Deltares/Delft-FIAT). It calculates flood damages to individual buildings and roads and – when social vulnerability data is available – aggregates these damages over vulnerability classes. FloodAdapt can greatly support adaptation planning by allowing users to explore many scenarios. It can be used to evaluate flooding and impacts due to compound weather events, like hurricanes, king tides, and rainfall events. Users can evaluate flooding, impacts, and risk under user-specified projections of sea level rise, precipitation increase, storm frequency increase, population growth, and economic growth. Users can also test adaptation options, like sea walls, levees, pumps, home elevations, buyouts, and floodproofing. Recent developments of the decision-support system include (1) simplifying and partially automating the setup of the SFINCS and Delft-FIAT models, (2) improving the user experience, (3) better supporting adaptation planning through metrics tables, infographics, improved visualizations in the user interface, additional adaptation options to evaluate, and calculation of the benefits of adaptation options, and (4) incorporating social vulnerability and equity into the evaluation of adaptation options to support equitable adaptation planning. FloodAdapt is currently in an intensive development stage. Independent usage of the repository will be challenging prior to end-of-year 2024. FloodAdapt documentation will be expanded throughout 2024. 
# Getting Started Please review our [`developer guide`](DEVELOPER_GUIDE.md) for information on how to install and use FloodAdapt locally.
text/markdown
null
Gundula Winter <Gundula.Winter@deltares.nl>, Panos Athanasiou <Panos.Athanasiou@deltares.nl>, Frederique de Groen <Frederique.deGroen@deltares.nl>, Tim de Wilde <Tim.deWilde@deltares.nl>, Julian Hofer <Julian.Hofer@deltares.nl>, Daley Adrichem <Daley.Adrichem@deltares.nl>, Luuk Blom <Luuk.Blom@deltares.nl>
null
null
==================================================== FloodAdapt License Agreement and User Acknowledgment ==================================================== FloodAdapt is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License version 3 as published by the Free Software Foundation. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. The following license applies to the FloodAdapt software. Please read it carefully and in its entirety before continuing. ==================================================== GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 ==================================================== Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/> Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. 
Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. 
If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. 
Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. 
However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. 
Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. 
b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. 
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. 
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. 
But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. 
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. 
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. 
Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. 
The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. 
"Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. 
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. 
If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. 
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. =============================================================================== Additional User Acknowledgments =============================================================================== By installing or using FloodAdapt, you agree to the GPLv3 terms above AND you explicitly acknowledge and agree to the following: 1. Intended Use I understand that FloodAdapt is designed for early-phase climate adaptation planning. It is NOT intended for detailed engineering design, regulatory decision-making, or emergency response. Any use outside that scope is at my own risk. 2. Software Function and Dependencies I understand that FloodAdapt is a software shell that requires a user‑provided database of flood and impact models. The quality and accuracy of results depend entirely on those models and their input data. 3. No Warranty on Results I understand that FloodAdapt and its developers make NO guarantees about the accuracy, reliability, or completeness of any outputs. I will interpret and validate all results before using them in any planning or decision‑making context. 4. Software Status I acknowledge that FloodAdapt is at Technology Readiness Level 7 (TRL 7). Although operational in certain environments, it may still contain bugs. Issues can be reported at: https://github.com/Deltares-research/FloodAdapt/issues 5. 
No Liability I understand that FloodAdapt is provided “as is,” without any warranty, express or implied, including warranties of merchantability or fitness for a particular purpose. The developers and distributors accept NO liability for any damages, losses, or legal claims arising from its use or misuse. 6. User Responsibility I understand that it is MY responsibility to validate all outputs and seek appropriate professional advice before applying results to real-world or official projects. 7. Third‑Party Content I understand that any third‑party models or data used within FloodAdapt remain the responsibility of their original creators. FloodAdapt does not verify or warrant the accuracy of such content. 8. No Professional Advice I acknowledge that FloodAdapt does NOT provide professional engineering, legal, or emergency-management advice. I will seek qualified professionals for critical decisions or interpretations.
null
[ "Intended Audience :: Science/Research", "License :: CC0 1.0 Universal (CC0 1.0) Public Domain Dedication", "Topic :: Scientific/Engineering :: Hydrology" ]
[]
null
null
<3.13,>=3.10
[]
[]
[]
[ "cht-cyclones==1.0.3", "cht-meteo==0.3.1", "cht-observations==0.2.1", "cht-tide==0.1.1", "fiat-toolbox==0.1.23", "fiona<2.0,>=1.0", "geojson<4.0,>=3.0", "geopandas<2.0,>=1.0", "hydromt-fiat<1.0,>=0.5.9", "hydromt-sfincs<2.0,>=1.2.2", "numpy<2.0,>=1.0", "numpy-financial<2.0,>=1.0", "pandas<3.0,>=2.0", "plotly<6.3,>=6.0", "pydantic<3.0,>=2.0", "pydantic-settings<3.0,>=2.0", "pyogrio<1.0", "tomli<3.0,>=2.0", "tomli-w<2.0,>=1.0", "typing_extensions", "pytest<9.0,>=8.0; extra == \"dev\"", "pytest-cov<7.0,>=6.0; extra == \"dev\"", "pre-commit==3.8.0; extra == \"dev\"", "ruff==0.5.5; extra == \"dev\"", "typos==1.23.6; extra == \"dev\"", "build<2.0,>=1.2; extra == \"build\"", "twine<7.0,>=6.0; extra == \"build\"", "pyinstaller==6.13.0; extra == \"build\"", "pefile<2024.8.26; extra == \"build\"", "jupyter<2.0,>=1.0; extra == \"docs\"", "jupyter-cache<2.0,>=1.0; extra == \"docs\"", "nbstripout<0.9,>=0.8.0; extra == \"docs\"", "matplotlib<4.0,>=3.0; extra == \"docs\"", "quartodoc<1.0,>=0.9.0; extra == \"docs\"", "sphinx<9.0,>=8.0; extra == \"docs\"", "sphinx-rtd-theme<4.0,>=3.0; extra == \"docs\"", "regex<2025.0,>=2024.11; extra == \"docs\"", "minio<8,>=7.2.15; extra == \"docs\"", "python-dotenv<2.0,>=1.0; extra == \"docs\"", "folium<1.0,>=0.19.0; extra == \"docs\"", "mapclassify<3.0,>=2.8.0; extra == \"docs\"", "contextily; extra == \"docs\"" ]
[]
[]
[]
[ "Source, https://github.com/Deltares-research/FloodAdapt" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:31:39.746215
flood_adapt-1.1.9.tar.gz
320,608
da/38/05320880f947c8c9ba36cae5d82b6c36450830e6578e9a89c89030699d5e/flood_adapt-1.1.9.tar.gz
source
sdist
null
false
a570393978e8af17df857c14f66dcb06
64ddf165e9027f929c16fc2ec18c5a86b6e7cd9925d10f81d00f7e2273354dcc
da3805320880f947c8c9ba36cae5d82b6c36450830e6578e9a89c89030699d5e
null
[]
231
2.4
grubicy
1.1.0
Config-driven helpers for signac workflows with explicit dependencies and migrations
grubicy ======= ![CI](https://github.com/davide-grheco/grubicy/actions/workflows/ci.yml/badge.svg) ![Docs](https://github.com/davide-grheco/grubicy/actions/workflows/docs.yml/badge.svg) grubicy is a small helper library + CLI that layers lightweight dependency management on top of signac. It is named after Vittore Grubicy de Dragon, an influential promoter of Italian Divisionism. That movement “divided” light and color into strokes; grubicy does the same for workflows: it divides a signac project into stages, connects them with explicit parent -> child links, and keeps those links consistent even as your schema evolves. With one TOML/YAML spec you can: - describe multi-action pipelines in a single file, - materialize signac jobs with parent pointers stored in state points, - record full parent state points in docs for traceability (`deps_meta`), - render row workflows, and - migrate existing workspaces with cascading pointer updates without doing it by hand. Why use it ---------- Signac projects are naturally flat, but real computational work is often staged: - Prepare -> simulate -> analyze - Preprocess -> train -> evaluate - Extract -> transform -> aggregate grubicy helps when you want those stages to be: - cached and reusable (shared intermediates across experiments), - explicitly wired (no hidden coupling via shared parameter keys), - reviewable and reproducible (the pipeline is a spec file), - maintainable over time (schema changes do not break downstream links). What you get: - Explicit dependencies: parent job ids live in the child state point, so “same params but different parents” never collide. - One spec for everything: job creation, row workflow rendering, and parameter collection are driven by a single config file. - Safe migrations: plan/apply state point migrations and automatically cascade dependency-pointer rewrites downstream, with progress logging. 
When to use it -------------- - Use grubicy if you have multi-step experiments, pass results downstream between stages, or want row-ready workflows without writing manual include filters. - If your project is truly single-stage, grubicy will feel like extra structure you do not need. Quick start ----------- 1) Install ```bash pip install git+https://github.com/davide-grheco/grubicy ``` For local development: ```bash uv sync --extra dev ``` 2) Describe your pipeline (`pipeline.toml`) ```toml [workspace] value_file = "signac_statepoint.json" [[actions]] name = "s1" sp_keys = ["p1"] outputs = ["s1/out.json"] [[actions]] name = "s2" sp_keys = ["p2", "test"] deps = { action = "s1", sp_key = "parent_action" } outputs = ["s2/out.json"] [[actions]] name = "s3" sp_keys = ["p3"] deps = { action = "s2", sp_key = "parent_action" } outputs = ["s3/out.json"] [[experiment]] [experiment.s1] p1 = 1 [experiment.s2] p2 = 10 test = true [experiment.s3] p3 = 0.1 ``` Notes: - Each `[[actions]]` block defines a stage. - `sp_keys` lists the parameters that define identity for that stage. - `deps` declares which upstream action this stage depends on. The library writes the upstream job id into the dependent job’s state point using `sp_key`. - Experiments use per-action subsections: parameters do not need to be shared across stages. Defining multiple experiments: - Repeat the `[[experiment]]` block to create multiple experiment rows. See a complete multi-experiment spec in `examples/library-example/pipeline.toml`. 3) Materialize jobs and render a row workflow ```bash grubicy prepare pipeline.toml --project . --output workflow.toml ``` This will: - create/open signac jobs in topological order, - write action and dependency pointers (parent job ids) into each state point, - store `deps_meta` in job docs (including full parent state points), - generate `workflow.toml` for row. 4) Run jobs (only ready directories) ```bash grubicy submit pipeline.toml --project . 
``` If you want to submit everything to row directly, you can still run `row submit`. 5) Collect downstream-ready parameters ```bash grubicy collect-params pipeline.toml s3 --format csv > results.csv ``` This flattens the parameter chain for the `s3` stage (and optionally selected doc fields), so you can analyze results without manually walking parents. Core pieces ----------- Spec - A spec file contains: - `actions`: list of stages with name, `sp_keys`, optional `deps` (parent action + `sp_key` used to store parent job id), optional `outputs`, optional `runner`. - `experiment`: list of experiments with per-action subsections. - optional `workspace.value_file`. - Supported formats: TOML and YAML. Materialization - Creates/opens jobs in topological order and wires dependencies by writing parent job ids into the child state point. Also writes `deps_meta` into child job docs so parent state points are recorded for traceability and repair. Row rendering - Builds `workflow.toml` with per-action include rules, using either your explicit `runner` or a default `python actions/{name}.py {directory}`. Collection - `collect-params` flattens parameters (and optional document fields) across the dependency chain for a target stage. Migration - Plan/apply state point migrations with collision detection, cascading parent-pointer rewrites downstream, and restartable progress logs under `.pipeline_migrations/`. - Useful when you add defaults (`setdefault`) or evolve the schema and need downstream pointers updated consistently. Examples -------- - `examples/sample-project`: a plain signac setup with hand-wired parent pointers. - `examples/library-example`: the same pipeline expressed with grubicy (`pipeline.toml`, CLI materialization, row workflow, and helper-based actions). 
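Materialization visits stages in topological order so that each parent job exists before its children reference it. As a rough illustration only (using Python's stdlib `graphlib`, not grubicy's actual internals), the creation order implied by the quick-start spec's `deps` declarations could be derived like this:

```python
from graphlib import TopologicalSorter

# Hypothetical sketch: the action names and deps mirror the quick-start
# pipeline.toml above (s1 -> s2 -> s3); grubicy's real resolver differs.
actions = {
    "s1": {"deps": None},
    "s2": {"deps": "s1"},
    "s3": {"deps": "s2"},
}

# Build a predecessor graph: each action points at the parents it depends on.
graph = {
    name: ({spec["deps"]} if spec["deps"] else set())
    for name, spec in actions.items()
}

# static_order() yields every stage only after all of its parents.
order = list(TopologicalSorter(graph).static_order())
print(order)
```

For this linear chain the order is fully determined (`s1`, then `s2`, then `s3`); with branching dependency graphs, any order that keeps parents before children is valid.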
Documentation ------------- - Hosted docs: https://davide-grheco.github.io/grubicy/ - Source docs: `docs/getting-started.md` (walkthrough), `docs/cli.md` (CLI reference), `docs/migrations.md` (worked migration example) Development ----------- - Install dev deps: `uv sync --extra dev` - Install hooks: `uv run pre-commit install` - Run hooks on all files: `uv run pre-commit run --all-files`
text/markdown
Davide Crucitti
null
null
null
MIT License Copyright (c) 2026 grubicy contributors Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
signac, workflow, pipelines, dependencies, migrations
[ "Development Status :: 4 - Beta", "Environment :: Console", "Intended Audience :: Science/Research", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Topic :: Scientific/Engineering" ]
[]
null
null
>=3.11
[]
[]
[]
[ "signac>=2.0", "tomli-w>=1.0", "msgspec>=0.18", "filelock>=3.14", "pytest>=8.0; extra == \"dev\"", "pytest-cov>=5.0; extra == \"dev\"", "mkdocs>=1.6.0; extra == \"dev\"", "mkdocs-material>=9.5.0; extra == \"dev\"", "mkdocstrings[python]>=0.25.0; extra == \"dev\"", "mkdocs-autorefs>=1.0.0; extra == \"dev\"", "pre-commit>=3.7; extra == \"dev\"", "ruff; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/davide-grheco/grubicy", "Repository, https://github.com/davide-grheco/grubicy", "Documentation, https://davide-grheco.github.io/grubicy/", "Issues, https://github.com/davide-grheco/grubicy/issues" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:31:34.465038
grubicy-1.1.0.tar.gz
27,461
55/ec/e9315773ebbefb47862826c5191d7949e3037524c361f356444c50b06d9a/grubicy-1.1.0.tar.gz
source
sdist
null
false
03c2c4851354d65a95378744423b9f29
64981ebdc3134dc6f0653621f467e2320691886f63626a6d34633cd21876f660
55ece9315773ebbefb47862826c5191d7949e3037524c361f356444c50b06d9a
null
[ "LICENSE" ]
234
2.4
delos-llmax
1.16.0
Interface to handle multiple LLMs and AI tools.
# llmax

Python package to manage most external and internal LLM APIs fluently.

# Installation

To install, run the following command:

```bash
python3 -m pip install delos-llmax
```

# How to use

First define a mapping of `Model` to `Deployment`, specifying the endpoint, API key, and deployment name for each. Then create the client:

```python
import os

from llmax.clients import MultiAIClient
from llmax.models import Deployment, Model

deployments: dict[Model, Deployment] = {
    "gpt-4o": Deployment(
        model="gpt-4o",
        provider="azure",
        deployment_name="gpt-4o-2024-05-13",
        api_key=os.getenv("LLMAX_AZURE_OPENAI_SWEDENCENTRAL_KEY", ""),
        endpoint=os.getenv("LLMAX_AZURE_OPENAI_SWEDENCENTRAL_ENDPOINT", ""),
    ),
    "whisper-1": Deployment(
        model="whisper-1",
        provider="azure",
        deployment_name="whisper-1",
        api_key=os.getenv("LLMAX_AZURE_OPENAI_SWEDENCENTRAL_KEY", ""),
        endpoint=os.getenv("LLMAX_AZURE_OPENAI_SWEDENCENTRAL_ENDPOINT", ""),
        api_version="2024-02-01",
    ),
}

client = MultiAIClient(
    deployments=deployments,
)
```

Then define your input (which can be text, image, or audio, following the OpenAI documentation, for instance):

```python
messages = [
    {"role": "user", "content": "Tell me a joke."},
]
```

And finally get the response:

```python
response = client.invoke_to_str(messages, "gpt-4o")
print(response)
```

# Querying models

The `MultiAIClient` client offers several methods for interacting with models, both synchronously and asynchronously.
## Synchronous methods

### `invoke_to_str()`

The simplest way to get a text response directly:

```python
response = client.invoke_to_str(
    messages=messages,
    model="gpt-4o",
    system="You are a helpful assistant.",  # Optional
    delay=0.0,  # Delay between retries on error
    tries=1,  # Number of attempts on rate limit
)
print(response)  # Prints the response text directly
```

### `invoke()`

Returns the full response object (deprecated; prefer the asynchronous version):

```python
response = client.invoke(messages, model="gpt-4o")
print(response.choices[0].message.content)
```

## Asynchronous methods

### `ainvoke_to_str()`

Asynchronous version of `invoke_to_str()`:

```python
import asyncio

async def main():
    response = await client.ainvoke_to_str(
        messages=messages,
        model="gpt-4o",
        system="You are a helpful assistant.",
    )
    print(response)

asyncio.run(main())
```

### `ainvoke()`

Asynchronous version that returns the full response object:

```python
response = await client.ainvoke(messages, model="gpt-4o")
print(response.choices[0].message.content)
```

### Streaming with `astream()`

To receive responses in real time as they are generated:

```python
async def stream_response():
    async for chunk in client.astream(messages, model="gpt-4o"):
        if chunk.content:
            print(chunk.content, end="", flush=True)

asyncio.run(stream_response())
```

## Additional parameters

All methods accept additional parameters via `**kwargs`, which are passed directly to the underlying API. For example:

```python
response = await client.ainvoke_to_str(
    messages=messages,
    model="gpt-4o",
    temperature=0.7,  # Controls creativity
    max_tokens=500,  # Limits response length
    top_p=0.9,  # Controls diversity
)
```

## Scaleway models

Scaleway models use an OpenAI-compatible API, which allows seamless integration with `llmax`.
To use a Scaleway model, configure the deployment with the `"scaleway"` provider and provide either a full `endpoint` or a `project_id` (recommended).

### Configuring a Scaleway model

**Option 1: Using `project_id` (recommended)**

The URL is automatically constructed as `https://api.scaleway.ai/v1/{project_id}`:

```python
import os

from llmax.clients import MultiAIClient
from llmax.models import Deployment, Model

deployments: dict[Model, Deployment] = {
    "scaleway/llama-3.3-70b-instruct": Deployment(
        model="scaleway/llama-3.3-70b-instruct",
        provider="scaleway",
        deployment_name="llama-3.3-70b-instruct",  # The deployment name on Scaleway
        api_key=os.getenv("SCALEWAY_API_KEY", ""),
        project_id=os.getenv("SCALEWAY_PROJECT_ID", ""),  # Recommended
    ),
    "scaleway/qwen3-235b-a22b-instruct-2507": Deployment(
        model="scaleway/qwen3-235b-a22b-instruct-2507",
        provider="scaleway",
        deployment_name="qwen3-235b-a22b-instruct-2507",
        api_key=os.getenv("SCALEWAY_API_KEY", ""),
        project_id=os.getenv("SCALEWAY_PROJECT_ID", ""),
    ),
}

client = MultiAIClient(deployments=deployments)
```

**Option 2: Using a full `endpoint` (backward compatibility)**

You can also provide a full endpoint if you prefer:

```python
deployments: dict[Model, Deployment] = {
    "scaleway/llama-3.3-70b-instruct": Deployment(
        model="scaleway/llama-3.3-70b-instruct",
        provider="scaleway",
        deployment_name="llama-3.3-70b-instruct",
        api_key=os.getenv("SCALEWAY_API_KEY", ""),
        endpoint=os.getenv("SCALEWAY_ENDPOINT", ""),  # E.g. https://api.scaleway.ai/v1/your-project-id
    ),
}
```

**Note**: You must provide either `endpoint` or `project_id`, but not necessarily both. If you provide `project_id`, the URL is constructed automatically according to the Scaleway OpenAPI specification.
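The endpoint construction rule described above reduces to simple string formatting. A sketch of that stated rule (the helper name `scaleway_endpoint` is illustrative, not part of llmax):

```python
# Endpoint construction rule for project_id-based Scaleway deployments,
# as described above: https://api.scaleway.ai/v1/{project_id}

def scaleway_endpoint(project_id: str) -> str:
    return f"https://api.scaleway.ai/v1/{project_id}"

print(scaleway_endpoint("your-project-id"))
# https://api.scaleway.ai/v1/your-project-id
```

This is why supplying `project_id` alone is enough: the full endpoint carries no extra information beyond the project id.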
### Using Scaleway models

Once configured, usage is identical to other models:

```python
messages = [
    {"role": "user", "content": "Explain machine learning in a few sentences."},
]

# Synchronous usage
response = client.invoke_to_str(
    messages=messages,
    model="scaleway/llama-3.3-70b-instruct",
)

# Asynchronous usage
response = await client.ainvoke_to_str(
    messages=messages,
    model="scaleway/qwen3-235b-a22b-instruct-2507",
    temperature=0.8,
    max_tokens=300,
)
```

### Available Scaleway models

The following models are supported:

- `scaleway/qwen3-235b-a22b-instruct-2507` - Qwen 3 (235B)
- `scaleway/gpt-oss-120b` - GPT Open Source (120B)
- `scaleway/gemma-3-27b-it` - Gemma 3 (27B)
- `scaleway/whisper-large-v3` - Whisper for audio transcription
- `scaleway/voxtral-small-24b-2507` - Voxtral Small (24B)
- `scaleway/mistral-small-3.2-24b-instruct-2506` - Mistral Small 3.2 (24B)
- `scaleway/llama-3.3-70b-instruct` - Llama 3.3 (70B)
- `scaleway/deepseek-r1-distill-llama-70b` - DeepSeek R1 Distill (70B)

### Special note for the Qwen model

The `scaleway/qwen3-235b-a22b-instruct-2507` model requires a special format for JSON responses. If you use `response_format={"type": "json_object"}`, it is automatically transformed into the `json_schema` format required by Scaleway:

```python
response = await client.ainvoke_to_str(
    messages=messages,
    model="scaleway/qwen3-235b-a22b-instruct-2507",
    response_format={"type": "json_object"},  # Transformed automatically
)
```

# Specificities

When creating the client, you can also specify two functions, *increment_usage* and *get_usage*. The first is a **Callable[[float, Model], bool]**, while the second is a **Callable[[], float]**.

*increment_usage* is called after each LLM call; the float is the price and the Model is the model used. It can therefore be used to update your database. *get_usage* returns whether a condition is met.
For instance, it can be a function that calls your database and returns whether the user is still active.
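A minimal sketch of what these two hooks could look like, matching the signatures stated above. The in-memory ledger, the `BUDGET` cap, and the printed log line are illustrative assumptions, not llmax's documented behavior:

```python
# Sketch of usage hooks matching the stated signatures:
#   increment_usage: Callable[[float, Model], bool]
#   get_usage:       Callable[[], float]
# The in-memory dict stands in for a real database; Model is shown as str.

spent = {"total": 0.0}
BUDGET = 10.0  # hypothetical spending cap

def increment_usage(price: float, model: str) -> bool:
    """Record the cost of a call; return True while under budget."""
    spent["total"] += price
    print(f"{model} call cost {price:.4f}; total so far {spent['total']:.4f}")
    return spent["total"] <= BUDGET

def get_usage() -> float:
    """Return the total spend recorded so far."""
    return spent["total"]

increment_usage(0.25, "gpt-4o")
print(get_usage())  # 0.25
```

Both callables would then be passed to the client at construction time, alongside `deployments`.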
text/markdown
Delos Intelligence
maximiliendedinechin@delosintelligence.fr
null
null
null
AI, LLM, generative
[ "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14" ]
[]
null
null
<4.0,>=3.10
[]
[]
[]
[ "boto3<2.0.0,>=1.35.65", "google-auth<3.0.0,>=2.36.0", "google-genai==1.2.0", "loguru<0.8.0,>=0.7.2", "openai<2.0.0,>=1.42.0", "pydantic<3.0.0,>=2.9.2", "pydub<0.26.0,>=0.25.1", "python-dotenv<2.0.0,>=1.0.1", "tiktoken<0.8.0,>=0.7.0" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.11.7
2026-02-20T10:31:01.886478
delos_llmax-1.16.0.tar.gz
28,782
ef/4f/f38eea3b48c52003b9569d6338e6fdcec4b73c9af150900fea3552ddbb07/delos_llmax-1.16.0.tar.gz
source
sdist
null
false
48d186f6b0a832e2844f6c04fbc8fdf0
5506cba9b97d060725df526dd1d0e2dff0ce67c5781c2112be3be994ae88c356
ef4ff38eea3b48c52003b9569d6338e6fdcec4b73c9af150900fea3552ddbb07
null
[ "LICENSE" ]
403
2.4
experimental-intelligence
1.0.0
High-level physics, quantum, and mathematics simulation engine.
# EI Experimental Intelligence

This library was developed for astrophysics, quantum mechanics, and advanced mathematics computations.
text/markdown
null
sura191838391748482828291828199191 <cobanoglum735@gmail.com>
null
null
null
science, quantum, physics, astrophysics, simulation
[ "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent" ]
[]
null
null
>=3.8
[]
[]
[]
[ "numpy", "sympy", "scipy", "astropy" ]
[]
[]
[]
[ "Home, https://pypi.org/project/experimental-intelligence/" ]
twine/6.2.0 CPython/3.13.2
2026-02-20T10:30:54.189362
experimental_intelligence-1.0.0.tar.gz
4,336
c5/0b/e8cb4c48f64f51cb2ae69c8abb93b34194750ab269b4615cd4a4545ca03e/experimental_intelligence-1.0.0.tar.gz
source
sdist
null
false
5711f61bee79f141651ee619d409f74e
a3192e7ddf7d722a9e4f1f0462aaf7b2daeba9d6055367c03939df1407de466f
c50be8cb4c48f64f51cb2ae69c8abb93b34194750ab269b4615cd4a4545ca03e
null
[]
240
2.4
ksapi
1.24.0
Knowledge Stack API
# ksapi

Knowledge Stack backend API for authentication and knowledge management

This Python package is automatically generated by the [OpenAPI Generator](https://openapi-generator.tech) project:

- API version: 0.1.0
- Package version: 1.24.0
- Generator version: 7.20.0
- Build package: org.openapitools.codegen.languages.PythonClientCodegen

## Requirements.

Python 3.9+

## Installation & Usage

### pip install

If the python package is hosted on a repository, you can install directly using:

```sh
pip install git+https://github.com/GIT_USER_ID/GIT_REPO_ID.git
```

(you may need to run `pip` with root permission: `sudo pip install git+https://github.com/GIT_USER_ID/GIT_REPO_ID.git`)

Then import the package:

```python
import ksapi
```

### Setuptools

Install via [Setuptools](http://pypi.python.org/pypi/setuptools).

```sh
python setup.py install --user
```

(or `sudo python setup.py install` to install the package for all users)

Then import the package:

```python
import ksapi
```

### Tests

Execute `pytest` to run the tests.

## Getting Started

Please follow the [installation procedure](#installation--usage) and then run the following:

```python
from uuid import UUID  # needed for thread_id below
from pprint import pprint

import ksapi
from ksapi.rest import ApiException

# Defining the host is optional and defaults to http://localhost:8000
# See configuration.py for a list of all supported configuration parameters.
configuration = ksapi.Configuration(
    host = "http://localhost:8000"
)

# Enter a context with an instance of the API client
with ksapi.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = ksapi.ThreadMessagesApi(api_client)
    thread_id = UUID('38400000-8cf0-11bd-b23e-10b96e4ef00d') # UUID |
    create_thread_message_request = ksapi.CreateThreadMessageRequest() # CreateThreadMessageRequest |
    ks_uat = 'ks_uat_example' # str | (optional)

    try:
        # Create Thread Message Handler
        api_response = api_instance.create_thread_message(thread_id, create_thread_message_request, ks_uat=ks_uat)
        print("The response of ThreadMessagesApi->create_thread_message:\n")
        pprint(api_response)
    except ApiException as e:
        print("Exception when calling ThreadMessagesApi->create_thread_message: %s\n" % e)
```

## Documentation for API Endpoints

All URIs are relative to *http://localhost:8000*

Class | Method | HTTP request | Description
------------ | ------------- | ------------- | -------------
*ThreadMessagesApi* | [**create_thread_message**](docs/ThreadMessagesApi.md#create_thread_message) | **POST** /v1/threads/{thread_id}/messages | Create Thread Message Handler
*ThreadMessagesApi* | [**get_thread_message**](docs/ThreadMessagesApi.md#get_thread_message) | **GET** /v1/threads/{thread_id}/messages/{message_id} | Get Thread Message Handler
*ThreadMessagesApi* | [**list_thread_messages**](docs/ThreadMessagesApi.md#list_thread_messages) | **GET** /v1/threads/{thread_id}/messages | List Thread Messages Handler
*ThreadsApi* | [**create_thread**](docs/ThreadsApi.md#create_thread) | **POST** /v1/threads | Create Thread Handler
*ThreadsApi* | [**delete_thread**](docs/ThreadsApi.md#delete_thread) | **DELETE** /v1/threads/{thread_id} | Delete Thread Handler
*ThreadsApi* | [**get_thread**](docs/ThreadsApi.md#get_thread) | **GET** /v1/threads/{thread_id} | Get Thread Handler
*ThreadsApi* | [**list_threads**](docs/ThreadsApi.md#list_threads) | **GET** /v1/threads | List Threads Handler
*ThreadsApi* | [**stream_thread**](docs/ThreadsApi.md#stream_thread) | **GET** /v1/threads/{thread_id}/stream | Stream Thread Handler
*ThreadsApi* | [**update_thread**](docs/ThreadsApi.md#update_thread) | **PATCH** /v1/threads/{thread_id} | Update Thread Handler
*AuthApi* | [**create_password_user**](docs/AuthApi.md#create_password_user) | **POST** /v1/auth/pw/user | Create Password User Handler
*AuthApi* | [**initiate_sso**](docs/AuthApi.md#initiate_sso) | **POST** /v1/auth/sso/initiate | Initiate Sso Handler
*AuthApi* | [**oauth2_callback**](docs/AuthApi.md#oauth2_callback) | **GET** /v1/auth/sso/oauth2/callback | Oauth2 Callback Handler
*AuthApi* | [**pw_email_verification**](docs/AuthApi.md#pw_email_verification) | **POST** /v1/auth/pw/email_verification | Pw Email Verification Handler
*AuthApi* | [**pw_signin**](docs/AuthApi.md#pw_signin) | **POST** /v1/auth/pw/signin | Signin Handler
*AuthApi* | [**refresh_uat**](docs/AuthApi.md#refresh_uat) | **POST** /v1/auth/uat | Refresh Uat Handler
*AuthApi* | [**reset_password**](docs/AuthApi.md#reset_password) | **POST** /v1/auth/pw/reset | Reset Password Handler
*AuthApi* | [**reset_password_with_token**](docs/AuthApi.md#reset_password_with_token) | **POST** /v1/auth/pw/reset_with_token | Reset Password With Token Handler
*AuthApi* | [**send_pw_reset_email**](docs/AuthApi.md#send_pw_reset_email) | **POST** /v1/auth/pw/send_reset_email | Send Pw Reset Email Handler
*AuthApi* | [**signout**](docs/AuthApi.md#signout) | **POST** /v1/auth/signout | Signout Handler
*ChunkLineagesApi* | [**create_chunk_lineage**](docs/ChunkLineagesApi.md#create_chunk_lineage) | **POST** /v1/chunk-lineages | Create Chunk Lineage Handler
*ChunkLineagesApi* | [**delete_chunk_lineage**](docs/ChunkLineagesApi.md#delete_chunk_lineage) | **DELETE** /v1/chunk-lineages | Delete Chunk Lineage Handler
*ChunkLineagesApi* | [**get_chunk_lineage**](docs/ChunkLineagesApi.md#get_chunk_lineage) | **GET** /v1/chunk-lineages/{chunk_id} | Get Chunk Lineage Handler
*ChunksApi* | [**create_chunk**](docs/ChunksApi.md#create_chunk) | **POST** /v1/chunks | Create Chunk Handler
*ChunksApi* | [**delete_chunk**](docs/ChunksApi.md#delete_chunk) | **DELETE** /v1/chunks/{chunk_id} | Delete Chunk Handler
*ChunksApi* | [**get_chunk**](docs/ChunksApi.md#get_chunk) | **GET** /v1/chunks/{chunk_id} | Get Chunk Handler
*ChunksApi* | [**search_chunks**](docs/ChunksApi.md#search_chunks) | **POST** /v1/chunks/search | Search Chunks Handler
*ChunksApi* | [**update_chunk_content**](docs/ChunksApi.md#update_chunk_content) | **PATCH** /v1/chunks/{chunk_id}/content | Update Chunk Content Handler
*ChunksApi* | [**update_chunk_metadata**](docs/ChunksApi.md#update_chunk_metadata) | **PATCH** /v1/chunks/{chunk_id} | Update Chunk Metadata Handler
*DefaultApi* | [**health_check**](docs/DefaultApi.md#health_check) | **GET** /healthz | Health Check Handler
*DefaultApi* | [**hello**](docs/DefaultApi.md#hello) | **GET** / | Root Handler
*DocumentVersionsApi* | [**clear_document_version_contents**](docs/DocumentVersionsApi.md#clear_document_version_contents) | **DELETE** /v1/document_versions/{version_id}/contents | Clear Document Version Contents Handler
*DocumentVersionsApi* | [**create_document_version**](docs/DocumentVersionsApi.md#create_document_version) | **POST** /v1/documents/{document_id}/versions | Create Document Version Handler
*DocumentVersionsApi* | [**delete_document_version**](docs/DocumentVersionsApi.md#delete_document_version) | **DELETE** /v1/document_versions/{version_id} | Delete Document Version Handler
*DocumentVersionsApi* | [**get_document_version**](docs/DocumentVersionsApi.md#get_document_version) | **GET** /v1/document_versions/{version_id} | Get Document Version Handler
*DocumentVersionsApi* | [**get_document_version_contents**](docs/DocumentVersionsApi.md#get_document_version_contents) | **GET** /v1/document_versions/{version_id}/contents | Get Document Version Contents Handler
*DocumentVersionsApi* | [**list_document_versions**](docs/DocumentVersionsApi.md#list_document_versions) | **GET** /v1/document_versions | List Document Versions Handler
*DocumentVersionsApi* | [**update_document_version_metadata**](docs/DocumentVersionsApi.md#update_document_version_metadata) | **PATCH** /v1/document_versions/{version_id}/metadata | Update Document Version Metadata Handler
*DocumentsApi* | [**create_document**](docs/DocumentsApi.md#create_document) | **POST** /v1/documents | Create Document Handler
*DocumentsApi* | [**delete_document**](docs/DocumentsApi.md#delete_document) | **DELETE** /v1/documents/{document_id} | Delete Document Handler
*DocumentsApi* | [**get_document**](docs/DocumentsApi.md#get_document) | **GET** /v1/documents/{document_id} | Get Document Handler
*DocumentsApi* | [**ingest_document**](docs/DocumentsApi.md#ingest_document) | **POST** /v1/documents/ingest | Ingest Document Handler
*DocumentsApi* | [**list_documents**](docs/DocumentsApi.md#list_documents) | **GET** /v1/documents | List Documents Handler
*DocumentsApi* | [**update_document**](docs/DocumentsApi.md#update_document) | **PATCH** /v1/documents/{document_id} | Update Document Handler
*FoldersApi* | [**create_folder**](docs/FoldersApi.md#create_folder) | **POST** /v1/folders | Create Folder Handler
*FoldersApi* | [**delete_folder**](docs/FoldersApi.md#delete_folder) | **DELETE** /v1/folders/{folder_id} | Delete Folder Handler
*FoldersApi* | [**get_folder**](docs/FoldersApi.md#get_folder) | **GET** /v1/folders/{folder_id} | Get Folder Handler
*FoldersApi* | [**list_folder_contents**](docs/FoldersApi.md#list_folder_contents) | **GET** /v1/folders/{folder_id}/contents | List Folder Contents Handler
*FoldersApi* | [**list_folders**](docs/FoldersApi.md#list_folders) | **GET** /v1/folders | List Folders Handler
*FoldersApi* | [**update_folder**](docs/FoldersApi.md#update_folder) | **PATCH** /v1/folders/{folder_id} | Update Folder Handler
*InvitesApi* | [**accept_invite**](docs/InvitesApi.md#accept_invite) | **POST** /v1/invites/{invite_id}/accept | Accept Invite
*InvitesApi* | [**create_invite**](docs/InvitesApi.md#create_invite) | **POST** /v1/invites | Create Invite
*InvitesApi* | [**delete_invite**](docs/InvitesApi.md#delete_invite) | **DELETE** /v1/invites/{invite_id} | Delete Invite
*InvitesApi* | [**list_invites**](docs/InvitesApi.md#list_invites) | **GET** /v1/invites | List Invites Handler
*PathPartsApi* | [**bulk_add_path_part_tags**](docs/PathPartsApi.md#bulk_add_path_part_tags) | **POST** /v1/path-parts/{path_part_id}/tags | Bulk Add Path Part Tags Handler
*PathPartsApi* | [**bulk_remove_path_part_tags**](docs/PathPartsApi.md#bulk_remove_path_part_tags) | **DELETE** /v1/path-parts/{path_part_id}/tags | Bulk Remove Path Part Tags Handler
*PathPartsApi* | [**get_path_part**](docs/PathPartsApi.md#get_path_part) | **GET** /v1/path-parts/{path_part_id} | Get Path Part Handler
*PathPartsApi* | [**list_path_parts**](docs/PathPartsApi.md#list_path_parts) | **GET** /v1/path-parts | List Path Parts Handler
*SectionsApi* | [**create_section**](docs/SectionsApi.md#create_section) | **POST** /v1/sections | Create Section Handler
*SectionsApi* | [**delete_section**](docs/SectionsApi.md#delete_section) | **DELETE** /v1/sections/{section_id} | Delete Section Handler
*SectionsApi* | [**get_section**](docs/SectionsApi.md#get_section) | **GET** /v1/sections/{section_id} | Get Section Handler
*SectionsApi* | [**update_section**](docs/SectionsApi.md#update_section) | **PATCH** /v1/sections/{section_id} | Update Section Handler
*TagsApi* | [**create_tag**](docs/TagsApi.md#create_tag) | **POST** /v1/tags | Create Tag Handler
*TagsApi* | [**delete_tag**](docs/TagsApi.md#delete_tag) | **DELETE** /v1/tags/{tag_id} | Delete Tag Handler
*TagsApi* | [**get_tag**](docs/TagsApi.md#get_tag) | **GET** /v1/tags/{tag_id} | Get Tag Handler
*TagsApi* | [**list_tags**](docs/TagsApi.md#list_tags) | **GET** /v1/tags | List Tags Handler
*TagsApi* | [**update_tag**](docs/TagsApi.md#update_tag) | **PATCH** /v1/tags/{tag_id} | Update Tag Handler
*TenantsApi* | [**create_tenant**](docs/TenantsApi.md#create_tenant) | **POST** /v1/tenants | Create Tenant
*TenantsApi* | [**delete_tenant**](docs/TenantsApi.md#delete_tenant) | **DELETE** /v1/tenants/{tenant_id} | Delete Tenant
*TenantsApi* | [**delete_tenant_user**](docs/TenantsApi.md#delete_tenant_user) | **DELETE** /v1/tenants/{tenant_id}/users/{user_id} | Delete Tenant User
*TenantsApi* | [**get_tenant**](docs/TenantsApi.md#get_tenant) | **GET** /v1/tenants/{tenant_id} | Get Tenant
*TenantsApi* | [**list_tenant_users**](docs/TenantsApi.md#list_tenant_users) | **GET** /v1/tenants/{tenant_id}/users | List Tenant Users
*TenantsApi* | [**list_tenants**](docs/TenantsApi.md#list_tenants) | **GET** /v1/tenants | List Tenants
*TenantsApi* | [**update_tenant**](docs/TenantsApi.md#update_tenant) | **PATCH** /v1/tenants/{tenant_id} | Update Tenant
*TenantsApi* | [**update_tenant_user**](docs/TenantsApi.md#update_tenant_user) | **PATCH** /v1/tenants/{tenant_id}/users/{user_id} | Update Tenant User
*UserPermissionsApi* | [**create_user_permission**](docs/UserPermissionsApi.md#create_user_permission) | **POST** /v1/user-permissions | Create User Permission Handler
*UserPermissionsApi* | [**delete_user_permission**](docs/UserPermissionsApi.md#delete_user_permission) | **DELETE** /v1/user-permissions/{permission_id} | Delete User Permission Handler
*UserPermissionsApi* | [**list_user_permissions**](docs/UserPermissionsApi.md#list_user_permissions) | **GET** /v1/user-permissions | List User Permissions Handler
*UserPermissionsApi* | [**update_user_permission**](docs/UserPermissionsApi.md#update_user_permission) | **PATCH** /v1/user-permissions/{permission_id} | Update User Permission Handler
*UsersApi* | [**get_me**](docs/UsersApi.md#get_me) | **GET** /v1/users/me | Get Me Handler
*UsersApi* | [**update_me**](docs/UsersApi.md#update_me) | **PATCH** /v1/users | Update Me Handler
*WorkflowsApi* | [**get_workflow**](docs/WorkflowsApi.md#get_workflow) | **GET** /v1/workflows/{workflow_id} | Get Workflow Handler
*WorkflowsApi* | [**list_workflows**](docs/WorkflowsApi.md#list_workflows) | **GET** /v1/workflows | List Workflows Handler
*WorkflowsApi* | [**workflow_action**](docs/WorkflowsApi.md#workflow_action) | **POST** /v1/workflows/{workflow_id} | Workflow Action Handler

## Documentation For Models

- [BulkTagRequest](docs/BulkTagRequest.md)
- [ChunkContentItem](docs/ChunkContentItem.md)
- [ChunkLineageResponse](docs/ChunkLineageResponse.md)
- [ChunkMetadataInput](docs/ChunkMetadataInput.md)
- [ChunkMetadataOutput](docs/ChunkMetadataOutput.md)
- [ChunkResponse](docs/ChunkResponse.md)
- [ChunkSearchRequest](docs/ChunkSearchRequest.md)
- [ChunkType](docs/ChunkType.md)
- [ClearVersionContentsResponse](docs/ClearVersionContentsResponse.md)
- [CreateChunkLineageRequest](docs/CreateChunkLineageRequest.md)
- [CreateChunkRequest](docs/CreateChunkRequest.md)
- [CreateDocumentRequest](docs/CreateDocumentRequest.md)
- [CreateFolderRequest](docs/CreateFolderRequest.md)
- [CreatePasswordUserRequest](docs/CreatePasswordUserRequest.md)
- [CreatePermissionRequest](docs/CreatePermissionRequest.md)
- [CreateSectionRequest](docs/CreateSectionRequest.md)
- [CreateTagRequest](docs/CreateTagRequest.md)
- [CreateTenantRequest](docs/CreateTenantRequest.md)
- [CreateThreadMessageRequest](docs/CreateThreadMessageRequest.md)
- [CreateThreadRequest](docs/CreateThreadRequest.md)
- [DocumentOrigin](docs/DocumentOrigin.md)
- [DocumentResponse](docs/DocumentResponse.md)
- [DocumentType](docs/DocumentType.md)
- [DocumentVersionMetadata](docs/DocumentVersionMetadata.md)
- [DocumentVersionMetadataUpdate](docs/DocumentVersionMetadataUpdate.md)
- [DocumentVersionResponse](docs/DocumentVersionResponse.md)
- [EmailSentResponse](docs/EmailSentResponse.md)
- [EmailVerificationRequest](docs/EmailVerificationRequest.md)
- [EmbeddingModel](docs/EmbeddingModel.md)
- [FolderResponse](docs/FolderResponse.md)
- [FolderResponseOrDocumentResponse](docs/FolderResponseOrDocumentResponse.md)
- [HTTPValidationError](docs/HTTPValidationError.md)
- [HealthCheckResponse](docs/HealthCheckResponse.md)
- [IdpType](docs/IdpType.md)
- [IngestDocumentResponse](docs/IngestDocumentResponse.md)
- [InviteResponse](docs/InviteResponse.md)
- [InviteStatus](docs/InviteStatus.md)
- [InviteUserRequest](docs/InviteUserRequest.md)
- [LineageEdgeResponse](docs/LineageEdgeResponse.md)
- [LineageGraphResponse](docs/LineageGraphResponse.md)
- [LineageNodeResponse](docs/LineageNodeResponse.md)
- [LocationInner](docs/LocationInner.md)
- [MessageRole](docs/MessageRole.md)
- [OAuth2Config](docs/OAuth2Config.md)
- [PaginatedResponseAnnotatedUnionFolderResponseDocumentResponseDiscriminator](docs/PaginatedResponseAnnotatedUnionFolderResponseDocumentResponseDiscriminator.md)
- [PaginatedResponseAnnotatedUnionSectionContentItemChunkContentItemDiscriminator](docs/PaginatedResponseAnnotatedUnionSectionContentItemChunkContentItemDiscriminator.md)
- [PaginatedResponseDocumentResponse](docs/PaginatedResponseDocumentResponse.md)
- [PaginatedResponseDocumentVersionResponse](docs/PaginatedResponseDocumentVersionResponse.md)
- [PaginatedResponseFolderResponse](docs/PaginatedResponseFolderResponse.md)
- [PaginatedResponseInviteResponse](docs/PaginatedResponseInviteResponse.md)
- [PaginatedResponsePathPartResponse](docs/PaginatedResponsePathPartResponse.md)
- [PaginatedResponsePermissionResponse](docs/PaginatedResponsePermissionResponse.md)
- [PaginatedResponseTagResponse](docs/PaginatedResponseTagResponse.md)
- [PaginatedResponseTenantResponse](docs/PaginatedResponseTenantResponse.md)
- [PaginatedResponseTenantUserResponse](docs/PaginatedResponseTenantUserResponse.md)
- [PaginatedResponseThreadMessageResponse](docs/PaginatedResponseThreadMessageResponse.md)
- [PaginatedResponseThreadResponse](docs/PaginatedResponseThreadResponse.md)
- [PaginatedResponseWorkflowSummaryResponse](docs/PaginatedResponseWorkflowSummaryResponse.md)
- [PartType](docs/PartType.md)
- [PasswordResetRequest](docs/PasswordResetRequest.md)
- [PasswordResetWithTokenRequest](docs/PasswordResetWithTokenRequest.md)
- [PathOrder](docs/PathOrder.md)
- [PathPartResponse](docs/PathPartResponse.md)
- [PathPartTagsResponse](docs/PathPartTagsResponse.md)
- [PermissionCapability](docs/PermissionCapability.md)
- [PermissionResponse](docs/PermissionResponse.md)
- [PipelineState](docs/PipelineState.md)
- [PipelineStatus](docs/PipelineStatus.md)
- [Polygon](docs/Polygon.md)
- [PolygonReference](docs/PolygonReference.md)
- [RootResponse](docs/RootResponse.md)
- [ScoredChunkResponse](docs/ScoredChunkResponse.md)
- [SectionContentItem](docs/SectionContentItem.md)
- [SectionContentItemOrChunkContentItem](docs/SectionContentItemOrChunkContentItem.md)
- [SectionResponse](docs/SectionResponse.md)
- [SignInRequest](docs/SignInRequest.md)
- [TagResponse](docs/TagResponse.md)
- [TenantResponse](docs/TenantResponse.md)
- [TenantUserEditRequest](docs/TenantUserEditRequest.md)
- [TenantUserResponse](docs/TenantUserResponse.md)
- [TenantUserRole](docs/TenantUserRole.md)
- [ThreadMessageResponse](docs/ThreadMessageResponse.md)
- [ThreadResponse](docs/ThreadResponse.md)
- [UpdateChunkContentRequest](docs/UpdateChunkContentRequest.md)
- [UpdateChunkMetadataRequest](docs/UpdateChunkMetadataRequest.md)
- [UpdateDocumentRequest](docs/UpdateDocumentRequest.md)
- [UpdateFolderRequest](docs/UpdateFolderRequest.md)
- [UpdatePermissionRequest](docs/UpdatePermissionRequest.md)
- [UpdateSectionRequest](docs/UpdateSectionRequest.md)
- [UpdateTagRequest](docs/UpdateTagRequest.md)
- [UpdateTenantRequest](docs/UpdateTenantRequest.md)
- [UpdateThreadRequest](docs/UpdateThreadRequest.md)
- [UpdateUserRequest](docs/UpdateUserRequest.md)
- [UserResponse](docs/UserResponse.md)
- [ValidationError](docs/ValidationError.md)
- [WorkflowAction](docs/WorkflowAction.md)
- [WorkflowActionResponse](docs/WorkflowActionResponse.md)
- [WorkflowDetailResponse](docs/WorkflowDetailResponse.md)
- [WorkflowSummaryResponse](docs/WorkflowSummaryResponse.md)

<a id="documentation-for-authorization"></a>
## Documentation For Authorization

Endpoints do not require authorization.

## Author
text/markdown
OpenAPI Generator community
OpenAPI Generator Community <team@openapitools.org>
null
null
null
OpenAPI, OpenAPI-Generator, Knowledge Stack API
[]
[]
null
null
>=3.9
[]
[]
[]
[ "urllib3<3.0.0,>=2.1.0", "python-dateutil>=2.8.2", "pydantic>=2", "typing-extensions>=4.7.1" ]
[]
[]
[]
[ "Repository, https://github.com/GIT_USER_ID/GIT_REPO_ID" ]
twine/6.2.0 CPython/3.12.12
2026-02-20T10:30:48.571326
ksapi-1.24.0.tar.gz
108,364
c1/3c/08313f5ad8ff74664a17a6103618450152ff39691f8071cd60bac50d7f03/ksapi-1.24.0.tar.gz
source
sdist
null
false
0e1ee4eb9b17d688472670afa7dfd4a1
86c11c62718bd9b647dc62d50c5d0e59a8051ab8b47552e76c144d975c3f3fdc
c13c08313f5ad8ff74664a17a6103618450152ff39691f8071cd60bac50d7f03
null
[]
247
2.3
supermemory
3.26.0
The official Python library for the supermemory API
# Supermemory Python API library

<!-- prettier-ignore -->
[![PyPI version](https://img.shields.io/pypi/v/supermemory.svg?label=pypi%20(stable))](https://pypi.org/project/supermemory/)

The Supermemory Python library provides convenient access to the Supermemory REST API from any Python 3.9+ application. The library includes type definitions for all request params and response fields, and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).

It is generated with [Stainless](https://www.stainless.com/).

## MCP Server

Use the Supermemory MCP Server to enable AI assistants to interact with this API, allowing them to explore endpoints, make test requests, and use documentation to help integrate this SDK into your application.

[![Add to Cursor](https://cursor.com/deeplink/mcp-install-dark.svg)](https://cursor.com/en-US/install-mcp?name=supermemory-mcp&config=eyJuYW1lIjoic3VwZXJtZW1vcnktbWNwIiwidHJhbnNwb3J0IjoiaHR0cCIsInVybCI6Imh0dHBzOi8vc3VwZXJtZW1vcnktbmV3LnN0bG1jcC5jb20iLCJoZWFkZXJzIjp7Ingtc3VwZXJtZW1vcnktYXBpLWtleSI6Ik15IEFQSSBLZXkifX0)
[![Install in VS Code](https://img.shields.io/badge/_-Add_to_VS_Code-blue?style=for-the-badge&logo=data:image/svg%2bxml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIGZpbGw9Im5vbmUiIHZpZXdCb3g9IjAgMCA0MCA0MCI+PHBhdGggZmlsbD0iI0VFRSIgZmlsbC1ydWxlPSJldmVub2RkIiBkPSJNMzAuMjM1IDM5Ljg4NGEyLjQ5MSAyLjQ5MSAwIDAgMS0xLjc4MS0uNzMwTDEyLjcgMjQuNzhsLTMuNDYgMi42MjQtMy40MDYgMi41ODJhMS42NjUgMS42NjUgMCAwIDEtMS4wODIuMzM4IDEuNjY0IDEuNjY0IDAgMCAxLTEuMDQ2LS40MzFsLTIuMi0yYTEuNjY2IDEuNjY2IDAgMCAxIDAtMi40NjNMNy40NTggMjAgNC42NyAxNy40NTMgMS41MDcgMTQuNTdhMS42NjUgMS42NjUgMCAwIDEgMC0yLjQ2M2wyLjItMmExLjY2NSAxLjY2NSAwIDAgMSAyLjEzLS4wOTdsNi44NjMgNS4yMDlMMjguNDUyLjg0NGEyLjQ4OCAyLjQ4OCAwIDAgMSAxLjg0MS0uNzI5Yy4zNTEuMDA5LjY5OS4wOTEgMS4wMTkuMjQ1bDguMjM2IDMuOTYxYTIuNSAyLjUgMCAwIDEgMS40MTUgMi4yNTN2LjA5OS0uMDQ1VjMzLjM3di0uMDQ1LjA5NWEyLjUwMSAyLjUwMSAwIDAgMS0xLjQxNiAyLjI1N2wtOC4yMzUgMy45NjFhMi40OTIgMi40OTIgMCAwIDEtMS4wNzcuMjQ2Wm0uNzE2LTI4Ljk0Ny0xMS45NDggOS4wNjIgMTEuOTUyIDkuMDY1LS4wMDQtMTguMTI3WiIvPjwvc3ZnPg==)](https://vscode.stainless.com/mcp/%7B%22name%22%3A%22supermemory-mcp%22%2C%22type%22%3A%22http%22%2C%22url%22%3A%22https%3A%2F%2Fsupermemory-new.stlmcp.com%22%2C%22headers%22%3A%7B%22x-supermemory-api-key%22%3A%22My%20API%20Key%22%7D%7D)

> Note: You may need to set environment variables in your MCP client.

## Documentation

The REST API documentation can be found on [docs.supermemory.ai](https://docs.supermemory.ai). The full API of this library can be found in [api.md](https://github.com/supermemoryai/python-sdk/tree/main/api.md).

## Installation

```sh
# install from PyPI
pip install supermemory
```

## Usage

The full API of this library can be found in [api.md](https://github.com/supermemoryai/python-sdk/tree/main/api.md).
```python import os from supermemory import Supermemory client = Supermemory( api_key=os.environ.get("SUPERMEMORY_API_KEY"), # This is the default and can be omitted ) response = client.search.documents( q="documents related to python", ) print(response.results) ``` While you can provide an `api_key` keyword argument, we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/) to add `SUPERMEMORY_API_KEY="My API Key"` to your `.env` file so that your API Key is not stored in source control. ## Async usage Simply import `AsyncSupermemory` instead of `Supermemory` and use `await` with each API call: ```python import os import asyncio from supermemory import AsyncSupermemory client = AsyncSupermemory( api_key=os.environ.get("SUPERMEMORY_API_KEY"), # This is the default and can be omitted ) async def main() -> None: response = await client.search.documents( q="documents related to python", ) print(response.results) asyncio.run(main()) ``` Functionality between the synchronous and asynchronous clients is otherwise identical. ### With aiohttp By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend. You can enable this by installing `aiohttp`: ```sh # install from PyPI pip install supermemory[aiohttp] ``` Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`: ```python import os import asyncio from supermemory import DefaultAioHttpClient from supermemory import AsyncSupermemory async def main() -> None: async with AsyncSupermemory( api_key=os.environ.get("SUPERMEMORY_API_KEY"), # This is the default and can be omitted http_client=DefaultAioHttpClient(), ) as client: response = await client.search.documents( q="documents related to python", ) print(response.results) asyncio.run(main()) ``` ## Using types Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). 
Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like: - Serializing back into JSON, `model.to_json()` - Converting to a dictionary, `model.to_dict()` Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`. ## Nested params Nested parameters are dictionaries, typed using `TypedDict`, for example: ```python from supermemory import Supermemory client = Supermemory() response = client.search.memories( q="machine learning concepts", include={}, ) print(response.include) ``` ## File uploads Request parameters that correspond to file uploads can be passed as `bytes`, or a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance or a tuple of `(filename, contents, media type)`. ```python from pathlib import Path from supermemory import Supermemory client = Supermemory() client.documents.upload_file( file=Path("/path/to/file"), ) ``` The async client uses the exact same interface. If you pass a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, the file contents will be read asynchronously automatically. ## Handling errors When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `supermemory.APIConnectionError` is raised. When the API returns a non-success status code (that is, 4xx or 5xx response), a subclass of `supermemory.APIStatusError` is raised, containing `status_code` and `response` properties. All errors inherit from `supermemory.APIError`. ```python import supermemory from supermemory import Supermemory client = Supermemory() try: client.add( content="content", ) except supermemory.APIConnectionError as e: print("The server could not be reached") print(e.__cause__) # an underlying Exception, likely raised within httpx. 
except supermemory.RateLimitError as e: print("A 429 status code was received; we should back off a bit.") except supermemory.APIStatusError as e: print("Another non-200-range status code was received") print(e.status_code) print(e.response) ``` Error codes are as follows: | Status Code | Error Type | | ----------- | -------------------------- | | 400 | `BadRequestError` | | 401 | `AuthenticationError` | | 403 | `PermissionDeniedError` | | 404 | `NotFoundError` | | 422 | `UnprocessableEntityError` | | 429 | `RateLimitError` | | >=500 | `InternalServerError` | | N/A | `APIConnectionError` | ### Retries Certain errors are automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors are all retried by default. You can use the `max_retries` option to configure or disable retry settings: ```python from supermemory import Supermemory # Configure the default for all requests: client = Supermemory( # default is 2 max_retries=0, ) # Or, configure per-request: client.with_options(max_retries=5).add( content="content", ) ``` ### Timeouts By default, requests time out after 1 minute. You can configure this with a `timeout` option, which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object: ```python import httpx from supermemory import Supermemory # Configure the default for all requests: client = Supermemory( # 20 seconds (default is 1 minute) timeout=20.0, ) # More granular control: client = Supermemory( timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0), ) # Override per-request: client.with_options(timeout=5.0).add( content="content", ) ``` On timeout, an `APITimeoutError` is thrown. Note that requests that time out are [retried twice by default](https://github.com/supermemoryai/python-sdk/tree/main/#retries).
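The retry-with-backoff policy described above can be illustrated with a small standalone sketch. This is not the library's internal implementation; the `backoff_delays` name, base delay, and cap are illustrative values chosen for the example:

```python
def backoff_delays(max_retries: int = 2, base: float = 0.5, cap: float = 8.0) -> list[float]:
    """Illustrative exponential backoff schedule: base * 2**attempt, capped.

    `base` and `cap` are made-up constants, not the SDK's actual values.
    """
    return [min(cap, base * (2**attempt)) for attempt in range(max_retries)]

print(backoff_delays())   # default of 2 retries -> [0.5, 1.0]
print(backoff_delays(5))  # each retry doubles the wait, up to the cap
```

Production retry loops typically also add random jitter to these delays to avoid thundering-herd effects.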
## Advanced ### Logging We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module. You can enable logging by setting the environment variable `SUPERMEMORY_LOG` to `info`. ```shell $ export SUPERMEMORY_LOG=info ``` Or to `debug` for more verbose logging. ### How to tell whether `None` means `null` or missing In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`: ```py if response.my_field is None: if 'my_field' not in response.model_fields_set: print('Got json like {}, without a "my_field" key present at all.') else: print('Got json like {"my_field": null}.') ``` ### Accessing raw response data (e.g. headers) The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g., ```py from supermemory import Supermemory client = Supermemory() response = client.with_raw_response.add( content="content", ) print(response.headers.get('X-My-Header')) parsed = response.parse() # get the object that `add()` would have returned print(parsed.id) ``` These methods return an [`APIResponse`](https://github.com/supermemoryai/python-sdk/tree/main/src/supermemory/_response.py) object. The async client returns an [`AsyncAPIResponse`](https://github.com/supermemoryai/python-sdk/tree/main/src/supermemory/_response.py) with the same structure, the only difference being `await`able methods for reading the response content. #### `.with_streaming_response` The above interface eagerly reads the full response body when you make the request, which may not always be what you want. To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python with client.with_streaming_response.add( content="content", ) as response: print(response.headers.get("X-My-Header")) for line in response.iter_lines(): print(line) ``` The context manager is required so that the response will reliably be closed. ### Making custom/undocumented requests This library is typed for convenient access to the documented API. If you need to access undocumented endpoints, params, or response properties, the library can still be used. #### Undocumented endpoints To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other http verbs. Options on the client will be respected (such as retries) when making this request. ```py import httpx response = client.post( "/foo", cast_to=httpx.Response, body={"my_param": True}, ) print(response.headers.get("x-foo")) ``` #### Undocumented request params If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request options. #### Undocumented response properties To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You can also get all the extra fields on the Pydantic model as a dict with [`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra). 
### Configuring the HTTP client You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including: - Support for [proxies](https://www.python-httpx.org/advanced/proxies/) - Custom [transports](https://www.python-httpx.org/advanced/transports/) - Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality ```python import httpx from supermemory import Supermemory, DefaultHttpxClient client = Supermemory( # Or use the `SUPERMEMORY_BASE_URL` env var base_url="http://my.test.server.example.com:8083", http_client=DefaultHttpxClient( proxy="http://my.test.proxy.example.com", transport=httpx.HTTPTransport(local_address="0.0.0.0"), ), ) ``` You can also customize the client on a per-request basis by using `with_options()`: ```python client.with_options(http_client=DefaultHttpxClient(...)) ``` ### Managing HTTP resources By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting. ```py from supermemory import Supermemory with Supermemory() as client: # make requests here ... # HTTP client is now closed ``` ## Versioning This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions: 1. Changes that only affect static types, without breaking runtime behavior. 2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_ 3. Changes that we do not expect to impact the vast majority of users in practice. We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience. 
We are keen for your feedback; please open an [issue](https://www.github.com/supermemoryai/python-sdk/issues) with questions, bugs, or suggestions. ### Determining the installed version If you've upgraded to the latest version but aren't seeing any new features you were expecting then your python environment is likely still using an older version. You can determine the version that is being used at runtime with: ```py import supermemory print(supermemory.__version__) ``` ## Requirements Python 3.9 or higher. ## Contributing See [the contributing documentation](https://github.com/supermemoryai/python-sdk/tree/main/./CONTRIBUTING.md).
text/markdown
null
Supermemory <dhravya@supermemory.com>
null
null
Apache-2.0
null
[ "Intended Audience :: Developers", "License :: OSI Approved :: Apache Software License", "Operating System :: MacOS", "Operating System :: Microsoft :: Windows", "Operating System :: OS Independent", "Operating System :: POSIX", "Operating System :: POSIX :: Linux", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14", "Topic :: Software Development :: Libraries :: Python Modules", "Typing :: Typed" ]
[]
null
null
>=3.9
[]
[]
[]
[ "anyio<5,>=3.5.0", "distro<2,>=1.7.0", "httpx<1,>=0.23.0", "pydantic<3,>=1.9.0", "sniffio", "typing-extensions<5,>=4.10", "aiohttp; extra == \"aiohttp\"", "httpx-aiohttp>=0.1.9; extra == \"aiohttp\"" ]
[]
[]
[]
[ "Homepage, https://github.com/supermemoryai/python-sdk", "Repository, https://github.com/supermemoryai/python-sdk" ]
twine/5.1.1 CPython/3.12.9
2026-02-20T10:30:32.452396
supermemory-3.26.0.tar.gz
154,511
66/b9/15db2b26087296a21ca6b352b854b2ee760816fe4b3662464ae6dc889bde/supermemory-3.26.0.tar.gz
source
sdist
null
false
c96755d45e296a17c2464a1a036639d7
35513788822e0a500da0086be4cfb5dd541fdced682131cbae316380f2cda680
66b915db2b26087296a21ca6b352b854b2ee760816fe4b3662464ae6dc889bde
null
[]
651
2.4
hexa-ddd-blueprint
0.1.1
CLI tool that scaffolds opinionated Python projects following DDD + Hexagonal Architecture
# hexa-ddd-blueprint [![CI](https://github.com/AymanKastali/hexa-ddd-blueprint/actions/workflows/ci.yml/badge.svg)](https://github.com/AymanKastali/hexa-ddd-blueprint/actions/workflows/ci.yml) [![PyPI](https://img.shields.io/pypi/v/hexa-ddd-blueprint)](https://pypi.org/project/hexa-ddd-blueprint/) CLI tool that scaffolds opinionated Python projects following **DDD + Hexagonal (Ports & Adapters) Architecture**. ## Installation ```bash uv tool install hexa-ddd-blueprint ``` ## Usage ### Interactive mode (default) ```bash hexa-ddd-blueprint new ``` Launches prompts for project name, description, author, DB choice, and Python version. ### Flag-driven mode ```bash hexa-ddd-blueprint new myproject \ --description "My awesome service" \ --author "John Doe" \ --db postgres \ --python 3.14 \ --no-docker \ --no-ci ``` ### All flags | Flag | Default | Description | |---|---|---| | `[NAME]` | (prompted) | Project name | | `--description` / `-d` | (prompted) | Project description | | `--author` / `-a` | (prompted) | Author name | | `--db` | (prompted) | Database: `postgres`, `none` | | `--python` | `3.14` | Python version | | `--no-docker` | false | Skip Docker/Compose | | `--no-ci` | false | Skip GitHub Actions CI | | `--no-devcontainer` | false | Skip devcontainer | | `--no-interactive` / `-y` | false | Use defaults, skip prompts | ## Generated Project Structure ``` myproject/ ├── src/myproject/ │ ├── domain/ # Pure business logic │ │ └── shared/ # Shared kernel (base entity) │ ├── application/ # Use cases, ports, DTOs │ └── adapters/ │ ├── config/ # Pydantic Settings │ ├── inbound/ │ │ └── api/ │ │ └── rest/ # FastAPI REST adapter │ └── outbound/ │ ├── persistence/ │ │ └── postgres/ # DB-specific adapter │ └── logging/ # Logging adapter ├── tests/ ├── docs/architecture/ # PlantUML diagrams ├── docker/ ├── .github/workflows/ └── .devcontainer/ ``` ## Development ```bash uv sync uv run pytest ```
text/markdown
null
ayman kastali <aymankastali.backend@gmail.com>
null
null
null
null
[ "Development Status :: 3 - Alpha", "Programming Language :: Python :: 3.14", "Topic :: Software Development :: Code Generators", "Typing :: Typed" ]
[]
null
null
>=3.14
[]
[]
[]
[ "jinja2>=3.1.0", "rich>=13.0.0", "typer>=0.9.0" ]
[]
[]
[]
[ "Homepage, https://github.com/AymanKastali/hexa-ddd-blueprint", "Repository, https://github.com/AymanKastali/hexa-ddd-blueprint", "Issues, https://github.com/AymanKastali/hexa-ddd-blueprint/issues" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:30:18.736588
hexa_ddd_blueprint-0.1.1.tar.gz
40,217
e8/47/46f40e1ae91a197ab80558ea25225933df06fde9ac237888dee3184c9c1a/hexa_ddd_blueprint-0.1.1.tar.gz
source
sdist
null
false
5665fdfd1afe0effb22bd3a5a05fe7cb
12c41dfcc4e9246ccf244640b52baf840f2620e9239555c48ae085febd0713d7
e84746f40e1ae91a197ab80558ea25225933df06fde9ac237888dee3184c9c1a
MIT
[ "LICENSE" ]
223
2.4
hevo-assistant
0.10.0
Chat-to-Action CLI for Hevo Data pipelines with RAG-powered assistance
# Hevo Assistant A chat-to-action CLI tool for managing Hevo Data pipelines using natural language. Ask questions, check status, pause/resume pipelines, and more - all through conversation. ## Features - **Natural Language Interface**: Interact with your Hevo pipelines using plain English - **RAG-Powered Responses**: Get accurate answers based on Hevo documentation - **Multiple LLM Providers**: Choose between OpenAI, Anthropic Claude, or local Ollama - **Pipeline Management**: List, pause, resume, and run pipelines - **Model & Workflow Support**: Manage dbt models and workflows - **Secure Configuration**: Credentials stored locally in `~/.hevo/` - **Fast Installation**: ~30 seconds (no heavy ML dependencies) ## Installation ```bash pip install hevo-assistant ``` That's it! Installation is fast because we use cloud services (Pinecone + OpenAI) instead of local ML models. ### For Development ```bash git clone https://github.com/Legolasan/hevo-app.git cd hevo-app pip install -e . ``` ### Optional: Local RAG (Heavy) If you prefer to run embeddings locally (offline mode), install with: ```bash pip install hevo-assistant[local-rag] ``` Note: This adds ~2GB of dependencies (PyTorch, sentence-transformers). ## Quick Start ### 1. Setup Configuration Run the interactive setup wizard: ```bash hevo setup ``` You'll be prompted for: - **Hevo API credentials**: Get from [Hevo Dashboard > Settings > API Keys](https://app.hevodata.com/settings/api-keys) - **LLM provider**: Choose OpenAI, Anthropic, or Ollama - **LLM API key**: Your provider's API key (not needed for Ollama) - **Pinecone API key**: Get free at [pinecone.io](https://pinecone.io) ### 2. Start Chatting ```bash # Interactive chat mode hevo chat # Or ask a one-shot question hevo ask "List my pipelines" ``` ## Usage Examples ```bash # Check pipeline status hevo ask "What's the status of my Salesforce pipeline?" 
# List all pipelines hevo ask "Show me all my pipelines" # Pause a pipeline hevo ask "Pause the MySQL pipeline" # Resume a pipeline hevo ask "Resume the MySQL pipeline" # Run a pipeline now hevo ask "Run the Salesforce pipeline now" # Ask about Hevo features hevo ask "How do I create a new destination?" # List models hevo ask "What models do I have?" # Run a model hevo ask "Run my revenue model" ``` ## Commands | Command | Description | |---------|-------------| | `hevo setup` | Interactive setup wizard | | `hevo config show` | Show current configuration | | `hevo chat` | Start interactive chat session | | `hevo ask "query"` | Ask a one-shot question | ## Configuration Configuration is stored in `~/.hevo/config.json`: ```json { "hevo": { "api_key": "your-api-key", "api_secret": "your-api-secret", "region": "us" }, "llm": { "provider": "openai", "api_key": "sk-...", "model": "gpt-4" }, "rag": { "backend": "pinecone", "pinecone_api_key": "pc-...", "pinecone_index": "hevo-docs" } } ``` ### Supported Regions - `us` - United States (default) - `eu` - Europe - `in` - India - `apac` - Asia Pacific ### Supported LLM Providers | Provider | Models | Notes | |----------|--------|-------| | OpenAI | gpt-4, gpt-4-turbo, gpt-3.5-turbo | Recommended for best results | | Anthropic | claude-3-opus, claude-3-sonnet | Great for detailed explanations | | Ollama | llama3, mistral, etc. 
| Local, free, no API key needed | ## Available Actions The assistant can execute these actions on your behalf: ### Pipeline Actions - List all pipelines - Get pipeline status - Pause a pipeline - Resume a pipeline - Run a pipeline immediately ### Object Actions - List objects in a pipeline - Skip a failed object - Restart an object ### Model Actions - List all models - Run a model ### Workflow Actions - List all workflows - Run a workflow ### Destination Actions - List all destinations ## Architecture ``` ┌─────────────────────────────────────────────────────────────────┐ │ hevo-assistant CLI │ ├─────────────────────────────────────────────────────────────────┤ │ User Query ──► Intent Parser ──► RAG Context ──► LLM ──► Action│ │ │ │ │ │ │ ▼ ▼ ▼ │ │ Pinecone OpenAI Hevo API │ │ (Vector DB) (Embeddings) (Actions) │ └─────────────────────────────────────────────────────────────────┘ ``` ## Requirements - Python 3.10+ - Hevo Data account with API access - LLM API key (OpenAI, Anthropic) or local Ollama installation - Pinecone API key (free tier available at [pinecone.io](https://pinecone.io)) ## Troubleshooting ### "Configuration incomplete" error Run `hevo setup` to configure your API credentials. ### API authentication errors 1. Verify your Hevo API key and secret are correct 2. Check that your API key has the required permissions 3. Ensure you've selected the correct region ### LLM errors 1. Verify your LLM API key is valid 2. For Ollama, ensure the service is running (`ollama serve`) 3. Check that the model name is correct ### Pinecone errors 1. Verify your Pinecone API key is correct 2. Ensure the index "hevo-docs" exists and is accessible ## Development ```bash # Install dev dependencies pip install -e ".[dev]" # Run tests pytest # Format code black src/ ``` ## License MIT License - see LICENSE file for details. ## Contributing Contributions welcome! Please read the contributing guidelines first.
text/markdown
null
Hevo App <support@hevodata.com>
null
null
null
hevo, data-pipeline, cli, rag, assistant
[ "Development Status :: 3 - Alpha", "Environment :: Console", "Intended Audience :: Developers", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12" ]
[]
null
null
>=3.10
[]
[]
[]
[ "click>=8.0", "rich>=13.0", "openai>=1.0.0", "anthropic>=0.18.0", "ollama>=0.1.0", "pinecone-client>=3.0.0", "requests>=2.28.0", "beautifulsoup4>=4.12.0", "lxml>=4.9.0", "pydantic>=2.0.0", "pydantic-settings>=2.0.0", "chromadb>=0.4.0; extra == \"local-rag\"", "sentence-transformers>=2.2.0; extra == \"local-rag\"", "pytest>=7.0; extra == \"dev\"", "pytest-cov>=4.0; extra == \"dev\"", "black>=23.0; extra == \"dev\"", "ruff>=0.1.0; extra == \"dev\"", "mypy>=1.0; extra == \"dev\"", "hevo-assistant[dev,local-rag]; extra == \"all\"" ]
[]
[]
[]
[ "Homepage, https://github.com/Legolasan/hevo-app", "Documentation, https://docs.hevodata.com", "Repository, https://github.com/Legolasan/hevo-app" ]
twine/6.2.0 CPython/3.13.2
2026-02-20T10:30:17.230240
hevo_assistant-0.10.0.tar.gz
77,486
2a/f4/2a8fd6e5f6a81e2c2d0d76dee2da5f3f2e91b15c5da66b3adfc4365dd68f/hevo_assistant-0.10.0.tar.gz
source
sdist
null
false
78b9f164edcc58742af73309c4bc56f4
a45c09ec6d1ac0643d67a0af0578857a1924d37a7e5671e7c6a09a3eb4894022
2af42a8fd6e5f6a81e2c2d0d76dee2da5f3f2e91b15c5da66b3adfc4365dd68f
MIT
[ "LICENSE" ]
218
2.4
cognee-community-tasks-scrapegraph
0.1.0
Package containing custom cognee tasks for scraping web content using ScrapeGraphAI
# cognee-community-tasks-scrapegraph Custom cognee tasks for scraping web content using [ScrapeGraphAI](https://github.com/ScrapeGraphAI/scrapegraph-py). ## Overview This package provides two async tasks: - **`scrape_urls`** – scrape a list of URLs with a natural language prompt and return structured results. - **`scrape_and_add`** – scrape URLs and ingest the content directly into a cognee dataset. ## Installation ```bash uv pip install cognee-community-tasks-scrapegraph ``` Or install locally with all dependencies: ```bash cd packages/task/scrapegraph_tasks uv sync --all-extras # OR poetry install ``` ## Requirements You need two API keys: | Variable | Description | |---|---| | `LLM_API_KEY` | OpenAI (or other LLM provider) API key used by cognee | | `SGAI_API_KEY` | [ScrapeGraphAI](https://scrapegraphai.com) API key | Set them in your environment or in a `.env` file: ```bash export LLM_API_KEY="sk-..." export SGAI_API_KEY="sgai-..." ``` ## Usage ### Scrape only ```python import asyncio from cognee_community_tasks_scrapegraph import scrape_urls results = asyncio.run( scrape_urls( urls=["https://cognee.ai", "https://docs.cognee.ai"], user_prompt="Extract the main content, title, and key information from this page", ) ) for item in results: print(item["url"], item["content"]) ``` ### Scrape and add to cognee ```python import asyncio from cognee_community_tasks_scrapegraph import scrape_and_add asyncio.run( scrape_and_add( urls=["https://cognee.ai"], user_prompt="Extract the main content and key information", dataset_name="web_scrape", ) ) ``` ## Run the example ```bash cd packages/task/scrapegraph_tasks uv run python examples/example.py # OR poetry run python examples/example.py ``` ## API Reference ### `scrape_urls` ```python async def scrape_urls( urls: List[str], user_prompt: str = "Extract the main content, title, and key information from this page", api_key: Optional[str] = None, ) -> List[dict] ``` Returns a list of dicts: ```python [ {"url": 
"https://example.com", "content": {...}}, # success {"url": "https://bad.invalid", "content": "", "error": "..."}, # failure ] ``` ### `scrape_and_add` ```python async def scrape_and_add( urls: List[str], user_prompt: str = "Extract the main content, title, and key information from this page", api_key: Optional[str] = None, dataset_name: str = "scrapegraph", ) -> Any ``` Scrapes all URLs, combines the successful results into a single text document, calls `cognee.add`, and then `cognee.cognify`. Returns the cognify result.
text/markdown
null
null
null
null
null
null
[]
[]
null
null
<=3.13,>=3.11
[]
[]
[]
[ "cognee==0.5.2", "scrapegraph-py>=1.7.0" ]
[]
[]
[]
[ "Homepage, https://github.com/topoteretes/cognee", "Repository, https://github.com/topoteretes/cognee-community" ]
poetry/2.3.2 CPython/3.11.13 Darwin/24.1.0
2026-02-20T10:30:11.751139
cognee_community_tasks_scrapegraph-0.1.0-py3-none-any.whl
3,632
ea/61/7aade5d71d0a2d880253d2ddbba2235c4dfa3917692208e0a76e4f633e47/cognee_community_tasks_scrapegraph-0.1.0-py3-none-any.whl
py3
bdist_wheel
null
false
c64eb40f1cd2547614d4c88459962d94
f63c7c498980a53e75bef83974c48d543e66325aa50c44e7aef26dfa92ae0352
ea617aade5d71d0a2d880253d2ddbba2235c4dfa3917692208e0a76e4f633e47
null
[]
232
2.4
pygridgain-dbapi
9.1.18
GridGain 9 DB API Driver
# pygridgain_dbapi GridGain 9 DB API Driver. ## Prerequisites - Python 3.10 or above (3.10, 3.11, 3.12, 3.13, and 3.14 are tested), - Access to a GridGain 9 node, local or remote. ## Installation ### From repository This is the recommended way for users. If you only want to use the `pygridgain_dbapi` module in your project, do: ``` $ pip install pygridgain-dbapi ``` ### From sources This way is more suitable for developers, or if you install the client from a zip archive. 1. Download and/or unzip GridGain 9 DB API Driver sources to `pygridgain_dbapi_path` 2. Go to the `pygridgain_dbapi_path` folder 3. Execute `pip install -e .` ```bash $ cd <pygridgain_dbapi_path> $ pip install -e . ``` This will install the repository version of `pygridgain_dbapi` into your environment in so-called “develop” or “editable” mode. You may read more about [editable installs](https://pip.pypa.io/en/stable/reference/pip_install/#editable-installs) in the `pip` manual. Then run through the contents of the `requirements` folder to install the additional requirements into your working Python environment using ``` $ pip install -r requirements/<your task>.txt ``` You may also want to consult the `setuptools` manual about using `setup.py`. ### *C extension* The core of the package is a C++ extension. It shares the code with the GridGain C++ Client. The package is pre-built for the most common platforms, but you may need to build it if your platform is not included. Linux building requirements: - GCC (and G++); - CMake version >=3.18; - OpenSSL (dev version of the package); - Docker to build wheels; - Supported versions of Python (3.10, 3.11, 3.12, 3.13, and 3.14). You can disable some of these versions, but you'd need to edit the script for that. For building universal `wheels` (binary packages) for Linux, just invoke the script `./scripts/create_distr.sh`.
Windows building requirements: - MSVC 14.x, and it should be in the path; - CMake version >=3.18; - OpenSSL (headers are required for the build); - Supported versions of Python (3.10, 3.11, 3.12, 3.13, and 3.14). You can disable some of these versions, but you'd need to edit the script for that. For building `wheels` for Windows, invoke the script `.\scripts\BuildWheels.ps1` using PowerShell. Make sure that your execution policy allows execution of scripts in your environment. The script only works with Python distributions installed in the standard path, which is LOCALAPPDATA\Programs\Python. Ready wheels will be located in the `distr` directory. ### Updating from an older version To upgrade an existing package, use the following command: ``` pip install --upgrade pygridgain_dbapi ``` To install the latest version of a package: ``` pip install pygridgain_dbapi ``` To install a specific version: ``` pip install pygridgain_dbapi==9.0.15 ``` ## Testing *NB!* It is recommended to install `pygridgain_dbapi` in development mode. Refer to [this section](#from-sources) for instructions. Remember to install the test requirements: ```bash $ pip install -r requirements/install.txt -r requirements/tests.txt ``` ### Run basic tests To run the tests themselves: ```bash $ pytest ``` ## Documentation Install the documentation requirements: ```bash $ pip install -r requirements/docs.txt ``` Generate the documentation: ```bash $ cd docs $ make html ``` The resulting documentation can be found in `docs/_build/html`. If you want to browse the documentation locally, open the index at `docs/_build/html/index.html` in any modern browser.
text/markdown
GridGain Systems
eng@gridgain.com
null
null
Copyright (C) GridGain Systems. All Rights Reserved.
null
[ "Programming Language :: C++", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14", "Programming Language :: Python :: 3 :: Only", "Intended Audience :: Developers", "Topic :: Database :: Front-Ends", "Topic :: Software Development :: Libraries :: Python Modules", "Operating System :: MacOS", "Operating System :: Microsoft :: Windows", "Operating System :: POSIX :: Linux" ]
[]
https://www.gridgain.com
null
>=3.10
[]
[]
[]
[ "attrs==23.1.0" ]
[]
[]
[]
[]
twine/6.2.0 CPython/3.11.12
2026-02-20T10:29:45.217904
pygridgain_dbapi-9.1.18.tar.gz
515,362
48/a5/0b62f0a0538a9ad9cba3847766a415a20910a4342651515ad2fa2242d612/pygridgain_dbapi-9.1.18.tar.gz
source
sdist
null
false
335d0c15d74b99177ea0d666deaa8e3e
ae1512bfd6299c3726f9f7b5e233f54386bfd882291e248de6e359d0e63bd3b9
48a50b62f0a0538a9ad9cba3847766a415a20910a4342651515ad2fa2242d612
null
[ "LICENSE", "NOTICE", "LICENSE-CE.txt", "LICENSE-EVAL.txt" ]
1,418
2.4
robotnikai
0.2.9
RobotnikAI API
# RobotnikAI Python Client - API Documentation

A Python client for the RobotnikAI API that enables seamless integration with various APIs and services.

> **For development documentation** (versioning, publishing, etc.), see [README.dev.md](README.dev.md)

## Installation

```bash
pip install robotnikai
```

## Quick Start

### Environment Variables

Create a `.env` file in your project root for configuration:

```env
# Cache Service (when available)
CACHE_SERVICE_URL=https://cache.robotnikai.com
CACHE_SERVICE_TOKEN=your_cache_token

# Redis Configuration (for local development)
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_DB=0
REDIS_PASSWORD=optional_password

# RobotnikAI API Configuration
API_BASE_URL=https://robotnikai.com
APP_TOKEN=your_api_token
APP_ID=your_app_id
ACTION_ID=your_action_id

# Task Management
TASK_ID=optional_custom_task_id
```

### Usage Example

```python
from robotnikai.wrapper import API

api = API()
```

## Core Methods

### GraphQL (Tables API)

Use `GraphQLRequest` together with `api.tables.apps_graphql_query(...)` to run queries and mutations against dynamic app tables (Hasura-backed). This is how the trigger scripts operate:

- Queries are scoped by `APP_ID` and table name (e.g., `app_{APP_ID}_orders`).
- Variables are passed via `GraphQLRequest(query=..., variables=...)`.
- Responses include `.data` and `.errors`; always check `.errors` before using data.
- Mutations use `insert_*` or `update_*` with `where` clauses and `_set` updates.
- Optional `organization_id` is injected into the `where` clause for multi-tenant safety.
#### ✅ Example: Query a single order by `orderId`

```python
from typing import Any, Dict

from robotnikai import GraphQLRequest
from robotnikai.wrapper import API

api = API()

APP_ID = 130
ORDERS_TABLE_NAME = "orders"

order_query = f"""query GetOrder($order_id: String!, $organization_id: Int) {{
  app_{APP_ID}_{ORDERS_TABLE_NAME}(
    where: {{orderId: {{_eq: $order_id}}, organization_id: {{_eq: $organization_id}}}},
    limit: 1
  ) {{
    id
    orderId
    buyerLogin
  }}
}}"""

variables: Dict[str, Any] = {"order_id": "eba6bc90-faab-11f0-96f9-c518c5ba21fb", "organization_id": 2}
request = GraphQLRequest(query=order_query, variables=variables)

response = api.tables.apps_graphql_query(
    app=APP_ID, table=ORDERS_TABLE_NAME, graph_ql_request=request
)

if response.errors:
    print(f"Order fetch errors: {response.errors}")
else:
    records = response.data.get(f"app_{APP_ID}_{ORDERS_TABLE_NAME}") or []
    print(records[0] if records else "Order not found")
```

#### ✅ Example: Update `delivered` flag by `orderId`

```python
from typing import Any, Dict

from robotnikai import GraphQLRequest
from robotnikai.wrapper import API

api = API()

APP_ID = 130
ORDERS_TABLE_NAME = "orders"

mutation = f"""mutation UpdateOrderDelivered($order_id: String!, $delivered: Boolean!, $organization_id: Int) {{
  update_app_{APP_ID}_{ORDERS_TABLE_NAME}(
    where: {{orderId: {{_eq: $order_id}}, organization_id: {{_eq: $organization_id}}}},
    _set: {{delivered: $delivered}}
  ) {{
    affected_rows
  }}
}}"""

variables: Dict[str, Any] = {"order_id": "eba6bc90-faab-11f0-96f9-c518c5ba21fb", "delivered": True, "organization_id": 2}
request = GraphQLRequest(query=mutation, variables=variables)

response = api.tables.apps_graphql_query(
    app=APP_ID, table=ORDERS_TABLE_NAME, graph_ql_request=request
)

if response.errors:
    print(f"Order update errors: {response.errors}")
```

#### ✅ Example: Insert a message linked to an order

```python
from typing import Any, Dict

from robotnikai import GraphQLRequest
from robotnikai.wrapper import API

api = API()

APP_ID = 130
MESSAGES_TABLE_NAME = "messages"

message_mutation = f"""mutation InsertMessage($data: app_{APP_ID}_{MESSAGES_TABLE_NAME}_insert_input!) {{
  insert_app_{APP_ID}_{MESSAGES_TABLE_NAME}_one(object: $data) {{
    order_id
  }}
}}"""

send_message_payload: Dict[str, Any] = {
    "order_id": "c9a1d25a-fcf3-4950-a660-2f5b04a25320",  # foreign key id
    "messageContent": "Hello, user",
    "organization_id": 2,
}

request = GraphQLRequest(query=message_mutation, variables={"data": send_message_payload})

response = api.tables.apps_graphql_query(
    app=APP_ID, table=MESSAGES_TABLE_NAME, graph_ql_request=request
)

if response.errors:
    print(f"Message insert errors: {response.errors}")
else:
    print(response.data)
```

### Integration API Calls

#### `api.integrations.call()`

Make synchronous API calls to integrated services.

```python
# Basic API call
integration = api.integrations.get_integration("github")
data, response = api.integrations.call(
    integration, method="GET", endpoint="/user"
)

# With parameters and selected account
data, response = api.integrations.call(
    integration,
    method="GET",
    endpoint="/sale/offers",
    params={"limit": 10, "offset": 0},
    connection_id="ecommercelab@yandex.com"
)

# POST with JSON payload
data, response = api.integrations.call(
    integration,
    method="POST",
    endpoint="/messages/submit",
    json={
        "from": {"name": "John Doe", "address": "john@example.com"},
        "to": [{"name": "Jane Doe", "address": "jane@example.com"}],
        "subject": "Test Email",
        "text": "Hello from Python client!"
    },
    connection_id="user@domain.com"
)
```

**Returns:** `(data, response)` tuple where:

- `data`: Parsed JSON response data
- `response`: HTTP response object with `.ok`, `.status_code`, `.text`, `.headers`, `.url` attributes

**Parallel Methods Return:** `List[ParallelCallResponse]` objects where each response contains:

- `.api_data`: Parsed JSON response data from the external service
- `.api_response`: HTTP response object (same as above)
- `.connection_id`: ID of the connection used for this request
- `.organization_id`: Organization ID associated with the connection

### Parallel API Calls

#### `api.integrations.parallel_call()`

Execute multiple API calls concurrently using different connections per request for maximum performance and flexibility.

```python
# Multi-connection parallel calls (recommended for scaling across accounts)
responses = api.integrations.parallel_call(
    integration,
    method="GET",
    endpoint="/sale/product-offers/{offerId}",
    data_list=[
        {
            "connection_id": 123,
            "organization_id": 456,
            "url_params": {"offerId": "offer_1"},
            "params": {"include": "details"}
        },
        {
            "connection_id": 789,
            "organization_id": 101,
            "url_params": {"offerId": "offer_2"},
            "data": {"expand": True}
        }
    ],
    table="app_123_products"
)

# Process results with new object-based API
for response in responses:
    print(f"Connection {response.connection_id}, Org {response.organization_id}")
    if response.api_response.ok:
        print(f"Offer: {response.api_data['name']}")
        print(f"Status: {response.api_response.status_code}")
    else:
        print(f"Error: {response.api_response.text}")
```

#### `api.integrations.parallel_call_for_connection()`

Execute multiple API calls in parallel using a single connection (simpler for single-account operations).
```python
# Single-connection parallel calls
responses = api.integrations.parallel_call_for_connection(
    integration,
    method="GET",
    endpoint="/sale/product-offers/{offerId}",
    data_list=[
        {"url_params": {"offerId": "123"}},
        {"url_params": {"offerId": "456"}},
        {"url_params": {"offerId": "789"}}
    ],
    connection_id="user@domain.com",
    organization_id=123,
    table="app_124_offers"
)

# Process results
for response in responses:
    if response.api_response.ok:
        print(f"Offer: {response.api_data['name']}")
    else:
        print(f"Error: {response.api_response.text}")
```

#### `api.integrations.parallel_call_stream()`

Stream parallel API calls for real-time processing and progress visibility.

```python
# Stream parallel calls with real-time processing (single connection)
for result in api.integrations.parallel_call_stream(
    integration,
    method="GET",
    endpoint="/sale/product-offers/{offerId}",
    data_list=[{"url_params": {"offerId": offer["id"]}} for offer in offers],
    connection_id=connection_id
):
    if result.get("final"):
        print(f"Completed! Processed {result['total_processed']} requests")
        break
    else:
        # Process individual response as it arrives
        data = result["data"]
        response = result["response"]
        index = result["index"]

        if response.ok:
            # Process immediately without waiting for all requests
            processed_data = process_data(data)
            save_to_database(processed_data)
        else:
            print(f"❌ Error for request {index}: {response.text}")
```

### Connection Management

#### `api.integrations.get_connections()`

Get all connected accounts for an integration. Returns a list of connections with `id` and `name`.

```python
# Get all connected accounts
connections = api.integrations.get_connections("allegro_sandbox")
for connection in connections:
    print(f"ID: {connection.id}, Name: {connection.name}")

# Use connection ID in API calls
data, response = api.integrations.call(
    integration,
    method="GET",
    endpoint="/sale/offers",
    connection_id=connection.id  # Use the connection ID in API calls
)
```

### Caching System

#### App Cache

Shared cache accessible across the entire application. Access by key only for users within the same organization.

```python
# Set cache with TTL
api.app_cache.set(
    key="api_data",
    value={"rates": [1.2, 1.5, 1.8]},
    ttl=300  # 5 minutes
)

# Get cached data
cached_data = api.app_cache.get(key="api_data")
print(cached_data)  # {'rates': [1.2, 1.5, 1.8]}

# Delete cache entry
api.app_cache.delete(key="api_data")
```

#### User Cache

User-specific cache for personalized data. It is assigned to the user's organization and can be shared between apps.

```python
# Set user-specific cache
api.user_cache.set(
    key="my_key",
    value="my_value",
    ttl=3600  # 1 hour
)

# Get user cache
preferences = api.user_cache.get(key="my_key")

# Delete user cache
api.user_cache.delete(key="my_key")
```

### Notifications

#### `api.notify_me()`

Send notifications (via email) to yourself (useful for monitoring and alerts).

```python
# Send notification
api.notify_me(
    subject="API Process Completed",
    text="Successfully processed 1000 records",
    html="<h1>Success!</h1><p>Processed <strong>1000</strong> records</p>"
)
```

### Task Progress Tracking

#### `api.task.set_progress(progress: int, info: str, status: 'pending' | 'completed' | 'failed')`

Update task progress for long-running operations. `progress` and `info` are displayed in the UI during task execution, while `status` indicates the task state.
```python
# Initialize progress
api.task.set_progress(0, "Starting data processing...", "pending")

# Update progress throughout your task
for i, item in enumerate(large_dataset):
    # Process item
    process_item(item)

    # Update progress every 100 items
    if i % 100 == 0:
        progress = int((i / len(large_dataset)) * 100)
        api.task.set_progress(
            progress,
            f"Processed {i}/{len(large_dataset)} items",
            "pending"
        )

# Complete the task
api.task.set_progress(100, "Processing completed!", "completed")
```

## Configuration & Environment

### Cache & Progress Service Configuration

The client automatically detects and adapts to your environment:

#### **RobotnikAI Sandbox (Automatic)**

When running in RobotnikAI Sandbox:

- Uses the hosted cache service API
- `TASK_ID` is automatically assigned by the platform
- Progress tracking is displayed in the RobotnikAI UI
- Cache is shared across the organization

#### **Local Development (Redis)**

When running locally without `CACHE_SERVICE_URL`:

- Automatically uses direct Redis connection
- `TASK_ID` is generated as a random UUID4
- Progress is logged to console
- Requires Redis server running locally

#### Required Dependencies

For local Redis usage, install the Redis client:

```bash
pip install redis
```

### Usage Impact

**From the user perspective, there are no code changes required.** The client automatically:

- **Detects environment** and chooses appropriate backend
- **Maintains consistent API** across both configurations
- **Handles serialization/deserialization** transparently
- **Provides same response format** regardless of backend

```python
# This code works identically in both environments
api.app_cache.set("my_key", {"data": "value"}, ttl=300)
cached_data = api.app_cache.get("my_key")
api.task.set_progress(50, "Halfway done", "pending")
```

#### Environment Detection Logic

```python
# The client automatically determines configuration:
if CACHE_SERVICE_URL:
    # Use RobotnikAI cache service API
    # - HTTP requests to cache service
    # - Progress updates sent to platform
    # - Task ID from environment or auto-generated
else:
    # Use local Redis connection
    # - Direct Redis operations
    # - Progress logged to console
    # - UUID4 generated for task tracking
```

### Integration Management

#### `api.integrations.get_integration()`

Get integration configuration for API calls.

```python
# Get integration
integration = api.integrations.get_integration("allegro_sandbox")
# Now use this integration object in .call() methods
```

#### `api.integrations.get_integrations()`

List all available integrations.

```python
integrations = api.integrations.get_integrations()
for integration in integrations.results:
    print(f"ID: {integration.integration_id}, Name: {integration.name}")
```

## Common Patterns

### 1. **Parallel Data Processing**

```python
# Efficient parallel processing pattern
integration = api.integrations.get_integration("api_service")

# Step 1: Get list of items
list_data, response = api.integrations.call(
    integration,
    method="GET",
    endpoint="/items",
    params={"limit": 50},
    connection_id="user@domain.com"
)

# Step 2: Process details in parallel (single connection)
detail_requests = [
    {"url_params": {"id": item["id"]}} for item in list_data["items"]
]

responses = api.integrations.parallel_call_for_connection(
    integration,
    method="GET",
    endpoint="/items/{id}/details",
    data_list=detail_requests,
    connection_id="user@domain.com",
    organization_id=123
)

# Step 3: Process results with new object API
processed_data = []
for response in responses:
    if response.api_response.ok:
        processed_data.append(transform_data(response.api_data))
    else:
        print(f"Error for connection {response.connection_id}: {response.api_response.text}")
```

### 2. **Multi-Connection Parallel Processing**

```python
# Advanced: Parallel processing across multiple connections simultaneously
integration = api.integrations.get_integration("api_service")

# Get all connections for the service
table_connections = api.integrations.all_connections(integration, "data_table")

# Build parallel requests for multiple connections
data_list = []
for org_id, connections in table_connections.items():
    for connection in connections:
        data_list.append({
            "connection_id": connection["id"],
            "organization_id": org_id,
            "params": {"limit": 50},
            "url_params": {"account_id": connection["id"]}
        })

# Execute all requests in parallel across all connections
responses = api.integrations.parallel_call(
    integration,
    method="GET",
    endpoint="/accounts/{account_id}/data",
    data_list=data_list,
    table="app_123_consolidated_data"
)

# Process results with full traceability
for response in responses:
    print(f"Org {response.organization_id}, Connection {response.connection_id}")
    if response.api_response.ok:
        save_data(response.api_data, response.connection_id, response.organization_id)
    else:
        log_error(response.connection_id, response.api_response.text)
```

### 3. **Multi-Account Operations**

```python
# Process data across multiple connected accounts
connections = api.integrations.get_connections("service_name")

for connection in connections:
    print(f"Processing account: {connection.name}")

    data, response = api.integrations.call(
        integration,
        method="GET",
        endpoint="/data",
        connection_id=connection.id
    )

    if response.ok:
        # Process account-specific data
        process_account_data(data, connection.id)
```

### 4. **Progress Tracking with Caching**

```python
def long_running_task():
    api.task.set_progress(0, "Initializing...", "pending")

    # Cache intermediate results
    api.app_cache.set("task_checkpoint", {"processed": 0}, ttl=3600)

    for i in range(1000):
        # Do work
        process_item(i)

        # Update progress and cache
        if i % 100 == 0:
            progress = int((i / 1000) * 100)
            api.task.set_progress(progress, f"Processed {i}/1000", "pending")
            api.app_cache.set("task_checkpoint", {"processed": i}, ttl=3600)

    api.task.set_progress(100, "Completed!", "completed")
    api.app_cache.delete("task_checkpoint")
```

## Performance Tips

- **Use parallel calls** for multiple API requests to the same service
- **Cache frequently accessed data** to reduce API calls
- **Stream parallel calls** for real-time processing of large datasets
- **Set appropriate TTL** for cached data based on update frequency
- **Monitor progress** for long-running tasks to improve user experience

## Error Handling

### Single API Calls

```python
data, response = api.integrations.call(integration, method="GET", endpoint="/data")

if not response.ok:
    print(f"API Error: {response.status_code} - {response.text}")
    return

if not data:
    print("No data received")
    return

# Process successful response
process_data(data)
```

### Parallel API Calls

```python
responses = api.integrations.parallel_call(
    integration,
    method="GET",
    endpoint="/data",
    data_list=requests_data
)

for response in responses:
    if not response.api_response.ok:
        print(f"Error for connection {response.connection_id}: {response.api_response.text}")
        continue

    if not response.api_data:
        print(f"No data received for connection {response.connection_id}")
        continue

    # Process successful response
    process_data(response.api_data, response.connection_id)
```
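The per-response checks used throughout the parallel-call examples can be factored into a small helper. The sketch below uses minimal stand-in types for illustration only, since the real `ParallelCallResponse` and HTTP response objects come from the robotnikai client; the helper logic (HTTP success AND a non-empty parsed body) mirrors the checks shown above.

```python
from dataclasses import dataclass
from typing import Any, List, Tuple

# Stand-ins for illustration -- the real objects come from robotnikai.
@dataclass
class FakeHTTPResponse:
    ok: bool
    status_code: int
    text: str = ""

@dataclass
class FakeParallelResponse:
    api_data: Any
    api_response: FakeHTTPResponse
    connection_id: int
    organization_id: int

def split_results(responses: List[FakeParallelResponse]) -> Tuple[list, list]:
    """Partition parallel-call responses into (successes, failures).

    A response counts as a success only when the HTTP call succeeded
    AND a parsed body is present.
    """
    ok, failed = [], []
    for r in responses:
        if r.api_response.ok and r.api_data:
            ok.append(r)
        else:
            failed.append(r)
    return ok, failed

responses = [
    FakeParallelResponse({"name": "offer_1"}, FakeHTTPResponse(True, 200), 123, 456),
    FakeParallelResponse(None, FakeHTTPResponse(False, 404, "not found"), 789, 101),
]
good, bad = split_results(responses)
print(len(good), len(bad))  # 1 1
```

Keeping the success criterion in one place makes it easy to tighten later (e.g., also rejecting 2xx responses with empty lists) without touching every processing loop.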
text/markdown
OpenAPI Generator community
team@openapitools.org
null
null
null
OpenAPI, OpenAPI-Generator, RobotnikAI API
[]
[]
null
null
null
[]
[]
[]
[ "urllib3<3.0.0,>=2.1.0", "python-dateutil>=2.8.2", "pydantic>=2", "typing-extensions>=4.7.1", "python-dotenv==1.1.0" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:29:42.069789
robotnikai-0.2.9.tar.gz
77,555
af/69/21e966c675234f6920de2812a91ea0c7b246ca105bf393b99938e4856bc9/robotnikai-0.2.9.tar.gz
source
sdist
null
false
33406bf3ce5511bf37c369f7061031fe
8c4ef3cdf85c8fc0af5859f662c5152382d95924b91e20bbd405ce1a02acab24
af6921e966c675234f6920de2812a91ea0c7b246ca105bf393b99938e4856bc9
null
[]
228
2.4
ledgerwallet
0.9.2
Library to communicate between the Ledger devices (Nano S/S+/X, Stax, Flex, Apex P and Apex M) and Speculos
# ledgerwallet

A Python library to control Ledger devices

## Install

This package provides ledgerwallet, a library to interact with Ledger devices, and ledgerctl, a command line tool based on that library to easily perform operations on the devices.

Supported devices are Ledger Blue, Ledger Nano S, Ledger Nano X and Ledger Nano S Plus.

### Quick install

ledgerctl and the ledgerwallet library can be installed using pip:

```shell
pip3 install --upgrade protobuf setuptools ecdsa
pip3 install ledgerwallet
```

Under a Debian or Ubuntu based system, compiling HIDAPI requires installing additional packages:

```shell
sudo apt install python3-dev libusb-1.0-0-dev libudev-dev
```

### Install from source

```shell
git clone https://github.com/LedgerHQ/ledgerctl.git
pip3 install --upgrade protobuf setuptools ecdsa
cd ledgerctl
pip install -e .
```

### Device configuration

> **ATTENTION:** This step is optional and only advised for **developers**. It
> will allow the installation of apps that weren't reviewed by Ledger, without
> user interaction.

You should install a custom certificate authority (CA) on the device to make the usage of ledgerctl easier. This certificate is used to establish a custom secure channel between the computer and the device, and identifies ledgerctl as a "trusted manager" on the device.

To install a custom CA, boot the device in "Recovery" mode by pressing the right button at boot time. There are no visual indicators of recovery mode. Then run:

```shell
ledgerctl install-ca <NAME>
```

where \<NAME\> is the name that will be displayed on the device to identify the CA. It can be any label, like "ledgerctl", "Dev", or "CA".

You are now ready to use ledgerctl.

## Usage

To display the commands supported by ledgerctl, run `ledgerctl` or `ledgerctl --help`. Help for each command can be displayed by running `ledgerctl <command> --help`.
Supported commands include retrieving basic device information, installing and removing apps, viewing available space on the device, etc. Here are a few examples:

- Displaying available space on the device

  ```shell
  ledgerctl meminfo
  ```

- Listing installed applications

  ```shell
  ledgerctl list
  ```

- Deleting the Bitcoin application

  ```shell
  ledgerctl delete Bitcoin
  ```

### Installing custom apps

Loading an application on the device is currently bound to the SDK and to the build process. Installation of custom apps differs from the way provided by the SDK. To keep the install process simple, we chose to use "Manifest" files for applications. Manifests are files which contain the required parameters to install the application. You can find an example manifest in the tests/app directory.

Manifest entries are pretty straightforward if you are familiar with the BOLOS SDK, except one of them: `dataSize`. That entry specifies the size of the writable area of the application. This is the size needed by the application to save persistent data. Its value seldom changes. You can use an ugly one-liner to retrieve it:

```shell
echo $(($(grep _envram_data debug/app.map | awk '{ print $1 }') - $(grep _nvram_data debug/app.map | awk '{ print $1 }')))
```

As an example, the standard way to install the [Bitcoin application](https://github.com/LedgerHQ/ledger-app-btc) you compiled is to run `make load` with the BOLOS SDK. It launches the following command:

```shell
python3 -m ledgerblue.loadApp --curve secp256k1 --tlv --targetId 0x31100004 --targetVersion="1.6.0" --delete --fileName bin/app.hex --appName "Bitcoin" --appVersion 1.3.13 --dataSize $((0x`cat debug/app.map |grep _envram_data | tr -s ' ' | cut -f2 -d' '|cut -f2 -d'x'` - 0x`cat debug/app.map |grep _nvram_data | tr -s ' ' | cut -f2 -d' '|cut -f2 -d'x'`)) `ICONHEX=\`python3 /home/dev/sdk/icon3.py --hexbitmaponly nanos_app_bitcoin.gif 2>/dev/null\` ; [ ! -z "$ICONHEX" ] && echo "--icon $ICONHEX"` --path "" --appFlags 0xa50 --offline bin/app.apdu | grep "Application" | cut -f5 -d' ' > bin/app.sha256
```

To install it with ledgerctl:

1. Retrieve `dataSize` using the above one-liner.
2. Create a manifest file app.toml in the ledger-app-btc directory:

   ```toml
   name = "Bitcoin"
   version = "1.3.13"

   [0x31100004] # NanoS
   icon = "nanos_app_bitcoin.gif"
   flags = "0xA50"
   derivationPath = {curves = ["secp256k1"]}
   binary = "bin/app.hex"
   dataSize = 64

   [0x33100004] # NanoSP
   icon = "nanosp_app_bitcoin.gif"
   flags = "0xA50"
   derivationPath = {curves = ["secp256k1"]}
   binary = "bin/app_nanosp.hex"
   dataSize = 64
   ```

3. Install with `ledgerctl install app.toml`. If you want to force the deletion of the previous version, run the previous command with the `-f` flag.

### Viewing APDUs

Communication between the host and the device uses Application Protocol Data Units (APDUs). To display the raw APDUs, usually for debugging purposes, run ledgerctl with the `-v` switch on any command. For example, here are the APDUs exchanged to run the Bitcoin application:

```shell
$ ledgerctl -v run Bitcoin
=> e0d8000007426974636f696e
<= 9000
```

## Contributing

### Rebuild the proto files

```shell
for file in ledgerwallet/proto/*.proto; do \
  python -m grpc_tools.protoc -I. --python_out=. --pyi_out=. $file; \
done
```

### Pre-commit checks

> **Note:** It's advised to install `pre-commit` using
> [`pipx`](https://github.com/pypa/pipx)

Before submitting your pull-request, please make sure that all [pre-commit](https://pre-commit.com/) hooks are passing. They can be locally installed with the following command:

```console
pre-commit install
```

And executed with:

```console
pre-commit run --all-files
```
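The `dataSize` one-liner in the install section computes `_envram_data - _nvram_data` from the linker map. The same computation can be done in Python; this is a sketch under the assumption (matching the shell version) that each symbol appears on a map-file line whose first field is the hexadecimal address.

```python
def data_size_from_map(map_text: str) -> int:
    """Compute dataSize as _envram_data - _nvram_data from an app.map.

    Assumes each symbol appears on a line whose first field is the
    hexadecimal address, as in the grep/awk one-liner above.
    """
    addrs = {}
    for line in map_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1] in ("_nvram_data", "_envram_data"):
            addrs[fields[1]] = int(fields[0], 16)
    return addrs["_envram_data"] - addrs["_nvram_data"]

# Synthetic map excerpt for illustration only
sample = """
0xc0de0000 _nvram_data
0xc0de0040 _envram_data
"""
print(data_size_from_map(sample))  # 64
```

Unlike the shell version, this raises a `KeyError` if either symbol is missing, which makes a stripped or unexpected map file easier to notice.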
text/markdown
null
Ledger <hello@ledger.fr>
null
null
null
null
[ "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Operating System :: POSIX :: Linux", "Operating System :: Microsoft :: Windows", "Operating System :: MacOS :: MacOS X" ]
[]
null
null
>=3.9
[]
[]
[]
[ "click>=8.0", "construct>=2.10", "cryptography>=2.5", "ecdsa", "hidapi", "intelhex", "Pillow", "protobuf<6,>=5.28", "requests", "tabulate", "toml" ]
[]
[]
[]
[ "Home, https://github.com/LedgerHQ/ledgerctl" ]
twine/6.2.0 CPython/3.12.3
2026-02-20T10:29:33.427756
ledgerwallet-0.9.2.tar.gz
61,633
4f/64/21eae822fa70740da506996dfab7366f9ff4910af2c0c10295875515b239/ledgerwallet-0.9.2.tar.gz
source
sdist
null
false
640949c72eb560bdcad78d8f9c134e2f
7a58e31f85d5de849edd8ddcf8859a5e104d5f25a75fe4de55b96f5c8856e713
4f6421eae822fa70740da506996dfab7366f9ff4910af2c0c10295875515b239
MIT
[ "LICENSE" ]
640
2.1
explainiverse
0.9.5
Unified, extensible explainability framework supporting 18 XAI methods including LIME, SHAP, LRP, TCAV, GradCAM, and more
# Explainiverse

[![PyPI version](https://badge.fury.io/py/explainiverse.svg)](https://badge.fury.io/py/explainiverse)
[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

**Explainiverse** is a unified, extensible Python framework for Explainable AI (XAI). It provides a standardized interface for **18 state-of-the-art explanation methods** across local, global, gradient-based, concept-based, and example-based paradigms, along with **comprehensive evaluation metrics** for assessing explanation quality.

---

## Key Features

| Feature | Description |
|---------|-------------|
| **18 Explainers** | LIME, KernelSHAP, TreeSHAP, Integrated Gradients, DeepLIFT, DeepSHAP, SmoothGrad, Saliency Maps, GradCAM/GradCAM++, LRP, TCAV, Anchors, Counterfactual, Permutation Importance, PDP, ALE, SAGE, ProtoDash |
| **27 Evaluation Metrics** | Faithfulness (PGI, PGU, Comprehensiveness, Sufficiency, Correlation, Faithfulness Estimate, Monotonicity, Monotonicity-Nguyen, Pixel Flipping, Region Perturbation, Selectivity, Sensitivity-n, IROF, Infidelity, ROAD, Insertion AUC, Deletion AUC), Stability (RIS, ROS, Lipschitz), Robustness (Max-Sensitivity, Avg-Sensitivity, Continuity), Complexity (Sparseness, Complexity, Effective Complexity) |
| **Unified API** | Consistent `BaseExplainer` interface with standardized `Explanation` output |
| **Plugin Registry** | Filter explainers by scope, model type, data type; automatic recommendations |
| **Framework Support** | Adapters for scikit-learn and PyTorch (with gradient computation) |

---

## Explainer Coverage

### Local Explainers (Instance-Level)

| Method | Type | Reference |
|--------|------|-----------|
| **LIME** | Perturbation | [Ribeiro et al., 2016](https://arxiv.org/abs/1602.04938) |
| **KernelSHAP** | Perturbation | [Lundberg & Lee, 2017](https://arxiv.org/abs/1705.07874) |
| **TreeSHAP** | Exact (Trees) | [Lundberg et al., 2018](https://arxiv.org/abs/1802.03888) |
| **Integrated Gradients** | Gradient | [Sundararajan et al., 2017](https://arxiv.org/abs/1703.01365) |
| **DeepLIFT** | Gradient | [Shrikumar et al., 2017](https://arxiv.org/abs/1704.02685) |
| **DeepSHAP** | Gradient + Shapley | [Lundberg & Lee, 2017](https://arxiv.org/abs/1705.07874) |
| **SmoothGrad** | Gradient | [Smilkov et al., 2017](https://arxiv.org/abs/1706.03825) |
| **Saliency Maps** | Gradient | [Simonyan et al., 2014](https://arxiv.org/abs/1312.6034) |
| **GradCAM / GradCAM++** | Gradient (CNN) | [Selvaraju et al., 2017](https://arxiv.org/abs/1610.02391) |
| **LRP** | Decomposition | [Bach et al., 2015](https://doi.org/10.1371/journal.pone.0130140) |
| **TCAV** | Concept-Based | [Kim et al., 2018](https://arxiv.org/abs/1711.11279) |
| **Anchors** | Rule-Based | [Ribeiro et al., 2018](https://ojs.aaai.org/index.php/AAAI/article/view/11491) |
| **Counterfactual** | Contrastive | [Mothilal et al., 2020](https://arxiv.org/abs/1905.07697) |
| **ProtoDash** | Example-Based | [Gurumoorthy et al., 2019](https://arxiv.org/abs/1707.01212) |

### Global Explainers (Model-Level)

| Method | Type | Reference |
|--------|------|-----------|
| **Permutation Importance** | Feature Importance | [Breiman, 2001](https://link.springer.com/article/10.1023/A:1010933404324) |
| **Partial Dependence (PDP)** | Feature Effect | [Friedman, 2001](https://projecteuclid.org/euclid.aos/1013203451) |
| **ALE** | Feature Effect | [Apley & Zhu, 2020](https://academic.oup.com/jrsssb/article/82/4/1059/7056085) |
| **SAGE** | Shapley Importance | [Covert et al., 2020](https://arxiv.org/abs/2004.00668) |

---

## Evaluation Metrics

Explainiverse includes a comprehensive suite of evaluation metrics based on the XAI literature:

### Faithfulness Metrics

| Metric | Description | Reference |
|--------|-------------|-----------|
| **PGI** | Prediction Gap on Important features | [Petsiuk et al., 2018](https://arxiv.org/abs/1806.07421) |
| **PGU** | Prediction Gap on Unimportant features | [Petsiuk et al., 2018](https://arxiv.org/abs/1806.07421) |
| **Comprehensiveness** | Drop when removing top-k features | [DeYoung et al., 2020](https://arxiv.org/abs/1911.03429) |
| **Sufficiency** | Prediction using only top-k features | [DeYoung et al., 2020](https://arxiv.org/abs/1911.03429) |
| **Faithfulness Correlation** | Correlation between attribution and impact | [Bhatt et al., 2020](https://arxiv.org/abs/2005.00631) |
| **Faithfulness Estimate** | Correlation of attributions with single-feature perturbation impact | [Alvarez-Melis & Jaakkola, 2018](https://arxiv.org/abs/1806.08049) |
| **Monotonicity** | Sequential feature addition shows monotonic prediction increase | [Arya et al., 2019](https://arxiv.org/abs/1909.03012) |
| **Monotonicity-Nguyen** | Spearman correlation between attributions and feature removal impact | [Nguyen & Martinez, 2020](https://arxiv.org/abs/2010.07455) |
| **Pixel Flipping** | AUC of prediction degradation when removing features by importance | [Bach et al., 2015](https://doi.org/10.1371/journal.pone.0130140) |
| **Region Perturbation** | AUC of prediction degradation when perturbing feature regions by importance | [Samek et al., 2015](https://arxiv.org/abs/1509.06321) |
| **Selectivity (AOPC)** | Average prediction drop when sequentially removing features by importance | [Montavon et al., 2018](https://doi.org/10.1016/j.dsp.2017.10.011) |
| **Sensitivity-n** | Correlation between attribution sums and prediction changes for random feature subsets | [Ancona et al., 2018](https://arxiv.org/abs/1711.06104) |
| **IROF** | Area over curve measuring prediction degradation when iteratively removing features | [Rieger & Hansen, 2020](https://arxiv.org/abs/2003.08747) |
| **Infidelity** | Measures how well attributions predict model output changes under perturbation | [Yeh et al., 2019](https://arxiv.org/abs/1901.09392) |
| **ROAD** | RemOve And Debias - uses noisy linear imputation for out-of-distribution robust evaluation | [Rong et al., 2022](https://proceedings.mlr.press/v162/rong22a.html) |

### Stability Metrics

| Metric | Description | Reference |
|--------|-------------|-----------|
| **RIS** | Relative Input Stability | [Agarwal et al., 2022](https://arxiv.org/abs/2203.06877) |
| **ROS** | Relative Output Stability | [Agarwal et al., 2022](https://arxiv.org/abs/2203.06877) |
| **Lipschitz Estimate** | Local Lipschitz continuity | [Alvarez-Melis & Jaakkola, 2018](https://arxiv.org/abs/1806.08049) |

---

## Installation

```bash
# From PyPI
pip install explainiverse

# With PyTorch support (for gradient-based methods)
pip install explainiverse[torch]

# For development
git clone https://github.com/jemsbhai/explainiverse.git
cd explainiverse
poetry install
```

---

## Quick Start

### Basic Usage with Registry

```python
from explainiverse import default_registry, SklearnAdapter
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

# Train a model
iris = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(iris.data, iris.target)

# Wrap with adapter
adapter = SklearnAdapter(model, class_names=iris.target_names.tolist())

# List all available explainers
print(default_registry.list_explainers())
# ['lime', 'shap', 'treeshap', 'integrated_gradients', 'deeplift', 'deepshap',
#  'smoothgrad', 'saliency', 'gradcam', 'lrp', 'tcav', 'anchors', 'counterfactual',
#  'protodash', 'permutation_importance', 'partial_dependence', 'ale', 'sage']

# Create an explainer via registry
explainer = default_registry.create(
    "lime",
    model=adapter,
    training_data=iris.data,
    feature_names=iris.feature_names,  # already a plain list in scikit-learn
    class_names=iris.target_names.tolist()
)

# Generate explanation
explanation = explainer.explain(iris.data[0])
print(explanation.explanation_data["feature_attributions"])
```

### Filter and Recommend Explainers

```python
# Filter by criteria
local_explainers = default_registry.filter(scope="local", data_type="tabular")
neural_explainers = default_registry.filter(model_type="neural")
image_explainers = default_registry.filter(data_type="image")

# Get recommendations
recommendations = default_registry.recommend(
    model_type="neural",
    data_type="tabular",
    scope_preference="local",
    max_results=5
)
```

---

## Gradient-Based Explainers (PyTorch)

### Integrated Gradients

```python
from explainiverse import PyTorchAdapter
from explainiverse.explainers.gradient import IntegratedGradientsExplainer
import torch.nn as nn

# Define and wrap model
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 3)
)
adapter = PyTorchAdapter(model, task="classification", class_names=["A", "B", "C"])

# Create explainer
explainer = IntegratedGradientsExplainer(
    model=adapter,
    feature_names=[f"feature_{i}" for i in range(10)],
    class_names=["A", "B", "C"],
    n_steps=50,
    method="riemann_trapezoid"
)

# Explain with convergence check
explanation = explainer.explain(X[0], return_convergence_delta=True)
print(f"Attributions: {explanation.explanation_data['feature_attributions']}")
print(f"Convergence δ: {explanation.explanation_data['convergence_delta']:.6f}")
```

### Layer-wise Relevance Propagation (LRP)

```python
from explainiverse.explainers.gradient import LRPExplainer

# LRP - Decomposition-based attribution with conservation property
explainer = LRPExplainer(
    model=adapter,
    feature_names=feature_names,
    class_names=class_names,
    rule="epsilon",  # Propagation rule: epsilon, gamma, alpha_beta, z_plus, composite
    epsilon=1e-6     # Stabilization constant
)

# Basic explanation
explanation = explainer.explain(X[0], target_class=0)
print(explanation.explanation_data["feature_attributions"])

# Verify conservation property (sum of attributions ≈ target output)
explanation = explainer.explain(X[0], return_convergence_delta=True)
print(f"Conservation delta: {explanation.explanation_data['convergence_delta']:.6f}")

# Compare different LRP rules
comparison = explainer.compare_rules(X[0], rules=["epsilon", "gamma", "z_plus"])
for rule, result in comparison.items():
    print(f"{rule}: top feature = {result['top_feature']}")

# Layer-wise relevance analysis
layer_result = explainer.explain_with_layer_relevances(X[0])
for layer, relevances in layer_result["layer_relevances"].items():
    print(f"{layer}: sum = {sum(relevances):.4f}")

# Composite rules: different rules for different layers
explainer_composite = LRPExplainer(
    model=adapter,
    feature_names=feature_names,
    class_names=class_names,
    rule="composite"
)
explainer_composite.set_composite_rule({
    0: "z_plus",   # Input layer: focus on what's present
    2: "epsilon",  # Middle layers: balanced
    4: "epsilon"   # Output layer
})
explanation = explainer_composite.explain(X[0])
```

**LRP Propagation Rules:**

| Rule | Description | Use Case |
|------|-------------|----------|
| `epsilon` | Adds stabilization constant | General purpose (default) |
| `gamma` | Enhances positive contributions | Image classification |
| `alpha_beta` | Separates pos/neg (α-β=1) | Fine-grained control |
| `z_plus` | Only positive weights | Input layers, what's present |
| `composite` | Different rules per layer | Best practice for deep nets |

**Supported Layers:**

- Linear, Conv2d
- BatchNorm1d, BatchNorm2d
- ReLU, LeakyReLU, ELU, Tanh, Sigmoid, GELU
- MaxPool2d, AvgPool2d, AdaptiveAvgPool2d
- Flatten, Dropout

### DeepLIFT and DeepSHAP

```python
from explainiverse.explainers.gradient import DeepLIFTExplainer, DeepLIFTShapExplainer

# DeepLIFT - Fast reference-based attributions
deeplift = DeepLIFTExplainer(
    model=adapter,
    feature_names=feature_names,
    class_names=class_names,
    baseline=None  # Uses zero baseline by default
)
explanation = deeplift.explain(X[0])

# DeepSHAP - DeepLIFT averaged over background samples
deepshap = DeepLIFTShapExplainer(
    model=adapter,
    feature_names=feature_names,
    class_names=class_names,
background_data=X_train[:100] ) explanation = deepshap.explain(X[0]) ``` ### Saliency Maps ```python from explainiverse.explainers.gradient import SaliencyExplainer # Saliency Maps - simplest and fastest gradient method explainer = SaliencyExplainer( model=adapter, feature_names=feature_names, class_names=class_names, absolute_value=True # Default: absolute gradient magnitudes ) # Standard saliency (absolute gradients) explanation = explainer.explain(X[0], method="saliency") # Input × Gradient (gradient scaled by input values) explanation = explainer.explain(X[0], method="input_times_gradient") # Signed saliency (keep gradient direction) explainer_signed = SaliencyExplainer( model=adapter, feature_names=feature_names, class_names=class_names, absolute_value=False ) explanation = explainer_signed.explain(X[0]) # Compare all variants variants = explainer.compute_all_variants(X[0]) print(variants["saliency_absolute"]) print(variants["saliency_signed"]) print(variants["input_times_gradient"]) ``` ### SmoothGrad ```python from explainiverse.explainers.gradient import SmoothGradExplainer # SmoothGrad - Noise-averaged gradients for smoother saliency explainer = SmoothGradExplainer( model=adapter, feature_names=feature_names, class_names=class_names, n_samples=50, noise_scale=0.15, noise_type="gaussian" # or "uniform" ) # Standard SmoothGrad explanation = explainer.explain(X[0], method="smoothgrad") # SmoothGrad-Squared (sharper attributions) explanation = explainer.explain(X[0], method="smoothgrad_squared") # VarGrad (variance of gradients) explanation = explainer.explain(X[0], method="vargrad") # With absolute values explanation = explainer.explain(X[0], absolute_value=True) ``` ### GradCAM for CNNs ```python from explainiverse.explainers.gradient import GradCAMExplainer # For CNN models adapter = PyTorchAdapter(cnn_model, task="classification", class_names=class_names) explainer = GradCAMExplainer( model=adapter, target_layer="layer4", # Last conv layer 
class_names=class_names, method="gradcam++" # or "gradcam" ) explanation = explainer.explain(image) heatmap = explanation.explanation_data["heatmap"] overlay = explainer.get_overlay(original_image, heatmap, alpha=0.5) ``` ### TCAV (Concept-Based Explanations) ```python from explainiverse.explainers.gradient import TCAVExplainer # For neural network models with concept examples adapter = PyTorchAdapter(model, task="classification", class_names=class_names) # Create TCAV explainer targeting a specific layer explainer = TCAVExplainer( model=adapter, layer_name="layer3", # Target layer for concept analysis class_names=class_names ) # Learn a concept from examples (e.g., "striped" pattern) explainer.learn_concept( concept_name="striped", concept_examples=striped_images, # Images with stripes negative_examples=random_images, # Random images without stripes min_accuracy=0.6 # Minimum CAV classifier accuracy ) # Compute TCAV score: fraction of inputs where concept positively influences prediction tcav_score = explainer.compute_tcav_score( test_inputs=test_images, target_class=0, # e.g., "zebra" concept_name="striped" ) print(f"TCAV score: {tcav_score:.3f}") # >0.5 means concept positively influences class # Statistical significance testing against random concepts result = explainer.statistical_significance_test( test_inputs=test_images, target_class=0, concept_name="striped", n_random=10, negative_examples=random_images ) print(f"p-value: {result['p_value']:.4f}, significant: {result['significant']}") # Full explanation with multiple concepts explanation = explainer.explain( test_inputs=test_images, target_class=0, run_significance_test=True ) print(explanation.explanation_data["tcav_scores"]) ``` --- ## Example-Based Explanations ### ProtoDash ```python from explainiverse.explainers.example_based import ProtoDashExplainer explainer = ProtoDashExplainer( model=adapter, training_data=X_train, feature_names=feature_names, n_prototypes=5, kernel="rbf", gamma=0.1 ) explanation 
= explainer.explain(X_test[0]) print(explanation.explanation_data["prototype_indices"]) print(explanation.explanation_data["prototype_weights"]) ``` --- ## Evaluation Metrics ### Faithfulness Evaluation ```python from explainiverse.evaluation import ( compute_pgi, compute_pgu, compute_comprehensiveness, compute_sufficiency, compute_faithfulness_correlation ) # PGI - Higher is better (important features affect predictions) pgi = compute_pgi( model=adapter, instance=X[0], attributions=attributions, feature_names=feature_names, top_k=3 ) # PGU - Lower is better (unimportant features don't affect predictions) pgu = compute_pgu( model=adapter, instance=X[0], attributions=attributions, feature_names=feature_names, top_k=3 ) # Comprehensiveness - Higher is better comp = compute_comprehensiveness( model=adapter, instance=X[0], attributions=attributions, feature_names=feature_names, top_k_values=[1, 2, 3, 5] ) # Sufficiency - Lower is better suff = compute_sufficiency( model=adapter, instance=X[0], attributions=attributions, feature_names=feature_names, top_k_values=[1, 2, 3, 5] ) # Faithfulness Correlation corr = compute_faithfulness_correlation( model=adapter, instance=X[0], attributions=attributions, feature_names=feature_names ) ``` ### Stability Evaluation ```python from explainiverse.evaluation import ( compute_ris, compute_ros, compute_lipschitz_estimate ) # RIS - Relative Input Stability (lower is better) ris = compute_ris( explainer=explainer, instance=X[0], n_perturbations=10, perturbation_scale=0.1 ) # ROS - Relative Output Stability (lower is better) ros = compute_ros( model=adapter, explainer=explainer, instance=X[0], n_perturbations=10, perturbation_scale=0.1 ) # Lipschitz Estimate (lower is better) lipschitz = compute_lipschitz_estimate( explainer=explainer, instance=X[0], n_perturbations=20, perturbation_scale=0.1 ) ``` --- ## Global Explainers ```python from explainiverse.explainers import ( PermutationImportanceExplainer, PartialDependenceExplainer, 
ALEExplainer, SAGEExplainer ) # Permutation Importance perm_imp = PermutationImportanceExplainer( model=adapter, X=X_test, y=y_test, feature_names=feature_names, n_repeats=10 ) explanation = perm_imp.explain() # Partial Dependence Plot pdp = PartialDependenceExplainer( model=adapter, X=X_train, feature_names=feature_names ) explanation = pdp.explain(feature="feature_0", grid_resolution=50) # ALE (handles correlated features) ale = ALEExplainer( model=adapter, X=X_train, feature_names=feature_names ) explanation = ale.explain(feature="feature_0", n_bins=20) # SAGE (global Shapley importance) sage = SAGEExplainer( model=adapter, X=X_train, y=y_train, feature_names=feature_names, n_permutations=512 ) explanation = sage.explain() ``` --- ## Multi-Explainer Comparison ```python from explainiverse import ExplanationSuite suite = ExplanationSuite( model=adapter, explainer_configs=[ ("lime", {"training_data": X_train, "feature_names": feature_names, "class_names": class_names}), ("shap", {"background_data": X_train[:50], "feature_names": feature_names, "class_names": class_names}), ("treeshap", {"feature_names": feature_names, "class_names": class_names}), ] ) results = suite.run(X_test[0]) suite.compare() ``` --- ## Custom Explainer Registration Explainiverse's plugin architecture allows you to register your own custom explainers and have them integrate seamlessly with the registry's discovery, filtering, and recommendation system. ### Why Register Custom Explainers? 
| Benefit | Description | |---------|-------------| | **Discoverability** | Your explainer appears in `list_explainers()` and can be filtered by criteria | | **Rich Metadata** | Attach scope, model types, data types, paper references, and complexity info | | **Unified API** | Create instances via `default_registry.create("my_explainer", ...)` | | **Recommendations** | Your explainer can be recommended based on the user's use case | | **Consistency** | Follows the same `BaseExplainer` interface as all built-in methods | ### Method 1: Decorator-Based Registration (Recommended) The cleanest way to register a custom explainer: ```python from explainiverse import default_registry, BaseExplainer, Explanation from explainiverse.core.registry import ExplainerMeta @default_registry.register_decorator( name="my_explainer", meta=ExplainerMeta( scope="local", # "local" or "global" model_types=["any"], # ["any", "tree", "linear", "neural", "ensemble"] data_types=["tabular"], # ["tabular", "image", "text", "time_series"] task_types=["classification", "regression"], description="My custom attribution method", paper_reference="Author et al., 2024 - 'My Method' (Conference)", complexity="O(n * d)", # Computational complexity requires_training_data=False, supports_batching=True ) ) class MyExplainer(BaseExplainer): """Custom explainer implementing your attribution method.""" def __init__(self, model, feature_names, class_names=None, **kwargs): super().__init__(model) self.feature_names = feature_names self.class_names = class_names def explain(self, instance, target_class=None, **kwargs): """ Generate explanation for a single instance. 
Args: instance: Input to explain (1D array for tabular) target_class: Class to explain (optional) **kwargs: Additional method-specific parameters Returns: Explanation object with feature attributions """ # Your attribution logic here attributions = self._compute_attributions(instance, target_class) # Return standardized Explanation object return Explanation( explainer_name="MyExplainer", target_class=str(target_class or 0), explanation_data={"feature_attributions": attributions}, feature_names=self.feature_names, metadata={"method": "my_method", "params": kwargs} ) def _compute_attributions(self, instance, target_class): """Compute feature attributions (implement your method here).""" # Example: simple gradient-like computation import numpy as np attributions = {} for i, name in enumerate(self.feature_names): attributions[name] = float(np.random.randn()) # Replace with real logic return attributions ``` ### Method 2: Programmatic Registration For dynamic registration or when decorators aren't suitable: ```python from explainiverse import default_registry, BaseExplainer, Explanation from explainiverse.core.registry import ExplainerMeta, get_default_registry # Define your explainer class class AnotherExplainer(BaseExplainer): def __init__(self, model, feature_names, **kwargs): super().__init__(model) self.feature_names = feature_names def explain(self, instance, **kwargs): # Implementation return Explanation( explainer_name="AnotherExplainer", target_class="0", explanation_data={"feature_attributions": {}}, feature_names=self.feature_names ) # Register programmatically registry = get_default_registry() registry.register( name="another_explainer", explainer_class=AnotherExplainer, meta=ExplainerMeta( scope="local", model_types=["neural"], data_types=["image"], description="Another custom explainer" ) ) ``` ### Using Your Registered Explainer Once registered, your explainer works like any built-in method: ```python # Verify registration 
print(default_registry.list_explainers()) # [..., 'my_explainer', ...] # Check metadata meta = default_registry.get_meta("my_explainer") print(meta.description) # "My custom attribution method" # Create via registry explainer = default_registry.create( "my_explainer", model=adapter, feature_names=feature_names, class_names=class_names ) # Generate explanations explanation = explainer.explain(X[0]) print(explanation.get_top_features(k=5)) # Your explainer is now discoverable via filtering local_explainers = default_registry.filter(scope="local") print("my_explainer" in local_explainers) # True # And included in recommendations recommended = default_registry.recommend( model_type="any", data_type="tabular", scope_preference="local" ) ``` ### ExplainerMeta Fields Reference | Field | Type | Description | |-------|------|-------------| | `scope` | `str` | **Required.** `"local"` (instance-level) or `"global"` (model-level) | | `model_types` | `List[str]` | Compatible models: `["any", "tree", "linear", "neural", "ensemble"]` | | `data_types` | `List[str]` | Compatible data: `["tabular", "image", "text", "time_series"]` | | `task_types` | `List[str]` | Compatible tasks: `["classification", "regression"]` | | `description` | `str` | Human-readable description (shown in `summary()`) | | `paper_reference` | `str` | Citation for the method's paper | | `complexity` | `str` | Computational complexity (e.g., `"O(n^2)"`) | | `requires_training_data` | `bool` | Whether `explain()` needs background/training data | | `supports_batching` | `bool` | Whether the explainer can process batches efficiently | ### Managing Registrations ```python from explainiverse.core.registry import get_default_registry registry = get_default_registry() # Override an existing registration registry.register( name="my_explainer", explainer_class=ImprovedExplainer, meta=ExplainerMeta(scope="local", description="Improved version"), override=True # Required to replace existing ) # Unregister an explainer 
registry.unregister("my_explainer") # View summary of all registered explainers print(registry.summary()) ``` --- ## Architecture ``` explainiverse/ ├── core/ │ ├── explainer.py # BaseExplainer abstract class │ ├── explanation.py # Unified Explanation container │ └── registry.py # ExplainerRegistry with metadata ├── adapters/ │ ├── sklearn_adapter.py │ └── pytorch_adapter.py # With gradient support ├── explainers/ │ ├── attribution/ # LIME, SHAP, TreeSHAP │ ├── gradient/ # IG, DeepLIFT, DeepSHAP, SmoothGrad, Saliency, GradCAM, LRP, TCAV │ ├── rule_based/ # Anchors │ ├── counterfactual/ # DiCE-style │ ├── global_explainers/ # Permutation, PDP, ALE, SAGE │ └── example_based/ # ProtoDash ├── evaluation/ │ ├── faithfulness.py # PGI, PGU, Comprehensiveness, Sufficiency │ └── stability.py # RIS, ROS, Lipschitz └── engine/ └── suite.py # Multi-explainer comparison ``` --- ## Running Tests ```bash # Run all tests poetry run pytest # Run with coverage poetry run pytest --cov=explainiverse --cov-report=html # Run specific test file poetry run pytest tests/test_lrp.py -v # Run specific test class poetry run pytest tests/test_lrp.py::TestLRPConv2d -v ``` --- ## Roadmap ### Completed ✅ - [x] Core framework (BaseExplainer, Explanation, Registry) - [x] Perturbation methods: LIME, KernelSHAP, TreeSHAP - [x] Gradient methods: Integrated Gradients, DeepLIFT, DeepSHAP, SmoothGrad, Saliency Maps, GradCAM/GradCAM++ - [x] Decomposition methods: Layer-wise Relevance Propagation (LRP) with ε, γ, αβ, z⁺, composite rules - [x] Concept-based: TCAV (Testing with Concept Activation Vectors) - [x] Rule-based: Anchors - [x] Counterfactual: DiCE-style - [x] Global: Permutation Importance, PDP, ALE, SAGE - [x] Example-based: ProtoDash - [x] Evaluation: Faithfulness metrics (PGI, PGU, Comprehensiveness, Sufficiency, Correlation) - [x] Evaluation: Stability metrics (RIS, ROS, Lipschitz) - [x] PyTorch adapter with gradient support ### In Progress 🔄 - [ ] **Evaluation metrics expansion** - Adding 42 
more metrics across 7 categories to exceed Quantus (37 metrics) - Phase 1: Faithfulness (+12 metrics) - 10/12 complete - Phase 2: Robustness (+7 metrics) - Phase 3: Localisation (+8 metrics) - Phase 4: Complexity (+4 metrics) - Phase 5: Randomisation (+5 metrics) - Phase 6: Axiomatic (+4 metrics) - Phase 7: Fairness (+4 metrics) ### Planned 📋 - [ ] Attention-based explanations (for Transformers) - [ ] TensorFlow/Keras adapter - [ ] Interactive visualization dashboard - [ ] Explanation caching and serialization - [ ] Distributed computation support --- ## Citation If you use Explainiverse in your research, please cite: ```bibtex @software{explainiverse2025, title = {Explainiverse: A Unified Framework for Explainable AI}, author = {Syed, Muntaser}, year = {2025}, url = {https://github.com/jemsbhai/explainiverse}, version = {0.9.5} } ``` --- ## Contributing Contributions are welcome! Please see our [Contributing Guide](CONTRIBUTING.md) for details. 1. Fork the repository 2. Create a feature branch (`git checkout -b feature/amazing-feature`) 3. Write tests for your changes 4. Ensure all tests pass (`poetry run pytest`) 5. Commit your changes (`git commit -m 'Add amazing feature'`) 6. Push to the branch (`git push origin feature/amazing-feature`) 7. Open a Pull Request --- ## License MIT License - see [LICENSE](LICENSE) for details. --- ## Acknowledgments Explainiverse builds upon the foundational work of many researchers in the XAI community. We thank the authors of LIME, SHAP, Integrated Gradients, DeepLIFT, LRP, GradCAM, TCAV, Anchors, DiCE, ALE, SAGE, and ProtoDash for their contributions to interpretable machine learning.
text/markdown
Muntaser Syed
jemsbhai@gmail.com
null
null
MIT
xai, explainability, interpretability, machine-learning, lime, shap, anchors
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Intended Audience :: Science/Research", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Topic :: Scientific/Engineering :: Artificial Intelligence" ]
[]
https://github.com/jemsbhai/explainiverse
null
<3.14,>=3.10
[]
[]
[]
[ "numpy<3.0,>=1.24", "lime<1.0,>=0.2.0.1", "scikit-learn<2.0,>=1.6", "shap<1.0,>=0.46", "pandas<3.0,>=1.5", "scipy<3.0,>=1.10", "xgboost<4.0,>=1.7", "torch>=2.0; extra == \"torch\"" ]
[]
[]
[]
[ "Repository, https://github.com/jemsbhai/explainiverse" ]
poetry/1.8.3 CPython/3.12.2 Windows/11
2026-02-20T10:29:28.270041
explainiverse-0.9.5.tar.gz
128,489
3b/59/06794b90a36fe4ab4e5dec674cdc66c23fdc494cfb684657b0fe059b1fb3/explainiverse-0.9.5.tar.gz
source
sdist
null
false
c8f70fd6b8d22f9b2bb86e2be9336f82
b28c52f25e2e078d22e3c3abd4ee1c9c54f0492b784e30a7e17a547261b6aa38
3b5906794b90a36fe4ab4e5dec674cdc66c23fdc494cfb684657b0fe059b1fb3
null
[]
235
2.4
sinq
0.2.0
SINQ quantization I/O for Hugging Face Transformers
[![arXiv](https://img.shields.io/badge/arXiv-2509.22944-b31b1b.svg)](https://arxiv.org/abs/2509.22944) [![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![GitHub stars](https://img.shields.io/github/stars/huawei-csl/SINQ?label=Stars&logo=github&logoColor=white&style=flat-square)](https://github.com/huawei-csl/SINQ/stargazers) [![hf-space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Huawei%20CSL-ffc107?color=ffc107&logoColor=white)](https://huggingface.co/huawei-csl) <table border="0" cellspacing="0" cellpadding="0"> <tr> <td><img src="imgs/logo.png" alt="SINQ Logo" width="110"></td> <td style="vertical-align: middle;"><h1>SINQ: Sinkhorn-Normalized Quantization for Calibration-Free Low-Precision LLMs</h1></td> </tr> </table> > ⚡️ **A fast, plug-and-play, model-agnostic quantization technique** delivering **state-of-the-art performance** for Large Language Models **without sacrificing accuracy.** > 💡 **Want to run a large model on your GPU but don’t have enough memory?** With **SINQ**, you can deploy models that would otherwise be too big, **drastically reducing memory usage while preserving LLM quality.** > ⏱️ SINQ quantizes **Qwen3-14B** in just **~21 sec** and **DeepSeekV2.5-236B** in **~5 min** --- 🆕 [17/10/2025] **First models on 🤗 Hugging Face Hub:** > We’ve started uploading our first pre-quantized models to the 🤗 [**Hugging Face Hub**](https://huggingface.co/huawei-csl) and will continue adding more soon. > **Note: We’re also actively working to add support for popular frameworks such as <code>vLLM</code>, <code>SGLang</code>, and <code>llama.cpp</code> to enable fast SINQ-ference** (sorry for the joke). > In the meantime, you can ⭐️ **star** and **watch** the repo to stay updated! --- ## 🚀 Welcome to the **official SINQ repository**! 
**SINQ** (Sinkhorn-Normalized Quantization) is a **novel, fast, and high-quality quantization method** designed to make any Large Language Model **smaller** while keeping its accuracy almost intact. ### 🔍 What You’ll Find Here - [1. How does SINQ work?](#1-how-does-sinq-work) - [2. Why should I use SINQ?](#2-why-should-i-use-sinq) - <u>[3. Quantize (and save) any LLM with SINQ](#3-quantize-any-llm-with-sinq)</u> - [4. How to reproduce paper results](#4-how-to-reproduce-paper-results) - [5. Ongoing updates on new features and integrations](#5-ongoing-updates-on-new-features-and-integrations) - [6. How to Cite This Work](#6-how-to-cite-this-work) - [7. Related Repositories](#7-related-repositories) #### 📊 Feature Comparison: SINQ vs HQQ _(calibration-free)_ and A-SINQ vs AWQ _(calibrated)_ | Feature | **SINQ** | **HQQ** | **A-SINQ** | **AWQ** | |------------|:--------:|:--------:|:----------:|:-------:| | 🎯 Calibration | Calibration-free | Calibration-free | Calibrated | Calibrated | | 🧮 Quantization Type | Symmetric & Asymmetric | Asymmetric only | Symmetric & Asymmetric | Symmetric & Asymmetric | | 📦 NF4 Support | **Yes** | No | **Yes** | No | | ⚡ Quantization Speed | ~2× **Faster** than HQQ | Slower | ~4× **Faster** than AWQ | Slower | | 📈 Model Quality | **Higher** | Lower | **Higher** | Lower | 📄 **Want to know more?** Read our paper on [**arXiv**](http://arxiv.org/abs/2509.22944)! --- ## 1. How does SINQ work? <details> <summary>Click to expand a quick explanation of SINQ’s core idea</summary> #### 1️⃣ Dual-Scaling for Better Quantization <p align="left"> <img src="imgs/dualscale.png" alt="Dual Scale Illustration" width="330" align="right" style="margin-left: 20px;"/> </p> Conventional quantization uses **one scale per weight dimension**, which makes models vulnerable to **outliers**: large weights that distort scaling and cause significant errors. **SINQ** solves this by introducing **dual scaling**: separate scale factors for **rows and columns**. 
This flexibility redistributes outlier influence and keeps quantization errors smaller and more balanced. --- #### 2️⃣ More Even Error Distribution <p align="left"> <img src="imgs/error.png" alt="Error Distribution Comparison" width="370" align="right" style="margin-left: 20px;"/> </p> With standard single-scale quantization, errors tend to **cluster around outliers**. With **SINQ**, they become **spread out and less severe**, preserving model accuracy even at **3-bit precision**. This improvement is driven by SINQ’s **Sinkhorn-normalized optimization**, which iteratively rescales rows and columns to balance their variance - a process inspired by Sinkhorn matrix normalization. By reducing the overall **_matrix imbalance_** (refer to the paper for more info), weights become inherently easier to quantize, leading to more stable behavior across layers and consistently higher accuracy even at very low bit-widths. </details> --- ## 2. Why should I use SINQ? <details> <summary>Click to expand a quick explanation of why you should use SINQ to quantize your LLM</summary> #### **SINQ (calibration-free)** - **Higher LLM quality** and **~2× faster** quantization than **HQQ** - **>31× faster** quantization process and comparable or better LLM quality compared to **AWQ / GPTQ** - **Model-agnostic**: works without knowing the specific LLM architecture, unlike **QuaRot** - **Training-free**: it does not require end-to-end training, unlike **SpinQuant** or **KurTail** - **Additionally, A-SINQ (calibrated)** further **beats AWQ, GPTQ, and Hadamard+GPTQ** on quality while achieving **>4× faster** quantization time. **Example** - ⏱️ SINQ quantizes **Qwen3-14B** in just **~21 sec** and **DeepSeekV2.5-236B** in **~5 min** on a single GPU - 💾 Enables you to **run DeepSeekV2.5-236B** on a single GPU with **~110 GB** of memory (vs ~472 GB) while losing **< 1 ppl** on **WikiText2** and **C4** </details> ## 3. 
Quantize any LLM with SINQ ### Setup & Quick Start First, install the dependencies and set up the package: ```bash # 1. Clone the repository git clone https://github.com/huawei-csl/SINQ.git cd SINQ # 2. Install dependencies pip install -r req.txt # 3. Install SINQ pip install . ``` --- ### Quantize in a few lines Quantizing any 🤗 Hugging Face model with SINQ is simple and takes only a few lines of code: ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM from sinq.patch_model import AutoSINQHFModel from sinq.sinqlinear import BaseQuantizeConfig model_name = "Qwen/Qwen3-1.7B" model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16) tokenizer = AutoTokenizer.from_pretrained(model_name) quant_cfg = BaseQuantizeConfig( nbits=4, # quantization bit-width group_size=64, # group size tiling_mode="1D", # tiling strategy method="sinq" # quantization method ("asinq" for the calibrated version) ) qmodel = AutoSINQHFModel.quantize_model( model, tokenizer=tokenizer, quant_config=quant_cfg, compute_dtype=torch.bfloat16, device="cuda:0" ) ``` ✅ That’s it. Your model is now quantized with **SINQ** and ready for inference or saving. ### Optional Arguments You can further customize the quantization process to balance **accuracy** and **memory** for your needs. Here’s a summary of the main arguments you can tune: | Argument | Description | Options | Default | |------|-------------|---------|----------| | `nbits` | Bit-width for weight quantization | 2, 3, 4, 5, 6, 8 | 4 | | `tiling_mode` | Weight matrix tiling strategy | 1D, 2D | 1D | | `group_size` | Weights per quantization group | 64, 128 | 64 | | `method` | Quantization method | sinq, asinq | sinq | 💡 **Tip:** For most cases, the defaults (`nbits=4, tiling_mode="1D", group_size=64, method="sinq"`) provide an excellent trade-off between compression and accuracy. 
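💡 **Intuition: what dual scaling buys you.** The Sinkhorn-style row/column rescaling described in section 1 can be sketched in a few lines of NumPy. This is a conceptual toy only - the matrix size, the fixed 20 iterations, and standard-deviation balancing are illustrative assumptions, not the library's actual implementation:

```python
import numpy as np

# Toy sketch of Sinkhorn-style dual scaling: alternately rescale rows and
# columns so that per-row/per-column standard deviations are balanced,
# taming the influence of outlier weights before quantization.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
W[3, 7] = 25.0  # inject a single large outlier

M = W.copy()
row_scale = np.ones((64, 1))
col_scale = np.ones((1, 64))
for _ in range(20):
    r = M.std(axis=1, keepdims=True)  # normalize row spreads
    M /= r
    row_scale *= r
    c = M.std(axis=0, keepdims=True)  # normalize column spreads
    M /= c
    col_scale *= c

def imbalance(A):
    """Ratio of largest to smallest per-row std (1.0 = perfectly balanced)."""
    s = A.std(axis=1)
    return s.max() / s.min()

print(f"imbalance before: {imbalance(W):.2f}, after: {imbalance(M):.2f}")
```

The balanced matrix `M` is what a quantizer would then see; keeping `row_scale` and `col_scale` lets you reconstruct the original weights exactly via elementwise multiplication, `W ≈ row_scale * M_quantized * col_scale`.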
--- ### Save & reload If you want to reuse a quantized model later, save it to disk in **HF-style sharded safetensors** and reload without needing base FP weights. > Requires: `pip install safetensors` ```python # --- Save to a folder (sharded safetensors) --- from sinq.patch_model import AutoSINQHFModel import torch save_dir = "qwen3-1.7b-sinq-4bit" # any path # 'qmodel' must already be SINQ-quantized (e.g., via AutoSINQHFModel.quantize_model) AutoSINQHFModel.save_quantized_safetensors( qmodel, tokenizer, save_dir, verbose=True, max_shard_size="4GB", # typical HF shard size (use "8GB" if you prefer) ) ``` ```python # --- Reload later --- from sinq.patch_model import AutoSINQHFModel from transformers import AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained(save_dir) qmodel = AutoSINQHFModel.from_quantized_safetensors( save_dir, device="cuda:0", compute_dtype=torch.bfloat16, ) # (optional) quick smoke test prompt = "Explain neural network quantization in one sentence." inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0") with torch.inference_mode(): out_ids = qmodel.generate(**inputs, max_new_tokens=32, do_sample=False) print(tokenizer.decode(out_ids[0], skip_special_tokens=True)) ``` <details> <summary><strong>Alternative: save & reload as a single <code>.pt</code> file</strong> </summary> ```python # --- Save to a folder (.pt) --- from sinq.patch_model import AutoSINQHFModel save_dir = "qwen3-1.7b-sinq-4bit" # any path AutoSINQHFModel.save_quantized(qmodel, tokenizer, save_dir, verbose=True) # creates qmodel.pt ``` ```python # --- Reload later from .pt --- from sinq.patch_model import AutoSINQHFModel from transformers import AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained(save_dir) qmodel = AutoSINQHFModel.from_quantized( save_dir, device="cuda:0", compute_dtype=torch.bfloat16, ) ``` </details> ### Compatible with [`lm-eval`](https://github.com/EleutherAI/lm-evaluation-harness) evaluation framework Below is a minimal example showing how to evaluate a SINQ-quantized model on a benchmark dataset: 
```python from lm_eval import evaluator from lm_eval.models.huggingface import HFLM # Wrap the already quantized model and tokenizer with HFLM lm = HFLM(pretrained=qmodel, tokenizer=tokenizer, device="cuda:0") # Evaluate (many tasks available on lm-eval such as MMLU and HellaSwag) results = evaluator.simple_evaluate( model=lm, tasks=["lambada_openai"], # small and fast benchmark device="cuda:0" ) ``` ## 4. How to reproduce paper results <details> <summary>Click to expand the commands to reproduce the paper results</summary> ### Setup & Quick Start First, install the dependencies and set up the package: ```bash # 1. Clone the repository git clone https://github.com/huawei-csl/SINQ.git cd SINQ # 2. Install dependencies pip install -r req.txt # 3. Install SINQ pip install . ``` Then run the following command to quantize **Qwen3-1.7B** out of the box: ```bash cd tests python quant_model_eval.py ``` By default, this will run SINQ with the following settings: - ✅ 4-bit weight quantization - ✅ Dual-scale + shift parameterization - ✅ 1D tiling - ✅ Group size = 64 --- ### Uniform, Uncalibrated Quantization Reproduce the **core SINQ results** (as shown in Table 1 of the paper): ```bash python quant_model_eval.py --model_name Qwen/Qwen3-1.7B ``` This uses **INT4 uniform quantization** without calibration - the main benchmark setting of the paper. 
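For intuition, the INT4 uniform setting corresponds to group-wise asymmetric uniform quantization, which a generic sketch can illustrate. This is *not* SINQ's kernel (SINQ additionally applies the dual row/column scaling from section 1), and the helper names here are made up for illustration:

```python
import numpy as np

def quantize_group(w, nbits=4):
    # Asymmetric uniform quantization of one weight group:
    # map the range [min, max] onto integer codes [0, 2**nbits - 1].
    qmax = 2**nbits - 1
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / qmax if hi > lo else 1.0
    codes = np.round((w - lo) / scale).astype(np.int32)
    return codes, scale, lo

def dequantize_group(codes, scale, lo):
    return codes * scale + lo

rng = np.random.default_rng(0)
w = rng.normal(size=256)
group_size = 64  # one (scale, zero-point) pair per group of 64 weights

recon = np.empty_like(w)
for g in range(0, w.size, group_size):
    codes, scale, lo = quantize_group(w[g:g + group_size], nbits=4)
    recon[g:g + group_size] = dequantize_group(codes, scale, lo)

max_err = np.abs(w - recon).max()
print(f"4-bit, group 64 -> max reconstruction error: {max_err:.4f}")
```

Smaller `group_size` or larger `nbits` shrinks the reconstruction error at the cost of more stored scales or wider codes, which is the accuracy/memory trade-off the flags above expose.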
---

### Non-Uniform Quantization (NF4)

Try SINQ with **non-uniform quantization** (e.g., NF4):

```bash
python quant_model_eval.py --method sinq_nf4 --model_name Qwen/Qwen3-1.7B
```

---

### Calibrated Quantization (AWQ + SINQ = A-SINQ)

Combine SINQ with **activation-aware calibration (AWQ)** for higher accuracy:

```bash
python quant_model_eval.py --method asinq --model_name Qwen/Qwen3-1.7B
```

---

### ⚙️ Optional Flags

Customize experiments with the following command-line arguments:

| Flag | Description | Options | Default |
|------|-------------|---------|---------|
| `--nbits` | Number of bits used to quantize model weights | 2, 3, 4, 8 | 4 |
| `--tiling_mode` | Strategy for tiling weight matrices during quantization | 1D, 2D | 1D |
| `--group_size` | Number of weights processed together as a quantization group | 64, 128 | 64 |

> 📝 **Note:** All results reported in the paper were obtained using the evaluation framework from [Efficient-ML/Qwen3-Quantization](https://github.com/Efficient-ML/Qwen3-Quantization) rather than `lm-eval`.

</details>

## 5. Ongoing updates on new features and integrations

We are actively expanding SINQ with new features and integrations. Stay tuned here for the latest updates:

- [26/09/2025] - SINQ paper released on [**arXiv**](https://arxiv.org/abs/2509.22944)
- [30/09/2025] - SINQ GitHub repository made public
- [02/10/2025] - SINQ paper featured on 🤗 [**Hugging Face Papers**](https://huggingface.co/papers/2509.22944)
- [17/10/2025] - First pre-quantized **SINQ models** available on 🤗 [**Hugging Face Hub**](https://huggingface.co/huawei-csl)!
🆕 - [23/10/2025] - Faster inference with gemlite backend (4-bit 1D tiling)
- 🔜 **Coming soon** - 🤗 Integration with **Hugging Face Transformers**
- 🔜 **Coming soon** - Support for **Conv2D layers** and **timm models** for computer vision tasks
- 🔜 **Work in progress** - Support for **mixed-precision quantization** (combine multiple bitwidths for an optimal accuracy-efficiency balance)
- 🔜 **Work in progress** - Support for popular inference frameworks such as <code>vLLM</code>, <code>SGLang</code>, and <code>llama.cpp</code>

## 6. How to Cite This Work

If you find **SINQ** useful in your research or applications, please cite our <a href="http://arxiv.org/abs/2509.22944" target="_blank"><strong>paper</strong></a>:

```bibtex
@misc{muller2025sinq,
      title={SINQ: Sinkhorn-Normalized Quantization for Calibration-Free Low-Precision LLM Weights},
      author={Lorenz K. Muller and Philippe Bich and Jiawei Zhuang and Ahmet Celik and Luca Benfenati and Lukas Cavigelli},
      year={2025},
      eprint={2509.22944},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={http://arxiv.org/abs/2509.22944}
}
```

## 7. Related Repositories

This project builds upon and extends the excellent work from the following open-source projects:

- [**Qwen3-Quantization**](https://github.com/Efficient-ML/Qwen3-Quantization) - Base implementation and evaluation scripts for Qwen3 quantization.
- [**HQQ**](https://github.com/mobiusml/hqq) - High-quality calibration-free quantization baseline.

📜 You can find their original licenses in the corresponding `LICENSE` files in these repositories.
text/markdown
null
Chiara Boretti <chiara.boretti.95@gmail.com>
null
null
Apache-2.0
transformers, quantization, huggingface, LLM
[ "Programming Language :: Python :: 3", "License :: OSI Approved :: Apache Software License", "Operating System :: OS Independent" ]
[]
null
null
>=3.8
[]
[]
[]
[ "transformers>=5.2.0", "huggingface_hub>=0.22", "torch>=2.1", "safetensors>=0.4.2", "hf_transfer>=0.1.9", "einops>=0.8.1", "datasets", "numpy", "scipy>=1.13.1", "torchaudio>=2.8.0", "torchvision>=0.23.0", "lm_eval>=0.4.11", "termcolor", "gemlite==0.5.1.post1" ]
[]
[]
[]
[ "Homepage, https://github.com/huawei-csl/SINQ", "Issues, https://github.com/huawei-csl/SINQ/issues" ]
twine/6.2.0 CPython/3.10.12
2026-02-20T10:29:19.846045
sinq-0.2.0.tar.gz
71,169
64/60/a534a27cafbb878aff0c4db2da0cb4306362655d1dea70c77368213980f1/sinq-0.2.0.tar.gz
source
sdist
null
false
2d2d484e0b6109cc02aa38cef899995e
c33a51e0ec3d657e1f53333a3b340b562846755e1a0bad1a91037e3bdf90f53b
6460a534a27cafbb878aff0c4db2da0cb4306362655d1dea70c77368213980f1
null
[ "LICENSE.txt" ]
234
2.4
trustgraph-vertexai
2.0.9
TrustGraph provides a means to run a pipeline of flexible AI processing components in a flexible means to achieve a processing pipeline.
See https://trustgraph.ai/
text/markdown
null
"trustgraph.ai" <security@trustgraph.ai>
null
null
null
null
[ "Programming Language :: Python :: 3", "Operating System :: OS Independent" ]
[]
null
null
>=3.8
[]
[]
[]
[ "trustgraph-base<2.1,>=2.0", "pulsar-client", "google-genai", "google-api-core", "prometheus-client", "anthropic" ]
[]
[]
[]
[ "Homepage, https://github.com/trustgraph-ai/trustgraph" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:28:59.820327
trustgraph_vertexai-2.0.9.tar.gz
6,712
c0/dd/cbde33c5583c3c492fc32056b9b6033f0885a0cd95279643b12ab16b1238/trustgraph_vertexai-2.0.9.tar.gz
source
sdist
null
false
6718d07e20e28f9814a073106dc472da
907a17076ed3e1b3a812aa885f3debbc2fa579297b0cf20a871678de4b540d5f
c0ddcbde33c5583c3c492fc32056b9b6033f0885a0cd95279643b12ab16b1238
null
[]
153
2.4
trustgraph-ocr
2.0.9
TrustGraph provides a means to run a pipeline of flexible AI processing components in a flexible means to achieve a processing pipeline.
See https://trustgraph.ai/
text/markdown
null
"trustgraph.ai" <security@trustgraph.ai>
null
null
null
null
[ "Programming Language :: Python :: 3", "Operating System :: OS Independent" ]
[]
null
null
>=3.8
[]
[]
[]
[ "trustgraph-base<2.1,>=2.0", "pulsar-client", "prometheus-client", "boto3", "pdf2image", "pytesseract" ]
[]
[]
[]
[ "Homepage, https://github.com/trustgraph-ai/trustgraph" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:28:58.843047
trustgraph_ocr-2.0.9.tar.gz
2,512
72/43/04eab70bca17d3f394fc8c18bf584c079684bdb7ac702d61ac84cc356ce7/trustgraph_ocr-2.0.9.tar.gz
source
sdist
null
false
b0fca08da5bf6abe505551bc43be85a7
54c6df56c593d7fed01f8bfa8b1289ce567b7569311fb7ce2c8138a285e9ff07
724304eab70bca17d3f394fc8c18bf584c079684bdb7ac702d61ac84cc356ce7
null
[]
153
2.4
trustgraph-mcp
2.0.9
TrustGraph provides a means to run a pipeline of flexible AI processing components in a flexible means to achieve a processing pipeline.
See https://trustgraph.ai/
text/markdown
null
"trustgraph.ai" <security@trustgraph.ai>
null
null
null
null
[ "Programming Language :: Python :: 3", "Operating System :: OS Independent" ]
[]
null
null
>=3.8
[]
[]
[]
[ "mcp", "websockets", "trustgraph-base" ]
[]
[]
[]
[ "Homepage, https://github.com/trustgraph-ai/trustgraph" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:28:57.917024
trustgraph_mcp-2.0.9.tar.gz
15,257
da/1c/03e8ed6f2085053251056d75a0b51a179cb9891b6a55797c6134ecf667eb/trustgraph_mcp-2.0.9.tar.gz
source
sdist
null
false
fe6dc54a5998c066f0a13b7b70f2fa8c
f9567ac04a1691869b1c0b9550fafe96d7f2d997bce45480778d2bbc32677f43
da1c03e8ed6f2085053251056d75a0b51a179cb9891b6a55797c6134ecf667eb
null
[]
149
2.4
trustgraph-flow
2.0.9
TrustGraph provides a means to run a pipeline of flexible AI processing components in a flexible means to achieve a processing pipeline.
See https://trustgraph.ai/
text/markdown
null
"trustgraph.ai" <security@trustgraph.ai>
null
null
null
null
[ "Programming Language :: Python :: 3", "Operating System :: OS Independent" ]
[]
null
null
>=3.8
[]
[]
[]
[ "trustgraph-base<2.1,>=2.0", "aiohttp", "anthropic", "scylla-driver", "cohere", "cryptography", "faiss-cpu", "falkordb", "fastembed", "ibis", "jsonschema", "langchain", "langchain-community", "langchain-core", "langchain-text-splitters", "mcp", "minio", "mistralai", "neo4j", "nltk", "ollama", "openai", "pinecone[grpc]", "prometheus-client", "pulsar-client", "pymilvus", "pypdf", "pyyaml", "qdrant-client", "rdflib", "requests", "strawberry-graphql", "tabulate", "tiktoken", "urllib3" ]
[]
[]
[]
[ "Homepage, https://github.com/trustgraph-ai/trustgraph" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:28:56.629463
trustgraph_flow-2.0.9.tar.gz
215,979
f6/70/228d644a90a4d73457c881fa49f7a1ae62986b880905c0de5194bcad241f/trustgraph_flow-2.0.9.tar.gz
source
sdist
null
false
c432c0e8d388f5faad7fa6c84e4ae21a
2fe81960146c76074b53f4c5d01d56de8c1a9caf6ab5704bd635309549d79f6f
f670228d644a90a4d73457c881fa49f7a1ae62986b880905c0de5194bcad241f
null
[]
159
2.4
trustgraph-embeddings-hf
2.0.9
HuggingFace embeddings support for TrustGraph.
See https://trustgraph.ai/
text/markdown
null
"trustgraph.ai" <security@trustgraph.ai>
null
null
null
null
[ "Programming Language :: Python :: 3", "Operating System :: OS Independent" ]
[]
null
null
>=3.8
[]
[]
[]
[ "trustgraph-base<2.1,>=2.0", "trustgraph-flow<2.1,>=2.0", "torch", "urllib3", "transformers", "sentence-transformers", "langchain", "langchain-core", "langchain-huggingface", "langchain-community", "huggingface-hub", "pulsar-client", "pyyaml", "prometheus-client" ]
[]
[]
[]
[ "Homepage, https://github.com/trustgraph-ai/trustgraph" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:28:55.389299
trustgraph_embeddings_hf-2.0.9.tar.gz
2,629
82/bc/d130d366c76f61c65d8eec9261f5799a8844417e40ec0c28a15f9bf624f5/trustgraph_embeddings_hf-2.0.9.tar.gz
source
sdist
null
false
41058259eb53f5c80cbdb365f9e55640
351b62f15567dab053d202358ead86caca618a2b201cb7a8b092dbe61012bae8
82bcd130d366c76f61c65d8eec9261f5799a8844417e40ec0c28a15f9bf624f5
null
[]
153
2.4
trustgraph-cli
2.0.9
TrustGraph provides a means to run a pipeline of flexible AI processing components in a flexible means to achieve a processing pipeline.
See https://trustgraph.ai/
text/markdown
null
"trustgraph.ai" <security@trustgraph.ai>
null
null
null
null
[ "Programming Language :: Python :: 3", "Operating System :: OS Independent" ]
[]
null
null
>=3.8
[]
[]
[]
[ "trustgraph-base<2.1,>=2.0", "requests", "pulsar-client", "aiohttp", "rdflib", "tabulate", "msgpack", "websockets" ]
[]
[]
[]
[ "Homepage, https://github.com/trustgraph-ai/trustgraph" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:28:54.365204
trustgraph_cli-2.0.9.tar.gz
58,477
c3/82/0afb7c51c18dae5858a3d0f33331746e572508743d1e4be94365e43b4078/trustgraph_cli-2.0.9.tar.gz
source
sdist
null
false
8c3779bcfda605cf9f92e5bf76d1413c
4785c368c7be8221105bd699e189fd89f8305ce735c000fe784df12247d7bff9
c3820afb7c51c18dae5858a3d0f33331746e572508743d1e4be94365e43b4078
null
[]
151
2.4
trustgraph-bedrock
2.0.9
TrustGraph provides a means to run a pipeline of flexible AI processing components in a flexible means to achieve a processing pipeline.
See https://trustgraph.ai/
text/markdown
null
"trustgraph.ai" <security@trustgraph.ai>
null
null
null
null
[ "Programming Language :: Python :: 3", "Operating System :: OS Independent" ]
[]
null
null
>=3.8
[]
[]
[]
[ "trustgraph-base<2.1,>=2.0", "pulsar-client", "prometheus-client", "boto3" ]
[]
[]
[]
[ "Homepage, https://github.com/trustgraph-ai/trustgraph" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:28:53.128284
trustgraph_bedrock-2.0.9.tar.gz
5,389
b3/d9/4d21c48290dda7ff67959ff1f7b85df72bfdad697f86cc5e638ff5d633ed/trustgraph_bedrock-2.0.9.tar.gz
source
sdist
null
false
883547c6816d67edfda989b26ad2a1ce
47257b338e464739d289d572df1e2151823450b452b3e6e88dcaeae29b0d5ef3
b3d94d21c48290dda7ff67959ff1f7b85df72bfdad697f86cc5e638ff5d633ed
null
[]
146
2.4
trustgraph-base
2.0.9
TrustGraph provides a means to run a pipeline of flexible AI processing components in a flexible means to achieve a processing pipeline.
See https://trustgraph.ai/
text/markdown
null
"trustgraph.ai" <security@trustgraph.ai>
null
null
null
null
[ "Programming Language :: Python :: 3", "Operating System :: OS Independent" ]
[]
null
null
>=3.8
[]
[]
[]
[ "pulsar-client", "prometheus-client", "requests", "python-logging-loki" ]
[]
[]
[]
[ "Homepage, https://github.com/trustgraph-ai/trustgraph" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:28:51.788924
trustgraph_base-2.0.9.tar.gz
90,161
4e/15/0ab6b5c9c4223a1c8468faf5572524188409f601aa643dcf1f2fe3d66857/trustgraph_base-2.0.9.tar.gz
source
sdist
null
false
bc3cd721b7878603eab70159a4add348
d6f3480721aa99e47eff307163484cf687fee12702bd632f438806e65ba7e552
4e150ab6b5c9c4223a1c8468faf5572524188409f601aa643dcf1f2fe3d66857
null
[]
175
2.4
trustgraph
2.0.9
TrustGraph provides a means to run a pipeline of flexible AI processing components in a flexible means to achieve a processing pipeline.
See https://trustgraph.ai/
text/markdown
null
"trustgraph.ai" <security@trustgraph.ai>
null
null
null
null
[ "Programming Language :: Python :: 3", "Operating System :: OS Independent" ]
[]
null
null
>=3.8
[]
[]
[]
[ "trustgraph-base<1.9,>=1.8", "trustgraph-bedrock<1.9,>=1.8", "trustgraph-cli<1.9,>=1.8", "trustgraph-embeddings-hf<1.9,>=1.8", "trustgraph-flow<1.9,>=1.8", "trustgraph-vertexai<1.9,>=1.8" ]
[]
[]
[]
[ "Homepage, https://github.com/trustgraph-ai/trustgraph" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:28:50.437489
trustgraph-2.0.9.tar.gz
1,419
82/25/c797a00ef88a054da3799746aab6d4e14bd27fe9667aaddf62543f6873b4/trustgraph-2.0.9.tar.gz
source
sdist
null
false
8c76e13a7d4ed22ecbe5906e5dbaf4f9
5c8203830877197ddb68153e87bed198da5ebcf82360b65111dcc0adef634302
8225c797a00ef88a054da3799746aab6d4e14bd27fe9667aaddf62543f6873b4
null
[]
149
2.4
dbvault
0.1.3
Multi-database backup utility with Fernet encryption, S3/Azure cloud upload, async execution, and retry logic
# DBVault ``` ___ ____ _ __ ____ / _ \/ __ )| | / /___ _____/ / /_ / // / __ || |/ / __ `/ __/ / __/ /____/_/ /_/ |___/\__,_/\__/_/\__/ ``` **Encrypted · Cloud-ready · Multi-database backup utility** [![Python](https://img.shields.io/badge/python-3.10%2B-blue.svg)](https://python.org) [![License](https://img.shields.io/badge/license-MIT-green.svg)](LICENSE) [![PyPI](https://img.shields.io/pypi/v/dbvault.svg)](https://pypi.org/project/dbvault/) [![Tests](https://img.shields.io/badge/tests-pytest-orange.svg)](tests/) DBVault is a command-line backup utility that gives you a consistent pipeline across six database engines: **dump → validate → compress → encrypt → upload**. Every step is covered with retry/backoff logic and all operations are available in both sync and async modes. --- ## Features | Feature | Detail | |---|---| | **6 database engines** | MySQL, PostgreSQL, MongoDB, Redis, SQLite, IBM Db2 | | **Gzip compression** | Every backup is compressed before storage | | **Fernet encryption** | Optional AES-128-CBC + HMAC-SHA256 (symmetric, authenticated) | | **Cloud upload** | AWS S3, Azure Blob Storage, Google Cloud Storage (GCS), and MinIO | | **Retry + backoff** | 3 attempts, exponential 2–10 s (powered by `tenacity`) | | **Async execution** | `async_perform_backup_pipeline` via `asyncio.to_thread` | | **Validation** | Each backup is restored to a temp target and verified before being kept | | **Clean CLI** | `click`-powered interface with `pyfiglet` banner | --- ## Supported Databases | Alias | Engine | Backup tool | Validation method | |---|---|---|---| | `mysql` | MySQL / MariaDB | `mysqldump` | restore to temp DB via `mysql` | | `postgres` | PostgreSQL | `pg_dump` | restore to temp DB via `psql` | | `mongo` | MongoDB | `mongodump --archive` | `mongorestore --nsFrom/--nsTo` | | `redis` | Redis | `redis-cli --rdb` | RDB magic-byte check (`REDIS`) | | `sqlite` | SQLite | `sqlite3.Connection.backup()` | `PRAGMA integrity_check` | | `db2` | IBM Db2 | `db2 
BACKUP DATABASE` | `db2ckbkp` | --- ## Installation ### From PyPI ```bash pip install dbvault ``` ### From source (development) ```bash git clone https://github.com/Abhishek772/dbvault cd dbvault uv sync --group dev ``` --- ## Quick Start ### 1. Back up a MySQL database ```bash dbvault backup \ --db mysql \ --host localhost \ --user root \ --database mydb \ --output ./backups ``` ### 2. Back up with encryption ```bash dbvault backup \ --db postgres \ --host db.internal \ --user admin \ --database analytics \ --output ./backups \ --encrypt # DBVault prints the generated key — save it! ``` ### 3. Back up directly to S3 (with encryption) ```bash dbvault backup \ --db mysql \ --host localhost \ --user root \ --database mydb \ --output ./backups \ --encrypt \ --cloud s3 \ --s3-bucket my-backup-bucket \ --s3-owner 123456789012 ``` ### 4. Back up directly to GCP Cloud Storage ```bash dbvault backup \ --db postgres \ --host localhost \ --user admin \ --database mydb \ --output ./backups \ --cloud gcp \ --gcp-bucket my-gcp-backup-bucket # Uses GOOGLE_APPLICATION_CREDENTIALS environment variable ``` ### 5. Back up to MinIO ```bash dbvault backup \ --db mongo \ --host localhost \ --user admin \ --database myapp \ --output ./backups \ --cloud minio \ --minio-endpoint play.min.io \ --minio-bucket my-minio-bucket # Uses MINIO_ACCESS_KEY and MINIO_SECRET_KEY environment variables ``` ### 6. Decrypt a backup ```bash dbvault decrypt \ --file ./backups/backup.sql.gz.enc \ --key <your-fernet-key> ``` ### 7. 
Generate an encryption key ```bash dbvault keygen # or save directly to a file dbvault keygen --save ~/.dbvault.key ``` --- ## CLI Reference ### `dbvault backup` ``` Options: -d, --db [mysql|postgres|mongo|redis|sqlite|db2] Database engine [required] -H, --host TEXT Database host [default: localhost] -u, --user TEXT Database username -p, --password TEXT Database password (prompted if omitted) -D, --database TEXT Database name / SQLite file path [required] -o, --output PATH Output directory [required] -e, --encrypt Fernet-encrypt the backup -k, --key TEXT Existing Fernet key (generated if --encrypt and omitted) -c, --cloud [s3|azure|gcp|minio] Upload to cloud after backup --s3-bucket TEXT S3 bucket name --s3-key TEXT S3 object key --s3-owner TEXT Expected S3 bucket owner — 12-digit AWS account ID --azure-conn-str TEXT Azure Storage connection string --azure-container TEXT Azure container name --azure-blob TEXT Azure blob name --gcp-bucket TEXT GCP bucket name --gcp-blob TEXT GCP blob name (optional) --minio-endpoint TEXT MinIO endpoint URL --minio-bucket TEXT MinIO bucket name --minio-object TEXT MinIO object name (optional) -a, --async-mode Run asynchronously -h, --help Show this message and exit. ``` ### `dbvault keygen` ``` Options: -s, --save PATH Write key to a file -h, --help Show this message and exit. ``` ### `dbvault decrypt` ``` Options: -f, --file PATH Encrypted backup file (.enc) [required] -k, --key TEXT Fernet key used during encryption [required] -h, --help Show this message and exit. 
``` --- ## Architecture ``` dbvault backup │ ▼ DatabaseBackupManager (ABC) │ ├── connect() — open DB connection ├── _run_*dump() — engine-specific dump subprocess ├── validate() — restore to temp target, verify, drop ├── compress() — gzip the dump file ├── encrypt() — Fernet encrypt (optional) ├── _upload_to_cloud() — S3 / Azure / GCP / MinIO upload (optional) └── perform_backup_pipeline() ← @retry(3×, exp backoff 2-10 s) async_perform_backup_pipeline() ← asyncio.to_thread wrapper ``` ``` core/ ├── interfaces/ │ └── backup_utility_interface.py # Abstract base class ├── helpers/ │ ├── cryptographic_helper.py # Fernet generate / encrypt / decrypt │ └── blobstorage_uploader.py # S3, Azure, GCP, MinIO upload (sync + async) └── services/ ├── sql_backup_utility.py # MySQL ├── postgres_backup_utility.py # PostgreSQL ├── mongo_backup_utility.py # MongoDB ├── redis_backup_utility.py # Redis ├── sqllite_backup_utility.py # SQLite └── ibm_db2_backup_uitlity.py # IBM Db2 cli/ └── app.py # Click CLI entry point tests/ ├── conftest.py ├── test_cryptographic_helper.py ├── test_sqlite_backup.py ├── test_sql_backup.py ├── test_blobstorage_uploader.py └── test_cli.py ``` --- ## Development ### Setup ```bash git clone https://github.com/Abhishek772/dbvault cd dbvault uv sync --group dev ``` ### Run tests ```bash pytest # with coverage pytest --cov=core --cov=cli --cov-report=term-missing ``` ### Run the CLI from source ```bash python main.py backup --db sqlite --database ./my.db --output ./out # or uv run dbvault backup --db sqlite --database ./my.db --output ./out ``` ### Build for PyPI ```bash uv build # produces dist/dbvault-0.1.0-py3-none-any.whl and .tar.gz ``` ### Publish ```bash uv publish --token $PYPI_TOKEN ``` --- ## Adding a New Database Engine 1. Create `core/services/<engine>_backup_utility.py` 2. 
Extend `DatabaseBackupManager` and implement all abstract methods: - `connect()`, `backup()`, `validate()`, `compress()`, `encrypt()`, `perform_backup_pipeline()`, `async_perform_backup_pipeline()` 3. Register the alias in `cli/app.py → DB_MANAGERS` 4. Add test coverage in `tests/test_<engine>_backup.py` --- ## Security Notes - Passwords are passed via environment variables (`MYSQL_PWD`, `PGPASSWORD`) or the tool's own `--password` flag and are never written to disk. - S3 uploads enforce `ExpectedBucketOwner` to prevent confused-deputy bucket hijacking. - Fernet encryption is authenticated (HMAC-SHA256); tampering with the ciphertext raises `InvalidToken`. - Encryption keys are printed once at generation time and are never stored by DBVault — keep them safe. --- ## License MIT — see [LICENSE](LICENSE).
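The authenticated-encryption guarantee described in the security notes can be checked directly with the `cryptography` package that provides Fernet. The standalone sketch below is not DBVault's internal code, just the underlying primitive: decryption of a tampered ciphertext raises `InvalidToken` rather than returning corrupted data.

```python
from cryptography.fernet import Fernet, InvalidToken

# Generate a key, encrypt a fake backup payload, and confirm the round trip.
key = Fernet.generate_key()
f = Fernet(key)
payload = b"-- SQL dump bytes --"
token = f.encrypt(payload)
assert f.decrypt(token) == payload

# Flip one byte of the token: the HMAC check (or base64 parsing) fails and
# decryption raises InvalidToken instead of yielding corrupted plaintext.
tampered = bytearray(token)
tampered[10] ^= 0x01
try:
    f.decrypt(bytes(tampered))
    raise AssertionError("tampered token was accepted")
except InvalidToken:
    pass  # tampering detected, as expected
```

The same property is what makes `dbvault decrypt` fail loudly if an `.enc` file was corrupted in transit or storage.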
text/markdown
null
null
null
null
MIT
azure, backup, cli, database, encryption, ibm-db2, mongodb, mysql, postgresql, redis, s3, sqlite
[ "Development Status :: 3 - Alpha", "Environment :: Console", "Intended Audience :: Developers", "Intended Audience :: System Administrators", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Topic :: Database", "Topic :: Security :: Cryptography", "Topic :: System :: Archiving :: Backup", "Topic :: System :: Systems Administration" ]
[]
null
null
>=3.10
[]
[]
[]
[ "azure-storage-blob>=12.28.0", "boto3>=1.42.53", "click>=8.1", "cryptography>=46.0.5", "google-cloud-storage>=2.19", "minio>=7.2", "psycopg2-binary>=2.9.11", "pyfiglet>=1.0", "pymongo>=4.10", "pymysql>=1.1.2", "redis>=5.2", "tenacity>=9.1.4", "pytest-asyncio>=0.25; extra == \"dev\"", "pytest-mock>=3.14; extra == \"dev\"", "pytest>=8.3; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/abhishekbiswas72/dbvault", "Repository, https://github.com/abhishekbiswas772/dbvault", "Issues, https://github.com/Abhishek772/dbvault/issues", "Changelog, https://github.com/abhishekbiswas72/dbvault/blob/main/CHANGELOG.md" ]
uv/0.9.0
2026-02-20T10:28:30.302591
dbvault-0.1.3.tar.gz
59,605
14/27/fc81dc15785cd57d1fdddf00bbb460b29658825c76a5dd4c02eeedbbe392/dbvault-0.1.3.tar.gz
source
sdist
null
false
e0f39ae67024bf3b2a6f5dda23f2708f
0e67bd21a07b5ebdffe8be5d3bb6bd733ff2ef302005b58d842c43deec39357d
1427fc81dc15785cd57d1fdddf00bbb460b29658825c76a5dd4c02eeedbbe392
null
[ "LICENSE" ]
231
2.4
meitner
0.5.0
Python Client SDK Generated by Speakeasy.
# meitner Developer-friendly & type-safe Python SDK specifically catered to leverage *meitner* API. <div align="left"> <a href="https://www.speakeasy.com/?utm_source=meitner&utm_campaign=python"><img src="https://www.speakeasy.com/assets/badges/built-by-speakeasy.svg" /></a> <a href="https://opensource.org/licenses/MIT"> <img src="https://img.shields.io/badge/License-MIT-blue.svg" style="width: 100px; height: 28px;" /> </a> </div> <br /><br /> > [!IMPORTANT] > This SDK is not yet ready for production use. To complete setup please follow the steps outlined in your [workspace](https://app.speakeasy.com/org/meitner-2u8/api). Delete this section before > publishing to a package manager. <!-- Start Summary [summary] --> ## Summary Directory API: Generated API documentation <!-- End Summary [summary] --> <!-- Start Table of Contents [toc] --> ## Table of Contents <!-- $toc-max-depth=2 --> * [meitner](https://github.com/meitner-se/api-client-python/blob/master/#meitner) * [SDK Installation](https://github.com/meitner-se/api-client-python/blob/master/#sdk-installation) * [IDE Support](https://github.com/meitner-se/api-client-python/blob/master/#ide-support) * [SDK Example Usage](https://github.com/meitner-se/api-client-python/blob/master/#sdk-example-usage) * [Authentication](https://github.com/meitner-se/api-client-python/blob/master/#authentication) * [Available Resources and Operations](https://github.com/meitner-se/api-client-python/blob/master/#available-resources-and-operations) * [Pagination](https://github.com/meitner-se/api-client-python/blob/master/#pagination) * [Retries](https://github.com/meitner-se/api-client-python/blob/master/#retries) * [Error Handling](https://github.com/meitner-se/api-client-python/blob/master/#error-handling) * [Server Selection](https://github.com/meitner-se/api-client-python/blob/master/#server-selection) * [Custom HTTP Client](https://github.com/meitner-se/api-client-python/blob/master/#custom-http-client) * [Resource 
Management](https://github.com/meitner-se/api-client-python/blob/master/#resource-management) * [Debugging](https://github.com/meitner-se/api-client-python/blob/master/#debugging) * [Development](https://github.com/meitner-se/api-client-python/blob/master/#development) * [Maturity](https://github.com/meitner-se/api-client-python/blob/master/#maturity) * [Contributions](https://github.com/meitner-se/api-client-python/blob/master/#contributions) <!-- End Table of Contents [toc] --> <!-- Start SDK Installation [installation] --> ## SDK Installation > [!NOTE] > **Python version upgrade policy** > > Once a Python version reaches its [official end of life date](https://devguide.python.org/versions/), a 3-month grace period is provided for users to upgrade. Following this grace period, the minimum python version supported in the SDK will be updated. The SDK can be installed with *uv*, *pip*, or *poetry* package managers. ### uv *uv* is a fast Python package installer and resolver, designed as a drop-in replacement for pip and pip-tools. It's recommended for its speed and modern Python tooling capabilities. ```bash uv add meitner ``` ### PIP *PIP* is the default package installer for Python, enabling easy installation and management of packages from PyPI via the command line. ```bash pip install meitner ``` ### Poetry *Poetry* is a modern tool that simplifies dependency management and package publishing by using a single `pyproject.toml` file to handle project metadata and dependencies. 
```bash poetry add meitner ``` ### Shell and script usage with `uv` You can use this SDK in a Python shell with [uv](https://docs.astral.sh/uv/) and the `uvx` command that comes with it like so: ```shell uvx --from meitner python ``` It's also possible to write a standalone Python script without needing to set up a whole project like so: ```python #!/usr/bin/env -S uv run --script # /// script # requires-python = ">=3.10" # dependencies = [ # "meitner", # ] # /// from meitner import Meitner sdk = Meitner( # SDK arguments ) # Rest of script here... ``` Once that is saved to a file, you can run it with `uv run script.py` where `script.py` can be replaced with the actual file name. <!-- End SDK Installation [installation] --> <!-- Start IDE Support [idesupport] --> ## IDE Support ### PyCharm Generally, the SDK will work well with most IDEs out of the box. However, when using PyCharm, you can enjoy much better integration with Pydantic by installing an additional plugin. - [PyCharm Pydantic Plugin](https://docs.pydantic.dev/latest/integrations/pycharm/) <!-- End IDE Support [idesupport] --> <!-- Start SDK Example Usage [usage] --> ## SDK Example Usage ### Example ```python # Synchronous Example from meitner import Meitner, models import os with Meitner( security=models.Security( client_credentials=os.getenv("MEITNER_CLIENT_CREDENTIALS", ""), client_secret=os.getenv("MEITNER_CLIENT_SECRET", ""), ), ) as m_client: res = m_client.schools.list(limit=1, offset=0) while res is not None: # Handle items res = res.next() ``` </br> The same SDK client can also be used to make asynchronous requests by importing asyncio. 
```python # Asynchronous Example import asyncio from meitner import Meitner, models import os async def main(): async with Meitner( security=models.Security( client_credentials=os.getenv("MEITNER_CLIENT_CREDENTIALS", ""), client_secret=os.getenv("MEITNER_CLIENT_SECRET", ""), ), ) as m_client: res = await m_client.schools.list_async(limit=1, offset=0) while res is not None: # Handle items res = res.next() asyncio.run(main()) ``` <!-- End SDK Example Usage [usage] --> <!-- Start Authentication [security] --> ## Authentication ### Per-Client Security Schemes This SDK supports the following security schemes globally: | Name | Type | Scheme | Environment Variable | | -------------------- | ------ | ------- | ---------------------------- | | `client_credentials` | apiKey | API key | `MEITNER_CLIENT_CREDENTIALS` | | `client_secret` | apiKey | API key | `MEITNER_CLIENT_SECRET` | You can set the security parameters through the `security` optional parameter when initializing the SDK client instance. The selected scheme will be used by default to authenticate with the API for all operations that support it. 
For example: ```python from meitner import Meitner, models import os with Meitner( security=models.Security( client_credentials=os.getenv("MEITNER_CLIENT_CREDENTIALS", ""), client_secret=os.getenv("MEITNER_CLIENT_SECRET", ""), ), ) as m_client: res = m_client.schools.list(limit=1, offset=0) while res is not None: # Handle items res = res.next() ``` <!-- End Authentication [security] --> <!-- Start Available Resources and Operations [operations] --> ## Available Resources and Operations <details open> <summary>Available methods</summary> ### [AuditEvents](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/auditevents/README.md) * [list](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/auditevents/README.md#list) - List AuditEvents * [search](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/auditevents/README.md#search) - Search AuditEvents * [get](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/auditevents/README.md#get) - Get an AuditEvent ### [EmployeePlacements](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/employeeplacements/README.md) * [list](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/employeeplacements/README.md#list) - List EmployeePlacements * [create](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/employeeplacements/README.md#create) - Create a new EmployeePlacement * [search](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/employeeplacements/README.md#search) - Search EmployeePlacements * [get](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/employeeplacements/README.md#get) - Get an EmployeePlacement * [delete](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/employeeplacements/README.md#delete) - Delete an EmployeePlacement *
[update](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/employeeplacements/README.md#update) - Update an EmployeePlacement ### [Employees](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/employees/README.md) * [list](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/employees/README.md#list) - List Employees * [create](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/employees/README.md#create) - Create a new Employee * [search](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/employees/README.md#search) - Search Employees * [get](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/employees/README.md#get) - Get an Employee * [delete](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/employees/README.md#delete) - Delete an Employee * [update](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/employees/README.md#update) - Update an Employee ### [Groups](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/groups/README.md) * [list](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/groups/README.md#list) - List Groups * [create](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/groups/README.md#create) - Create a new Group * [search](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/groups/README.md#search) - Search Groups * [get](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/groups/README.md#get) - Get a Group * [delete](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/groups/README.md#delete) - Delete a Group * [update](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/groups/README.md#update) - Update a Group ### [Guardians](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/guardians/README.md) *
[list](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/guardians/README.md#list) - List Guardians * [create](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/guardians/README.md#create) - Create a new Guardian * [search](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/guardians/README.md#search) - Search Guardians * [get](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/guardians/README.md#get) - Get a Guardian * [delete](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/guardians/README.md#delete) - Delete a Guardian * [update](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/guardians/README.md#update) - Update a Guardian ### [Schools](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/schools/README.md) * [list](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/schools/README.md#list) - List Schools * [create](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/schools/README.md#create) - Create a new School * [search](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/schools/README.md#search) - Search Schools * [get](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/schools/README.md#get) - Get a School * [update](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/schools/README.md#update) - Update a School ### [StudentPlacements](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/studentplacements/README.md) * [list](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/studentplacements/README.md#list) - List StudentPlacements * [create](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/studentplacements/README.md#create) - Create a new StudentPlacement * 
[search](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/studentplacements/README.md#search) - Search StudentPlacements * [get](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/studentplacements/README.md#get) - Get a StudentPlacement * [delete](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/studentplacements/README.md#delete) - Delete a StudentPlacement * [update](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/studentplacements/README.md#update) - Update a StudentPlacement * [archive](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/studentplacements/README.md#archive) - Archive a student placement * [restore](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/studentplacements/README.md#restore) - Restore an archived student placement ### [Students](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/students/README.md) * [list](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/students/README.md#list) - List Students * [create](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/students/README.md#create) - Create a new Student * [search](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/students/README.md#search) - Search Students * [get](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/students/README.md#get) - Get a Student * [delete](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/students/README.md#delete) - Delete a Student * [update](https://github.com/meitner-se/api-client-python/blob/master/docs/sdks/students/README.md#update) - Update a Student </details> <!-- End Available Resources and Operations [operations] --> <!-- Start Pagination [pagination] --> ## Pagination Some of the endpoints in this SDK support pagination. 
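Pagination here is offset-based: each response page exposes a method that yields the next page until none remain. The pattern can be illustrated with a self-contained sketch in plain Python — `FakePage` and `DATA` are hypothetical stand-ins for illustration only, not part of the Meitner SDK:

```python
# Toy illustration of offset pagination: each page slices a window of rows
# and can fetch the following window until the data runs out.
# FakePage and DATA are hypothetical stand-ins, not the SDK's page objects.

DATA = list(range(7))  # pretend these are server-side rows
LIMIT = 3              # page size

class FakePage:
    def __init__(self, offset):
        self.items = DATA[offset:offset + LIMIT]
        self._next_offset = offset + LIMIT

    def next(self):
        # No rows left -> signal the end of pagination with None
        if self._next_offset >= len(DATA):
            return None
        return FakePage(self._next_offset)

collected = []
page = FakePage(0)
while page is not None:
    collected.extend(page.items)
    page = page.next()

print(collected)  # [0, 1, 2, 3, 4, 5, 6]
```

The `while page is not None: ... page = page.next()` loop is the same shape as the loops used throughout this README.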
To use pagination, you make your SDK calls as usual, but the returned response object will have a `next()` method that can be called to pull down the next group of results. If the return value of `next()` is `None`, then there are no more pages to be fetched. Here's an example of one such pagination call: ```python from meitner import Meitner, models import os with Meitner( security=models.Security( client_credentials=os.getenv("MEITNER_CLIENT_CREDENTIALS", ""), client_secret=os.getenv("MEITNER_CLIENT_SECRET", ""), ), ) as m_client: res = m_client.schools.list(limit=1, offset=0) while res is not None: # Handle items res = res.next() ``` <!-- End Pagination [pagination] --> <!-- Start Retries [retries] --> ## Retries Some of the endpoints in this SDK support retries. If you use the SDK without any configuration, it will fall back to the default retry strategy provided by the API. However, the default retry strategy can be overridden on a per-operation basis, or across the entire SDK. To change the default retry strategy for a single API call, simply provide a `RetryConfig` object to the call: ```python from meitner import Meitner, models from meitner.utils import BackoffStrategy, RetryConfig import os with Meitner( security=models.Security( client_credentials=os.getenv("MEITNER_CLIENT_CREDENTIALS", ""), client_secret=os.getenv("MEITNER_CLIENT_SECRET", ""), ), ) as m_client: res = m_client.schools.list(limit=1, offset=0, retries=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False)) while res is not None: # Handle items res = res.next() ``` If you'd like to override the default retry strategy for all operations that support retries, you can use the `retry_config` optional parameter when initializing the SDK: ```python from meitner import Meitner, models from meitner.utils import BackoffStrategy, RetryConfig import os with Meitner( retry_config=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False), security=models.Security( 
client_credentials=os.getenv("MEITNER_CLIENT_CREDENTIALS", ""), client_secret=os.getenv("MEITNER_CLIENT_SECRET", ""), ), ) as m_client: res = m_client.schools.list(limit=1, offset=0) while res is not None: # Handle items res = res.next() ``` <!-- End Retries [retries] --> <!-- Start Error Handling [errors] --> ## Error Handling [`MeitnerError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/meitnererror.py) is the base class for all HTTP error responses. It has the following properties: | Property | Type | Description | | ------------------ | ---------------- | --------------------------------------------------------------------------------------- | | `err.message` | `str` | Error message | | `err.status_code` | `int` | HTTP response status code eg `404` | | `err.headers` | `httpx.Headers` | HTTP response headers | | `err.body` | `str` | HTTP body. Can be empty string if no body is returned. | | `err.raw_response` | `httpx.Response` | Raw HTTP response | | `err.data` | | Optional. Some errors may contain structured data. [See Error Classes](https://github.com/meitner-se/api-client-python/blob/master/#error-classes). 
| ### Example ```python from meitner import Meitner, errors, models import os with Meitner( security=models.Security( client_credentials=os.getenv("MEITNER_CLIENT_CREDENTIALS", ""), client_secret=os.getenv("MEITNER_CLIENT_SECRET", ""), ), ) as m_client: res = None try: res = m_client.schools.list(limit=1, offset=0) while res is not None: # Handle items res = res.next() except errors.MeitnerError as e: # The base class for HTTP error responses print(e.message) print(e.status_code) print(e.body) print(e.headers) print(e.raw_response) # Depending on the method different errors may be thrown if isinstance(e, errors.Error400ResponseBody): print(e.data.error) # models.Error400ResponseBodyError ``` ### Error Classes **Primary errors:** * [`MeitnerError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/meitnererror.py): The base class for HTTP error responses. * [`Error400ResponseBody`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/error400responsebody.py): Bad Request - The request was malformed or contained invalid parameters. Status code `400`. * [`Error401ResponseBody`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/error401responsebody.py): Unauthorized - The request is missing valid authentication credentials. Status code `401`. * [`Error403ResponseBody`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/error403responsebody.py): Forbidden - Request is authenticated, but the user is not allowed to perform the operation. Status code `403`. * [`Error404ResponseBody`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/error404responsebody.py): Not Found - The requested resource does not exist. Status code `404`. * [`Error409ResponseBody`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/error409responsebody.py): Conflict - The request could not be completed due to a conflict. 
Status code `409`. * [`Error429ResponseBody`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/error429responsebody.py): Too Many Requests - When the rate limit has been exceeded. Status code `429`. * [`Error500ResponseBody`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/error500responsebody.py): Internal Server Error - An unexpected server error occurred. Status code `500`. <details><summary>Less common errors (27)</summary> <br /> **Network errors:** * [`httpx.RequestError`](https://www.python-httpx.org/exceptions/#httpx.RequestError): Base class for request errors. * [`httpx.ConnectError`](https://www.python-httpx.org/exceptions/#httpx.ConnectError): HTTP client was unable to make a request to a server. * [`httpx.TimeoutException`](https://www.python-httpx.org/exceptions/#httpx.TimeoutException): HTTP request timed out. **Inherit from [`MeitnerError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/meitnererror.py)**: * [`SchoolCreate422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/schoolcreate422responsebodyerror.py): Validation error for School Create operation - request data failed validation. Status code `422`. Applicable to 1 of 46 methods.* * [`SchoolSearch422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/schoolsearch422responsebodyerror.py): Validation error for School Search operation - request data failed validation. Status code `422`. Applicable to 1 of 46 methods.* * [`SchoolUpdate422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/schoolupdate422responsebodyerror.py): Validation error for School Update operation - request data failed validation. Status code `422`. 
Applicable to 1 of 46 methods.* * [`GroupCreate422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/groupcreate422responsebodyerror.py): Validation error for Group Create operation - request data failed validation. Status code `422`. Applicable to 1 of 46 methods.* * [`GroupSearch422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/groupsearch422responsebodyerror.py): Validation error for Group Search operation - request data failed validation. Status code `422`. Applicable to 1 of 46 methods.* * [`GroupUpdate422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/groupupdate422responsebodyerror.py): Validation error for Group Update operation - request data failed validation. Status code `422`. Applicable to 1 of 46 methods.* * [`EmployeeCreate422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/employeecreate422responsebodyerror.py): Validation error for Employee Create operation - request data failed validation. Status code `422`. Applicable to 1 of 46 methods.* * [`EmployeeSearch422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/employeesearch422responsebodyerror.py): Validation error for Employee Search operation - request data failed validation. Status code `422`. Applicable to 1 of 46 methods.* * [`EmployeeUpdate422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/employeeupdate422responsebodyerror.py): Validation error for Employee Update operation - request data failed validation. Status code `422`. 
Applicable to 1 of 46 methods.* * [`EmployeePlacementCreate422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/employeeplacementcreate422responsebodyerror.py): Validation error for EmployeePlacement Create operation - request data failed validation. Status code `422`. Applicable to 1 of 46 methods.* * [`EmployeePlacementSearch422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/employeeplacementsearch422responsebodyerror.py): Validation error for EmployeePlacement Search operation - request data failed validation. Status code `422`. Applicable to 1 of 46 methods.* * [`EmployeePlacementUpdate422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/employeeplacementupdate422responsebodyerror.py): Validation error for EmployeePlacement Update operation - request data failed validation. Status code `422`. Applicable to 1 of 46 methods.* * [`GuardianCreate422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/guardiancreate422responsebodyerror.py): Validation error for Guardian Create operation - request data failed validation. Status code `422`. Applicable to 1 of 46 methods.* * [`GuardianSearch422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/guardiansearch422responsebodyerror.py): Validation error for Guardian Search operation - request data failed validation. Status code `422`. Applicable to 1 of 46 methods.* * [`GuardianUpdate422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/guardianupdate422responsebodyerror.py): Validation error for Guardian Update operation - request data failed validation. Status code `422`. 
Applicable to 1 of 46 methods.* * [`StudentCreate422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/studentcreate422responsebodyerror.py): Validation error for Student Create operation - request data failed validation. Status code `422`. Applicable to 1 of 46 methods.* * [`StudentSearch422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/studentsearch422responsebodyerror.py): Validation error for Student Search operation - request data failed validation. Status code `422`. Applicable to 1 of 46 methods.* * [`StudentUpdate422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/studentupdate422responsebodyerror.py): Validation error for Student Update operation - request data failed validation. Status code `422`. Applicable to 1 of 46 methods.* * [`StudentPlacementCreate422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/studentplacementcreate422responsebodyerror.py): Validation error for StudentPlacement Create operation - request data failed validation. Status code `422`. Applicable to 1 of 46 methods.* * [`StudentPlacementSearch422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/studentplacementsearch422responsebodyerror.py): Validation error for StudentPlacement Search operation - request data failed validation. Status code `422`. Applicable to 1 of 46 methods.* * [`StudentPlacementUpdate422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/studentplacementupdate422responsebodyerror.py): Validation error for StudentPlacement Update operation - request data failed validation. Status code `422`. 
Applicable to 1 of 46 methods.* * [`AuditEventSearch422ResponseBodyError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/auditeventsearch422responsebodyerror.py): Validation error for AuditEvent Search operation - request data failed validation. Status code `422`. Applicable to 1 of 46 methods.* * [`ResponseValidationError`](https://github.com/meitner-se/api-client-python/blob/master/./src/meitner/errors/responsevalidationerror.py): Type mismatch between the response data and the expected Pydantic model. Provides access to the Pydantic validation error via the `cause` attribute. </details> \* Check [the method documentation](https://github.com/meitner-se/api-client-python/blob/master/#available-resources-and-operations) to see if the error is applicable. <!-- End Error Handling [errors] --> <!-- Start Server Selection [server] --> ## Server Selection ### Select Server by Name You can override the default server globally by passing a server name to the `server: str` optional parameter when initializing the SDK client instance. The selected server will then be used as the default on the operations that use it. 
This table lists the names associated with the available servers: | Name | Server | Description | | ------------ | --------------------------------------------- | ----------------------------------------------- | | `production` | `https://api.meitner.se/directory/v1` | Server to use in production | | `staging` | `https://api.staging.meitner.se/directory/v1` | Server to use when building and testing the API | #### Example ```python from meitner import Meitner, models import os with Meitner( server="production", security=models.Security( client_credentials=os.getenv("MEITNER_CLIENT_CREDENTIALS", ""), client_secret=os.getenv("MEITNER_CLIENT_SECRET", ""), ), ) as m_client: res = m_client.schools.list(limit=1, offset=0) while res is not None: # Handle items res = res.next() ``` ### Override Server URL Per-Client The default server can also be overridden globally by passing a URL to the `server_url: str` optional parameter when initializing the SDK client instance. For example: ```python from meitner import Meitner, models import os with Meitner( server_url="https://api.meitner.se/directory/v1", security=models.Security( client_credentials=os.getenv("MEITNER_CLIENT_CREDENTIALS", ""), client_secret=os.getenv("MEITNER_CLIENT_SECRET", ""), ), ) as m_client: res = m_client.schools.list(limit=1, offset=0) while res is not None: # Handle items res = res.next() ``` <!-- End Server Selection [server] --> <!-- Start Custom HTTP Client [http-client] --> ## Custom HTTP Client The Python SDK makes API calls using the [httpx](https://www.python-httpx.org/) HTTP library. In order to provide a convenient way to configure timeouts, cookies, proxies, custom headers, and other low-level configuration, you can initialize the SDK client with your own HTTP client instance. 
Depending on whether you are using the sync or async version of the SDK, you can pass an instance of `HttpClient` or `AsyncHttpClient` respectively, which are Protocols ensuring that the client has the necessary methods to make API calls. This allows you to wrap the client with your own custom logic, such as adding custom headers, logging, or error handling, or you can just pass an instance of `httpx.Client` or `httpx.AsyncClient` directly. For example, you could specify a header for every request that this SDK makes as follows: ```python from meitner import Meitner import httpx http_client = httpx.Client(headers={"x-custom-header": "someValue"}) s = Meitner(client=http_client) ``` or you could wrap the client with your own custom logic: ```python from typing import Any, Optional, Union from meitner import Meitner from meitner.httpclient import AsyncHttpClient import httpx class CustomClient(AsyncHttpClient): client: AsyncHttpClient def __init__(self, client: AsyncHttpClient): self.client = client async def send( self, request: httpx.Request, *, stream: bool = False, auth: Union[ httpx._types.AuthTypes, httpx._client.UseClientDefault, None ] = httpx.USE_CLIENT_DEFAULT, follow_redirects: Union[ bool, httpx._client.UseClientDefault ] = httpx.USE_CLIENT_DEFAULT, ) -> httpx.Response: request.headers["Client-Level-Header"] = "added by client" return await self.client.send( request, stream=stream, auth=auth, follow_redirects=follow_redirects ) def build_request( self, method: str, url: httpx._types.URLTypes, *, content: Optional[httpx._types.RequestContent] = None, data: Optional[httpx._types.RequestData] = None, files: Optional[httpx._types.RequestFiles] = None, json: Optional[Any] = None, params: Optional[httpx._types.QueryParamTypes] = None, headers: Optional[httpx._types.HeaderTypes] = None, cookies: Optional[httpx._types.CookieTypes] = None, timeout: Union[ httpx._types.TimeoutTypes, httpx._client.UseClientDefault ] = httpx.USE_CLIENT_DEFAULT, extensions: Optional[httpx._types.RequestExtensions] = None, 
) -> httpx.Request: return self.client.build_request( method, url, content=content, data=data, files=files, json=json, params=params, headers=headers, cookies=cookies, timeout=timeout, extensions=extensions, ) s = Meitner(async_client=CustomClient(httpx.AsyncClient())) ``` <!-- End Custom HTTP Client [http-client] --> <!-- Start Resource Management [resource-management] --> ## Resource Management The `Meitner` class implements the context manager protocol and registers a finalizer function to close the underlying sync and async HTTPX clients it uses under the hood. This will close HTTP connections, release memory, and free up other resources held by the SDK. In short-lived Python programs and notebooks that make a few SDK method calls, resource management may not be a concern. However, in longer-lived programs, it is beneficial to create a single SDK instance via a [context manager][context-manager] and reuse it across the application. [context-manager]: https://docs.python.org/3/reference/datamodel.html#context-managers ```python from meitner import Meitner, models import os def main(): with Meitner( security=models.Security( client_credentials=os.getenv("MEITNER_CLIENT_CREDENTIALS", ""), client_secret=os.getenv("MEITNER_CLIENT_SECRET", ""), ), ) as m_client: ...  # Rest of application here # Or when using async: async def amain(): async with Meitner( security=models.Security( client_credentials=os.getenv("MEITNER_CLIENT_CREDENTIALS", ""), client_secret=os.getenv("MEITNER_CLIENT_SECRET", ""), ), ) as m_client: ...  # Rest of application here ``` <!-- End Resource Management [resource-management] --> <!-- Start Debugging [debug] --> ## Debugging You can set up your SDK to emit debug logs for SDK requests and responses. You can pass your own logger class directly into your SDK. 
```python from meitner import Meitner import logging logging.basicConfig(level=logging.DEBUG) s = Meitner(debug_logger=logging.getLogger("meitner")) ``` You can also enable a default debug logger by setting an environment variable `MEITNER_DEBUG` to true. <!-- End Debugging [debug] --> <!-- Placeholder for Future Speakeasy SDK Sections --> # Development ## Maturity This SDK is in beta, and there may be breaking changes between versions without a major version update. Therefore, we recommend pinning usage to a specific package version. This way, you can install the same version each time without breaking changes unless you are intentionally looking for the latest version. ## Contributions While we value open-source contributions to this SDK, this library is generated programmatically. Any manual changes added to internal files will be overwritten on the next generation. We look forward to hearing your feedback. Feel free to open a PR or an issue with a proof of concept and we'll do our best to include it in a future release. ### SDK Created by [Speakeasy](https://www.speakeasy.com/?utm_source=meitner&utm_campaign=python)
text/markdown
Speakeasy
null
null
null
null
null
[]
[]
null
null
>=3.10
[]
[]
[]
[ "httpcore>=1.0.9", "httpx>=0.28.1", "jsonpath-python>=1.0.6", "pydantic>=2.11.2" ]
[]
[]
[]
[ "repository, https://github.com/meitner-se/api-client-python.git" ]
uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"22.04","id":"jammy","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
2026-02-20T10:28:11.230776
meitner-0.5.0.tar.gz
198,805
c0/e3/f627604b55df1db5fd70a50ad736591aedf7e1e1baae88033c8992b86655/meitner-0.5.0.tar.gz
source
sdist
null
false
0dce9d70190535f81c4faf7ee60f7a42
959210f8562f52edf828a5e0b7d14ad5a49bb6d19093149b7be1107f4207e4b9
c0e3f627604b55df1db5fd70a50ad736591aedf7e1e1baae88033c8992b86655
null
[]
217
2.4
nexus-engine
0.6.0
Python DSL for the Nexus Engine
# Nexus Engine The Python client for [Nexus Engine](https://nexus-engine.io) — a Rust-based calculation engine for financial data processing. This package provides a fluent, chainable DSL for building data transformation and calculation pipelines. Define columns from SQL data sources, apply filters, joins, groupings, conditional logic, and hierarchical breakdowns — then send the pipeline to Nexus Engine for execution. > **Note:** Executing pipelines requires access to a running Nexus Engine instance and valid credentials. > Contact your administrator or visit [nexus-engine.io](https://nexus-engine.io) for access. ## Installation ```bash pip install nexus-engine ``` ## Quick Start ```python import nexus as nx # Define source columns from a SQL data view table = nx.TableBuilder([ nx.sql_column("Category", nx.DataType.String, "positions", "asset_category"), nx.sql_column("Currency", nx.DataType.String, "positions", "local_currency"), nx.sql_column("MarketValue", nx.DataType.Float, "positions", "market_value"), ]) # Build a transformation pipeline pipeline = ( table .filter(nx.col("MarketValue") > nx.lit(10000)) .group_by( by=["Category"], aggregations=[("MarketValue", "sum")] ) ) # Execute against a Nexus Engine server result = pipeline.execute( params={"account_id": "ACC001", "date": "2025-01-31"}, api_token="YOUR_API_TOKEN", ) print(result) ``` ## Type Support The package includes full type stubs (`.pyi`) for IDE autocompletion and type checking with mypy or pyright. ## Documentation Full documentation and tutorials are available at [nexus-engine.io](https://nexus-engine.io). ## License Dual-licensed under [MIT](https://opensource.org/licenses/MIT) or [Apache-2.0](https://opensource.org/licenses/Apache-2.0).
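The fluent, chainable style shown in the Quick Start can be sketched with a toy builder in plain Python. `ToyPipeline`, its methods, and the sample data below are hypothetical illustrations of the pattern, not the real nexus API:

```python
# Minimal sketch of a chainable pipeline builder: each method records its
# effect and returns self, so calls compose left to right.
# ToyPipeline is a hypothetical illustration, not part of nexus-engine.

class ToyPipeline:
    def __init__(self, rows):
        self.rows = rows

    def filter(self, pred):
        self.rows = [r for r in self.rows if pred(r)]
        return self  # returning self is what enables chaining

    def group_by_sum(self, key, value):
        totals = {}
        for r in self.rows:
            totals[r[key]] = totals.get(r[key], 0) + r[value]
        self.rows = [{key: k, value: v} for k, v in totals.items()]
        return self

positions = [
    {"Category": "Equity", "MarketValue": 50000.0},
    {"Category": "Bond", "MarketValue": 8000.0},  # filtered out below
    {"Category": "Equity", "MarketValue": 25000.0},
]

result = (
    ToyPipeline(positions)
    .filter(lambda r: r["MarketValue"] > 10000)
    .group_by_sum("Category", "MarketValue")
).rows

print(result)  # [{'Category': 'Equity', 'MarketValue': 75000.0}]
```

In the real DSL the chain builds a description of the pipeline that is shipped to the Rust engine for execution, rather than transforming rows locally as this toy does.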
text/markdown; charset=UTF-8; variant=GFM
Synlynx
null
null
null
MIT OR Apache-2.0
finance, pipeline, dsl, dataframe, etl, nexus-engine, nexus
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Intended Audience :: Financial and Insurance Industry", "License :: OSI Approved :: MIT License", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Rust", "Programming Language :: Python :: Implementation :: CPython", "Topic :: Software Development :: Libraries :: Python Modules", "Topic :: Office/Business :: Financial", "Typing :: Typed" ]
[]
null
null
>=3.9
[]
[]
[]
[]
[]
[]
[]
[ "Homepage, https://nexus-engine.io", "Documentation, https://nexus-engine.synlynx.com" ]
twine/6.2.0 CPython/3.12.3
2026-02-20T10:28:01.891066
nexus_engine-0.6.0-pp311-pypy311_pp73-win_amd64.whl
1,879,536
a0/b9/6635ef2a7be08b936e60994b47d3bac6a0535bfcb8210b27d71b14d7d03a/nexus_engine-0.6.0-pp311-pypy311_pp73-win_amd64.whl
pp311
bdist_wheel
null
false
11b04ba13dcdaddab9ec00f998b97b02
dfcbb5f724056392d0f961dc1347aa71f2e91a4474ce8445f6022f46834d780b
a0b96635ef2a7be08b936e60994b47d3bac6a0535bfcb8210b27d71b14d7d03a
null
[]
380
2.4
asknews
0.13.16
Python SDK for AskNews
# AskNews Python SDK ![Static Badge](https://img.shields.io/badge/python-3.8%20%7C%203.9%20%7C%203.10%20%7C%203.11%20%7C%203.12%20%7C%203.13-blue?style=flat-square) Python SDK for the AskNews API. ## Installation ```bash pip install asknews ``` ## Usage ```python from asknews_sdk import AskNewsSDK ask = AskNewsSDK( api_key="<YOUR API KEY>", scopes=["news", "chat", "stories", "analytics"] ) query = "Effect of fed policy on tech sector" # prompt-optimized string ready to go for any LLM: news_context = ask.news.search_news(query).as_string ``` And you will have a prompt-optimized string ready to go for any LLM. The API doesn't stop there; explore a wide range of endpoints: - /stories, high-level event tracking and state-of-the-art article clustering - /forecasts, industry leading forecasting on any real-time event - /analytics, time-series data on finance and politics - /deepnews, a deep research agent that can explore the news knowledge graph, X, Reddit, Google, Wikipedia and more to build forecasts, reports, analytics, and anything else your system may need. - /graph, build any news knowledge graph imaginable from the largest news graph on the planet - /websearch, search the web and get back an LLM distillation of all the relevant web pages Find full details at the [AskNews API documentation](https://docs.asknews.app). ## Support Join our [Discord](https://discord.gg/2Yw66XXEhY) to see what other people are building, and to get support with your projects.
text/markdown
null
Emergent Methods <contact@emergentmethods.ai>
null
null
MIT
null
[ "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13" ]
[]
null
null
>=3.8
[]
[]
[]
[ "asgiref>=3.7.2", "crontab>=1.0.1", "cryptography<46.0.5,>=40.0.0", "httpx<0.29.0,>=0.27.2", "orjson>=3.9.10", "pydantic[email]>=2.10.4" ]
[]
[]
[]
[ "repository, https://github.com/emergentmethods/asknews-python-sdk" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:28:01.738432
asknews-0.13.16-py3-none-any.whl
48,525
d1/8a/a957f70cf1f5abfa668068ee65d98301262abba1c2aee211c73be423779f/asknews-0.13.16-py3-none-any.whl
py3
bdist_wheel
null
false
b479be4378d385cc405af6dcb02e14c0
a0b3e81dc78ef60a34874429179c8ee930d48a1ff2995638a349b700c2fb4d85
d18aa957f70cf1f5abfa668068ee65d98301262abba1c2aee211c73be423779f
null
[ "LICENSE" ]
1,106
2.4
gsd-lean
0.9.0
Lightweight meta-prompting and spec-driven development plugin for Claude Code
# GSD-Lean > A lightweight reimplementation of [Get-Shit-Done](https://github.com/glittercowboy/get-shit-done) — a complement to the original GSD system ## What is GSD? [GSD](https://github.com/glittercowboy/get-shit-done) is a meta-prompting, context engineering, and spec-driven development system for Claude Code. It solves context rot by breaking projects into phases with fresh contexts per task, using a discuss → plan → execute → verify loop. Written in JavaScript. ## What is GSD-Lean? A Python reimplementation of GSD's core ideas as a lightweight plugin. Rather than the full system, GSD-Lean focuses on being minimal and easy to extend. **Status:** Early development. ## How It Works GSD-Lean guides development through a 6-phase workflow: ```mermaid stateDiagram-v2 [*] --> init : gsd-lean init init --> discuss : project initialized discuss --> plan : requirements populated plan --> discuss : plan rejected plan --> execute : plan verified execute --> plan : revise plan execute --> verify verify --> execute : tasks remaining verify --> complete : all tasks done complete --> discuss : new cycle complete --> init : re-initialize ``` | Phase | What Happens | |-------|-------------| | **init** | Project scaffolded; tech stack auto-detected; PROJECT.md, CONTEXT.md, STATE.md created | | **discuss** | Research subagent investigates codebase + web; user preferences probed; REQUIREMENTS.md and DECISIONS.md populated | | **plan** | Requirements decomposed into tasks (T-NNN) in PLAN.md; plan-review subagent verifies completeness | | **execute** | Tasks implemented one by one, status tracked in PLAN.md | | **verify** | Lint, typecheck, tests run against task verification criteria | | **complete** | Summary generated, cycle can restart | Each phase is driven by a skill (`/init`, `/discuss`, `/plan`, `/execute`, `/verify`, `/complete`) that calls CLI commands (`gsd-lean init`, `gsd-lean transition`, `gsd-lean plan-status`) to manage state. 
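The allowed phase changes in the state diagram above can be captured as a simple lookup table. The sketch below is a plain-Python illustration of how a command like `gsd-lean transition` might validate a move between phases — `TRANSITIONS` and `can_transition` are hypothetical, not GSD-Lean's actual implementation:

```python
# Phase transition table transcribed from the state diagram above.
# can_transition is a hypothetical validation helper for illustration.

TRANSITIONS = {
    "init": {"discuss"},
    "discuss": {"plan"},
    "plan": {"discuss", "execute"},    # plan rejected -> back to discuss
    "execute": {"plan", "verify"},     # revise the plan, or move on to verify
    "verify": {"execute", "complete"}, # tasks remaining -> execute again
    "complete": {"discuss", "init"},   # start a new cycle or re-initialize
}

def can_transition(current: str, target: str) -> bool:
    """Return True if the workflow permits moving from current to target."""
    return target in TRANSITIONS.get(current, set())

print(can_transition("plan", "execute"))    # True
print(can_transition("discuss", "verify"))  # False: phases run in order
```

Encoding the diagram as data keeps the precondition checks in one place, so a CLI can reject out-of-order phase invocations with a clear error.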
See [PROJECT_KNOWLEDGE.md](./PROJECT_KNOWLEDGE.md) for detailed architecture — transitions, preconditions, subagents, and CLI reference. ## Quick Start **0. Install plugin** Add the Bifurcate Loops marketplace, then install GSD-Lean: ``` /plugin marketplace add Bifurcate-Loops/bifurcate-plugins /plugin install gsd-lean@bifurcate-plugins ``` **1. Initialize project** ``` /gsd-lean:init ``` Scaffolds the project. Claude asks clarifying questions, then writes PROJECT.md and CONTEXT.md. Sections "Constraints" and "Notes" in PROJECT.md should be filled with useful information as you work through your development cycles. This will be picked up by GSD-Lean phases `/gsd-lean:discuss` and `/gsd-lean:plan`. Example: ````markdown ## Constraints - (none yet) ## Notes - Use skills `/ai-sdk` (.claude/skills/ai-sdk/SKILL.md) and `/mastra` (.claude/skills/mastra/SKILL.md) when working on agentic features - Use skill `/vercel-react-best-practices` (.claude/skills/vercel-react-best-practices/SKILL.md) when working on React and Next.js applications - If env variables are introduced in a development cycle, remember to include them in `.env.example` ```` **2. Discuss** ``` /gsd-lean:discuss Add user authentication with OAuth ``` Claude explores the codebase and web, probes for preferences, and populates REQUIREMENTS.md and DECISIONS.md. **3. Plan** ``` /gsd-lean:plan ``` Decomposes requirements into structured tasks (T-NNN) in PLAN.md. **4. Execute** ``` /gsd-lean:execute ``` Implements tasks sequentially — one per invocation. If requirements change mid-execution, re-invoke `/plan` to revise the plan while preserving task progress. **5. Verify & Complete** ``` /gsd-lean:verify /gsd-lean:complete ``` `/verify` runs verification against task criteria. `/complete` generates a summary and optionally starts a new cycle. ## Caveats * Run each GSD-Lean phase in a **new Claude Code session**. 
The `/init`, `/discuss`, and `/plan` phases enter Plan Mode; when prompted to accept, select **Yes, auto-accept edits** — do **not** select "Yes, clear context and auto-accept edits (shift+tab)", as that erases internal context and causes GSD-Lean to lose track of its state. * If your project's `CLAUDE.md` contains development workflow instructions (e.g. branching strategy, commit conventions, test-before-push rules), these can conflict with GSD-Lean's phased development cycle. Remove or comment out such instructions before running GSD-Lean.
text/markdown
Bifurcate-Loops
null
null
null
null
claude-code, development, meta-prompting, plugin, spec-driven
[ "Development Status :: 3 - Alpha", "Environment :: Console", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14" ]
[]
null
null
<3.15,>=3.12
[]
[]
[]
[ "click>=8.3.1", "pyyaml>=6.0" ]
[]
[]
[]
[ "Homepage, https://github.com/Bifurcate-Loops/gsd-lean", "Repository, https://github.com/Bifurcate-Loops/gsd-lean", "Issues, https://github.com/Bifurcate-Loops/gsd-lean/issues", "Changelog, https://github.com/Bifurcate-Loops/gsd-lean/blob/main/CHANGELOG.md" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:27:51.641540
gsd_lean-0.9.0.tar.gz
137,098
75/5b/adf1b0d1ff157271c6d5621b5ec127f985c4e13c8f9e835405d7acf4c83c/gsd_lean-0.9.0.tar.gz
source
sdist
null
false
15cd2bcf31bb14a6d0de1ceceb19d89f
d78ad0e56a6fc02d2375aa5d8e5889453700c7d6dd4030ff55e68dea5366075e
755badf1b0d1ff157271c6d5621b5ec127f985c4e13c8f9e835405d7acf4c83c
MIT
[ "LICENSE" ]
225
2.3
async-gym-agents
0.2.4
Async agents for Stable Baselines 3
# Async Gym Agents Wrapper environments and agent injectors to allow for drop-in async training. ```py import gymnasium as gym from functools import partial from stable_baselines3 import TD3 from async_gym_agents.agents.async_agent import get_injected_agent from async_gym_agents.envs.multi_env import IndexableMultiEnv # Create env with 8 parallel envs (Also supports VecEnvs) env = IndexableMultiEnv([partial(gym.make, "Pendulum-v1") for i in range(8)]) # Create the model, injected with async capabilities model = get_injected_agent(TD3)("MlpPolicy", env, use_mp=False) # Train the model model.learn(total_timesteps=10) # Shut down workers model.shutdown() ```
text/markdown
Jonas Peche
jonas.peche@aon.at
null
null
Unlicense
null
[ "License :: OSI Approved", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13" ]
[]
null
null
>=3.8
[]
[]
[]
[ "stable-baselines3[extra]<3.0.0,>=2.3.2", "torchinfo<2.0.0,>=1.8.0" ]
[]
[]
[]
[]
poetry/2.1.3 CPython/3.10.8 Windows/10
2026-02-20T10:26:58.127245
async_gym_agents-0.2.4.tar.gz
12,541
93/a4/c8aea17edf7d63e67f67b4e07adb53c3f3b0262a0af344ad80ec929f5f73/async_gym_agents-0.2.4.tar.gz
source
sdist
null
false
3ec91ee8b6b251e6993d4a5d7a4631ec
044381a5ce7b12e4dd87de6879f6159210a47236c2908f55cc3955aa9a2f49e4
93a4c8aea17edf7d63e67f67b4e07adb53c3f3b0262a0af344ad80ec929f5f73
null
[]
231
2.4
pbtk
1.0.7
A toolset for reverse engineering and fuzzing Protobuf-based apps
# pbtk - Reverse engineering Protobuf apps **[Protobuf](https://developers.google.com/protocol-buffers/) is a serialization format** developed by Google and used in an increasing number of Android, web, desktop and other applications. It consists of a **language for declaring data structures**, which is then compiled to code or another kind of structure depending on the target implementation. pbtk (*Protobuf toolkit*) is a full-fledged set of scripts, accessible through a unified GUI, that provides two main features: - **Extracting Protobuf structures from programs**, converting them back into readable *.proto*s, supporting various implementations: - All the main Java runtimes (base, Lite, Nano, Micro, J2ME), with full Proguard support, - Binaries containing embedded reflection metadata (typically C++, sometimes Java and most other bindings), - Web applications using the JsProtoUrl runtime. - **Editing, replaying and fuzzing data** sent to Protobuf network endpoints, through a handy graphical interface that allows you to live-edit the fields of a Protobuf message and view the result. ![The pbtk editor GUI](https://i.imgur.com/7w6ABqy.png) ## Installation PBTK requires Python ≥ 3.5, PySide 6, Python-Protobuf 3, and a handful of executable programs (chromium, jad, dex2jar...) for running extractor scripts. Archlinux users can install directly through the [package](https://aur.archlinux.org/packages/pbtk-git/): ``` $ yay -S pbtk-git $ pbtk ``` On most other distributions, you'll want to run it directly: ```sh # For Ubuntu/Debian testing derivatives: $ sudo apt install python3-pip git openjdk-8-jre python3-qtpy-pyside6 # Then, using UV: $ sudo snap install astral-uv $ uv tool install pbtk $ pbtk # Or using pipx: $ sudo apt install pipx $ pipx install pbtk $ pbtk ``` Windows is also supported (with the same modules required). Once you run the GUI, it should warn you about what you are missing depending on what you try to do. 
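Independently of the tooling above, it helps to have a feel for what serialized Protobuf looks like on the wire, since pbtk ultimately works on such raw bytes. A minimal hand-decoder for the standard wire format's tag-plus-varint encoding (a generic sketch of the documented wire format, not part of pbtk):

```python
def read_varint(data: bytes, pos: int):
    """Decode one base-128 varint starting at `pos`; return (value, new_pos)."""
    result = shift = 0
    while True:
        byte = data[pos]
        pos += 1
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:  # high bit clear marks the last byte
            return result, pos
        shift += 7

# `08 96 01` is a message whose field number 1 (wire type 0, varint) is 150.
raw = bytes.fromhex("089601")
tag, pos = read_varint(raw, 0)
field_number, wire_type = tag >> 3, tag & 0x07
value, pos = read_varint(raw, pos)
print(field_number, wire_type, value)  # 1 0 150
```

This is exactly the kind of hex-encoded sample you can later paste into pbtk's endpoint editor.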
## Command line usage (installing through package manager) The GUI can be launched through the main script: pbtk The following scripts can also be used standalone, without a GUI: pbtk-jar-extract [-h] input_file [output_dir] pbtk-from-binary [-h] input_file [output_dir] pbtk-web-extract [-h] input_url [output_dir] ## Command line usage (local) The GUI can be launched through the main script: ./gui.py The following scripts can also be used standalone, without a GUI: ./src/extractors/jar_extract.py [-h] input_file [output_dir] ./src/extractors/from_binary.py [-h] input_file [output_dir] ./src/extractors/web_extract.py [-h] input_url [output_dir] ## Typical workflow Let's say you're reverse engineering an Android application. You explored the application a bit with your favorite decompiler, and figured out that it transports Protobuf as POST data over HTTPS in a typical way. You open PBTK and are greeted in a meaningful manner: ![The welcome screen](https://i.imgur.com/oVsypWN.png) The first step is getting your .protos into text format. If you're targeting an Android app, dropping in an APK and waiting should do the magic work! (unless it's a really exotic implementation) ![Done screen](https://i.imgur.com/uC9dnWV.png) This being done, you jump to `~/.pbtk/protos/<your APK name>` (either through the command line, or the button at the bottom of the welcome screen to open your file browser, whichever you prefer). All the app's .protos are indeed here. Back in your decompiler, you stumbled upon the class that constructs data sent to the HTTPS endpoint that interests you. It serializes the Protobuf message by calling a class made of generated code. ![Your decompiler](https://i.imgur.com/x9YAChW.png) This latter class should have a perfect match inside your .protos directory (i.e. `com.foo.bar.a.b` will match `com/foo/bar/a/b.proto`). Either way, grepping its name should enable you to reference it. 
That's great: the next thing is to go to **Step 2**, select your desired input .proto, and fill in some information about your endpoint. ![Endpoint creation form](https://i.imgur.com/jhu68pG.png) You may also provide sample raw Protobuf data that was sent to this endpoint (captured through mitmproxy or Wireshark), pasted in hex-encoded form. **Step 3** is about the fun part of clicking buttons and seeing what happens! You have a tree view representing every field in the Protobuf structure (repeated fields are suffixed by "+", required fields don't have checkboxes). ![The field tree view](https://i.imgur.com/2lVmGoG.png) Just hover over a field to give it focus. If the field is an integer type, use the mouse wheel to increment/decrement it. Enum information appears on hover too. Here it is! You can determine the meaning of every field with that. If you extracted .protos out of minified code, you can rename fields according to what you notice they mean, by clicking their names. Happy reversing! 👌 🎉 ## Local data storage PBTK stores extracted .proto information into `~/.pbtk/protos/` (or `%APPDATA%\pbtk\protos` on Windows). You can move in, move out, rename, edit or erase data from this directory directly through your regular file browser and text editor; this is the expected way to do it and won't interfere with PBTK. HTTP-based endpoints are stored into `~/.pbtk/endpoints/` as JSON objects. These objects are arrays of pairs of request/response information, which looks like this: ```json [{ "request": { "transport": "pburl", "proto": "www.google.com/VectorTown.proto", "url": "https://www.google.com/VectorTown", "pb_param": "pb", "samples": [{ "pb": "!....", "hl": "fr" }] }, "response": { "format": "other" } }] ``` ## Source code structure PBTK uses two kinds of pluggable modules internally: extractors, and transports. * An **extractor** supports extracting .proto structures from a target Protobuf implementation or platform. 
Extractors are defined in `src/extractors/*.py`. They are defined as a function preceded by a decorator, like this: ```python @register_extractor(name = 'my_extractor', desc = 'Extract Protobuf structures from Foobar code (*.foo, *.bar)', depends={'binaries': ['foobar-decompiler']}) def my_extractor(path): # Load contents of the `path` input file and do your stuff... # Then, yield extracted .protos using a generator: for i in do_your_extraction_work(): yield proto_name + '.proto', proto_contents # Other kinds of information can be yielded, such as endpoint information or progress to display. ``` * A **transport** supports a way of deserializing, reserializing and sending Protobuf data over the network. For example, the most commonly used transport is raw POST data over HTTP. Transports are defined in `src/utils/transports.py`. They are defined as a class preceded by a decorator, like this: ```python @register_transport( name = 'my_transport', desc = 'Protobuf as raw POST data', ui_data_form = 'hex strings' ) class MyTransport(): def __init__(self, pb_param, url): self.pb_param = pb_param self.url = url def serialize_sample(self, sample): # We got a sample of input data from the user. # Verify that it is valid in the form described through the "ui_data_form" parameter; fail with an exception or return False otherwise. # Optionally modify this data prior to returning it. bytes.fromhex(sample) return sample def load_sample(self, sample, pb_msg): # Parse input data into the provided Protobuf object. pb_msg.ParseFromString(bytes.fromhex(sample)) def perform_request(self, pb_data, tab_data): # Perform a request using the provided URL and Protobuf object, and optionally other transport-specific side data. return post(self.url, pb_data.SerializeToString(), headers=USER_AGENT) ``` ## Forthcoming improvements The following could be coming for further releases: * Finishing the automatic fuzzing part. * Support for extracting extensions out of Java code. * Support for the JSPB (main JavaScript) runtime. 
* If there's any other platform you wish to see supported, just drop an issue and I'll look at it. I've tried to do my best to produce thoroughly readable and commented code (except for parts that are mostly self-describing, like connecting GUI signals) for most modules, so you can contribute. ## Licensing pbtk is released under the [GNU GPL](https://www.gnu.org/licenses/gpl-3.0.html) license (I, hereby, etc.). There's no formalized rule for the letter case of the project name; the rule is just about following your heart ❤
text/markdown
Marin Moulinier
null
null
null
null
null
[]
[]
null
null
>=3.5
[]
[]
[]
[ "protobuf", "requests", "websocket-client", "PySide6" ]
[]
[]
[]
[ "Homepage, https://github.com/marin-m/pbtk", "Issues, https://github.com/marin-m/pbtk/issues", "Changelog, https://github.com/marin-m/pbtk/releases" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:26:56.812750
pbtk-1.0.7.tar.gz
8,293,658
7c/75/525c402cbf221034a26db747e63f612276bae2e19eb7e4fd41596135af1f/pbtk-1.0.7.tar.gz
source
sdist
null
false
2feb49ec424f65a8019ca6a69245d63f
45eea6f96c2c5f7ace904ea13d06be9fdf7bc66d1c5a995c94e11c5cf7a5337f
7c75525c402cbf221034a26db747e63f612276bae2e19eb7e4fd41596135af1f
null
[ "LICENSE" ]
217
2.4
reviewloop
0.3.6
Autonomous review loop - automates the review-fix-push cycle
<div align="center"> <img src="https://raw.githubusercontent.com/fabianboth/autonomous-review-loop/main/assets/review_loop.webp" alt="Autonomous Review Loop" width="450"/> <h1>Autonomous Review Loop</h1> <h3><em>Automate the review-fix-push cycle.</em></h3> </div> <p align="center"> <strong>Pair your review bot with your coding agent: <code>reviewloop</code> delegates the entire review-fix-push cycle so you only step in when human judgment is needed.</strong> </p> <p align="center"> <a href="https://github.com/fabianboth/autonomous-review-loop/actions/workflows/ci.yml"><img src="https://github.com/fabianboth/autonomous-review-loop/actions/workflows/ci.yml/badge.svg" alt="CI"/></a> <a href="https://pypi.org/project/reviewloop/"><img src="https://img.shields.io/pypi/v/reviewloop" alt="PyPI version"/></a> <a href="https://github.com/fabianboth/autonomous-review-loop/blob/main/LICENSE"><img src="https://img.shields.io/github/license/fabianboth/autonomous-review-loop" alt="License"/></a> </p> --- ## Installation ```bash uv tool install reviewloop ``` ```bash reviewloop init ``` ## Getting Started ### Claude Code Select **Claude Code** during `reviewloop init`. Then run this within your Claude Code session: ```text /reviewloop ``` ### Any Coding Agent Select **Script based** during `reviewloop init`. This creates standalone scripts and a prompt file at `scripts/reviewloop/reviewPrompt.txt`. Feed it to any coding agent (Cursor, Windsurf, etc.) with `@scripts/reviewloop/reviewPrompt.txt` to start the loop. ## How It Works 1. Waits for CI to complete 2. Fetches inline comments and review comments 3. Fixes valid issues and asks you about ambiguous ones 4. Resolves threads and pushes 5. Repeats until no unresolved comments remain ## Features - **Batched decision-making:** Aggregate review requests instead of triaging comments one by one. - **Parallel work:** Continue other tasks while the loop runs in the background. 
- **Multi-pass resolution:** Iterates automatically until clean. - **Agent-agnostic:** Works with Claude Code, Cursor, Windsurf, or any coding agent. ## Documentation For full documentation, troubleshooting, and advanced usage, visit the [GitHub repository](https://github.com/fabianboth/autonomous-review-loop).
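The five steps in "How It Works" amount to a fixed-point loop: keep fetching and fixing until a pass finds nothing left to address. A toy sketch of that control flow (the helper names `review_loop`, `fetch_comments`, and `fix` are invented for illustration, not reviewloop's actual API):

```python
def review_loop(fetch_comments, fix, max_passes=10):
    """Repeat fetch -> fix passes until a pass returns no unresolved comments."""
    for _ in range(max_passes):
        comments = fetch_comments()
        if not comments:
            return True  # clean: nothing left to address
        for comment in comments:
            fix(comment)
    return False  # gave up after max_passes

# Fake review bot: two passes produce comments, the third comes back clean.
passes = [["rename variable", "add a test"], ["fix typo"], []]
fixed = []
clean = review_loop(lambda: passes.pop(0), fixed.append)
print(clean, len(fixed))  # True 3
```

The real tool adds the parts a sketch can't show: waiting on CI, resolving review threads, pushing, and deferring ambiguous comments to you.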
text/markdown
Fabian Both
null
null
null
null
ai, ai-agents, automation, cli, code-review, developer-tools
[ "Development Status :: 4 - Beta", "Environment :: Console", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.12", "Topic :: Software Development :: Quality Assurance", "Typing :: Typed" ]
[]
null
null
>=3.12
[]
[]
[]
[ "readchar", "rich", "typer" ]
[]
[]
[]
[ "Homepage, https://github.com/fabianboth/autonomous-review-loop", "Repository, https://github.com/fabianboth/autonomous-review-loop.git", "Issues, https://github.com/fabianboth/autonomous-review-loop/issues" ]
uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
2026-02-20T10:26:31.446689
reviewloop-0.3.6-py3-none-any.whl
12,304
a4/08/12718cccffb6f34a7e73ce380067cfdf2a757e8e17320f1bca4fa7e7a531/reviewloop-0.3.6-py3-none-any.whl
py3
bdist_wheel
null
false
6731472249d8c29e9188733de57666da
0895e6b51ee44ec37e999e4f4e1eac84d4b05959c93fd1534c6009cf4b0a10b7
a40812718cccffb6f34a7e73ce380067cfdf2a757e8e17320f1bca4fa7e7a531
MIT
[ "LICENSE" ]
207
2.4
entitysdk
0.12.1
Python library for interacting with the entitycore service
[![Build status][build_status_badge]][build_status_target] [![License][license_badge]][license_target] [![Code coverage][coverage_badge]][coverage_target] [![CodeQL][codeql_badge]][codeql_target] [![PyPI][pypi_badge]][pypi_target] # entitysdk entitysdk is a Python library for interacting with the [entitycore service][entitycore], providing a type-safe and intuitive interface for managing scientific entities, and their associated assets. ## Requirements - Python 3.11 or higher - Network access to entitycore service endpoints ## Installation ```bash pip install entitysdk ``` ## Obtaining a valid access token An access token can be retrieved easily using the obi-auth helper library. ```bash pip install obi-auth ``` ```python from obi_auth import get_token token = get_token(environment="staging") ``` ## Quick Start ```python from uuid import UUID from entitysdk import Client, ProjectContext, models # Initialize client client = Client( project_context=ProjectContext( project_id=UUID("your-project-id"), virtual_lab_id=UUID("your-lab-id") ), environment="staging", token_manager=token ) # Search for morphologies iterator = client.search_entity( entity_type=models.CellMorphology, query={"mtype__pref_label": "L5_TPC:A"}, limit=1, ) morphology = next(iterator) # Upload an asset client.upload_file( entity_id=morphology.id, entity_type=models.CellMorphology, file_path="path/to/file.swc", file_content_type="application/swc", ) ``` ### Authentication - Valid Keycloak access token - Project context with: - Valid project ID (UUID) - Valid virtual lab ID (UUID) Example configuration: ```python from uuid import UUID from entitysdk import ProjectContext project_context = ProjectContext( project_id=UUID("12345678-1234-1234-1234-123456789012"), virtual_lab_id=UUID("87654321-4321-4321-4321-210987654321") ) ``` ## Development ### Requirements - tox/tox-uv ### Clone and run tests ```bash # Clone the repository git clone https://github.com/your-org/entitysdk.git # Run linting, tests, and 
check-packaging tox ``` ### Auto-generate server schemas Server schemas at src/entitysdk/_server_schemas.py, which are currently used for importing enum types, can be updated with the following tox command: ```bash tox -e generate-server-schemas ``` ### Auto-update json payloads The json payloads in `tests/unit/models/data/extracted` can be automatically updated from entitycore by executing: ```bash tox -e update-traces ``` The command will checkout a clean copy of entitycore, execute tests, extract the traces, and move them to the expected location in entitysdk. It's possible to set a different `ENTITYCORE_BRANCH_OR_TAG` and `ENTITYCORE_CHECKOUT_DIR` if desired. ## Contributing We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details. ## License Copyright (c) 2025 Open Brain Institute Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
[entitycore]: https://github.com/openbraininstitute/entitycore [build_status_badge]: https://github.com/openbraininstitute/entitysdk/actions/workflows/tox.yml/badge.svg [build_status_target]: https://github.com/openbraininstitute/entitysdk/actions [license_badge]: https://img.shields.io/pypi/l/entitysdk [license_target]: https://github.com/openbraininstitute/entitysdk/blob/main/LICENSE.txt [coverage_badge]: https://codecov.io/github/openbraininstitute/entitysdk/coverage.svg?branch=main [coverage_target]: https://codecov.io/github/openbraininstitute/entitysdk?branch=main [codeql_badge]: https://github.com/openbraininstitute/entitysdk/actions/workflows/github-code-scanning/codeql/badge.svg [codeql_target]: https://github.com/openbraininstitute/entitysdk/actions/workflows/github-code-scanning/codeql [pypi_badge]: https://github.com/openbraininstitute/entitysdk/actions/workflows/sdist.yml/badge.svg [pypi_target]: https://pypi.org/project/entitysdk/
text/markdown
null
Open Brain Institute <info@openbraininstitute.org>
null
Open Brain Institute <info@openbraininstitute.org>
null
null
[ "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13" ]
[]
null
null
>=3.10
[]
[]
[]
[ "httpx", "pydantic>=2.12.0", "pydantic-settings", "typing_extensions; python_version < \"3.12\"", "backports.strenum; python_version < \"3.11\"", "h5py; extra == \"staging\"" ]
[]
[]
[]
[ "documentation, https://entitysdk.readthedocs.io/en/stable", "repository, https://github.com/openbraininstitute/entitysdk", "changelog, https://github.com/openbraininstitute/entitysdk/CHANGELOG.md" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:25:42.435540
entitysdk-0.12.1.tar.gz
74,552
26/af/6758be97f52886447fae37bc030233d180b04b40b8a57591360ea2074a35/entitysdk-0.12.1.tar.gz
source
sdist
null
false
3ef63a834772d7eb8785e973f6baea25
2d4ed4d75cc409b494df92f34ba42d48d354ac0435a37500cc564be69b94bdf7
26af6758be97f52886447fae37bc030233d180b04b40b8a57591360ea2074a35
Apache-2.0
[ "LICENSE.txt" ]
164
2.4
cupy
14.0.1
CuPy: NumPy & SciPy for GPU
.. image:: https://raw.githubusercontent.com/cupy/cupy/main/docs/image/cupy_logo_1000px.png :width: 400 CuPy : NumPy & SciPy for GPU ============================ `CuPy <https://cupy.dev/>`_ is a NumPy/SciPy-compatible array library for GPU-accelerated computing with Python. This package (``cupy``) is a source distribution. For most users, use of the pre-built wheel distributions is recommended: - `cupy-cuda13x <https://pypi.org/project/cupy-cuda13x/>`_ (for NVIDIA CUDA 13.x) - `cupy-cuda12x <https://pypi.org/project/cupy-cuda12x/>`_ (for NVIDIA CUDA 12.x) - `cupy-rocm-7-0 <https://pypi.org/project/cupy-rocm-7-0/>`_ (for AMD ROCm 7.0) Please see the `Installation Guide <https://docs.cupy.dev/en/latest/install.html>`_ for detailed instructions.
text/x-rst
null
Seiya Tokui <tokui@preferred.jp>
CuPy Developers
null
null
null
[ "Development Status :: 5 - Production/Stable", "Intended Audience :: Science/Research", "Intended Audience :: Developers", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Cython", "Topic :: Software Development", "Topic :: Scientific/Engineering", "Operating System :: POSIX", "Operating System :: Microsoft :: Windows" ]
[]
null
null
>=3.10
[]
[]
[]
[ "numpy<2.6,>=2.0", "cuda-pathfinder==1.*,>=1.3.3", "scipy<1.17,>=1.10; extra == \"all\"", "Cython>=3; extra == \"all\"", "optuna>=2.0; extra == \"all\"", "packaging; extra == \"test\"", "pytest>=7.2; extra == \"test\"", "hypothesis<6.55.0,>=6.37.2; extra == \"test\"", "mpmath; extra == \"test\"" ]
[]
[]
[]
[ "Homepage, https://cupy.dev/", "Documentation, https://docs.cupy.dev/", "Bug Tracker, https://github.com/cupy/cupy/issues", "Source Code, https://github.com/cupy/cupy" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:25:27.585673
cupy-14.0.1.tar.gz
3,899,909
18/3c/d3aa4aaf179567f6588dbfb18b29471949fb782fc79fddfd43480b052d71/cupy-14.0.1.tar.gz
source
sdist
null
false
c8135bb8d7a71f6aa9272560d7b10563
4b673ab2d8b2329abe7ae0a7ae6159656044a8eecca56cf0b834b7c907063205
183cd3aa4aaf179567f6588dbfb18b29471949fb782fc79fddfd43480b052d71
MIT
[ "LICENSE", "docs/source/license.rst" ]
3,341
2.4
cupy-rocm-7-0
14.0.1
CuPy: NumPy & SciPy for GPU
.. image:: https://raw.githubusercontent.com/cupy/cupy/main/docs/image/cupy_logo_1000px.png :width: 400 CuPy : NumPy & SciPy for GPU ============================ `CuPy <https://cupy.dev/>`_ is a NumPy/SciPy-compatible array library for GPU-accelerated computing with Python. This is a CuPy wheel (precompiled binary) package for AMD ROCm 7.0. You need to install `ROCm 7.0 <https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html>`_ to use these packages. If you have another version of ROCm, or want to build from source, refer to the `Installation Guide <https://docs.cupy.dev/en/latest/install.html>`_ for instructions.
text/x-rst
null
Seiya Tokui <tokui@preferred.jp>
CuPy Developers
null
null
null
[ "Development Status :: 5 - Production/Stable", "Intended Audience :: Science/Research", "Intended Audience :: Developers", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Cython", "Topic :: Software Development", "Topic :: Scientific/Engineering", "Operating System :: POSIX", "Operating System :: Microsoft :: Windows" ]
[]
null
null
>=3.10
[]
[]
[]
[ "numpy<2.6,>=2.0", "scipy<1.17,>=1.10; extra == \"all\"", "Cython>=3; extra == \"all\"", "optuna>=2.0; extra == \"all\"", "packaging; extra == \"test\"", "pytest>=7.2; extra == \"test\"", "hypothesis<6.55.0,>=6.37.2; extra == \"test\"", "mpmath; extra == \"test\"" ]
[]
[]
[]
[ "Homepage, https://cupy.dev/", "Documentation, https://docs.cupy.dev/", "Bug Tracker, https://github.com/cupy/cupy/issues", "Source Code, https://github.com/cupy/cupy" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:25:24.745296
cupy_rocm_7_0-14.0.1-cp314-cp314-manylinux2014_x86_64.whl
73,278,385
25/b2/8437671ac54c4ae02e771b50a894472c67094801bf38e92f992ef4028294/cupy_rocm_7_0-14.0.1-cp314-cp314-manylinux2014_x86_64.whl
cp314
bdist_wheel
null
false
4ca327031bd77c01d0fa97c17665adc2
a4ec04e6039c34f4916cbeab72fa7f1ab38be04c9116364c43b3e8d8cbaffbc4
25b28437671ac54c4ae02e771b50a894472c67094801bf38e92f992ef4028294
MIT
[ "LICENSE", "docs/source/license.rst" ]
392
2.4
pexpect-serialspawn
0.0.5
Interact with serial devices using pexpect
# Serial spawn for pexpect [![Build](https://github.com/antoniovazquezblanco/pexpect-serialspawn/actions/workflows/build.yml/badge.svg)](https://github.com/antoniovazquezblanco/pexpect-serialspawn/actions/workflows/build.yml) [![PyPI](https://img.shields.io/pypi/v/pexpect-serialspawn)](https://pypi.org/project/pexpect-serialspawn/) [![pexpect-serialspawn](https://snyk.io/advisor/python/pexpect-serialspawn/badge.svg)](https://snyk.io/advisor/python/pexpect-serialspawn) Interact with serial devices using pexpect. ## Installation Just use pip :) ``` pip install pexpect-serialspawn ``` ## Usage ```python import serial from pexpect_serialspawn import SerialSpawn # Initialize your serial device ser = serial.Serial('COM1', 115200) # Spawn a pexpect object ss = SerialSpawn(ser, encoding='utf-8') # Use it like any other pexpect spawn... ss.sendline('Hello') ss.expect('World') ```
text/markdown
null
Antonio Vázquez Blanco <antoniovazquezblanco@gmail.com>
null
null
null
null
[ "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python :: 3" ]
[]
null
null
>=3.0
[]
[]
[]
[ "pexpect", "pyserial" ]
[]
[]
[]
[ "Homepage, https://github.com/antoniovazquezblanco/pexpect-serialspawn", "Bug Tracker, https://github.com/antoniovazquezblanco/pexpect-serialspawn/issues" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:25:18.043889
pexpect_serialspawn-0.0.5.tar.gz
4,780
a9/3f/e80263940f04a4f97bc9f51240692a3a9e8c071a24eb49be03c4f6a50ef5/pexpect_serialspawn-0.0.5.tar.gz
source
sdist
null
false
47bc4af8b6755f2bad0208eae4d9f5bf
da676602b9f0d4cadee294d0605a743b20452b81cc31c0b217098596465be378
a93fe80263940f04a4f97bc9f51240692a3a9e8c071a24eb49be03c4f6a50ef5
null
[]
222
2.4
cupy-cuda13x
14.0.1
CuPy: NumPy & SciPy for GPU
.. image:: https://raw.githubusercontent.com/cupy/cupy/main/docs/image/cupy_logo_1000px.png :width: 400 CuPy : NumPy & SciPy for GPU ============================ `CuPy <https://cupy.dev/>`_ is a NumPy/SciPy-compatible array library for GPU-accelerated computing with Python. This is a CuPy wheel (precompiled binary) package for CUDA 13.x. You need to install `CUDA Toolkit 13.x <https://developer.nvidia.com/cuda-toolkit-archive>`_ locally to use these packages. Alternatively, you can install this package together with all needed CUDA components from PyPI by passing the ``[ctk]`` extra:: $ pip install cupy-cuda13x[ctk] If you have another version of CUDA, or want to build from source, refer to the `Installation Guide <https://docs.cupy.dev/en/latest/install.html>`_ for instructions.
text/x-rst
null
Seiya Tokui <tokui@preferred.jp>
CuPy Developers
null
null
null
[ "Development Status :: 5 - Production/Stable", "Intended Audience :: Science/Research", "Intended Audience :: Developers", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Cython", "Topic :: Software Development", "Topic :: Scientific/Engineering", "Operating System :: POSIX", "Operating System :: Microsoft :: Windows" ]
[]
null
null
>=3.10
[]
[]
[]
[ "numpy<2.6,>=2.0", "cuda-pathfinder==1.*,>=1.3.3", "scipy<1.17,>=1.10; extra == \"all\"", "Cython>=3; extra == \"all\"", "optuna>=2.0; extra == \"all\"", "packaging; extra == \"test\"", "pytest>=7.2; extra == \"test\"", "hypothesis<6.55.0,>=6.37.2; extra == \"test\"", "mpmath; extra == \"test\"", "cuda-toolkit[cublas,cudart,cufft,curand,cusolver,cusparse,nvrtc]==13.*; extra == \"ctk\"" ]
[]
[]
[]
[ "Homepage, https://cupy.dev/", "Documentation, https://docs.cupy.dev/", "Bug Tracker, https://github.com/cupy/cupy/issues", "Source Code, https://github.com/cupy/cupy" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:24:54.198033
cupy_cuda13x-14.0.1-cp314-cp314-win_amd64.whl
35,474,838
c1/f5/273193563cdc37cdb22de3b73e7db12819b39fafb73de6bcf7d48f20945e/cupy_cuda13x-14.0.1-cp314-cp314-win_amd64.whl
cp314
bdist_wheel
null
false
d11c6405f6dd5dfa6f68d6fca7980e93
22b50139e05c4612fac905dd6c3390f8687e0e390f0e200d5be14be1726e3d04
c1f5273193563cdc37cdb22de3b73e7db12819b39fafb73de6bcf7d48f20945e
MIT
[ "LICENSE", "docs/source/license.rst" ]
4,359
2.4
mitogen
0.3.42
Library for writing distributed self-replicating programs.
# Mitogen [![PyPI - Version](https://img.shields.io/pypi/v/mitogen)](https://pypi.org/project/mitogen/) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/mitogen)](https://pypi.org/project/mitogen/) [![Build Status](https://img.shields.io/github/actions/workflow/status/mitogen-hq/mitogen/tests.yml?branch=master)](https://github.com/mitogen-hq/mitogen/actions?query=branch%3Amaster) <a href="https://mitogen.networkgenomics.com/">Please see the documentation</a>. ![](https://i.imgur.com/eBM6LhJ.gif)
text/markdown
David Wilson
null
null
null
BSD-3-Clause
null
[ "Environment :: Console", "Framework :: Ansible", "Intended Audience :: System Administrators", "Operating System :: MacOS :: MacOS X", "Operating System :: POSIX", "Programming Language :: Python", "Programming Language :: Python :: 2.4", "Programming Language :: Python :: 2.5", "Programming Language :: Python :: 2.6", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.6", "Programming Language :: Python :: 3.7", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14", "Programming Language :: Python :: Implementation :: CPython", "Topic :: System :: Distributed Computing", "Topic :: System :: Systems Administration" ]
[]
https://github.com/mitogen-hq/mitogen/
null
!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.4
[]
[]
[]
[]
[]
[]
[]
[]
twine/6.2.0 CPython/3.13.11
2026-02-20T10:24:41.965867
mitogen-0.3.42.tar.gz
231,140
ab/76/0a5fc66d4786273c7bab8b2d457027f7ebd246fdd41251d4f6906842256e/mitogen-0.3.42.tar.gz
source
sdist
null
false
618bd87fc89f9a2fac61145b2952010f
7767787e47cafac1aa674d7f6622bd2f764bd9ed0da0c63002df7161783bf46f
ab760a5fc66d4786273c7bab8b2d457027f7ebd246fdd41251d4f6906842256e
null
[ "LICENSE" ]
1,821
2.1
strongdm
16.8.2
strongDM SDK for the Python programming language.
# strongDM SDK for Python This is the official [strongDM](https://www.strongdm.com/) SDK for the Python programming language. Learn more with our [📚strongDM API docs](https://docs.strongdm.com/references/api) or [📓browse the SDK reference](https://strongdm.github.io/strongdm-sdk-python-docs/). ## Installation ```bash $ pip install strongdm ``` strongDM uses [semantic versioning](https://semver.org/). We do not guarantee compatibility between major versions. Be sure to use version constraints to pin your dependency to the desired major version of the strongDM SDK. ## Authentication If you don't already have them, you will need to generate a set of API keys; instructions are here: [API Credentials](https://docs.strongdm.com/references/api/api-keys) Add the keys as environment variables; the SDK will need to access these keys for every request. ```bash $ export SDM_API_ACCESS_KEY=<YOUR ACCESS KEY> $ export SDM_API_SECRET_KEY=<YOUR SECRET KEY> ``` ## List Users The following code lists all registered users: ```python import os import strongdm def main(): client = strongdm.Client(os.getenv("SDM_API_ACCESS_KEY"), os.getenv("SDM_API_SECRET_KEY")) users = client.accounts.list('') for user in users: print(user) if __name__ == "__main__": main() ``` ## Useful Links - Documentation: [strongdm package](https://strongdm.github.io/strongdm-sdk-python-docs/) - [Migrating from v2 to v3](https://github.com/strongdm/strongdm-sdk-python/releases/tag/v3.0.0) - [Migrating from Role Grants to Access Rules](https://github.com/strongdm/strongdm-sdk-python/wiki/Migrating-from-Role-Grants-to-Access-Rules) - Examples: [GitHub - strongdm/strongdm-sdk-python-examples](https://github.com/strongdm/strongdm-sdk-python-examples) 1. [Managing Resources](https://github.com/strongdm/strongdm-sdk-python-examples/tree/master/1_managing_resources) 2. [Managing Accounts](https://github.com/strongdm/strongdm-sdk-python-examples/tree/master/2_managing_accounts) 3. 
[Managing Roles](https://github.com/strongdm/strongdm-sdk-python-examples/tree/master/3_managing_roles) 4. [Managing Gateways](https://github.com/strongdm/strongdm-sdk-python-examples/tree/master/4_managing_gateways) 5. [Auditing](https://github.com/strongdm/strongdm-sdk-python-examples/tree/master/5_auditing) 6. [Managing Access Workflows](https://github.com/strongdm/strongdm-sdk-python-examples/tree/master/6_managing_workflows) ## License [Apache 2](https://github.com/strongdm/strongdm-sdk-python/blob/master/LICENSE) ## Contributing Currently, we are not accepting pull requests directly to this repository, but our users are some of the most resourceful and ambitious folks out there. So, if you have something to contribute, find a bug, or just want to give us some feedback, please email <support@strongdm.com>.
text/markdown
strongDM Team
sdk-feedback@strongdm.com
null
null
apache-2.0
strongDM, sdm, api, automation, security, audit, database, server, ssh, rdp
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Topic :: Security", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python :: 3" ]
[]
https://github.com/strongdm/strongdm-sdk-python
https://github.com/strongdm/strongdm-sdk-python/archive/v16.8.2.tar.gz
null
[]
[]
[]
[]
[]
[]
[]
[]
twine/3.7.1 importlib_metadata/8.7.1 pkginfo/1.12.1.2 requests/2.27.1 requests-toolbelt/1.0.0 tqdm/4.67.1 CPython/3.10.12
2026-02-20T10:24:34.871912
strongdm-16.8.2.tar.gz
387,844
27/d7/73c5bb77e769d75cc7164929e8d27371574e84dfa953bf46a6d6c4a63070/strongdm-16.8.2.tar.gz
source
sdist
null
false
09b0bf28c35b7205b14f3fa8ab3c98e6
13d06de19cb48257d74c971a7de37801705c6c25c5259958213eae3e1fddd9f9
27d773c5bb77e769d75cc7164929e8d27371574e84dfa953bf46a6d6c4a63070
null
[]
201
2.4
datatailr
0.1.104
Ready-to-Use Platform That Drives Business Insights
<!-- --8<-- [start:intro] --> <div style="text-align: center;"> <a href="https://www.datatailr.com/" target="_blank"> <img src="https://s3.eu-west-1.amazonaws.com/datatailr.com/assets/datatailr-logo.svg" alt="Datatailr Logo" /> </a> </div> --- **Datatailr empowers your team to streamline analytics and data workflows from idea to production without infrastructure hurdles.** # What is Datatailr? Datatailr is a platform that simplifies the process of building and deploying data applications. It makes it easier to run and maintain large-scale data processing and analytics workloads. <!-- --8<-- [end:intro] --> ## What is this package? This is the Python package for Datatailr, which allows you to interact with the Datatailr platform. It provides the tools to build, deploy, and manage batch jobs, data pipelines, services and analytics applications. Datatailr manages the underlying infrastructure so your applications can be deployed in an easy, secure and scalable way. ## Installation ### Installing the Python package You can install the Datatailr Python package using pip: ```bash pip install datatailr ``` ### Testing the installation ```python import datatailr print(datatailr.__version__) print(datatailr.__provider__) ``` ## Remote CLI (optional) If you install the package outside the Datatailr platform, you can enable the remote `dt` CLI: ```bash datatailr setup-cli ``` Example usage: ```bash datatailr login dt job ls dt user ls dt job save path/to/local/file.json ``` Notes: - Remote CLI configuration inside a virtual environment only applies inside that environment. - The remote CLI cannot be installed inside Datatailr containers; the native CLI is used there. ## Quickstart The following example shows how to create a simple data pipeline using the Datatailr Python package. 
```python from datatailr import workflow, task @task() def func_no_args() -> str: return "no_args" @task() def func_with_args(a: int, b: float) -> str: return f"args: {a}, {b}" @workflow(name="MY test DAG") def my_workflow(): for n in range(2): res1 = func_no_args().alias(f"func_{n}") res2 = func_with_args(1, res1).alias(f"func_with_args_{n}") my_workflow(local_run=True) ``` Running this code will create a graph of jobs and execute it. Each node on the graph represents a job, which in turn is a call to a function decorated with `@task()`. Since this is a local run, each node will execute sequentially in the same process. To take advantage of the Datatailr platform and execute the graph at scale, you can run it using the job scheduler as presented in the next section. ## Execution at Scale To execute the graph at scale, you can use the Datatailr job scheduler. This allows you to run your jobs in parallel, taking advantage of the underlying infrastructure. You will first need to separate your function definitions from the DAG definition. This means you should define your functions in a separate module, which can be imported into the DAG definition. ```python # my_module.py from datatailr import task @task() def func_no_args() -> str: return "no_args" @task() def func_with_args(a: int, b: float) -> str: return f"args: {a}, {b}" ``` To use these functions in a batch job, you just need to import them and run them in a DAG context: ```python from my_module import func_no_args, func_with_args from datatailr import workflow, Schedule @workflow(name="MY test DAG") def my_workflow(): for n in range(2): res1 = func_no_args().alias(f"func_{n}") res2 = func_with_args(1, res1).alias(f"func_with_args_{n}") schedule = Schedule(at_hours=0) my_workflow(schedule=schedule) ``` This will submit the entire workflow for execution, and the scheduler will take care of running the jobs in parallel and managing the resources. 
The workflow in the example above will be scheduled to run daily at 00:00. ___ Visit [our website](https://www.datatailr.com/) for more!
text/markdown
null
Datatailr <info@datatailr.com>
null
null
null
null
[ "Programming Language :: Python :: 3", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Environment :: Console", "Operating System :: OS Independent", "Topic :: Scientific/Engineering" ]
[]
null
null
>=3.9
[]
[]
[]
[ "requests", "ruff; extra == \"dev\"", "pre-commit; extra == \"dev\"", "mypy; extra == \"dev\"", "types-setuptools; extra == \"dev\"", "toml; extra == \"dev\"", "coverage; extra == \"dev\"", "sphinx-rtd-theme; extra == \"dev\"", "sphinx; extra == \"dev\"", "sphinx-autodoc-typehints; extra == \"dev\"", "sphinx-autosummary; extra == \"dev\"", "sphinx-design; extra == \"dev\"", "sphinx-copybutton; extra == \"dev\"", "myst-parser; extra == \"dev\"" ]
[]
[]
[]
[ "homepage, https://www.datatailr.com/", "documentation, https://docs.datatailr.com/" ]
twine/6.2.0 CPython/3.12.0
2026-02-20T10:24:07.204129
datatailr-0.1.104.tar.gz
116,696
9e/6a/f27f64b883a793a89d4dc2b4b853cbcff6221e7d5bbc3dbeea0f165c1738/datatailr-0.1.104.tar.gz
source
sdist
null
false
66a04b2580bc87b7e139d71586d2b036
6230cca39331026cf20f1fd29a31908eb377d2885278e1785cf8fcc418bf3613
9e6af27f64b883a793a89d4dc2b4b853cbcff6221e7d5bbc3dbeea0f165c1738
MIT
[ "LICENSE" ]
226
2.2
blosc2-grok
0.3.4
Grok (JPEG2000 codec) plugin for Blosc2.
# Blosc2 grok A plugin of the excellent [grok library](https://github.com/GrokImageCompression/grok) for Blosc2. grok is a JPEG2000 codec, and with this plugin, you can use it as yet another codec in applications using Blosc2. See an example of use at: https://github.com/Blosc/blosc2_grok/blob/main/examples/params.py ## Installation To use `blosc2_grok`, you will first have to install its wheel: ```shell pip install blosc2-grok -U ``` ## Usage ```python import blosc2 import numpy as np import blosc2_grok from PIL import Image # Set the params for the grok codec kwargs = {} kwargs['cod_format'] = blosc2_grok.GrkFileFmt.GRK_FMT_JP2 kwargs['quality_mode'] = "dB" kwargs['quality_layers'] = np.array([5], dtype=np.float64) blosc2_grok.set_params_defaults(**kwargs) # Define the compression and decompression parameters for Blosc2. # Disable the filters and do not split blocks (these won't work with grok). cparams = { 'codec': blosc2.Codec.GROK, 'filters': [], 'splitmode': blosc2.SplitMode.NEVER_SPLIT, } # Read the image im = Image.open("examples/kodim23.png") # Convert the image to a numpy array np_array = np.asarray(im) # Transform the numpy array to a blosc2 array. This is where compression happens, and # the HTJ2K codec is called. bl_array = blosc2.asarray( np_array, chunks=np_array.shape, blocks=np_array.shape, cparams=cparams, urlpath="examples/kodim23.b2nd", mode="w", ) # Print information about the array, see the compression ratio (cratio) print(bl_array.info) ``` ## Parameters for compression The following parameters are available for compression for grok, with their defaults. Most of them are named after the ones in the [Pillow library](https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html#jpeg-2000-saving) and have the same meaning. 
The ones that are not in Pillow are marked with a `*` and you can get more information about them in the [grok documentation](https://github.com/GrokImageCompression/grok/wiki/3.-grk_compress), or by following the provided links. For those marked with a ``**``, you can get more information in the [grok.h header](https://github.com/GrokImageCompression/grok/blob/a84ac2592e581405a976a00cf9e6f03cab7e2481/src/lib/core/grok.h#L975 ). 'tile_size': (0, 0), 'tile_offset': (0, 0), 'quality_mode': None, 'quality_layers': np.zeros(0, dtype=np.float64), 'progression': "LRCP", 'num_resolutions': 6, 'codeblock_size': (64, 64), 'irreversible': False, 'precinct_size': (0, 0), 'offset': (0, 0), 'mct': 0, * 'numgbits': 2, # Equivalent to -N, -guard_bits * 'roi_compno': -1, # Together with 'roi_shift' it is equivalent to -R, -ROI * 'roi_shift': 0, * 'decod_format': GrkFileFmt.GRK_FMT_UNK, * 'cod_format': GrkFileFmt.GRK_FMT_UNK, * 'rsiz': GrkProfile.GRK_PROFILE_NONE, # Equivalent to -Z, -rsiz * 'framerate': 0, * 'apply_icc_': False, # Equivalent to -f, -apply_icc * 'rateControlAlgorithm': GrkRateControl.BISECT, * 'num_threads': 0, * 'deviceId': 0, # Equivalent to -G, -device_id * 'duration': 0, # Equivalent to -J, -duration * 'repeats': 1, # Equivalent to -e, -repetitions * 'mode': GrkMode.DEFAULT, # Equivalent to -M, -mode * 'verbose': False, # Equivalent to -v, -verbose ** 'enableTilePartGeneration': False, # See header of grok.h above ** 'max_cs_size': 0, # See header of grok.h above ** 'max_comp_size': 0, # See header of grok.h above *Note: * when using the `blosc2_grok` plugin from C, the structure used for setting the parameters uses the `grok` parameters names. You can see an example in https://github.com/Blosc/leaps-examples/blob/main/c-compression/compress-tomo.c#L110 . 
### codec_meta as rates quality mode As a simpler way to activate the rates quality mode, if you set the `codec_meta` in the `cparams` to an integer different from 0, the rates quality mode will be activated with a rate value equal to `codec_meta` / 10. If `cod_format` is not specified, the default will be used. The `codec_meta` takes priority over the `rates` param set with `blosc2_grok.set_params_defaults()`. Please note that only rates < 25.6 are supported with this notation. ```python import blosc2 cparams = { 'codec': blosc2.Codec.GROK, 'codec_meta': 5 * 10, # cratio will be 5 'filters': [], 'splitmode': blosc2.SplitMode.NEVER_SPLIT, } ``` ## Notes When using `blosc2_grok`, there are some restrictions that you have to keep in mind. * The minimum supported image size is around 256 bytes, so a smaller image will fail to compress. * The maximum datatype precision is 16 bits. * Although floats with 16 or fewer bits of precision seem to work, we recommend using integer data when possible. ## More examples See the [examples](examples/) directory for more examples. ## Thanks Thanks to Marta Iborra, from the Blosc Development Team, for doing most of the work in making this plugin possible, and J. David Ibáñez and Francesc Alted for the initial contributions. Also, thanks to Aaron Boxer, the original author of the [grok library](https://github.com/GrokImageCompression/grok), for his help in ironing out issues for making this interaction possible. That's all folks! The Blosc Development Team
text/markdown
null
Blosc Development Team <contact@blosc.org>
null
null
GNU Affero General Public License version 3
plugin, blosc2
[ "Programming Language :: Python :: 3", "Programming Language :: C", "Programming Language :: C++", "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "Intended Audience :: Information Technology", "License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)", "Topic :: Software Development :: Libraries :: Python Modules", "Topic :: System :: Archiving :: Compression", "Operating System :: Microsoft :: Windows", "Operating System :: Unix" ]
[]
null
null
null
[]
[]
[]
[ "blosc2>=4.0.0.b1", "pytest; extra == \"h5py-test\"" ]
[]
[]
[]
[ "Homepage, https://github.com/Blosc/blosc2_grok", "Issues, https://github.com/Blosc/blosc2_grok/issues" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:24:04.146780
blosc2_grok-0.3.4-cp312-cp312-win_amd64.whl
3,881,192
d3/35/26b4d534df19f60b4861a8714cfb8be28ccd5f9cb8a92c572e7d46bb8584/blosc2_grok-0.3.4-cp312-cp312-win_amd64.whl
cp312
bdist_wheel
null
false
ba514c87fd66a07365674b18fa6a804e
78095b887a8291d76be7fa80cc58f8e077c0c9be8b9bd005584aa9464edf4c4a
d33526b4d534df19f60b4861a8714cfb8be28ccd5f9cb8a92c572e7d46bb8584
null
[]
85
2.4
aitracer
1.3.0
AITracer SDK - Monitor your AI/LLM applications
# AITracer Python SDK A Python SDK for monitoring the execution logs, cost, and performance of AI agent / LLM applications. ## Production Environment - **API endpoint**: `https://api.aitracer.co` - **Dashboard**: `https://app.aitracer.co` The SDK connects to `https://api.aitracer.co` by default. ## Installation ```bash pip install aitracer ``` ## Quickstart ### Basic Usage ```python from aitracer import AITracer from openai import OpenAI # Initialize AITracer tracer = AITracer( api_key="at-xxxxxxxx", project="my-chatbot" ) # Wrap the OpenAI client client = tracer.wrap_openai(OpenAI()) # Use it as usual (logs are sent automatically) response = client.chat.completions.create( model="gpt-4o", messages=[{"role": "user", "content": "Hello!"}] ) ``` ### Anthropic Support ```python from aitracer import AITracer from anthropic import Anthropic tracer = AITracer( api_key="at-xxxxxxxx", project="my-chatbot" ) # Wrap the Anthropic client client = tracer.wrap_anthropic(Anthropic()) response = client.messages.create( model="claude-3-5-sonnet-20241022", max_tokens=1024, messages=[{"role": "user", "content": "Hello!"}] ) ``` ### Google Gemini Support ```python from aitracer import AITracer import google.generativeai as genai tracer = AITracer( api_key="at-xxxxxxxx", project="my-chatbot" ) # Configure Gemini genai.configure(api_key="your-google-api-key") model = genai.GenerativeModel("gemini-1.5-flash") # Wrap the Gemini model model = tracer.wrap_gemini(model) response = model.generate_content("Hello!") print(response.text) ``` Streaming is supported as well: ```python response = model.generate_content("Write a story...", stream=True) for chunk in response: print(chunk.text, end="") ``` ### Tracing You can group and track multiple API calls together. ```python with tracer.trace("user-query-123") as trace: # API calls inside this block are grouped under the same trace_id response1 = client.chat.completions.create( model="gpt-4o", messages=[{"role": "user", "content": "Summarize this..."}] ) response2 = client.chat.completions.create( model="gpt-4o", messages=[{"role": "user", "content": "Translate to Japanese..."}] ) # Add metadata trace.set_metadata({ "user_id": "user-456", "feature": "summarization" }) # Add a tag 
trace.add_tag("production") ``` ### Streaming Support Streaming responses are also logged automatically. ```python stream = client.chat.completions.create( model="gpt-4o", messages=[{"role": "user", "content": "Write a story..."}], stream=True ) for chunk in stream: print(chunk.choices[0].delta.content or "", end="") # The log is sent automatically after the stream completes ``` ## Configuration Options ```python tracer = AITracer( # Required api_key="at-xxxxxxxx", # or the AITRACER_API_KEY environment variable project="my-chatbot", # or the AITRACER_PROJECT environment variable # Optional base_url="https://api.aitracer.co", # API endpoint (default) sync=False, # True: send synchronously (for Lambda) flush_on_exit=True, # flush unsent logs on exit batch_size=10, # batch size flush_interval=5.0, # auto-flush interval (seconds) enabled=True, # False: disable logging (for testing) ) ``` ### Lambda / Serverless Environments Use `sync=True` in serverless environments. ```python tracer = AITracer( api_key="at-xxxxxxxx", project="my-lambda", sync=True # send synchronously ) ``` ### Environment Variables ```bash export AITRACER_API_KEY=at-xxxxxxxx export AITRACER_PROJECT=my-chatbot ``` ```python # Read automatically from environment variables tracer = AITracer() ``` ## Manual Logging You can also send logs manually without using the automatic wrappers. ```python tracer.log( model="gpt-4o", provider="openai", input_data={"messages": [{"role": "user", "content": "Hello"}]}, output_data={"content": "Hi there!"}, input_tokens=10, output_tokens=5, latency_ms=150, status="success", metadata={"user_id": "user-123"}, tags=["production"] ) ``` ## Flush and Shutdown ```python # Send any unsent logs immediately tracer.flush() # Shut down (flush + release resources) tracer.shutdown() ``` ## License Copyright (c) HARO Inc. All rights reserved.
text/markdown
null
"HARO Inc." <info@haro.co.jp>
null
null
null
ai, anthropic, llm, monitoring, observability, openai
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Topic :: Software Development :: Libraries :: Python Modules" ]
[]
null
null
>=3.9
[]
[]
[]
[ "anthropic>=0.18.0", "httpx>=0.25.0", "openai>=1.0.0", "mypy>=1.0.0; extra == \"dev\"", "pytest-asyncio>=0.21.0; extra == \"dev\"", "pytest-cov>=4.0.0; extra == \"dev\"", "pytest>=7.0.0; extra == \"dev\"", "ruff>=0.1.0; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://aitracer.co", "Documentation, https://docs.aitracer.co", "Repository, https://github.com/haro-inc/aitracer" ]
twine/6.2.0 CPython/3.10.2
2026-02-20T10:23:47.441506
aitracer-1.3.0.tar.gz
133,350
b0/d6/4653caf275a66a9480cef7025a885c7ae561bca5fc37472c76ffdc38a42c/aitracer-1.3.0.tar.gz
source
sdist
null
false
ef67849629d09326de2723f1a8e535da
4cd8b6753827aa82af67b965996e1f2fbc40da18d82a880d939071ac30380882
b0d64653caf275a66a9480cef7025a885c7ae561bca5fc37472c76ffdc38a42c
MIT
[ "LICENSE" ]
213
2.4
cupy-cuda12x
14.0.1
CuPy: NumPy & SciPy for GPU
.. image:: https://raw.githubusercontent.com/cupy/cupy/main/docs/image/cupy_logo_1000px.png :width: 400 CuPy : NumPy & SciPy for GPU ============================ `CuPy <https://cupy.dev/>`_ is a NumPy/SciPy-compatible array library for GPU-accelerated computing with Python. This is a CuPy wheel (precompiled binary) package for CUDA 12.x. You need to install `CUDA Toolkit 12.x <https://developer.nvidia.com/cuda-toolkit-archive>`_ locally to use these packages. Alternatively, you can install this package together with all needed CUDA components from PyPI by passing the ``[ctk]`` extra:: $ pip install cupy-cuda12x[ctk] If you have another version of CUDA, or want to build from source, refer to the `Installation Guide <https://docs.cupy.dev/en/latest/install.html>`_ for instructions.
text/x-rst
null
Seiya Tokui <tokui@preferred.jp>
CuPy Developers
null
null
null
[ "Development Status :: 5 - Production/Stable", "Intended Audience :: Science/Research", "Intended Audience :: Developers", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Cython", "Topic :: Software Development", "Topic :: Scientific/Engineering", "Operating System :: POSIX", "Operating System :: Microsoft :: Windows" ]
[]
null
null
>=3.10
[]
[]
[]
[ "numpy<2.6,>=2.0", "cuda-pathfinder==1.*,>=1.3.3", "scipy<1.17,>=1.10; extra == \"all\"", "Cython>=3; extra == \"all\"", "optuna>=2.0; extra == \"all\"", "packaging; extra == \"test\"", "pytest>=7.2; extra == \"test\"", "hypothesis<6.55.0,>=6.37.2; extra == \"test\"", "mpmath; extra == \"test\"", "cuda-toolkit[cublas,cudart,cufft,curand,cusolver,cusparse,nvrtc]==12.*; extra == \"ctk\"" ]
[]
[]
[]
[ "Homepage, https://cupy.dev/", "Documentation, https://docs.cupy.dev/", "Bug Tracker, https://github.com/cupy/cupy/issues", "Source Code, https://github.com/cupy/cupy" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:23:42.684122
cupy_cuda12x-14.0.1-cp314-cp314-win_amd64.whl
96,822,848
e5/a3/80ff83dcad1ac61741714d97fce5a3ef42c201bb40005ec5cc413e34d75f/cupy_cuda12x-14.0.1-cp314-cp314-win_amd64.whl
cp314
bdist_wheel
null
false
c7c9c3a75a4038693dd8c16d1eecef57
cafe62131caef63b5e90b71b617bb4bf47d7bd9e11cccabea8104db1e01db02e
e5a380ff83dcad1ac61741714d97fce5a3ef42c201bb40005ec5cc413e34d75f
MIT
[ "LICENSE", "docs/source/license.rst" ]
100,276
2.4
fabrix-ai
1.5.1
Graph-based agent framework powered by oauth-codex
# Fabrix Language: English | [한국어](README.ko.md) API Guides: [English](docs/api.md) | [한국어](docs/api.ko.md) ## Overview Fabrix is a graph-based agent framework built on top of `oauth-codex>=2.3.0`. It provides a structured execution graph with streaming events for tool-driven workflows. ## Key Features - Graph-based 3-state execution: `reasoning`, `tool_call`, `response` - Structured state outputs powered by Pydantic models - Sequential tool execution with strict payload validation - Async streaming event API for step-by-step observability - Multimodal input with explicit message models: `TextMessage`, `ImageMessage` ## Installation ```bash pip install fabrix-ai ``` ## Quickstart ```python import asyncio from pydantic import BaseModel from fabrix import Agent from fabrix.events import ( ReasoningEvent, ResponseEvent, TaskFailedEvent, ToolEvent, ) from fabrix.messages import TextMessage from fabrix.tools import ToolOutput class AddInput(BaseModel): a: int b: int def add_numbers(payload: AddInput) -> ToolOutput: return ToolOutput.json({"sum": payload.a + payload.b}) async def main() -> None: agent = Agent( instructions="You are a precise assistant.", model="gpt-5.3-codex", tools=[add_numbers], ) messages = [TextMessage(text="Use add_numbers to compute 3 + 9")] async for event in agent.run_stream(messages=messages): print(f"[step={event.step}] {event.event_type}") if isinstance(event, ReasoningEvent): print("reasoning:", event.reasoning) print("focus:", event.focus) elif isinstance(event, ToolEvent): if event.phase == "start": print("tool call:", event.tool_name, event.arguments) elif event.result is not None: print("tool result:", event.result.model_dump()) elif isinstance(event, ResponseEvent): if event.response is not None: print("response:", event.response) if event.parts is not None: print("parts:", [part.model_dump(mode="json") for part in event.parts]) if event.response is None and event.parts is None: print("response: <empty>") elif isinstance(event, 
TaskFailedEvent): print("failed:", event.error_code, event.message) asyncio.run(main()) ``` ## Message Models Fabrix input is now `list[TextMessage | ImageMessage]`. - `TextMessage(role: str = "user", text: str)` - `ImageMessage(role: str = "user", image: str | Path | bytes, text: str | None = None)` - Unknown message fields are rejected at construction time. `ImageMessage.image` accepts: - remote URL (`https://...`) - local path (`Path` or string path) - raw bytes (`bytes`), normalized to a data URL for model calls ## Multimodal Input ```python from fabrix.messages import ImageMessage, TextMessage messages = [ TextMessage(text="Describe this screenshot"), ImageMessage(image="https://example.com/screenshot.png"), TextMessage(text="Focus on errors"), ] async for event in agent.run_stream(messages=messages): ... ``` ## Tool Contract Fabrix accepts tools in this shape: ```python def tool(payload: BaseModel) -> ToolOutput: ... ``` - The tool must accept exactly one parameter. - The parameter type must be a Pydantic `BaseModel`. - The return type must be `ToolOutput` (breaking in v1.2.0). - Runtime arguments must be a JSON object matching payload fields. - Extra argument keys are rejected. - Both sync and async tools are supported. - `ToolOutput.image(...)` keeps `http(s)`/`data:` values as-is. - `ToolOutput.image(...)` normalizes `file://`, local paths, and bytes to local absolute file references. - Tool-call argument strictness is enforced by model `output_schema` with `strict_output=True`. - Prompt policy and runtime context are no longer duplicated; runtime control context is appended as a final control message. - During LLM history serialization, reasoning/tool_call/response (and legacy `tool_result`) records are preserved, and local image references are re-normalized to data URLs. 
## Event Stream `run_stream(...)` yields these event types: - `reasoning` - `tool` (`phase="start"` / `phase="finish"`) - `response` - `task_failed` `reasoning` is a step-level decision trace / plan summary, not raw internal chain-of-thought. `response` events now support both `response: str | None` and `parts` (structured text/image/json parts); both fields may be `None` for an empty response event. Terminate by setting `next_state=null` in `response` state. ## Migration (Breaking) `run_task_stream(task, images, context)` has been removed. - Before: `agent.run_task_stream(task=..., images=..., context=...)` - After: `agent.run_stream(messages=[...])` Mapping: - `task` text -> `TextMessage(text="...")` - `images` -> `ImageMessage(image="..." | Path(...) | b"...")` - `context` -> include serialized context in `TextMessage.text` Tool migration: - Before: tool returns `str` / `dict` / scalar / arbitrary JSON-like objects - After: tool must return `ToolOutput` (for example `ToolOutput.text(...)`, `ToolOutput.json(...)`, `ToolOutput.image(...)`) ## Documentation - API usage guide (English): [`docs/api.md`](docs/api.md) - API 사용 가이드 (한국어): [`docs/api.ko.md`](docs/api.ko.md) - Korean README: [`README.ko.md`](README.ko.md) ## Examples - Minimal quickstart: [`examples/minimal/quickstart.py`](examples/minimal/quickstart.py) - Multimodal vision: [`examples/minimal/multimodal.py`](examples/minimal/multimodal.py) - Data workflow: [`examples/advanced/data_workflow.py`](examples/advanced/data_workflow.py) - Incident response workflow: [`examples/advanced/incident_response.py`](examples/advanced/incident_response.py) ## Notes - Public runtime entry point is `fabrix.Agent`. - Execution defaults are fixed internally: `max_steps=128` and no public per-tool timeout option. - On successful completion, the stream ends right after the final `response` event (`next_state=null` in response state). - If `max_steps` is reached, the stream ends without emitting an additional terminal event.
text/markdown
Fabrix
null
null
null
MIT
agent, graph, llm, oauth-codex, tools
[ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Topic :: Software Development :: Libraries" ]
[]
null
null
>=3.12
[]
[]
[]
[ "oauth-codex>=2.3.0", "pydantic>=2.8", "pytest-asyncio>=0.24; extra == \"dev\"", "pytest>=8.3; extra == \"dev\"", "ruff>=0.8; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/smturtle2/fabrix", "Repository, https://github.com/smturtle2/fabrix", "Issues, https://github.com/smturtle2/fabrix/issues" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:23:34.390963
fabrix_ai-1.5.1.tar.gz
39,265
35/fe/f88317bbf6a1058d1d96b7dd9e8565224abd8eaf6fa82dd6b67f0198a2de/fabrix_ai-1.5.1.tar.gz
source
sdist
null
false
e4d554aef3148bc5ea157d7838aa9e3f
fcbc1d9643c355613a2bbcc446a7daaad011fa19500033a07d6b2e8c90c7d68e
35fef88317bbf6a1058d1d96b7dd9e8565224abd8eaf6fa82dd6b67f0198a2de
null
[ "LICENSE" ]
223
2.4
neutron-lib
3.24.0
Neutron shared routines and utilities
=========== neutron-lib =========== .. image:: https://governance.openstack.org/tc/badges/neutron-lib.svg .. Change things from this point on Neutron shared routines and utilities. * Free software: Apache license * Documentation: https://docs.openstack.org/neutron-lib/latest/ * Source: https://opendev.org/openstack/neutron-lib * Bugs: https://bugs.launchpad.net/neutron * Release notes: https://docs.openstack.org/releasenotes/neutron-lib/
text/x-rst
null
OpenStack <openstack-discuss@lists.openstack.org>
null
null
Apache-2.0
null
[ "Development Status :: 5 - Production/Stable", "Environment :: OpenStack", "Intended Audience :: Information Technology", "Intended Audience :: System Administrators", "License :: OSI Approved :: Apache Software License", "Operating System :: POSIX :: Linux", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3 :: Only" ]
[]
null
null
>=3.10
[]
[]
[]
[ "pbr>=4.0.0", "SQLAlchemy>=1.2.0", "pecan>=1.0.0", "keystoneauth1>=3.14.0", "netaddr>=0.7.18", "stevedore>=5.6.0", "os-ken>=0.3.0", "oslo.concurrency>=3.26.0", "oslo.config>=8.0.0", "oslo.context>=2.22.0", "oslo.db>=12.1.0", "oslo.i18n>=3.20.0", "oslo.log>=4.3.0", "oslo.messaging>=14.2.0", "oslo.policy>=4.5.0", "oslo.serialization>=2.25.0", "oslo.service>=1.24.0", "oslo.utils>=7.0.0", "oslo.versionedobjects>=1.31.2", "osprofiler>=1.4.0", "setproctitle>=1.1.10", "WebOb>=1.7.1", "os-traits>=0.9.0", "debtcollector>=3.0.0", "osprofiler>=1.4.0; extra == \"osprofiler\"" ]
[]
[]
[]
[ "Bug Tracker, https://bugs.launchpad.net/neutron", "Documentation, https://docs.openstack.org/neutron-lib", "Source Code, https://opendev.org/openstack/neutron-lib" ]
twine/6.2.0 CPython/3.11.14
2026-02-20T10:22:05.407638
neutron_lib-3.24.0.tar.gz
555,702
d1/2a/a755a649b05786d345b78be6a2ce80ee53d7f847fd17424e8e81cfe99eb7/neutron_lib-3.24.0.tar.gz
source
sdist
null
false
93da185474b2418dfa07ffcf87e4c10c
96861925534f3c3a16b0e3de9e6fe3953f58b2b5f695386d8b71aa598bdc649b
d12aa755a649b05786d345b78be6a2ce80ee53d7f847fd17424e8e81cfe99eb7
null
[ "LICENSE" ]
1,301
2.1
odoo-addon-edi-stock-oca
16.0.1.1.2
Define EDI Configuration for Stock
.. image:: https://odoo-community.org/readme-banner-image :target: https://odoo-community.org/get-involved?utm_source=readme :alt: Odoo Community Association ============= Edi Stock Oca ============= .. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! !! This file is generated by oca-gen-addon-readme !! !! changes will be overwritten. !! !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! !! source digest: sha256:ee95e1fe47181f0700521094fcd18c5122f87473e3e5dc054996c1b984645e2a !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! .. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png :target: https://odoo-community.org/page/development-status :alt: Beta .. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png :target: http://www.gnu.org/licenses/agpl-3.0-standalone.html :alt: License: AGPL-3 .. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fedi--framework-lightgray.png?logo=github :target: https://github.com/OCA/edi-framework/tree/16.0/edi_stock_oca :alt: OCA/edi-framework .. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png :target: https://translation.odoo-community.org/projects/edi-framework-16-0/edi-framework-16-0-edi_stock_oca :alt: Translate me on Weblate .. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png :target: https://runboat.odoo-community.org/builds?repo=OCA/edi-framework&target_branch=16.0 :alt: Try me on Runboat |badge1| |badge2| |badge3| |badge4| |badge5| This module provides a base to be extended by local EDI rules for stock. In order to add a new integration for a stock picking, you need to create a listener: .. code-block:: python class MyEventListener(Component): _name = "stock.picking.event.listener.demo" _inherit = "base.event.listener" _apply_on = ["stock.picking"] def on_validate(self, picking): """Add your record-creation code here""" **Table of contents** ..
contents:: :local: Bug Tracker =========== Bugs are tracked on `GitHub Issues <https://github.com/OCA/edi-framework/issues>`_. In case of trouble, please check there if your issue has already been reported. If you spotted it first, help us to smash it by providing a detailed and welcomed `feedback <https://github.com/OCA/edi-framework/issues/new?body=module:%20edi_stock_oca%0Aversion:%2016.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_. Do not contact contributors directly about support or help with technical issues. Credits ======= Authors ~~~~~~~ * Odoo Community Association Contributors ~~~~~~~~~~~~ * Alba Riera <alba.riera@creublanca.es> Maintainers ~~~~~~~~~~~ This module is maintained by the OCA. .. image:: https://odoo-community.org/logo.png :alt: Odoo Community Association :target: https://odoo-community.org OCA, or the Odoo Community Association, is a nonprofit organization whose mission is to support the collaborative development of Odoo features and promote its widespread use. This module is part of the `OCA/edi-framework <https://github.com/OCA/edi-framework/tree/16.0/edi_stock_oca>`_ project on GitHub. You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
null
Odoo Community Association,Odoo Community Association (OCA)
support@odoo-community.org
null
null
AGPL-3
null
[ "Programming Language :: Python", "Framework :: Odoo", "Framework :: Odoo :: 16.0", "License :: OSI Approved :: GNU Affero General Public License v3" ]
[]
https://github.com/OCA/edi-framework
null
>=3.10
[]
[]
[]
[ "odoo-addon-component-event<16.1dev,>=16.0dev", "odoo-addon-edi-oca<16.1dev,>=16.0dev", "odoo<16.1dev,>=16.0a" ]
[]
[]
[]
[]
twine/6.2.0 CPython/3.12.3
2026-02-20T10:21:38.846466
odoo_addon_edi_stock_oca-16.0.1.1.2-py3-none-any.whl
25,135
85/3b/42667067ef96e74d948e86c879128fe372b2d328f281cb8eae47bfc4ebd6/odoo_addon_edi_stock_oca-16.0.1.1.2-py3-none-any.whl
py3
bdist_wheel
null
false
c2ca0cb20fbd324211c89d2ed2383212
91e8c43e919f0a311f24a2b944431b9779c4495e80a3a02691e816d28fdad888
853b42667067ef96e74d948e86c879128fe372b2d328f281cb8eae47bfc4ebd6
null
[]
77
2.4
tcheckerpy
2026.2.20
A Python Interface for TChecker
# tcheckerpy **tcheckerpy** is a Python interface to the [TChecker](https://github.com/Echtzeitsysteme/tchecker/) model checker, allowing you to analyze and compare timed automata models directly from Python. It provides access to the following TChecker tools: - tck-compare - tck-liveness - tck-reach - tck-simulate - tck-syntax For detailed documentation, refer to the [TChecker Wiki](https://github.com/ticktac-project/tchecker/wiki/Using-TChecker). ## Installation You can install `tcheckerpy` via PyPI: ```bash pip install tcheckerpy ``` ## Usage Example ```python # import required tools from tcheckerpy.tools import tck_reach, tck_syntax # read declaration of timed automata network from .txt or .tck file into string with open(system_declaration_path) as file: system = file.read() # raise error if syntax is incorrect tck_syntax.check(system) # perform reachability analysis result, stats, certificate = tck_reach.reach(system, tck_reach.Algorithm.REACH, certificate = tck_reach.Certificate.GRAPH) print(result) print(stats) print(certificate) ``` Example output (based on [ad94.txt](https://github.com/Echtzeitsysteme/tchecker/blob/master/examples/ad94.txt)): ``` False MEMORY_MAX_RSS 41116 REACHABLE false RUNNING_TIME_SECONDS 6.1474e-05 VISITED_STATES 7 VISITED_TRANSITIONS 8 digraph ad94_fig10 { 0 [initial="true", intval="", labels="", vloc="<l0>", zone="(0<=x && 0<=y)"] 1 [intval="", labels="", vloc="<l1>", zone="(1<x && 0<=y && 1<x-y)"] 2 [intval="", labels="", vloc="<l1>", zone="(0<=x && 0<=y && 0<=x-y)"] 3 [intval="", labels="", vloc="<l2>", zone="(1<x && 1<=y)"] 4 [intval="", labels="", vloc="<l2>", zone="(1<=x && 1<=y)"] 5 [intval="", labels="green", vloc="<l3>", zone="(1<x && 0<y && x-y<1)"] 6 [intval="", labels="green", vloc="<l3>", zone="(0<=x && 0<=y && x-y<1)"] 0 -> 2 [vedge="<P@a>"] 1 -> 3 [vedge="<P@b>"] 2 -> 4 [vedge="<P@b>"] 2 -> 6 [vedge="<P@c>"] 5 -> 1 [vedge="<P@a>"] 5 -> 5 [vedge="<P@d>"] 6 -> 2 [vedge="<P@a>"] 6 -> 5 [vedge="<P@d>"] } ```
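Since `stats` comes back as plain `NAME value` lines (as in the example output above), a small helper can turn it into a dict. This is a hedged sketch that assumes only the text shape shown in that output, not any documented tcheckerpy API.

```python
def parse_stats(stats_text: str) -> dict[str, str]:
    """Parse "NAME value" lines, as in the tck_reach example output above."""
    stats = {}
    for line in stats_text.strip().splitlines():
        # Split on the first space: the key never contains spaces here.
        key, _, value = line.partition(" ")
        stats[key] = value
    return stats

sample = "MEMORY_MAX_RSS 41116\nREACHABLE false\nVISITED_STATES 7"
print(parse_stats(sample)["REACHABLE"])  # -> false
```

This makes it easy to branch on fields such as `REACHABLE` or to log `VISITED_STATES` across runs.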
text/markdown
null
Alexander Lieb <alexander.lieb@online.de>, Jacob Deuchert <jacob.deuchert@stud.tu-darmstadt.de>, Paula Troszt <paula.troszt@gmx.de>
null
null
null
null
[ "Programming Language :: Python :: 3", "Operating System :: POSIX :: Linux" ]
[]
null
null
>=3.9
[]
[]
[]
[]
[]
[]
[]
[ "Homepage, https://github.com/Echtzeitsysteme/tcheckerpy", "Issues, https://github.com/Echtzeitsysteme/tcheckerpy/issues" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:21:32.381988
tcheckerpy-2026.2.20.tar.gz
1,162,652
78/f0/64a0f94449641d4e29cfc53601ef8af690cd615aa0b51fa31297baad6b43/tcheckerpy-2026.2.20.tar.gz
source
sdist
null
false
1cca0a887a68a2069db678aa9ff04907
dad96c0ee60f7e18a0fe84a169536d065a4edd3b1c40174f1e4a51b912026537
78f064a0f94449641d4e29cfc53601ef8af690cd615aa0b51fa31297baad6b43
MIT
[ "LICENSE" ]
216
2.4
lhab
0.3
Library Hallucinations Adversarial Benchmark — evaluate LLM code generation for hallucinated libraries.
# LHAB - ***L***ibrary ***H***allucinations ***A***dversarial ***B***enchmark Evaluate LLM code generation for hallucinated (non-existent) libraries. Part of the research paper *Library Hallucinations in LLMs: Risk Analysis Grounded in Developer Queries*. Full dataset and leaderboard available on [HuggingFace](https://huggingface.co/datasets/itsluketwist/LHAB). Source code on [GitHub](https://github.com/itsluketwist/realistic-library-hallucinations). ## *install* ```shell pip install lhab ``` ## *usage* The package exposes three functions: - **`lhab.load_dataset()`** — load the bundled benchmark dataset, returns a dictionary of splits (`control`, `describe`, `specify`), each containing a list of task records. - **`lhab.evaluate_responses(responses_file)`** — evaluate LLM responses against the benchmark, detecting hallucinated libraries. Saves results to a JSON file and returns a dictionary with statistics per split and type, plus all hallucinated library names. - **`lhab.download_pypi_data()`** — download the latest PyPI package list for ground truth validation. Called automatically on first evaluation if the data is not already present. ```python import lhab dataset = lhab.load_dataset() # {"control": [...], "describe": [...], "specify": [...]} results = lhab.evaluate_responses("your_responses.jsonl") # {"control": {...}, "describe": {...}, "specify": {...}, "hallucinations": {...}} ``` A CLI command is also available: ```shell lhab-eval your_responses.jsonl ```
text/markdown
null
Lukas Twist <itsluketwist@gmail.com>
null
null
null
null
[ "Programming Language :: Python", "Programming Language :: Python :: 3" ]
[]
null
null
>=3.11
[]
[]
[]
[ "llm-codegen-research", "requests", "bs4", "pre-commit; extra == \"dev\"", "pytest; extra == \"dev\"", "uv; extra == \"dev\"" ]
[]
[]
[]
[]
uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
2026-02-20T10:21:30.856002
lhab-0.3-py3-none-any.whl
336,177
0b/b4/c7e5c5176673d51ff971e4fa4507d48de5cc1b72a21f4318796ed98d2ea5/lhab-0.3-py3-none-any.whl
py3
bdist_wheel
null
false
0f730aeb5eb04e061fa788361b01fd54
01333bad37c0fd5f028cb121f4f0b774b79c4b322da6ca8f2fec0c8930aa82df
0bb4c7e5c5176673d51ff971e4fa4507d48de5cc1b72a21f4318796ed98d2ea5
null
[ "LICENSE" ]
217
2.4
indestructibleeco-ai
1.0.0
AI generation + YAML governance service for IndestructibleEco platform
# IndestructibleEco AI Service AI generation and YAML governance service for the IndestructibleEco platform. ## Features - Multi-engine inference routing (vLLM, TGI, Ollama, SGLang, TensorRT-LLM, DeepSpeed, LMDeploy) - OpenAI-compatible endpoints - Async job queue with priority scheduling - Vector alignment via quantum-bert-xxl-v1 - .qyaml governance with 5-phase validation - gRPC internal high-performance endpoint ## Installation ```bash pip install indestructibleeco-ai ``` ## License Apache-2.0
text/markdown
IndestructibleEco Platform Team
platform-team@indestructibleeco.io
null
null
Apache-2.0
ai, inference, llm, vllm, governance
[ "Development Status :: 5 - Production/Stable", "Framework :: FastAPI", "Intended Audience :: Developers", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14", "Topic :: Scientific/Engineering :: Artificial Intelligence" ]
[]
null
null
<4.0,>=3.11
[]
[]
[]
[ "celery<5.5.0,>=5.4.0", "fastapi<0.112.0,>=0.111.0", "grpcio<1.65.0,>=1.64.1", "grpcio-tools<1.65.0,>=1.64.1", "numpy<1.27.0,>=1.26.4", "prometheus-client<0.21.0,>=0.20.0", "pydantic<2.8.0,>=2.7.4", "redis<5.1.0,>=5.0.7", "uvicorn[standard]<0.31.0,>=0.30.1" ]
[]
[]
[]
[ "Homepage, https://github.com/indestructibleorg/indestructibleeco", "Repository, https://github.com/indestructibleorg/indestructibleeco" ]
twine/6.2.0 CPython/3.12.3
2026-02-20T10:20:44.354492
indestructibleeco_ai-1.0.0.tar.gz
30,590
45/5b/aff445a0218858fd5e0549b599e96a10b6ed6068bfda3ca47e0e70b00e90/indestructibleeco_ai-1.0.0.tar.gz
source
sdist
null
false
7ebe06d87aa6db9014edbc3ef187e832
ccc8406a292d98b699c54f729e5a99482716ef0a5c6f808bd80fd6aa1b161510
455baff445a0218858fd5e0549b599e96a10b6ed6068bfda3ca47e0e70b00e90
null
[]
229
2.4
akida-ort
1.0.0
Akida ONNX Runtime
# Akida Custom Operators for ONNX Runtime Custom ONNX Runtime operators for Akida neural network hardware acceleration. ## Overview This project provides custom ONNX operators that enable execution of neural network models on Brainchip Akida hardware through ONNX Runtime. It supports both INT8 and INT32 output types via two custom operators, AkidaOpInt8 and AkidaOpInt32. ## Requirements ### Hardware - Akida-compatible device (FPGA v2) ### Software - ONNX Runtime 1.23.0 (automatically downloaded during build) - Akida Python package ## Installation ### 1. Build from Source ```bash # Clone the repository git clone <repository-url> cd AkidaORT # Build the custom operator library ./build_akida_ort.sh python3.11 # The compiled library will be available at: # build/akida_ort_lib.so # The wheel will be available at: # dist/akida_ort-*.whl ``` ### 2. Install Package ```bash # Install from wheel pip install dist/akida_ort-*.whl ``` ## Usage ### Basic Example with AkidaOpInt8 ```python import numpy as np import onnx import onnxruntime as ort import akida import akida_ort # 1. Create and compile your Akida model model = akida.Model() model.add(akida.InputConv2D((64, 64, 3), kernel_size=3, filters=16)) model.add(akida.Conv2D(kernel_size=3, filters=32, activation=akida.ActivationType.ReLU)) model.add(akida.Dense1D(units=10, output_bits=8)) # output_bits == 8, so use AkidaOpInt8 # 2. Map to hardware (first device found) and save program hw_model = akida.Model(model.layers) hw_model.map(akida.devices()[0], hw_only=True) program = hw_model.sequences[0].program # Save the binary program to send it to the operator with open("program.bin", "wb") as f: f.write(program) # 3. Create ONNX model with AkidaOpInt8 operator onnx_model = onnx.parser.parse_model(""" <ir_version: 8, opset_import: ["com.brainchip": 1]> model (uint8[1, 64, 64, 3] X) => (int8[1, ?, ?, ?] Y) { Y = com.brainchip.AkidaOpInt8<program_path="program.bin">(X) } """) # 4. 
Run inference with ONNX Runtime so = ort.SessionOptions() so.register_custom_ops_library(akida_ort.get_library_path()) sess = ort.InferenceSession( onnx_model.SerializeToString(), so, providers=["CPUExecutionProvider"] ) # 5. Execute inference inputs = {"X": np.random.randint(0, 255, (1, 64, 64, 3), dtype=np.uint8)} outputs = sess.run(None, inputs) print(f"Output shape: {outputs[0].shape}, dtype: {outputs[0].dtype}") ```
text/markdown
Nicolas Guilbaud
nguilbaud@brainchip.com
null
null
Proprietary
null
[]
[]
https://onnx.brainchip.com/
null
>=3.10
[]
[]
[]
[ "akida", "onnxruntime~=1.23.0" ]
[]
[]
[]
[]
twine/5.1.0 CPython/3.11.9
2026-02-20T10:19:45.590553
akida_ort-1.0.0-cp312-cp312-manylinux_2_28_x86_64.whl
8,100,637
c6/60/b8d2edb56715dae7f33b8a84a80c0d45ea4d51f01307055204e6d15a7524/akida_ort-1.0.0-cp312-cp312-manylinux_2_28_x86_64.whl
cp312
bdist_wheel
null
false
9c85c7548fbf304a1aa76e366821a821
0401fbe9a99fb8b33dede729cd24fd359223d55e18e9db6c1680b63ad7844dd4
c660b8d2edb56715dae7f33b8a84a80c0d45ea4d51f01307055204e6d15a7524
null
[]
234
2.4
gaitkit
1.2.4
Universal gait event detection toolkit — 10 methods including BIKE, with C-accelerated backends
# gaitkit (Python) Python package for gait event detection from motion-capture data. ## Install Install from PyPI: ```bash pip install gaitkit ``` Or install from local source: ```bash python -m pip install -e ./python python -m pip install -e "./python[all]" # optional extras ``` Optional extras: ```bash pip install "gaitkit[all]" # onnx + deep + viz ``` ## Quick Start ```python import gaitkit trial = gaitkit.load_example("healthy") result = gaitkit.detect(trial, method="bike") print(result.summary()) # Optional: combine C3D markers with an external angle file result2 = gaitkit.detect("trial_07.c3d", method="bike", angles="res_angles_t.mat") ``` DeepEvent weights are downloaded automatically on first use and cached in `~/.cache/gaitkit/`. ## Proprietary JSON I/O (MyoGait-like) The compatibility API accepts a proprietary JSON payload with `angles.frames` and can export back a MyoGait-compatible `events` JSON. ```python import gaitkit out = gaitkit.detect_events_structured("bike", "myogait_output_no_events.json", fps=100.0) paths = gaitkit.export_detection(out, "outputs/trial_07", formats=("json", "myogait")) print(paths) ``` Input fields recognized per frame: - `frame_idx` - `hip_L`, `knee_L`, `ankle_L` - `hip_R`, `knee_R`, `ankle_R` - `pelvis_tilt`, `trunk_angle` - optional `landmark_positions` MyoGait-compatible output file (`*_myogait_events.json`) contains: - `events.method` - `events.fps` - `events.left_hs`, `events.right_hs`, `events.left_to`, `events.right_to` as arrays of `{frame, time, confidence}`. ## Project - Repository: https://github.com/IDMDataHub/gaitkit - Issue tracker: https://github.com/IDMDataHub/gaitkit/issues - Reproducibility: https://github.com/IDMDataHub/gaitkit/blob/master/REPRODUCIBILITY.md ## Development testing ```bash python -m pip install -e ./python python -m unittest -v ``` ## Build distributions ```bash python -m pip install build cd python python -m build ```
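A minimal input payload using the per-frame fields listed in the description can be assembled with the standard library alone. Note the `{"angles": {"frames": [...]}}` envelope is inferred from the description of the compatibility API; verify it against a real MyoGait export before relying on it.

```python
import json

# Hedged sketch: build a MyoGait-like payload from the per-frame fields
# the compatibility API is described as recognizing. The exact envelope
# beyond "angles.frames" is an assumption.
def make_payload(frames):
    return {"angles": {"frames": frames}}

frames = [
    {
        "frame_idx": i,
        "hip_L": 10.0, "knee_L": 20.0, "ankle_L": 5.0,
        "hip_R": 11.0, "knee_R": 21.0, "ankle_R": 6.0,
        "pelvis_tilt": 3.0, "trunk_angle": 2.0,
    }
    for i in range(3)
]
payload = make_payload(frames)
serialized = json.dumps(payload)  # write this to a file, then pass the
                                  # path to detect_events_structured(...)
```

The optional `landmark_positions` field can be added per frame in the same way.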
text/markdown
null
Frédéric Fer <f.fer@institut-myologie.org>
null
null
null
gait, biomechanics, motion-capture, heel-strike, toe-off, event-detection
[ "Development Status :: 4 - Beta", "Intended Audience :: Science/Research", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: C", "Topic :: Scientific/Engineering :: Medical Science Apps." ]
[]
null
null
>=3.10
[]
[]
[]
[ "numpy>=1.23", "scipy>=1.10", "pandas>=1.5", "ezc3d>=1.5", "onnxruntime>=1.14; extra == \"onnx\"", "tensorflow>=2.12; extra == \"deep\"", "h5py; extra == \"deep\"", "matplotlib>=3.7; extra == \"viz\"", "pytest>=7.0; extra == \"test\"", "coverage>=7.0; extra == \"test\"", "onnxruntime>=1.14; extra == \"all\"", "tensorflow>=2.12; extra == \"all\"", "h5py; extra == \"all\"", "matplotlib>=3.7; extra == \"all\"" ]
[]
[]
[]
[ "Homepage, https://github.com/IDMDataHub/gaitkit", "Documentation, https://github.com/IDMDataHub/gaitkit#readme", "Repository, https://github.com/IDMDataHub/gaitkit", "Issues, https://github.com/IDMDataHub/gaitkit/issues", "Changelog, https://github.com/IDMDataHub/gaitkit/blob/master/CHANGELOG.md" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:19:16.880666
gaitkit-1.2.4.tar.gz
11,899,880
df/d5/d569eea54bc7217ffed60e5afaccc476ca0972d64d9d9376ab711d0d37c7/gaitkit-1.2.4.tar.gz
source
sdist
null
false
8d6337f0a6652b1f4b61bb38244fbf2f
bcbae4c7ef6207622af4ac8ee63c16bf43f2fc29e2288453056c18fb8af049d2
dfd5d569eea54bc7217ffed60e5afaccc476ca0972d64d9d9376ab711d0d37c7
MIT
[ "LICENSE" ]
262
2.4
mld-sdk
0.7.0
MLD Plugin SDK - Build analysis plugins for the MLD platform
# MLD SDK (Python) SDK for building analysis plugins that integrate with the MLD platform. > **Full Documentation:** See the [comprehensive docs](../../docs/index.md) for detailed API reference and guides. > - [API Reference](../../docs/python/api-reference.md) > - [Plugin Development Guide](../../docs/python/plugin-guide.md) > - [Exception Handling](../../docs/python/exceptions.md) ## Installation ```bash # From PyPI (when published) uv add mld-sdk # From git uv add git+https://github.com/EstrellaXD/mld#subdirectory=sdk ``` ## Quick Start Create a plugin by implementing the `AnalysisPlugin` interface: ```python from mld_sdk import AnalysisPlugin, PluginMetadata, PluginCapabilities from fastapi import APIRouter router = APIRouter() @router.get("/hello") async def hello(): return {"message": "Hello from my plugin!"} class MyPlugin(AnalysisPlugin): @property def metadata(self) -> PluginMetadata: return PluginMetadata( name="My Plugin", version="1.0.0", description="My analysis plugin", analysis_type="metabolomics", routes_prefix="/my-plugin", capabilities=PluginCapabilities( requires_auth=True, requires_experiments=True, ), ) def get_routers(self): return [(router, "")] async def initialize(self, context=None): self._context = context async def shutdown(self): pass ``` ## Plugin Package Structure ``` mld-plugin-example/ ├── pyproject.toml ├── README.md └── src/mld_plugin_example/ ├── __init__.py └── plugin.py ``` ### pyproject.toml ```toml [project] name = "mld-plugin-example" version = "1.0.0" dependencies = ["mld-sdk>=1.0.0"] [project.entry-points."mld.plugins"] example = "mld_plugin_example.plugin:MyPlugin" [build-system] requires = ["hatchling"] build-backend = "hatchling.build" [tool.hatch.build.targets.wheel] packages = ["src/mld_plugin_example"] ``` The entry point `mld.plugins` is how the platform discovers your plugin. 
## Platform Context When running integrated with the platform, your plugin receives a `PlatformContext` that provides access to: - Authentication dependencies (`get_current_user_dependency()`) - Repositories for experiments, samples, users, etc. - Platform configuration ```python async def initialize(self, context=None): self._context = context if context: # Running integrated - use platform services self.experiment_repo = context.get_experiment_repository() else: # Running standalone pass ``` ## Installation Commands ```bash # Install from GitHub uv add git+https://github.com/org/mld-plugin-example # Install specific version uv add git+https://github.com/org/mld-plugin-example@v1.0.0 # Install from PyPI uv add mld-plugin-example # Install local plugin for development uv add --editable ./my-plugin ```
text/markdown
null
MorscherLab <morscher@chem.ethz.ch>
null
null
null
null
[ "Development Status :: 4 - Beta", "Framework :: FastAPI", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.12" ]
[]
null
null
>=3.12
[]
[]
[]
[ "fastapi>=0.109.0", "pydantic>=2.0.0", "aiosqlite>=0.19.0; extra == \"dev\"", "pytest-asyncio>=0.21.0; extra == \"dev\"", "pytest>=7.0.0; extra == \"dev\"", "sqlmodel>=0.0.16; extra == \"dev\"", "aiosqlite>=0.19.0; extra == \"local-db\"", "sqlmodel>=0.0.16; extra == \"local-db\"" ]
[]
[]
[]
[ "Homepage, https://github.com/MorscherLab/mld", "Documentation, https://github.com/MorscherLab/mld/tree/main/packages/sdk-python#readme", "Repository, https://github.com/MorscherLab/mld" ]
uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
2026-02-20T10:18:39.652229
mld_sdk-0.7.0.tar.gz
35,798
73/1f/554f25c5405fa3eff3b76ecf6dee61fb42ac9c9e727434c5f02ff21079a5/mld_sdk-0.7.0.tar.gz
source
sdist
null
false
cbf81a37fa474f60ca1150571838b38b
6262a9e7afbedc2bbd4b85b8772c2d40d39cd9245ae7be6b6fbc4efff7cd6d2e
731f554f25c5405fa3eff3b76ecf6dee61fb42ac9c9e727434c5f02ff21079a5
MIT
[]
245
2.4
mankinds-eval
0.1.1
Open source Python library providing evaluation methods for AI systems
# mankinds-eval **Open source Python library for AI evaluation** [Documentation](https://mankinds-io.github.io/mankinds-eval) | [Metrics and Features](#metrics-and-features) | [Quick Start](#quick-start) | [Examples](https://github.com/mankinds-io/mankinds-eval/tree/main/examples) --- **mankinds-eval** is a modular, open-source evaluation framework for LLM applications. It provides a library of evaluation methods that you assemble to build custom scorers tailored to your specific use cases. Whether you're building RAG pipelines, chatbots, AI agents, or any LLM-based application, mankinds-eval lets you combine heuristic checks (fast, free, deterministic), ML-based analysis, and LLM-as-Judge evaluations. Run everything locally or use external providers. You control the trade-offs. --- ## Metrics and Features - Large variety of evaluation methods powered by heuristics, ML models (running locally), or any LLM provider: - **Heuristic methods:** - ExactMatch, FuzzyMatch, RegexMatch - ContainsAll, ContainsAny - BLEU, ROUGE - TextLength, WordCount, SentenceCount - JSONValid, JSONSchema - NoRefusal - **ML methods** (local models via transformers): - EmbeddingsSimilarity - SentimentAnalysis - Toxicity - PIIDetection - LanguageDetection - ZeroShotClassification - **LLM-as-Judge methods:** - Faithfulness - AnswerRelevancy - Coherence - Helpfulness - Correctness - SingleCriterionJudge (custom criteria) - MultiCriteriaJudge (weighted scoring) - PairwiseJudge (A vs B comparison) - ConsensusJudge (multi-LLM voting) - Compose methods into pipelines with aggregation modes (all, any, weighted, sequential). - Pre-built presets for common scenarios: RAGScorer, SafetyScorer. - Load evaluation data from CSV, JSONL, JSON, or HuggingFace Datasets. - Export results to JSON or HTML scorecards. - CLI for CI/CD integration. - Define scorers in Python or YAML configuration files. 
--- ## Quick Start ### Installation ```bash pip install mankinds-eval ``` ### Writing your first evaluation Create a file `evaluate.py`: ```python from mankinds_eval import Scorer from mankinds_eval.methods.heuristic import FuzzyMatch, ROUGE scorer = Scorer( name="qa_scorer", methods=[ FuzzyMatch(threshold=0.7), ROUGE(threshold=0.4), ] ) test_case = { "input": "What if these shoes don't fit?", "output": "You have 30 days to get a full refund at no extra cost.", "expected": "We offer a 30-day full refund at no extra costs.", } results = scorer.run_sync([test_case]) print(results.summary) ``` Run it: ```bash python evaluate.py ``` Let's break down what happened: - The `input` field contains the user query, `output` is the LLM response you want to evaluate. - The `expected` field is the reference answer used by methods like FuzzyMatch and ROUGE. - `FuzzyMatch` computes string similarity using token-based matching. A score >= 0.7 passes. - `ROUGE` measures overlap between the output and expected text. A score >= 0.4 passes. - All method scores range from 0 to 1. The `threshold` determines if the evaluation passes. --- ## Evaluating with LLM-as-Judge For semantic evaluation that goes beyond string matching, use LLM-as-Judge methods. These use an LLM to evaluate outputs based on criteria you define. ```bash pip install mankinds-eval[llm] export OPENAI_API_KEY="your-api-key" ``` ```python from mankinds_eval import Scorer from mankinds_eval.methods.llm import Faithfulness, AnswerRelevancy scorer = Scorer( name="rag_evaluator", methods=[ Faithfulness(provider="openai", threshold=0.7), AnswerRelevancy(provider="openai", threshold=0.7), ] ) test_case = { "input": "What is the refund policy?", "output": "You can get a full refund within 30 days.", "context": "All customers are eligible for a 30-day full refund at no extra cost.", } results = scorer.run_sync([test_case]) ``` - `Faithfulness` checks if the output is grounded in the provided context (no hallucinations). 
- `AnswerRelevancy` checks if the output actually addresses the input question. - The `provider` parameter specifies which LLM to use (openai, anthropic, etc.). --- ## Evaluating with ML Models For local evaluation without API calls, use ML methods that run transformer models on your machine. ```bash pip install mankinds-eval[ml] ``` ```python from mankinds_eval import Scorer from mankinds_eval.methods.ml import Toxicity, EmbeddingsSimilarity scorer = Scorer( name="safety_check", methods=[ Toxicity(threshold=0.5), EmbeddingsSimilarity(threshold=0.8), ] ) results = scorer.run_sync([test_case]) ``` - `Toxicity` uses a local model to detect harmful content. Lower scores are better. - `EmbeddingsSimilarity` computes semantic similarity using sentence embeddings. --- ## Evaluating a Dataset Evaluate multiple samples at once by passing a list or loading from a file: ```python from mankinds_eval import Scorer, load_samples from mankinds_eval.methods.heuristic import ExactMatch scorer = Scorer(name="batch_eval", methods=[ExactMatch()]) # From a list samples = [ {"input": "Q1", "output": "A", "expected": "A"}, {"input": "Q2", "output": "B", "expected": "C"}, ] results = scorer.run_sync(samples) # Or from a file samples = load_samples("data.jsonl") results = scorer.run_sync(samples) # Export results results.to_json("results.json") results.to_html("scorecard.html") ``` Supported formats: CSV, JSONL, JSON, HuggingFace Datasets. 
--- ## Using Config Files Define scorers in YAML for reproducibility and CI/CD integration: ```yaml # scorer.yaml name: qa_scorer methods: - type: heuristic.FuzzyMatch threshold: 0.7 algorithm: token_set_ratio - type: heuristic.ROUGE threshold: 0.4 - type: llm.Faithfulness provider: openai threshold: 0.7 ``` Load and run: ```python from mankinds_eval import Scorer scorer = Scorer.from_config("scorer.yaml") results = scorer.run_sync("data.jsonl") ``` Or use the CLI: ```bash mankinds-eval run -c scorer.yaml -d data.jsonl -o results.json --html scorecard.html ``` --- ## Composite Pipelines Combine methods with different aggregation logic: ```python from mankinds_eval import Scorer from mankinds_eval.methods import CompositeMethod from mankinds_eval.methods.heuristic import TextLength, NoRefusal, FuzzyMatch quality_gate = CompositeMethod( name="quality_gate", methods=[ TextLength(min_length=50, max_length=500), NoRefusal(), FuzzyMatch(threshold=0.6), ], mode="all", # all checks must pass ) scorer = Scorer(name="eval", methods=[quality_gate]) results = scorer.run_sync(data) ``` Aggregation modes: - `all` - all methods must pass (AND logic) - `any` - at least one method must pass (OR logic) - `weighted` - weighted average of scores - `sequential` - methods run in order, sharing results --- ## Pre-built Presets Use presets for common evaluation scenarios: ```python from mankinds_eval import Scorer from mankinds_eval.methods.presets import RAGScorer, SafetyScorer # RAG evaluation: Faithfulness + Relevancy + Coherence rag_methods = RAGScorer.create(provider="openai", threshold=0.7) # Safety evaluation: Toxicity + PII + Refusal detection safety_methods = SafetyScorer.create(check_toxicity=True, check_pii=True) scorer = Scorer(name="full_eval", methods=rag_methods + safety_methods) ``` --- ## Available Methods | Category | Method | Description | | --------- | ------------------------ | ---------------------------------------------------------------- | | Heuristic | 
`ExactMatch` | Exact string comparison | | Heuristic | `FuzzyMatch` | Fuzzy string similarity (Levenshtein, Jaro-Winkler, token-based) | | Heuristic | `RegexMatch` | Regular expression matching | | Heuristic | `ContainsAll` | Check if output contains all keywords | | Heuristic | `ContainsAny` | Check if output contains any keyword | | Heuristic | `BLEU` | BLEU score for translation quality | | Heuristic | `ROUGE` | ROUGE score for summarization | | Heuristic | `TextLength` | Validate text length | | Heuristic | `JSONValid` | Validate JSON syntax | | Heuristic | `JSONSchema` | Validate against JSON Schema | | Heuristic | `NoRefusal` | Detect LLM refusals | | ML | `EmbeddingsSimilarity` | Semantic similarity via embeddings | | ML | `Toxicity` | Detect toxic content | | ML | `SentimentAnalysis` | Analyze sentiment | | ML | `PIIDetection` | Detect PII/NER entities | | ML | `LanguageDetection` | Detect/verify language | | ML | `ZeroShotClassification` | Zero-shot text classification | | LLM | `Faithfulness` | Check if response is grounded in context | | LLM | `AnswerRelevancy` | Check if response addresses the question | | LLM | `Coherence` | Evaluate logical flow and clarity | | LLM | `Helpfulness` | Evaluate practical utility | | LLM | `Correctness` | Compare against expected answer | | LLM | `SingleCriterionJudge` | Custom single criterion evaluation | | LLM | `MultiCriteriaJudge` | Multi-criteria weighted scoring | | LLM | `PairwiseJudge` | Compare two responses | | LLM | `ConsensusJudge` | Multi-LLM consensus evaluation | --- ## Data Format Sample structure for evaluation: ```python { "input": "User question or prompt", "output": "AI response to evaluate", "expected": "Optional expected response", "context": "Optional RAG context", "conversation": [{"role": "user", "content": "..."}], # Optional "metadata": {} # Optional } ``` --- ## Development ```bash git clone https://github.com/mankinds-io/mankinds-eval.git cd mankinds-eval pip install -e ".[dev]" pytest # Run 
tests ruff check . # Lint mypy mankinds_eval # Type check ``` --- ## License Apache 2.0 - See [LICENSE](LICENSE) for details.
text/markdown
null
Mankinds team <team@mankinds.io>
null
null
null
llm, evaluation, ai, testing, rag, nlp, machine-learning, llm-as-judge, ai-evaluation, deepeval, ragas
[ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "Intended Audience :: Science/Research", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Topic :: Scientific/Engineering :: Artificial Intelligence", "Typing :: Typed", "Natural Language :: English" ]
[]
null
null
>=3.9
[]
[]
[]
[ "pyyaml>=6.0", "rapidfuzz>=3.0", "sacrebleu>=2.0", "rouge-score>=0.1.2", "typer>=0.9.0", "jsonschema>=4.0", "langdetect>=1.0.9", "rich>=13.0.0", "tenacity>=8.0.0", "torch>=2.0; extra == \"ml\"", "transformers>=4.30.0; extra == \"ml\"", "sentence-transformers>=2.2.0; extra == \"ml\"", "litellm>=1.0.0; extra == \"llm\"", "mankinds-eval[llm,ml]; extra == \"all\"", "pytest>=7.0.0; extra == \"dev\"", "pytest-asyncio>=0.21.0; extra == \"dev\"", "pytest-cov>=4.0.0; extra == \"dev\"", "ruff>=0.1.0; extra == \"dev\"", "mypy>=1.0.0; extra == \"dev\"", "mkdocs>=1.5.0; extra == \"dev\"", "mkdocs-material>=9.0.0; extra == \"dev\"", "mkdocstrings[python]>=0.24.0; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/mankinds-io/mankinds-eval", "Documentation, https://mankinds-io.github.io/mankinds-eval", "Repository, https://github.com/mankinds-io/mankinds-eval", "Changelog, https://github.com/mankinds-io/mankinds-eval/blob/main/CHANGELOG.md", "Issues, https://github.com/mankinds-io/mankinds-eval/issues" ]
twine/6.2.0 CPython/3.12.10
2026-02-20T10:18:18.519759
mankinds_eval-0.1.1.tar.gz
77,052
24/b6/3ba77d37de587fce8c05b111c02d65d6488c70781eedd501e4a9df703093/mankinds_eval-0.1.1.tar.gz
source
sdist
null
false
0577eec626654ff119a59a693880452f
2b01b3f660ac2c6e958032a4fd1424b1d41abff61dde092c43ea34cf39f50b90
24b63ba77d37de587fce8c05b111c02d65d6488c70781eedd501e4a9df703093
Apache-2.0
[ "LICENSE" ]
219
2.4
drs4
0.3.0
Control and data acquisition software for FINER/DRS4
# DRS4 [![Release](https://img.shields.io/pypi/v/drs4?label=Release&color=cornflowerblue&style=flat-square)](https://pypi.org/project/drs4/) [![Python](https://img.shields.io/pypi/pyversions/drs4?label=Python&color=cornflowerblue&style=flat-square)](https://pypi.org/project/drs4/) [![Downloads](https://img.shields.io/pypi/dm/drs4?label=Downloads&color=cornflowerblue&style=flat-square)](https://pepy.tech/project/drs4) [![DOI](https://img.shields.io/badge/DOI-10.5281/zenodo.18709559-cornflowerblue?style=flat-square)](https://doi.org/10.5281/zenodo.18709559) [![Tests](https://img.shields.io/github/actions/workflow/status/finerreceiver/drs4/tests.yaml?label=Tests&style=flat-square)](https://github.com/finerreceiver/drs4/actions) Control and data acquisition for FINER/DRS4 ## Installation ```bash pip install drs4 ```
text/markdown
null
Akio Taniguchi <a-taniguchi@mail.kitami-it.ac.jp>
null
null
MIT License Copyright (c) 2024-2026 Akio Taniguchi Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
astronomy, drs4, finer, lmt, millimeter, python, radio-astronomy, single-dish, submillimeter
[ "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14" ]
[]
null
null
<3.15,>=3.10
[]
[]
[]
[ "dateparser<2,>=1", "fire<1,>=0.7", "matplotlib<4,>=3", "numpy<3,>=2", "pillow<11,>=10", "tqdm<5,>=4", "typing-extensions<5,>=4", "xarray-dataclasses<2,>=1", "xarray<2027,>=2024", "zarr<3,>=2" ]
[]
[]
[]
[ "homepage, https://finerreceiver.github.io/drs4", "repository, https://github.com/finerreceiver/drs4" ]
uv/0.9.30 {"installer":{"name":"uv","version":"0.9.30","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"12","id":"bookworm","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
2026-02-20T10:18:14.933273
drs4-0.3.0-py3-none-any.whl
25,693
3f/2f/4d6ee35103bd31843de944e8b8228372a2ba635259cde24b02f1eb8bdc2d/drs4-0.3.0-py3-none-any.whl
py3
bdist_wheel
null
false
ace5b94cdc594cb2ed18da225e673c75
ec943986902c333a399fbcf40b33e0a24b2186b6ea55e5497a45edcea6763db4
3f2f4d6ee35103bd31843de944e8b8228372a2ba635259cde24b02f1eb8bdc2d
null
[ "LICENSE" ]
196
2.4
onnx2akida
0.7.0
Evaluates the compatibility of an ONNX model with Akida.
# onnx2akida Evaluates the compatibility of an ONNX model with Akida
null
Brainchip
null
null
Johan Mejia <jmejia@brainchip.com>
null
null
[]
[]
null
null
>=3.10
[]
[]
[]
[ "setuptools>=77.0.3", "packaging>=24.2", "setuptools_scm", "cnn2snn~=2.19.0", "tqdm", "onnx_ir==0.1.9", "akida_ort; sys_platform == \"linux\"", "pytest; extra == \"test\"", "pytest-console-scripts; extra == \"test\"" ]
[]
[]
[]
[ "Documentation, https://onnx.brainchip.com" ]
twine/5.1.0 CPython/3.11.9
2026-02-20T10:18:13.948947
onnx2akida-0.7.0.tar.gz
30,956
14/84/83720b400c642e9426d0395630b0bf8573544584517a360b9122f1693077/onnx2akida-0.7.0.tar.gz
source
sdist
null
false
3e5cc7a81cb6298261ab5933cf3a2e32
d4e2bdace9f07f301efdc638447a9983d894ad7cd657736080d370d59475da75
148483720b400c642e9426d0395630b0bf8573544584517a360b9122f1693077
null
[]
141
2.4
ccs-digitalmarketplace-apiclient
37.15.0
API clients for Digital Marketplace Data API and Search API.
Digital Marketplace API client ========================= ![Python 3.11](https://img.shields.io/badge/python-3.11-blue.svg) ![Python 3.12](https://img.shields.io/badge/python-3.12-blue.svg) ![Python 3.13](https://img.shields.io/badge/python-3.13-blue.svg) ![Python 3.14](https://img.shields.io/badge/python-3.14-blue.svg) [![PyPI version](https://badge.fury.io/py/ccs-digitalmarketplace-apiclient.svg)](https://badge.fury.io/py/ccs-digitalmarketplace-apiclient) ## What's in here? API clients for Digital Marketplace [Data API](https://github.com/Crown-Commercial-Service/ccs-digitalmarketplace-api) and [Search API](https://github.com/Crown-Commercial-Service/ccs-digitalmarketplace-search-api). It was originally part of [Digital Marketplace Utils](https://github.com/Crown-Commercial-Service/ccs-digitalmarketplace-utils). ## Running the tests Install Python dependencies: ``` make bootstrap invoke requirements-dev ``` Run the tests: ``` invoke test ``` ## Usage examples ```python data_client = apiclient.DataAPIClient(api_url, api_access_token) services = data_client.find_services_iter(framework=frameworks) ``` ## Releasing a new version To update the package version, edit the `__version__ = ...` string in `dmapiclient/__init__.py`, commit and push the change, and wait for CI to create a new version tag. Once the tag is available on GitHub, the new version can be used by the apps by adding the following line to the app `requirements.txt` (replacing `X.Y.Z` with the current version number): ``` git+https://github.com/Crown-Commercial-Service/ccs-digitalmarketplace-apiclient.git@X.Y.Z#egg=ccs-digitalmarketplace-apiclient==X.Y.Z ``` When changing a major version number, consider adding a record to the `CHANGELOG.md` with a description of the change and an example of the upgrade process for the client apps. ## Pre-commit hooks This project has a [pre-commit hook][pre-commit hook] to do some general file checks and check the `pyproject.toml`. 
Follow the [Quick start][pre-commit quick start] to see how to set this up in your local checkout of this project. ## Licence Unless stated otherwise, the codebase is released under [the MIT License][mit]. This covers both the codebase and any sample code in the documentation. The documentation is [&copy; Crown copyright][copyright] and available under the terms of the [Open Government 3.0][ogl] licence. [mit]: LICENCE [copyright]: http://www.nationalarchives.gov.uk/information-management/re-using-public-sector-information/uk-government-licensing-framework/crown-copyright/ [ogl]: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/ [pre-commit hook]: https://pre-commit.com/ [pre-commit quick start]: https://pre-commit.com/#quick-start
text/markdown
GDS Developers, CCS Developers
null
null
null
null
null
[]
[]
null
null
<3.15,>=3.11
[]
[]
[]
[ "requests<3,>=2.18.4", "Flask>=3.0.3", "ruff==0.15.1; extra == \"dev\"", "mypy==1.19.1; extra == \"dev\"", "pytest==9.0.2; extra == \"dev\"", "pytest-cov==7.0.0; extra == \"dev\"", "requests-mock==1.12.1; extra == \"dev\"", "types-requests; extra == \"dev\"", "pre-commit==4.5.1; extra == \"dev\"", "ccs-digitalmarketplace-test-utils==7.7.0; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/Crown-Commercial-Service/ccs-digitalmarketplace-apiclient", "Repository, https://github.com/Crown-Commercial-Service/ccs-digitalmarketplace-apiclient.git", "Issues, https://github.com/Crown-Commercial-Service/ccs-digitalmarketplace-apiclient/issues", "Changelog, https://github.com/Crown-Commercial-Service/ccs-digitalmarketplace-apiclient/CHANGELOG.md" ]
twine/6.2.0 CPython/3.11.14
2026-02-20T10:17:49.392771
ccs_digitalmarketplace_apiclient-37.15.0.tar.gz
24,972
a7/a0/9fff2e15e5d67f610747f2827da8b48cf4bc65dd5601c0008e389e4e41f4/ccs_digitalmarketplace_apiclient-37.15.0.tar.gz
source
sdist
null
false
a52b0d3d7effa378544c419409d8bdbc
ffe4a4ac3ef1ba75521ee841159aa2eef625659899a616e24adb3c4de3422e8c
a7a09fff2e15e5d67f610747f2827da8b48cf4bc65dd5601c0008e389e4e41f4
null
[ "LICENCE" ]
206
2.4
akida-models
1.13.1
Akida Models
# Akida models This package contains a zoo of TensorFlow/Keras-defined models that can be quantized and are compatible with Akida conversion.
text/markdown
Kevin Tsiknos
ktsiknos@brainchip.com
null
null
Apache 2.0
null
[]
[]
https://doc.brainchipinc.com
null
>=3.10
[]
[]
[]
[ "cnn2snn~=2.19.0", "quantizeml~=1.2.2", "scipy", "opencv-python", "mtcnn==0.1.1", "imaug", "trimesh", "tqdm", "tensorflow-datasets", "librosa", "soundata", "pydub", "pytest; extra == \"test\"", "pytest-rerunfailures; extra == \"test\"", "pytest-xdist; extra == \"test\"" ]
[]
[]
[]
[]
twine/5.1.0 CPython/3.11.9
2026-02-20T10:17:37.528018
akida_models-1.13.1.tar.gz
178,934
17/e8/713896af6601a4da0e76db4c6882bb28bf4a997fe57fb2709e70e8599a8a/akida_models-1.13.1.tar.gz
source
sdist
null
false
0ffdd5b1da6108fb2052deaedc0d75fd
fc7bf8a4a88d91f8acae8efd8c845f06c871c23cdca321c19148d7a28bcca1c7
17e8713896af6601a4da0e76db4c6882bb28bf4a997fe57fb2709e70e8599a8a
null
[]
151
2.4
datapizza-ai-core
0.0.22
Core components for the datapizza-ai framework
This is the core of the datapizza-ai framework.
text/markdown
null
null
null
null
MIT
null
[ "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python :: 3" ]
[]
null
null
<3.14,>=3.10.0
[]
[]
[]
[ "jinja2>=3.1.6", "jsonref>=1.1.0", "mcp>=1.20.0", "opentelemetry-api>=1.33.1", "opentelemetry-sdk>=1.33.1", "pydantic<3.0.0,>=2.10.5", "python-dotenv>=1.0.1", "pyyaml>=6.0.2", "rich>=14.1.0", "typing-extensions<5.0.0,>=4.12.2" ]
[]
[]
[]
[]
python-httpx/0.28.1
2026-02-20T10:17:29.628436
datapizza_ai_core-0.0.22-py3-none-any.whl
105,660
a9/b9/7a29eb0b0499050578db8f00c59e44e1fc8b319e3257c4d9cd6a6b12ce8e/datapizza_ai_core-0.0.22-py3-none-any.whl
py3
bdist_wheel
null
false
db85fae5cbea7594fb1c0444f1bb4bc6
25fac31936896e91f4b35b3a8b7d9d6fa49c54b8fa78827de80e007ef6e43d32
a9b97a29eb0b0499050578db8f00c59e44e1fc8b319e3257c4d9cd6a6b12ce8e
null
[]
310
2.4
quantizeml
1.2.3
Base layers and quantization tools
# QuantizeML A framework for quantizing deep-learning models. It supports quantizing CNN models to low-bitwidth weights and outputs.
text/markdown
Kevin Tsiknos
ktsiknos@brainchip.com
Kevin Tsiknos
ktsiknos@brainchip.com
Apache 2.0
null
[]
[]
https://doc.brainchipinc.com
null
>=3.10
[]
[]
[]
[ "tf_keras~=2.19.0", "onnxruntime~=1.23.0", "onnxscript~=0.4.0", "onnx_ir<=0.1.12,>=0.1.9", "onnxruntime_extensions<0.14.0", "scipy", "pytest; extra == \"test\"", "pytest-rerunfailures; extra == \"test\"", "pytest-console-scripts; extra == \"test\"", "pytest-xdist; extra == \"test\"", "tensorflow_datasets; extra == \"test\"", "keras_hub; extra == \"test\"", "tensorflow_text~=2.19.0; sys_platform == \"linux\" and extra == \"test\"", "matplotlib; extra == \"analysis\"", "tensorboardX; extra == \"analysis\"" ]
[]
[]
[]
[]
twine/5.1.0 CPython/3.11.9
2026-02-20T10:17:10.162663
quantizeml-1.2.3.tar.gz
179,069
cf/c9/e49171a242c693d551eb5d9499ebcf676c6d8bbf83d6f96b082c30f914c7/quantizeml-1.2.3.tar.gz
source
sdist
null
false
511c2babcf1e0973ffb4b74282ba40a0
b0caa2bcdff7865b4e44c604effb3297794e508afc56e7387205e93dbdba2b0c
cfc9e49171a242c693d551eb5d9499ebcf676c6d8bbf83d6f96b082c30f914c7
null
[]
163
2.4
cnn2snn
2.19.1
Keras to Akida CNN Converter
# CNN2SNN toolkit The Brainchip CNN2SNN toolkit provides a means to convert Convolutional Neural Networks (CNN) that were trained using Deep Learning methods to a low-latency and low-power Spiking Neural Network (SNN) for use with the Akida Runtime.
text/markdown
Johan Mejia
jmejia@brainchip.com
null
null
Apache 2.0
null
[]
[]
https://doc.brainchipinc.com
null
>=3.10
[]
[]
[]
[ "tf_keras~=2.19.0", "tensorflow~=2.19.0", "akida==2.19.1", "quantizeml~=1.2.2" ]
[]
[]
[]
[]
twine/5.1.0 CPython/3.11.9
2026-02-20T10:16:45.390555
cnn2snn-2.19.1.tar.gz
83,379
91/f5/0035095e9cfd1dad7667eab2e4d171b8ba0dd4b334b1a6453dbff0f3ea1b/cnn2snn-2.19.1.tar.gz
source
sdist
null
false
ac374d5f108e2a3f9f8c5fb042f90516
b2e5fc8f1ec1992fe797404af2bb516423316ed84aae88b4cb63cc3fd1d8bcf2
91f50035095e9cfd1dad7667eab2e4d171b8ba0dd4b334b1a6453dbff0f3ea1b
null
[]
158
2.4
leeroo-kapso
0.1.5
A Knowledge-grounded framework for Autonomous Program Synthesis and Optimization
<h1 align="center">Kapso</h1> <h4 align="center">A Knowledge-grounded framework for Autonomous AI/ML Program Synthesis and Optimization</h4> <p align="center"> <a href="https://docs.leeroo.com">Learn more</a> · <a href="https://discord.gg/hqVbPNNEZM">Join Discord</a> · <a href="https://leeroo.com">Website</a> </p> <p align="center"> <a href="https://pypi.org/project/leeroo-kapso/"><img src="https://img.shields.io/pypi/v/leeroo-kapso?color=blue" alt="PyPI"></a> <a href="https://discord.gg/hqVbPNNEZM"><img src="https://dcbadge.limes.pink/api/server/hqVbPNNEZM?style=flat" alt="Discord"></a> <a href="https://github.com/leeroo-ai/kapso"><img src="https://img.shields.io/github/commit-activity/m/leeroo-ai/kapso" alt="GitHub commit activity"></a> <a href="https://www.ycombinator.com/companies/leeroo"><img src="https://img.shields.io/badge/Y%20Combinator-X25-orange?logo=ycombinator&logoColor=white" alt="Y Combinator X25"></a> </p> <p align="center"> If you like this project, please support us by giving it a star ⭐ </p> > **Early Access**: [Sign up](https://docs.google.com/forms/d/e/1FAIpQLSfk0RjtZaZFXq3-tclZhnz40E_mNzPSI1RHhBQWzswbNwp8Ug/viewform) for the **hosted version of Kapso**. <p align="center"> <img src="https://api.leeroo.com/storage/v1/object/public/opensource/framework.png" alt="Kapso Framework Architecture" width="700"> </p> --- ## News - **[Leeroopedia MCP Integration](https://leeroopedia.com)**: Kapso now connects to **Leeroopedia MCP** — your ML & Data Knowledge Wiki. Learnt by AI, built by AI, for AI. A centralized playbook of best practices and expert-level knowledge for Machine Learning and Data domains. Kapso agents use it during ideation and implementation to search knowledge, build plans, diagnose failures, and more. - **[Moltbook Agents 🦞](https://www.moltbook.com/)**: Build AI agents that optimize other agents and debate on Moltbook! [Get started →](moltbook_bot/README.md) - **Technical Report**: Our technical report is now available! 
[Read the paper](https://arxiv.org/abs/2601.21526) - **#1 on [MLE-Bench](benchmarks/mle/README.md)**: KAPSO achieved top ranking among open-source systems on Kaggle ML competitions (MLE Benchmark). <img src="https://api.leeroo.com/storage/v1/object/public/opensource/mle_benchmark.png" alt="MLE-Bench Results" width="600"> - **#1 on [ALE-Bench](benchmarks/ale/README.md)**: KAPSO achieved top ranking on long-horizon algorithmic discovery problems (ALE Benchmark). <img src="https://api.leeroo.com/storage/v1/object/public/opensource/ale_benchmark.png" alt="ALE-Bench Results" width="600"> ## What is KAPSO? KAPSO combines **iterative experimentation** with a **knowledge base** of best practices and tricks to discover ML/AI code improvements. It automates the cycle of **designing**, **testing**, and **refining** algorithms, eventually adapting the optimized solution for **deployment** on your chosen infrastructure. ### The Four Pillars | Pillar | Method | Description | |--------|--------|-------------| | **Evolve** | `.evolve()` | Run iterative experiments to build software for a goal. Uses tree search, coding agents, and KG context to generate and refine solutions. | | **Learn** | `.learn()` | Ingest knowledge from repositories, past solutions, or research results. Extracts patterns and best practices into the Knowledge Graph. | | **Research** | `.research()` | Run deep web research to gather ideas and implementation references. Returns structured findings you can feed into the knowledge base or use as context for evolving solutions. | | **Deploy** | `.deploy()` | Turn a solution into running software. Supports local execution, Docker containers, or cloud platforms like Modal. 
| ## 🚀 Quickstart ### Installation **From PyPI (recommended)** ```bash pip install leeroo-kapso ``` **From source (for development or to access wiki knowledge data)** ```bash git clone https://github.com/leeroo-ai/kapso.git cd kapso # Pull Git LFS files (wiki knowledge data) git lfs install git lfs pull # Create conda environment (recommended) conda create -n kapso python=3.12 conda activate kapso # Install in development mode pip install -e . ``` **Leeroopedia MCP (optional)** — connect Kapso to [Leeroopedia](https://leeroopedia.com), a curated ML/AI knowledge base. Sign up at [leeroopedia.com](https://leeroopedia.com) for an API key, then: ```bash pip install leeroopedia-mcp echo 'LEEROOPEDIA_API_KEY=kpsk_your_key_here' >> .env ``` ### Basic Usage ```python from kapso import Kapso, Source, DeployStrategy # Initialize Kapso # If you have a Knowledge Graph, pass kg_index; otherwise just use Kapso() kapso = Kapso(kg_index="data/indexes/legal_contracts.index") # Research: Gather domain-specific techniques from the web # mode: "idea" | "implementation" | "study" (can pass multiple as list) # depth: "light" | "deep" (default: "deep") findings = kapso.research( "RLHF and DPO fine-tuning for legal contract analysis", mode=["idea", "implementation"], depth="deep", ) # Learn: Ingest knowledge from repositories and research into the KG kapso.learn( Source.Repo("https://github.com/huggingface/trl"), *findings.ideas, # List[Source.Idea] *findings.implementations, # List[Source.Implementation] wiki_dir="data/wikis", ) # Evolve: Build a solution through experimentation # Use research results as context via to_string() solution = kapso.evolve( goal="Fine-tune Llama-3.1-8B for legal clause risk classification, target F1 > 0.85", data_dir="./data/cuad_dataset", output_path="./models/legal_risk_v1", context=[findings.to_string()], ) # Deploy: Turn solution into running deployed_program deployed_program = kapso.deploy(solution, strategy=DeployStrategy.MODAL) deployed_program.stop() 
``` For detailed integration steps, see the [Quickstart](https://docs.leeroo.com/docs/quickstart) and [Installation](https://docs.leeroo.com/docs/installation) guides. ## Examples | Example | Description | |---------|-------------| | [**CUDA Optimization**](examples/cuda_optimization/README.md) | Optimize CUDA kernels for GPU performance | | [**PyTorch Optimization**](examples/pytorch_optimization/README.md) | Optimize PyTorch operations for speedup | | [**ML Model Development**](examples/ml_model_development/README.md) | Improve ML model accuracy on tabular data | | [**Prompt Engineering**](examples/prompt_engineering/README.md) | Optimize prompts for better LLM performance | | [**Agentic Scaffold**](examples/agentic_scaffold/README.md) | Optimize agentic AI workflows | ## Supported Benchmarks | Benchmark | Description | |-----------|-------------| | [**MLE-Bench**](benchmarks/mle/README.md) | Kaggle ML competitions — tabular, image, text, audio problems | | [**ALE-Bench**](benchmarks/ale/README.md) | AtCoder algorithmic optimization — C++ solution generation | ## 📚 Documentation & Support - **Full Documentation**: [docs.leeroo.com](https://docs.leeroo.com) - **Community**: [Discord](https://discord.gg/hqVbPNNEZM) - **Website**: [leeroo.com](https://leeroo.com) ## Contributing We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details on how to get started. ## Citation If you use Kapso in your research, please cite: ```bibtex @misc{nadaf2026kapsoknowledgegroundedframeworkautonomous, title={KAPSO: A Knowledge-grounded framework for Autonomous Program Synthesis and Optimization}, author={Alireza Nadafian and Alireza Mohammadshahi and Majid Yazdani}, year={2026}, eprint={2601.21526}, archivePrefix={arXiv}, primaryClass={cs.AI}, url={https://arxiv.org/abs/2601.21526}, } ```
text/markdown
null
Leeroo Team <team@leeroo.com>
null
null
MIT
ai, ml, optimization, code-generation, llm, agents
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "Intended Audience :: Science/Research", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Topic :: Scientific/Engineering :: Artificial Intelligence" ]
[]
null
null
>=3.10
[]
[]
[]
[ "GitPython", "aider-chat>=0.35.0", "PyYAML", "python-dotenv", "neo4j", "weaviate-client>=4.0.0", "openai>=1.0.0", "cryptography<46.0.0", "neo4j; extra == \"mle\"", "openai; extra == \"mle\"", "pytest>=7.0.0; extra == \"dev\"", "pytest-cov>=4.0.0; extra == \"dev\"", "black>=23.0.0; extra == \"dev\"", "flake8>=6.0.0; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://leeroo.com", "Documentation, https://docs.leeroo.com", "Repository, https://github.com/leeroo-ai/kapso", "Issues, https://github.com/leeroo-ai/kapso/issues" ]
twine/6.2.0 CPython/3.10.19
2026-02-20T10:16:43.815895
leeroo_kapso-0.1.5.tar.gz
415,394
e3/7f/65d1c801110e5db3bb4eb516538736eacd657cfa877ea2f3374d1f99e84e/leeroo_kapso-0.1.5.tar.gz
source
sdist
null
false
a5c9ed5579eef95972850c5c00d7858b
6365ede11afa9dd8d54780edd1335902c165b8820befb4e96e8f77c8038a5fee
e37f65d1c801110e5db3bb4eb516538736eacd657cfa877ea2f3374d1f99e84e
null
[ "LICENSE" ]
213
2.4
akida
2.19.1
Akida Execution Engine
# Akida Execution Engine The Akida Execution Engine is an interface to the Brainchip Akida Neural Processor. To allow development of Akida models without actual Akida hardware, it includes a software backend that simulates the Akida Neural Processor.
text/markdown
Matthieu Hernandez
hmatthieu@brainchip.com
null
null
Proprietary
null
[]
[]
https://doc.brainchipinc.com
null
>=3.10
[]
[]
[]
[ "numpy" ]
[]
[]
[]
[]
twine/5.1.0 CPython/3.11.9
2026-02-20T10:16:42.515408
akida-2.19.1-cp312-cp312-win_amd64.whl
2,195,983
66/37/0d7b5e3ab4d786fd461f34d0236109d943b09988f0c9d70b57564d8b2b38/akida-2.19.1-cp312-cp312-win_amd64.whl
cp312
bdist_wheel
null
false
7ed57882863fb41ab1e923286b271c95
ac8f22f70b1c254a2572c3394870d263ca6860a90ce57d1479027b51a53e77af
66370d7b5e3ab4d786fd461f34d0236109d943b09988f0c9d70b57564d8b2b38
null
[]
663
2.4
sysame
0.0.22
Transport Modelling Helper Package
# SysAME Welcome to the **SysAME** project! This repository provides a comprehensive library for transport modeling and simulation support. ## Installation 1. pip: ``` pip install sysame ``` 1. uv: ``` uv add sysame ``` # SysAME Custom License Copyright (c) 2025 sysame.com ## Permission and Usage This license applies to the SysAME software package, which consists of: 1. Open interface components (primarily Python APIs and related documentation) 2. Closed-source compiled components (primarily written in Rust and C++) ### You are free to: - Use the software for any purpose, including commercial use - Modify and distribute the open interface components - Create and distribute derivative works that interface with the software ### Under the following conditions: #### 1. Attribution Requirement You must provide appropriate citation and acknowledgment when using SysAME in any academic, research, commercial, or other work. The citation should include: - The name of the software (SysAME) - The author/organisation - The website or repository URL - The version of the software used A suggested citation format is: ``` Ishtaiwi, M. (2026). SysAME [Software]. Version 0.0.22. Retrieved from sysame.com. ``` #### 2. Core Implementation Protection The compiled Rust/C++ components of SysAME are provided in binary form only and are not open source. You are expressly prohibited from: - Reverse engineering, decompiling, or disassembling the compiled components - Attempting to extract, modify, or recreate the source code of the compiled components - Distributing modified versions of the compiled components #### 3. 
Redistribution If you redistribute SysAME or works derived from it, you must include: - This license text - All attribution notices - A clear notice that your work uses SysAME and is subject to this license ## Disclaimer of Warranty THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ## Enforcement Violation of the terms of this license, particularly regarding the protection of compiled components or the attribution requirements, will constitute a breach of this license and may result in legal action. ## Contact For licensing questions or permissions beyond the scope of this license, please contact help@sysame.com
text/markdown
null
"M.I." <help@sysame.com>
null
null
null
null
[]
[]
null
null
>=3.12
[]
[]
[]
[ "contextily>=1.6.2", "dbfread>=2.0.7", "lxml>=6.0.2", "matplotlib>=3.10.3", "mypy>=1.15.0", "networkx>=3.4.2", "numpy>=2.2.5", "openmatrix>=0.3.5.0", "pandas>=2.2.3", "polars>=1.29.0", "py7zr>=0.22.0", "pyarrow>=20.0.0", "pytest>=8.3.5", "scipy>=1.15.3", "seaborn>=0.13.2", "shapely>=2.1.0" ]
[]
[]
[]
[]
uv/0.9.18 {"installer":{"name":"uv","version":"0.9.18","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":null,"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null}
2026-02-20T10:16:34.833141
sysame-0.0.22-py3-none-any.whl
92,576
95/44/32059658da06fc0dfb89bcdbde69cf540be0be856cee40676b7fb131bc52/sysame-0.0.22-py3-none-any.whl
py3
bdist_wheel
null
false
6b1b097bf5b4cfadcb0c48dee5c7b8b2
fbbfb0de65c2d2df77b373f882b000259b5082d4c7097e41e7ef9bb531fc377e
954432059658da06fc0dfb89bcdbde69cf540be0be856cee40676b7fb131bc52
null
[ "LICENSE.md" ]
87
2.4
filehash-tool
2.2.7
A CLI tool that computes, stores, and verifies hash values of files on disk, to detect whether a file has been tampered with.
This tool only provides a Simplified Chinese user interface. ### Overview This CLI tool computes, stores, and verifies hash values of files on disk, and can check whether a file has been tampered with. Features: - Save a file's **hash value**, **size**, **path**, **registration time**, and **database ID** into the database's hash chain - Verify records in the database, checking each record's path, hash value, and size - Verify files on disk, comparing their hash value, size, and path against the database records - Only print the records in the database - Only compute hash values of files on disk Configuring the environment variables is recommended. Most download sites currently publish `sha256` values, so the `sha256` algorithm is used by default. ### CLI commands Paths use [glob](https://docs.python.org/3/library/glob.html) syntax: `*` `?` `[]` `**` On Linux/macOS, paths must be quoted and use the `/` separator. 🟢Add file information to the database: ```shell filehash add 'E:\software\python-3.13.12-amd64.exe' filehash a 'E:\software\*.exe' filehash a 'E:\software\**\*.exe' # ** traverses subdirectories ``` 🟠Verify records in the database: ```shell # If a file has been deleted from disk, a file-not-found message is shown, but execution is not aborted. filehash verify_record '*\python*.exe' filehash vr '*\software\*.exe' filehash vr '*.exe' filehash vr '*' # verify all files in the database ``` 🔵Verify files on disk: ```shell # If the database has no record with this hash value and size, the program reports an error and aborts. # This can be used when dual-booting Windows/Linux. filehash verify_file 'E:\software\python-3.13.12-amd64.exe' filehash vf 'E:\software\**\*.exe' ``` 🟤Run filehash in a terminal to print the full help text (in Chinese): ``` PS E:\> filehash usage: filehash [-h] [-m HASH_METH] [-n] [--db-dir DIR] [--backup-dir DIR] [--backup-size SIZE] [CMD] [PATH] 文件hash校验。版本: 2.2.7 https://pypi.org/project/filehash-tool positional arguments: CMD 命令 PATH 路径,使用glob语法,*表示所有文件,dir/**/*.exe遍历子目录。 options: -h, --help show this help message and exit -m, --hash-meth HASH_METH 创建数据库时使用的hash算法,覆盖FILEHASH_HASH_METH环境变量。 -n, --no-space 打印hash时,不添加空格。 --db-dir DIR 数据库目录,覆盖FILEHASH_DB_DIR环境变量。 --backup-dir DIR 备份保存的数据库目录,覆盖FILEHASH_BACKUP_DIR环境变量。 --backup-size SIZE 备份保存的数据库数量,覆盖FILEHASH_BACKUP_SIZE环境变量。 可用命令: add/a 登记文件到数据库 verify_record/vr 验证数据库中的记录 print_record/pr 打印数据库中的记录 print_existing_record/per 打印数据库中尚存在的记录 verify_file/vf 验证磁盘文件 print_file/pf 计算文件hash值 (不加载数据库) 保证存在的hash算法: blake2b, blake2s, md5, sha1, sha224, sha256, sha384, sha3_224, sha3_256, sha3_384, sha3_512, sha512, shake_128, shake_256 其它可用的hash算法: md5-sha1, ripemd160, sha512_224, sha512_256, sm3 当前创建数据库使用的hash算法: sha256 ```
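Conceptually, the stored hash values are ordinary file digests. A minimal Python sketch (standard library only; illustrative, not the tool's actual implementation) of computing a file's sha256 by reading it in chunks:

```python
import hashlib
import tempfile
from pathlib import Path

def file_sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the hex sha256 digest of a file, reading it in chunks
    so arbitrarily large files fit in constant memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Demo on a throwaway file.
p = Path(tempfile.gettempdir()) / "demo.bin"
p.write_bytes(b"hello")
digest = file_sha256(p)
# sha256 of b"hello" is a well-known constant.
assert digest == "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
```

Comparing such a digest (plus the file size and path) against a previously stored record is exactly the tampering check the tool's `verify_record`/`verify_file` commands perform.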
text/markdown
null
Ma Lin <malincns@163.com>
null
null
null
null
[ "Programming Language :: Python :: 3", "Development Status :: 5 - Production/Stable", "Topic :: System :: Archiving", "Topic :: System :: Archiving :: Backup", "Topic :: System :: Shells", "Topic :: Utilities", "Operating System :: Unix", "Operating System :: POSIX", "Operating System :: MacOS", "Operating System :: Microsoft :: Windows" ]
[]
null
null
>=3.7
[]
[]
[]
[ "colorama" ]
[]
[]
[]
[ "Repository, https://bitbucket.org/wjssz/filehash" ]
twine/6.2.0 CPython/3.13.12
2026-02-20T10:15:47.011611
filehash_tool-2.2.7.tar.gz
12,126
4a/44/9911659a052ff9c506d9a58a80144b4b748eaafff1488572fd94324af05e/filehash_tool-2.2.7.tar.gz
source
sdist
null
false
81b00443dd5a796df319a6486a4a143a
acd14ad475faaa6fffe155607a8825e6e0e199923b4867f9f3d77c26007d1bd5
4a449911659a052ff9c506d9a58a80144b4b748eaafff1488572fd94324af05e
MIT
[]
218
2.4
protein-quest
1.1.1
Search/retrieve/filter proteins and protein structures
# protein-quest [![Documentation](https://img.shields.io/badge/Documentation-bonvinlab.org-blue?style=flat-square&logo=gitbook)](https://www.bonvinlab.org/protein-quest/) [![CI](https://github.com/haddocking/protein-quest/actions/workflows/ci.yml/badge.svg)](https://github.com/haddocking/protein-quest/actions/workflows/ci.yml) [![Research Software Directory Badge](https://img.shields.io/badge/rsd-00a3e3.svg)](https://www.research-software.nl/software/protein-quest) [![bio.tools](https://img.shields.io/badge/bio.tools-protein--quest-009fdf.svg)](https://bio.tools/protein-quest) [![PyPI](https://img.shields.io/pypi/v/protein-quest)](https://pypi.org/project/protein-quest/) [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.16941288.svg)](https://doi.org/10.5281/zenodo.16941288) [![Poster DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.17910832.svg)](https://doi.org/10.5281/zenodo.17910832) [![Codacy Badge](https://app.codacy.com/project/badge/Coverage/7a3f3f1fe64640d583a5e50fe7ba828e)](https://app.codacy.com/gh/haddocking/protein-quest/coverage?utm_source=gh&utm_medium=referral&utm_content=&utm_campaign=Badge_coverage) [![FAIR checklist badge](https://fairsoftwarechecklist.net/badge.svg)](https://fairsoftwarechecklist.net/v0.2?f=31&a=32113&i=32121&r=133) [![fair-software.eu](https://img.shields.io/badge/fair--software.eu-%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F%20%20%E2%97%8F-green)](https://fair-software.eu) [![Copy/paste detector](https://raw.githubusercontent.com/kucherenko/jscpd/refs/tags/v3.5.10/assets/jscpd-badge.svg?sanitize=true)](https://github.com/kucherenko/jscpd/) Python package to search/retrieve/filter proteins and protein structures. It uses - [Uniprot Sparql endpoint](https://sparql.uniprot.org/) to search for proteins and their measured or predicted 3D structures. - [Uniprot taxonomy](https://www.uniprot.org/taxonomy?query=*) to search for taxonomy. 
- [QuickGO](https://www.ebi.ac.uk/QuickGO/api/index.html) to search for Gene Ontology terms. - [gemmi](https://project-gemmi.github.io/) to work with macromolecular models. - [dask-distributed](https://docs.dask.org/en/latest/) to compute in parallel. - [rocrate-action-recorder](https://rocrate-action-recorder.readthedocs.io/) for provenance tracking. The package is used by - [protein-detective](https://github.com/haddocking/protein-detective) An example workflow: ```mermaid graph TB; taxonomy[/Search taxon/] -. taxon_ids .-> searchuniprot[/Search UniprotKB/] goterm[/Search GO term/] -. go_ids .-> searchuniprot[/Search UniprotKB/] searchuniprot --> |uniprot_accessions|searchpdbe[/Search PDBe/] searchuniprot --> |uniprot_accessions|searchaf[/Search Alphafold/] searchuniprot -. uniprot_accessions .-> searchemdb[/Search EMDB/] searchuniprot -. uniprot_accessions .-> searchuniprotdetails[/Search UniProt details/] searchintactionpartners[/Search interaction partners/] -.-x |uniprot_accessions|searchuniprot searchcomplexes[/Search complexes/] searchpdbe -->|pdb_ids|fetchpdbe[Retrieve PDBe] searchaf --> |uniprot_accessions|fetchad(Retrieve AlphaFold) searchemdb -. emdb_ids .->fetchemdb[Retrieve EMDB] fetchpdbe -->|mmcif_files| chainfilter{{Filter on chain of uniprot}} chainfilter --> |mmcif_files| residuefilter{{Filter on chain length}} fetchad -->|mmcif_files| confidencefilter{{Filter out low confidence}} confidencefilter --> |mmcif_files| ssfilter{{Filter on secondary structure}} residuefilter --> |mmcif_files| ssfilter ssfilter -. mmcif_files .-> convert2cif([Convert to cif]) ssfilter -. 
mmcif_files .-> convert2uniprot_accessions([Convert to UniProt accessions]) classDef dashedBorder stroke-dasharray: 5 5; goterm:::dashedBorder taxonomy:::dashedBorder searchemdb:::dashedBorder fetchemdb:::dashedBorder searchintactionpartners:::dashedBorder searchcomplexes:::dashedBorder searchuniprotdetails:::dashedBorder convert2cif:::dashedBorder convert2uniprot_accessions:::dashedBorder ``` (Dotted nodes and edges are side-quests.) ## Install ```shell pip install protein-quest ``` Or to use the latest development version: ```shell pip install git+https://github.com/haddocking/protein-quest.git ``` ## Usage The main entry point is the `protein-quest` command line tool which has multiple subcommands to perform actions. To use it programmatically, see the [Jupyter notebooks](https://www.bonvinlab.org/protein-quest/notebooks) and [API documentation](https://www.bonvinlab.org/protein-quest/autoapi/protein_quest/). While downloading or copying files it uses a global cache (located at `~/.cache/protein-quest`) and hardlinks to save disk space and improve speed. This behavior can be customized with the `--no-cache`, `--cache-dir`, and `--copy-method` command line arguments. ### Search Uniprot accessions ```shell protein-quest search uniprot \ --taxon-id 9606 \ --reviewed \ --subcellular-location-uniprot "nucleus" \ --subcellular-location-go GO:0005634 \ --molecular-function-go GO:0003677 \ --limit 100 \ uniprot_accs.txt ``` ([GO:0005634](https://www.ebi.ac.uk/QuickGO/term/GO:0005634) is "Nucleus" and [GO:0003677](https://www.ebi.ac.uk/QuickGO/term/GO:0003677) is "DNA binding") ### Search for PDBe structures of uniprot accessions ```shell protein-quest search pdbe uniprot_accs.txt pdbe.csv ``` A `pdbe.csv` file is written containing the PDB id and chain of each uniprot accession. 
### Search for Alphafold structures of uniprot accessions ```shell protein-quest search alphafold uniprot_accs.txt alphafold.csv ``` ### Search for EMDB structures of uniprot accessions ```shell protein-quest search emdb uniprot_accs.txt emdbs.csv ``` ### To retrieve PDB structure files ```shell protein-quest retrieve pdbe pdbe.csv downloads-pdbe/ ``` ### To retrieve AlphaFold structure files ```shell protein-quest retrieve alphafold alphafold.csv downloads-af/ ``` The cif file is downloaded for each entry. ### To retrieve EMDB volume files ```shell protein-quest retrieve emdb emdbs.csv downloads-emdb/ ``` ### To filter AlphaFold structures on confidence Filter AlphaFoldDB structures based on confidence (pLDDT). Keeps entries with the requested number of residues that have a confidence score above the threshold. Also writes pdb files with only those residues. ```shell protein-quest filter confidence \ --confidence-threshold 50 \ --min-residues 100 \ --max-residues 1000 \ ./downloads-af ./filtered ``` ### To filter PDBe files on chain of uniprot accession Make PDBe files smaller by keeping only the first chain of the found uniprot entry and renaming it to chain A. ```shell protein-quest filter chain \ pdbe.csv \ ./downloads-pdbe ./filtered-chains ``` ### To filter PDBe files on number of residues ```shell protein-quest filter residue \ --min-residues 100 \ --max-residues 1000 \ ./filtered-chains ./filtered ``` ### To filter on secondary structure To filter on structures that are mostly alpha helices and have no beta sheets. See the following [notebook](https://www.bonvinlab.org/protein-detective/SSE_elements.html) to determine the ratio of secondary structure elements. 
```shell protein-quest filter secondary-structure \ --ratio-min-helix-residues 0.5 \ --ratio-max-sheet-residues 0.0 \ --write-stats filtered-ss-stats.csv \ ./filtered-chains ./filtered-ss ``` ### Search Taxonomy ```shell protein-quest search taxonomy "Homo sapiens" - ``` ### Search Gene Ontology (GO) You might not know the identifier of a [Gene Ontology](https://geneontology.org/) term when calling `protein-quest search uniprot`. You can use the following command to search for a Gene Ontology (GO) term. ```shell protein-quest search go --limit 5 --aspect cellular_component apoptosome - ``` ### Search for interaction partners Use <https://www.ebi.ac.uk/complexportal> to find interaction partners of a given UniProt accession. ```shell protein-quest search interaction-partners Q05471 interaction-partners-of-Q05471.txt ``` The `interaction-partners-of-Q05471.txt` file contains uniprot accessions (one per line). ### Search for complexes Given UniProt accessions, search for macromolecular complexes at <https://www.ebi.ac.uk/complexportal> and return the complex entries and their members. ```shell echo Q05471 | protein-quest search complexes - complexes.csv ``` The `complexes.csv` looks like ```csv query_protein,complex_id,complex_url,complex_title,members Q05471,CPX-2122,https://www.ebi.ac.uk/complexportal/complex/CPX-2122,Swr1 chromatin remodelling complex,P31376;P35817;P38326;P53201;P53930;P60010;P80428;Q03388;Q03433;Q03940;Q05471;Q06707;Q12464;Q12509 ``` ### Search for UniProt details To get details (like protein name, sequence length, organism) for a list of UniProt accessions. 
```shell protein-quest search uniprot-details uniprot_accs.txt uniprot_details.csv ``` The `uniprot_details.csv` looks like: ```csv uniprot_accession,uniprot_id,sequence_length,reviewed,protein_name,taxon_id,taxon_name A0A087WUV0,ZN892_HUMAN,522,True,Zinc finger protein 892,9606,Homo sapiens ``` ### Convert structure files to .cif format Some tools (for example [powerfit](https://github.com/haddocking/powerfit)) only work with `.cif` files and not `*.cif.gz` or `*.bcif` files. ```shell protein-quest convert structures --format cif --output-dir ./filtered-cif ./filtered-ss ``` ### Convert structure files to UniProt accessions After running some filters you might want to know which UniProt accessions are still present in the filtered structures. ```shell protein-quest convert uniprot ./filtered-ss uniprot_accs.filtered.txt ``` ## Provenance You can use `protein-quest --prov ...` to store provenance information of your CLI invocations in a [Research Object crate](https://www.researchobject.org/ro-crate/) file called ro-crate-metadata.json. ## Model Context Protocol (MCP) server Protein Quest can also help LLMs like Claude Sonnet 4 by providing a [set of tools](https://modelcontextprotocol.io/docs/learn/server-concepts#tools-ai-actions) for protein structures. ![Protein Quest MCP workflow](https://github.com/haddocking/protein-quest/raw/main/docs/protein-quest-mcp.png) To run the MCP server you have to install the `mcp` extra with: ```shell pip install protein-quest[mcp] ``` The server can be started with: ```shell protein-quest mcp ``` The MCP server contains a prompt template to search/retrieve/filter candidate structures. ## Shell autocompletion The `protein-quest` command line tool supports shell autocompletion using [shtab](https://docs.iterative.ai/shtab). 
Initialize for bash shell with: ```shell mkdir -p ~/.local/share/bash-completion/completions protein-quest --print-completion bash > ~/.local/share/bash-completion/completions/protein-quest ``` Initialize for zsh shell with: ```shell mkdir -p ~/.local/share/zsh/site-functions protein-quest --print-completion zsh > ~/.local/share/zsh/site-functions/_protein-quest fpath=("$HOME/.local/share/zsh/site-functions" $fpath) autoload -Uz compinit && compinit ``` ## Contributing For development information and contribution guidelines, please see [CONTRIBUTING.md](CONTRIBUTING.md).
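As a small post-processing example, the `complexes.csv` written by `protein-quest search complexes` (format shown earlier) can be parsed with Python's stdlib `csv` module; a minimal sketch, assuming only the header and `;`-separated `members` column from the sample output above:

```python
import csv
import io

# Sample row copied verbatim from the complexes.csv example in this README.
SAMPLE = """query_protein,complex_id,complex_url,complex_title,members
Q05471,CPX-2122,https://www.ebi.ac.uk/complexportal/complex/CPX-2122,Swr1 chromatin remodelling complex,P31376;P35817;P38326;P53201;P53930;P60010;P80428;Q03388;Q03433;Q03940;Q05471;Q06707;Q12464;Q12509
"""

def complex_members(csv_text: str) -> dict[str, list[str]]:
    """Map each complex_id to its list of member UniProt accessions."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["complex_id"]: row["members"].split(";") for row in reader}

members = complex_members(SAMPLE)
```

For a real run, replace `SAMPLE` with the contents of your `complexes.csv` file.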
text/markdown
null
null
null
null
null
alphafold, mmcif, pdb, protein, protein structure, uniprot
[ "Development Status :: 5 - Production/Stable", "Environment :: Console", "Framework :: AsyncIO", "Intended Audience :: Science/Research", "License :: OSI Approved :: Apache Software License", "Natural Language :: English", "Operating System :: MacOS", "Operating System :: POSIX", "Operating System :: POSIX :: Linux", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14", "Topic :: Scientific/Engineering :: Bio-Informatics", "Topic :: Scientific/Engineering :: Chemistry", "Typing :: Typed" ]
[]
null
null
>=3.13
[]
[]
[]
[ "aiofiles>=24.1.0", "aiohttp-retry>=2.9.1", "aiohttp[speedups]>=3.11.18", "attrs>=25.3.0", "cattrs[orjson]>=24.1.3", "dask>=2025.5.1", "distributed>=2025.5.1", "gemmi>=0.7.4", "mmcif>=0.92.0", "platformdirs>=4.3.8", "psutil>=7.0.0", "rich-argparse>=1.7.1", "rich>=14.0.0", "rocrate-action-recorder>=0.2.0", "shtab>=1.7.2", "sparqlwrapper>=2.0.0", "tqdm>=4.67.1", "yarl>=1.20.1", "fastmcp>=2.11.3; extra == \"mcp\"", "pydantic>=2.12.0; extra == \"mcp\"" ]
[]
[]
[]
[ "Homepage, https://github.com/haddocking/protein-quest", "Issues, https://github.com/haddocking/protein-quest/issues", "Documentation, https://www.bonvinlab.org/protein-quest/", "Source, https://github.com/haddocking/protein-quest" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:13:20.152742
protein_quest-1.1.1.tar.gz
59,249
dc/d8/008a2ad0749dff1abcde655b65251e536451cb10714e574316f20ab76c59/protein_quest-1.1.1.tar.gz
source
sdist
null
false
d1b85c4dd1e85fdc06bffd6cb313800f
f189dc02a9b7a1e7b602f1d5734145c1db683a1353424a404cbff39a5bce808b
dcd8008a2ad0749dff1abcde655b65251e536451cb10714e574316f20ab76c59
null
[ "LICENSE" ]
206
2.4
payaza
0.1.0
Unofficial Python SDK for the Payaza Africa API
# Payaza Python SDK > Unofficial Python SDK for the [Payaza Africa](https://payaza.africa) API. [![Python](https://img.shields.io/badge/python-3.8%2B-blue)](https://www.python.org) [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE) --- ## Installation ```bash pip install payaza ``` --- ## Quick start ## Contributing PRs are welcome! Please open an issue first to discuss what you'd like to change. --- ## ⚠️ Disclaimer This is an **unofficial** SDK and is not affiliated with or endorsed by Payaza Africa Limited. Endpoint paths and request/response shapes are based on the public documentation at [docs.payaza.africa](https://docs.payaza.africa/developers/apis) — always cross-check against the latest official docs and update accordingly. --- ## License MIT © Shagbaor Agber
text/markdown
null
Shagbaor Agber <dxtlive@gmail.com>
null
null
null
payaza, payments, africa, fintech, sdk
[ "Development Status :: 3 - Alpha", "Intended Audience :: Developers", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Topic :: Software Development :: Libraries :: Python Modules" ]
[]
null
null
>=3.8
[]
[]
[]
[ "requests>=2.28", "pytest>=7; extra == \"dev\"", "responses>=0.25; extra == \"dev\"", "pytest-cov; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/0xAfterSnow/payaza-python", "Documentation, https://docs.payaza.africa/developers/apis", "Bug Tracker, https://github.com/0xAfterSnow/payaza-python/issues" ]
twine/6.2.0 CPython/3.13.9
2026-02-20T10:13:12.218042
payaza-0.1.0.tar.gz
3,637
8e/6f/65732e5cf82337244f246138661c44ba7657bdd641da7abc2acdf68023fa/payaza-0.1.0.tar.gz
source
sdist
null
false
ffd771f78ea644484226502c262328a2
c56a5d3e49329df78d8a356006fd29836ab9671d7f144bb3f8feee803fc23e2c
8e6f65732e5cf82337244f246138661c44ba7657bdd641da7abc2acdf68023fa
MIT
[ "LICENSE" ]
232
2.4
iredis
1.16.0
Terminal client for Redis with auto-completion and syntax highlighting.
<p align="center"> <img width="100" height="100" src="https://raw.githubusercontent.com/laixintao/iredis/master/docs/assets/logo.png" /> </p> <h3 align="center">Interactive Redis: A Cli for Redis with AutoCompletion and Syntax Highlighting.</h3> <p align="center"> <a href="https://github.com/laixintao/iredis/actions"><img src="https://github.com/laixintao/iredis/actions/workflows/test.yaml/badge.svg?branch=master" alt="Github Action"></a> <a href="https://badge.fury.io/py/iredis"><img src="https://badge.fury.io/py/iredis.svg" alt="PyPI version"></a> <img src="https://badgen.net/badge/python/3.10%20%7C%203.11%20%7C%203.12" alt="Python version"> <a href="https://pepy.tech/project/iredis"><img src="https://pepy.tech/badge/iredis" alt="Download stats"></a> </p> <p align="center"> <img src="./docs/assets/demo.svg" alt="demo"> </p> IRedis is a terminal client for redis with auto-completion and syntax highlighting. IRedis lets you type Redis commands smoothly and displays results in a user-friendly format. IRedis is an alternative to redis-cli. In most cases, IRedis behaves exactly the same as redis-cli. Besides, it is safer to use IRedis on production servers than redis-cli: IRedis will prevent you from accidentally running dangerous commands, like `KEYS *` (see [Redis docs / Latency generated by slow commands](https://redis.io/topics/latency#latency-generated-by-slow-commands)). ## Features - Advanced code completion. If you run the command `KEYS` then run `DEL`, IRedis will auto-complete your command based on the `KEYS` result. - Command validation. IRedis will validate commands while you are typing, and highlight errors. E.g. try `CLUSTER MEET IP PORT`: IRedis will validate the IP and PORT for you. - Command highlighting, fully based on redis grammar. Any valid command in the IRedis shell is a valid redis command. - Human-friendly result display. - A _pipeline_ feature: you can use your favorite shell tools to parse redis' response, like `get json | jq .`. - Support pager for long output. 
- Support connection via URL, `iredis --url redis://example.com:6379/1`. - Support cluster, IRedis will auto reissue commands for `MOVED` responses in cluster mode. - Store server configuration: `iredis -d prod-redis` (see [dsn](#using-dsn) for more). - `peek` command to check the key's type then automatically call `get`/`lrange`/`sscan`, etc., depending on the type. You don't need to call the `type` command then type another command to get the value. `peek` will also display the key's length and memory usage. - <kbd>Ctrl</kbd> + <kbd>C</kbd> to cancel the currently typed command; this won't exit IRedis, exactly like bash behaviour. Use <kbd>Ctrl</kbd> + <kbd>D</kbd> to send an EOF to exit IRedis. - <kbd>Ctrl</kbd> + <kbd>R</kbd> to open **reverse-i-search** to search through your command history. - Auto suggestions. (Like [fish shell](http://fishshell.com/).) - Support `--decode=utf-8`, to decode Redis' bytes responses. - Command hint at the bottom, including command syntax, supported redis version, and time complexity. - Official docs with built-in `HELP` command, try `HELP SET`! - Written in pure Python, but IRedis was packaged into a single binary with [PyOxidizer](https://github.com/indygreg/PyOxidizer): you can use cURL to download and run it, it just works, even if you don't have a Python interpreter. - You can change the cli prompt using the `--prompt` option or via the `~/.iredisrc` config file. - Hide password for `AUTH` command. - Says "Goodbye!" to you when you exit! - For full features, please see: [iredis.xbin.io](https://www.iredis.xbin.io) ## Install ### Pip Install via pip: ``` pip install iredis ``` [pipx](https://github.com/pipxproject/pipx) is recommended: ``` pipx install iredis ``` ### Brew For Mac users, you can install iredis via brew 🍻 ``` brew install iredis ``` ### Linux You can also use your Linux package manager to install IRedis, like `apt` on Ubuntu (only available on Ubuntu 21.04+). 
```shell apt install iredis ``` [![Packaging status](https://repology.org/badge/vertical-allrepos/iredis.svg)](https://repology.org/project/iredis/versions) ### Download Binary Or you can download the executable binary with cURL (or wget), untar it, then run it. This is especially useful when you don't have a Python interpreter (e.g. the [official Redis docker image](https://hub.docker.com/_/redis/), which doesn't have Python installed): ``` wget https://github.com/laixintao/iredis/releases/download/v1.15.2/iredis.tar.gz \ && tar -xzf iredis.tar.gz \ && ./iredis ``` (Check the [release page](https://github.com/laixintao/iredis/releases) if you want to download an old version of IRedis.) Please note that single-binary builds are only provided up to IRedis v1.15.2; every version up to v1.15.2 has a single-binary build that you can download directly. After v1.15.2, as PyOxidizer [is no longer maintained](https://gregoryszorc.com/blog/2024/03/17/my-shifting-open-source-priorities/), IRedis no longer ships a single-binary build. ## Usage Once you install IRedis, you will know how to use it. Just remember, IRedis supports similar options to redis-cli, like `-h` for redis-server's host and `-p` for its port. ``` $ iredis --help Usage: iredis [OPTIONS] [CMD]... IRedis: Interactive Redis When no command is given, IRedis starts in interactive mode. Examples: - iredis - iredis -d dsn - iredis -h 127.0.0.1 -p 6379 - iredis -h 127.0.0.1 -p 6379 -a <password> - iredis --url redis://localhost:7890/3 Type "help" in interactive mode for information on available commands and settings. Options: -h TEXT Server hostname (default: 127.0.0.1). -p TEXT Server port (default: 6379). -s, --socket TEXT Server socket (overrides hostname and port). -n INTEGER Database number.(overwrites dsn/url's db number) -u, --username TEXT User name used to auth, will be ignore for redis version < 6. -a, --password TEXT Password to use when connecting to the server. 
--url TEXT Use Redis URL to indicate connection(Can set with env `IREDIS_URL`), Example: redis://[[username]:[password]]@localhost:6379/0 rediss://[[username]:[password]]@localhost:6379/0 unix://[[username]:[password]]@/path/to/socket.sock?db=0 -d, --dsn TEXT Use DSN configured into the [alias_dsn] section of iredisrc file. (Can set with env `IREDIS_DSN`) --newbie / --no-newbie Show command hints and useful helps. --iredisrc TEXT Config file for iredis, default is ~/.iredisrc. --decode TEXT decode response, default is No decode, which will output all bytes literals. --client_name TEXT Assign a name to the current connection. --raw / --no-raw Use raw formatting for replies (default when STDOUT is not a tty). However, you can use --no-raw to force formatted output even when STDOUT is not a tty. --rainbow / --no-rainbow Display colorful prompt. --shell / --no-shell Allow to run shell commands, default to True. --pager / --no-pager Using pager when output is too tall for your window, default to True. --verify-ssl [none|optional|required] Set the TLS certificate verification strategy --prompt TEXT Prompt format (supported interpolations: {client_name}, {db}, {host}, {path}, {port}, {username}, {client_addr}, {client_id}). --version Show the version and exit. --help Show this message and exit. ``` ### Using DSN IRedis supports storing server configurations in a config file. Here is a DSN config: ``` [alias_dsn] dev=redis://localhost:6379/4 staging=redis://username:password@staging-redis.example.com:6379/1 ``` Put this in your `iredisrc`, then connect via `iredis -d staging` or `iredis -d dev`. ### Change The Default Prompt You can change the prompt string; the default prompt is: ```shell 127.0.0.1:6379> ``` which is rendered by `{host}:{port}[{db}]> `. You can change this via the `--prompt` option or by changing the [iredisrc](https://github.com/laixintao/iredis/blob/master/iredis/data/iredisrc) config file. 
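Because the prompt template is rendered with plain Python `str.format`, its behaviour can be previewed outside IRedis; a quick sketch (host names here are just examples):

```python
# The default IRedis prompt template, rendered with Python's str.format.
template = "{host}:{port}[{db}]> "
prompt = template.format(host="127.0.0.1", port=6379, db=0)
# prompt is now "127.0.0.1:6379[0]> "

# Format specs work too: '{host:.5s}' truncates the host name to 5 characters,
# which is handy for long server names.
short = "{host:.5s}> ".format(host="staging-redis.example.com")
# short is now "stagi> "
```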
The prompt string uses the Python string format engine; supported interpolations: - `{client_name}` - `{db}` - `{host}` - `{path}` - `{port}` - `{username}` - `{client_addr}` - `{client_id}` The `--prompt` option utilizes the [Python String format engine](https://docs.python.org/3/library/string.html#formatstrings), so as long as it is a valid string formatter, it will work (anything that `"<your prompt>".format(...)` accepts). For example, you can limit your Redis server host name's length to 5 characters by running `iredis --prompt '{host:.5s}'`. ### Configuration IRedis supports config files. Command-line options will always take precedence over config. Configuration resolution from highest to lowest precedence is: - _Options from command line_ - `$PWD/.iredisrc` - `~/.iredisrc` (this path can be changed with `iredis --iredisrc $YOUR_PATH`) - `/etc/iredisrc` - default config in the IRedis package. You can copy the _self-explained_ default config here: https://raw.githubusercontent.com/laixintao/iredis/master/iredis/data/iredisrc And then make your own changes. (If you are using an old version of IRedis, please use the config file below, and change the version in the URL): https://raw.githubusercontent.com/laixintao/iredis/v1.0.4/iredis/data/iredisrc ### Keys IRedis supports unix/readline-style REPL keyboard shortcuts, which means keys like <kbd>Ctrl</kbd> + <kbd>F</kbd> (move forward) work. Also: - <kbd>Ctrl</kbd> + <kbd>D</kbd> (i.e. EOF) to exit; you can also use the `exit` command. - <kbd>Ctrl</kbd> + <kbd>L</kbd> to clear the screen; you can also use the `clear` command. - <kbd>Ctrl</kbd> + <kbd>X</kbd> <kbd>Ctrl</kbd> + <kbd>E</kbd> to open an editor to edit the command, or <kbd>V</kbd> in vi-mode. ## Development ### Release Strategy IRedis is built and released by `GitHub Actions`. Whenever a tag is pushed to the `master` branch, a new release is built and uploaded to pypi.org, which is very convenient. 
Thus, we release as often as possible, so that users can always enjoy the new features and bugfixes quickly. Any bugfix or new feature will get at least a patch release, whereas big features will get a minor release. ### Setup Environment IRedis favors [poetry](https://github.com/sdispater/poetry) as its package management tool. To set up a development environment on your computer: First, install poetry (you can do it in a Python virtualenv): ``` pip install poetry ``` Then run (which is similar to `pip install -e .`): ``` poetry install ``` **Be careful running testcases locally, it may flush your db!!!** ### Code style Code is formatted with [black](https://github.com/psf/black). After modifying code, run: ```bash black . ``` We recommend installing [pre-commit](https://pre-commit.com/) so black and flake8 run automatically on each `git commit`: ```bash poetry run pre-commit install ``` ### Development Logs This is a command-line tool, so we don't write logs to stdout. You can `tail -f ~/.iredis.log` to see the logs; they are pretty clear, and you can see what actually happens from the log files. ### Catch Up with Latest Redis-doc IRedis uses a git submodule to track the current redis-doc version. To catch up with the latest: 1. Git pull in redis-doc 2. Copy the doc files to `/data`: `cp -r redis-doc/commands* iredis/data` 3. Prettify the markdown: `prettier --prose-wrap always iredis/data/commands/*.md --write` 4. Check the diff, and update IRedis' code if needed. 
## Related Projects - [redis-tui](https://github.com/mylxsw/redis-tui) If you like iredis, you may also like other CLI tools by [dbcli](https://www.dbcli.com/): - [pgcli](https://www.pgcli.com) - Postgres Client with Auto-completion and Syntax Highlighting - [mycli](https://www.mycli.net) - MySQL/MariaDB/Percona Client with Auto-completion and Syntax Highlighting - [litecli](https://litecli.com) - SQLite Client with Auto-completion and Syntax Highlighting - [mssql-cli](https://github.com/dbcli/mssql-cli) - Microsoft SQL Server Client with Auto-completion and Syntax Highlighting - [athenacli](https://github.com/dbcli/athenacli) - AWS Athena Client with Auto-completion and Syntax Highlighting - [vcli](https://github.com/dbcli/vcli) - VerticaDB client - [iredis](https://github.com/laixintao/iredis/) - Client for Redis with AutoCompletion and Syntax Highlighting IRedis is built on top of [prompt_toolkit](https://github.com/jonathanslenders/python-prompt-toolkit), a Python library (by [Jonathan Slenders](https://twitter.com/jonathan_s)) for building rich command-line applications.
text/markdown
laixintao
laixintao1995@163.com
null
null
BSD-3-Clause
Redis, key-value store, Commandline tools, Redis Client
[ "Development Status :: 4 - Beta", "Environment :: Console", "Environment :: Console :: Curses", "Environment :: MacOS X", "Intended Audience :: Developers", "License :: OSI Approved :: BSD License", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14", "Topic :: Database" ]
[]
null
null
<4.0,>=3.10
[]
[]
[]
[ "Pygments<3,>=2", "click<9.0,>=8.0", "configobj<6.0,>=5.0", "mistune<4.0,>=3.0", "packaging<25.0,>=24.2", "prompt_toolkit<4,>=3", "python-dateutil<3.0.0,>=2.8.2", "redis<8.0.0,>=5.0.0" ]
[]
[]
[]
[ "Homepage, https://github.com/laixintao/iredis", "Repository, https://github.com/laixintao/iredis" ]
poetry/2.3.2 CPython/3.12.12 Linux/6.11.0-1018-azure
2026-02-20T10:12:54.184169
iredis-1.16.0-py3-none-any.whl
388,123
cd/d2/4642adb5d0dbdc75e647c60217d9194e20f6a41f1de258de72af6b8447e9/iredis-1.16.0-py3-none-any.whl
py3
bdist_wheel
null
false
79bf21733c8b4b9fa966e1205ef9cf95
8b51086e5cae9e3d136a74cd9b35c5b75491dcb333d2dae476078094df4cc5d2
cdd24642adb5d0dbdc75e647c60217d9194e20f6a41f1de258de72af6b8447e9
null
[ "LICENSE" ]
356
2.4
dataenginex
0.3.5
DataEngineX - Core framework for data engineering projects
# dataenginex `dataenginex` is the core DataEngineX framework package for building observable, production-ready data and API services. It provides: - FastAPI application primitives and API extensions - Middleware for structured logging, metrics, and tracing - Data quality and validation utilities - Lakehouse and warehouse building blocks - Reusable ML support modules for model-serving workflows ## Install ```bash pip install dataenginex ``` ## Package Scope This package is the core library from the DEX monorepo. `careerdex` and `weatherdex` are maintained in the same repository but are not part of this package release flow. ## Quick Usage ```python from dataenginex import __version__ print(__version__) ``` ## Source and Docs - Repository: https://github.com/data-literate/DEX - CI/CD guide: `docs/CI_CD.md` - Release notes: `packages/dataenginex/src/dataenginex/RELEASE_NOTES.md`
text/markdown
Jay
jayapal.myaka99@gmail.com
null
null
MIT
null
[ "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14" ]
[]
null
null
>=3.11
[]
[]
[]
[ "email-validator>=2.0.0", "fastapi>=0.128.4", "httpx>=0.28.0", "loguru>=0.7.3", "opentelemetry-api>=1.39.0", "opentelemetry-exporter-otlp>=1.39.0", "opentelemetry-instrumentation-fastapi>=0.60b1", "opentelemetry-sdk>=1.39.0", "prometheus-client>=0.24.0", "python-dotenv>=1.2.0", "python-json-logger>=4.0.0", "pyyaml>=6.0.2", "structlog>=25.5.0", "uvicorn>=0.40.0" ]
[]
[]
[]
[]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:12:22.001597
dataenginex-0.3.5.tar.gz
39,253
07/c0/4ccac001fe958b14d9820f3b960fc0089560000420553732d449b494642e/dataenginex-0.3.5.tar.gz
source
sdist
null
false
5280d2ab9497e4889b1eed4d1c547f26
bc9778839355637ca4970f95c75d8affdfa269b7a251509189f28a9029a0cbb0
07c04ccac001fe958b14d9820f3b960fc0089560000420553732d449b494642e
null
[]
213
2.4
uup-dump-api-py
0.1.5
Python Wrapper of UUP Dump API
# UUP Dump API - Python Module A Python wrapper for the [UUP Dump API](https://uupdump.net/) with comprehensive logging and exception handling. ## Features - ✅ Complete API coverage for all UUP Dump endpoints - ✅ Comprehensive exception handling with custom error types - ✅ Detailed logging at multiple levels (DEBUG, INFO, WARNING, ERROR) - ✅ Automatic retry and timeout handling - ✅ Type hints for better IDE support - ✅ User-friendly error messages mapped from API error codes ## Quick Start ```python from adapter import RestAdapter from exceptions import UUPDumpAPIError # Create API client api = RestAdapter(timeout=10) try: # Search for Windows 11 updates result = api.listid(search="Windows 11", sortByDate=True) if 'response' in result: builds = result['response']['builds'] print(f"Found {len(builds)} updates") except UUPDumpAPIError as e: print(f"Error: {e}") ``` ## Logging The module uses Python's built-in `logging` module for comprehensive logging. ### Basic Logging Setup ```python import logging from adapter import RestAdapter # Create adapter with INFO level logging (default) api = RestAdapter(log_level=logging.INFO) # Or use DEBUG for detailed troubleshooting api = RestAdapter(log_level=logging.DEBUG) ``` ### Custom Logging Configuration ```python import logging # Configure logging manually logging.basicConfig( level=logging.DEBUG, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', filename='uup_api.log' # Log to file ) api = RestAdapter() ``` ### Logging Levels - **DEBUG**: Detailed information including request parameters and response status - **INFO**: General information about operations (default) - **WARNING**: Warning messages for potentially problematic situations - **ERROR**: Error messages when operations fail ### Example Log Output ``` 2024-01-15 10:30:45 - adapter - INFO - Initialized UUP Dump API adapter (base_url=https://api.uupdump.net, timeout=10s) 2024-01-15 10:30:45 - adapter - INFO - Listing updates (search='Windows 11', 
sortByDate=True) 2024-01-15 10:30:45 - adapter - DEBUG - Making GET request to https://api.uupdump.net/listid.php with params: {'search': 'Windows 11', 'sortByDate': '1'} 2024-01-15 10:30:46 - adapter - DEBUG - Response status: 200 2024-01-15 10:30:46 - adapter - DEBUG - Request successful ``` ## Exception Handling The module provides custom exceptions for different error scenarios. ### Exception Hierarchy ``` UUPDumpAPIError (base exception) ├── UUPDumpHTTPError # HTTP request failures ├── UUPDumpTimeoutError # Request timeouts ├── UUPDumpConnectionError # Connection failures ├── UUPDumpValidationError # Invalid parameters └── UUPDumpResponseError # API-level errors ``` ### Exception Usage ```python from adapter import RestAdapter from exceptions import ( UUPDumpAPIError, UUPDumpResponseError, UUPDumpTimeoutError, UUPDumpConnectionError ) api = RestAdapter(timeout=5) try: result = api.fetchupd(arch="amd64", ring="Retail") except UUPDumpResponseError as e: # API returned an error (e.g., NO_UPDATE_FOUND, UNKNOWN_ARCH) print(f"API Error: {e}") print(f"Error Code: {e.error_code}") except UUPDumpTimeoutError as e: # Request timed out print(f"Timeout: {e}") except UUPDumpConnectionError as e: # Could not connect to API print(f"Connection Error: {e}") except UUPDumpAPIError as e: # Catch-all for any other errors print(f"Unexpected Error: {e}") ``` ### API Error Codes The module automatically maps API error codes to human-readable messages: | Error Code | Description | |------------|-------------| | `UNKNOWN_ARCH` | Invalid architecture specified | | `UNKNOWN_RING` | Invalid ring/channel specified | | `NO_UPDATE_FOUND` | No update matching criteria | | `UNSUPPORTED_LANG` | Unsupported language | | `NO_FILES` | No files available for update | | `WU_REQUEST_FAILED` | Windows Update request failed | | ... | See `exceptions.py` for complete list | ## API Methods ### `listid(search, sortByDate)` List available updates. 
```python result = api.listid(search="Windows 11", sortByDate=True) ``` ### `fetchupd(arch, ring, flight, build, ...)` Fetch update information from Windows Update. ```python result = api.fetchupd( arch="amd64", ring="Retail", flight="Mainline", build="22621", sku=48 ) ``` ### `get_files(updateId, usePack, desiredEdition, ...)` Get file list for an update. ```python result = api.get_files( updateId="your-uuid-here", usePack="en-us", desiredEdition="professional" ) ``` ### `list_editions(lang, updateId)` List available editions. ```python result = api.list_editions(lang="en-us", updateId="your-uuid-here") ``` ### `list_langs(updateId, returnInfo)` List available languages. ```python result = api.list_langs(updateId="your-uuid-here") ``` ### `update_info(updateId, onlyInfo, ignoreFiles)` Get detailed update information. ```python result = api.update_info(updateId="your-uuid-here", ignoreFiles=True) ``` ### `api_version()` Get API version. ```python result = api.api_version() ``` ## Complete Example ```python import logging from adapter import RestAdapter from exceptions import UUPDumpAPIError, UUPDumpResponseError # Configure logging logging.basicConfig(level=logging.INFO) # Create API client api = RestAdapter(timeout=10) try: # 1. Search for updates print("Searching for updates...") search_result = api.listid(search="23H2", sortByDate=True) if 'response' in search_result: builds = search_result['response']['builds'] update_id = list(builds.keys())[0] # 2. Get update info print(f"Getting info for {update_id}...") info = api.update_info(updateId=update_id) # 3. List languages print("Listing languages...") langs = api.list_langs(updateId=update_id) # 4. List editions print("Listing editions...") editions = api.list_editions(lang="en-us", updateId=update_id) print("Success!") except UUPDumpResponseError as e: print(f"API Error [{e.error_code}]: {e}") except UUPDumpAPIError as e: print(f"Error: {e}") ``` ## Error Handling Best Practices 1. 
**Always catch specific exceptions first**, then fall back to the base exception: ```python try: result = api.fetchupd(...) except UUPDumpResponseError as e: pass # Handle API errors except UUPDumpTimeoutError as e: pass # Handle timeouts except UUPDumpAPIError as e: pass # Handle any other errors ``` 2. **Check for error codes** in API responses: ```python try: result = api.fetchupd(arch="invalid") except UUPDumpResponseError as e: if e.error_code == "UNKNOWN_ARCH": print("Please use: amd64, x86, arm64, or all") ``` 3. **Use appropriate logging levels**: - Development: `logging.DEBUG` - Production: `logging.INFO` or `logging.WARNING` 4. **Set reasonable timeouts** based on your use case: ```python # For interactive applications api = RestAdapter(timeout=10) # For background tasks api = RestAdapter(timeout=30) ``` ## License MIT License ## Contributing Contributions are welcome! Please feel free to submit a Pull Request.
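The error-code table above amounts to a dictionary lookup. A minimal, self-contained sketch of the idea (the messages here are illustrative; the authoritative mapping lives in `exceptions.py`):

```python
# Illustrative error-code mapping; see exceptions.py for the complete,
# authoritative table shipped with the module.
ERROR_MESSAGES = {
    "UNKNOWN_ARCH": "Invalid architecture specified (use amd64, x86, arm64, or all)",
    "UNKNOWN_RING": "Invalid ring/channel specified",
    "NO_UPDATE_FOUND": "No update matching the given criteria",
    "UNSUPPORTED_LANG": "Unsupported language",
    "NO_FILES": "No files available for this update",
    "WU_REQUEST_FAILED": "Windows Update request failed",
}


def describe_error(code: str) -> str:
    """Map an API error code to a human-readable message."""
    return ERROR_MESSAGES.get(code, f"Unrecognized API error code: {code}")
```

A helper like this keeps error reporting in one place, e.g. `print(describe_error(e.error_code))` inside an `except UUPDumpResponseError` block.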
text/markdown
null
null
null
null
MIT
null
[]
[]
null
null
>=3.10
[]
[]
[]
[ "requests>=2.31.0" ]
[]
[]
[]
[ "Homepage, https://github.com/Cairnstew/uup-dump-api-py", "Issues, https://github.com/Cairnstew/uup-dump-api-py/issues" ]
uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"NixOS","version":"25.11","id":"xantusia","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null}
2026-02-20T10:12:09.243227
uup_dump_api_py-0.1.5.tar.gz
13,994
73/e2/20b8e790e1375bee6220db091fd36ff85d9580ad17920f2ac5ffed452ecc/uup_dump_api_py-0.1.5.tar.gz
source
sdist
null
false
78c3b2eb536d6a62c8fb0864e8bcc6e4
15cd68e2f3a4929daa64e97e2dafef397c3643eed22c911aa587b1af4095ac5c
73e220b8e790e1375bee6220db091fd36ff85d9580ad17920f2ac5ffed452ecc
null
[ "LICENSE" ]
219
2.4
bblocks-client
0.0.2
OGC Blocks client
# bblocks-client A Python client library for working with [OGC Blocks](https://ogcincubator.github.io/bblocks-docs/) registers. OGC Blocks are reusable specification components that support validation, semantic annotation via JSON-LD, and federation across distributed registers. ## Installation ```bash pip install bblocks_client ``` Optional extras for validation and semantic uplift: ```bash # JSON Schema validation pip install bblocks_client[jsonschema] # SHACL validation and semantic uplift (RDF/JSON-LD) pip install bblocks_client[rdf] # All extras pip install bblocks_client[all] ``` ## Usage ### Loading a register ```python from ogc.bblocks.register import load_register register = load_register("https://example.org/bblocks/register.json") # Look up a block by identifier bblock = register.get_item_summary("my.org.bblock-id") ``` Imported registers (dependencies) are loaded automatically. Pass `load_dependencies=False` to skip this. ### Accessing block metadata `get_item_summary()` returns a `BuildingBlockSummary` with lightweight metadata. `get_item_full()` fetches the full `BuildingBlock`, including examples and semantic uplift configuration. ```python bblock = register.get_item_full("my.org.bblock-id") print(bblock.name) print(bblock.status) # Status enum: stable, experimental, etc. print(bblock.depends_on) # Set of dependency identifiers print(bblock.ld_context) # URL to JSON-LD context print(bblock.schema) # Dict of media-type -> schema URL ``` ### JSON Schema validation Requires `bblocks_client[jsonschema]`. ```python from ogc.bblocks.validate import validate_json result = validate_json(bblock, {"type": "Feature", ...}) if not result.valid: print(result.exception) # Or raise directly: result.raise_for_invalid() ``` ### SHACL validation Requires `bblocks_client[rdf]`. 
```python from rdflib import Graph from ogc.bblocks.validate import validate_shacl graph = Graph().parse("data.ttl") result = validate_shacl(bblock, graph) print(result.report) result.raise_for_invalid() ``` ### Semantic uplift (JSON to RDF) Requires `bblocks_client[rdf]`. ```python from ogc.bblocks.semantic_uplift import uplift_json rdf_graph = uplift_json(bblock, {"name": "Alice", ...}) print(rdf_graph.serialize(format="turtle")) ``` This applies the block's JSON-LD context to the input data, producing an RDF graph. Pre- and post-processing steps (jq, SPARQL, SHACL rules) defined in the block are applied automatically. ## License Apache 2.0
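Since `depends_on` exposes a set of identifiers, collecting everything a block needs is a small graph traversal. A sketch over a plain dict (the identifiers and the dict shape are hypothetical illustrations, not the real register format):

```python
def transitive_deps(depends_on: dict[str, set[str]], root: str) -> set[str]:
    """Collect the transitive closure of a block's depends_on sets."""
    seen: set[str] = set()
    stack = [root]
    while stack:
        for dep in depends_on.get(stack.pop(), set()):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen


# Hypothetical register contents for illustration
register = {
    "my.org.feature": {"my.org.geometry", "my.org.link"},
    "my.org.geometry": {"my.org.coords"},
    "my.org.link": set(),
    "my.org.coords": set(),
}
deps = transitive_deps(register, "my.org.feature")
```

This is the kind of walk that loading a register with `load_dependencies=True` performs across imported registers.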
text/markdown
null
Alejandro Villar <avillar@ogc.org>
null
null
null
ogc, ogc blocks, ogc rainbow
[ "Development Status :: 4 - Beta", "Topic :: Scientific/Engineering", "Topic :: Utilities", "Topic :: Software Development :: Libraries" ]
[]
null
null
>=3.7
[]
[]
[]
[ "dacite~=1.9.2", "pyyaml~=6.0", "jsonschema~=4.26.0; extra == \"jsonschema\"", "rdflib~=7.0; extra == \"rdf\"", "pyshacl~=0.28; extra == \"rdf\"", "jq~=1.11.0; extra == \"rdf\"", "bblocks_client[jsonschema]; extra == \"all\"", "bblocks_client[rdf]; extra == \"all\"" ]
[]
[]
[]
[ "Homepage, https://github.com/ogcincubator/bblocks-docs/", "Documentation, https://github.com/ogcincubator/bblocks-docs/", "Repository, https://github.com/ogcincubator/bblocks-client-python.git" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:11:49.228689
bblocks_client-0.0.2.tar.gz
14,992
cf/8a/fc390a1315c750996f51c28fc6e4b035bdbcc32c3b42ab82f84b9ed59c33/bblocks_client-0.0.2.tar.gz
source
sdist
null
false
3e11dffef4b81ba49a0a268ab6c46475
7f2a4e3c17cb549cb9d18ccb932496a598a471b72910bfbf8a638f4d1d4353ea
cf8afc390a1315c750996f51c28fc6e4b035bdbcc32c3b42ab82f84b9ed59c33
Apache-2.0
[ "LICENSE" ]
222
2.4
pyxll-jupyter
0.7.1
Adds Jupyter notebooks to Microsoft Excel using PyXLL.
# PyXLL-Jupyter Integration for Jupyter notebooks and Microsoft Excel. See the [Python Jupyter Notebooks in Excel](https://www.pyxll.com/blog/python-jupyter-notebooks-in-excel/) blog post for more details. ## Requirements - PyXLL >= 5.1.0 - Jupyter >= 1.0.0 - notebook >= 6.0.0 - PySide2, or PySide6 for Python >= 3.10 - pywin32 >= 301, for `%xl_get`/`%xl_set` magic functions ### Optional - jupyterlab >= 4.0.0 ## Installation To install this package use: pip install pyxll-jupyter Once installed, a "Jupyter Notebook" button will be added to the PyXLL ribbon tab in Excel, so long as you have PyXLL 5 or above already installed. When using Jupyter in Excel the Python kernel runs inside the Excel process using PyXLL. You can interact with Excel from code in the Jupyter notebook, and write Excel functions using the PyXLL decorators @xl_menu and @xl_macro etc. As the kernel runs in the existing Python interpreter in the Excel process it is not possible to restart the kernel or to use other Python versions or other languages. ## Configuration To configure, add any of the following settings to your pyxll.cfg file. You do not need to set all of these, only the ones you wish to change: [JUPYTER] ; Workbook settings use_workbook_dir = 0 notebook_dir = C:\Path\To\Your\Documents subcommand = notebook ; Browser settings qt = allow_cookies = 1 private_browser = 0 cache_path = storage_path = ; Other settings timeout = 60 disable_ribbon = 0 pause_on_focus_lost = 1 If *use_workbook_dir* is set and the current workbook is saved then Jupyter will open in the same folder as the current workbook. *notebook_dir* can be set to an existing folder that will be used as the root documents folder that Jupyter opens in. The *subcommand* option can be used to switch the Jupyter subcommand used to launch the Jupyter web server. It can be set to either `notebook` for the default Jupyter notebook interface, or `lab` if using Jupyterlab *(experimental)*. 
*qt* can be used to switch which Qt implementation is used. Possible values are 'PySide6', 'PyQt6', 'PySide2', and 'PyQt5'. By default, whichever Qt implementation is installed will be used. *allow_cookies* will prevent the Qt browser from saving cookies if set to 0. *private_browser* will prevent the Qt browser from using any previously stored data or saving any data from the browser session. *cache_path* can be set to an existing folder for the browser to save cached data. By default this will be the Qt browser's default cached data path. *storage_path* can be set to an existing folder for the browser to save persistent storage data. By default this will be the Qt browser's default persistent storage path. *timeout* is the maximum number of seconds of inactivity to wait for when starting the Jupyter server process. If you are getting timeout errors then increasing this may help. If *disable_ribbon* is set then the ribbon button to start Jupyter will not be shown, however Jupyter may still be opened using the "OpenJupyterNotebook" macro. If *pause_on_focus_lost* is set then the Jupyter kernel will be paused whenever no Jupyter tasks panes are focused. If Jupyter is opened in a web browser this has no effect and the kernel will not be paused. ## Experimental JupyterLab Support Jupyterlab can be used instead of the default Jupyter Notebook interface by specifying `subcommand = lab` in the ``[JUPYTER]`` section of the pyxll.cfg file. This requires Jupyterlab >= 4.0.0 to be installed. At the time of writing, version 4 of Jupyterlab is in pre-release and can be installed using: pip install --pre jupyterlab ### Qt The pyxll-jupyter package uses the Qt [QWebEngineView](https://doc.qt.io/qt-5/qwebengineview.html) widget, and by default will use the [PySide2](https://pypi.org/project/PySide2/) package for Python <= 3.9 or the [PySide6](https://pypi.org/project/PySide6/) package for Python >= 3.10. 
This can be changed to use [PyQt5](https://www.riverbankcomputing.com/software/pyqt/) by setting `qt = PyQt5` in the `JUPYTER` section of the config. You will need to have both the `pyqt5` and `pyqtwebengine` packages installed if using this option. Both can be installed using pip as follows: pip install pyqt5 pyqtwebengine ## Magic Functions The following magic functions are available in addition to the standard Jupyter magic functions: ``` %xl_get [-c CELL] [-t TYPE] [-x] Get the current selection in Excel into Python. optional arguments: -c CELL, --cell CELL Address of cell to get value of. -t TYPE, --type TYPE Datatype to convert the value to. -x, --no-auto-resize Don't auto-resize the range. ``` ``` %xl_set [-c CELL] [-t TYPE] [-f FORMATTER] [-x] value Set a value to the current selection in Excel. positional arguments: value Value to set in Excel. optional arguments: -c CELL, --cell CELL Address of cell to get value of. -t TYPE, --type TYPE Datatype to convert the value to. -f FORMATTER, --formatter FORMATTER PyXLL Formatter to use when setting the value. -x, --no-auto-resize Don't auto-resize the range. ``` ``` %xl_plot [-n NAME] [-c CELL] [-w WIDTH] [-h HEIGHT] figure Plot a figure to Excel in the same way as pyxll.plot. The figure is exported as an image and inserted into Excel as a Picture object. If the --name argument is used and the picture already exists then it will not be resized or moved. positional arguments: figure Figure to plot. optional arguments: -n NAME, --name NAME Name of the picture object in Excel to use. -c CELL, --cell CELL Address of cell to use when creating the Picture in Excel. -w WIDTH, --width WIDTH Width in points to use when creating the Picture in Excel. -h HEIGHT, --height HEIGHT Height in points to use when creating the Picture in Excel. ``` ## Opening from VBA You can open the Jupyter notebook from VBA using the ``OpenJupyterNotebook`` macro, called via VBA's ``Run`` method. 
For example: Run "OpenJupyterNotebook" The macro takes two arguments, the initial path and a boolean to open in a browser rather than in an Excel task pane if True. The initial path can either be a valid path, or an empty string. For example, to open Jupyter in a browser with the default path you would run the macro as follows: Run "OpenJupyterNotebook", "", True For more information about installing and using PyXLL see https://www.pyxll.com. Copyright (c) PyXLL Ltd
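The `[JUPYTER]` settings described in the Configuration section are standard INI syntax, so they can be read with Python's `configparser` — shown here purely to illustrate the option types (PyXLL parses pyxll.cfg itself; this is not how the package reads its config):

```python
import configparser

# A minimal pyxll.cfg fragment using the options documented above
cfg_text = """
[JUPYTER]
use_workbook_dir = 0
subcommand = notebook
timeout = 60
"""

cfg = configparser.ConfigParser()
cfg.read_string(cfg_text)

# Numeric, string, and boolean options each have a typed accessor
timeout = cfg.getint("JUPYTER", "timeout", fallback=60)
subcommand = cfg.get("JUPYTER", "subcommand", fallback="notebook")
use_workbook_dir = cfg.getboolean("JUPYTER", "use_workbook_dir", fallback=False)
```

Note that `0`/`1` values such as `use_workbook_dir` parse cleanly as booleans, which is why the documented settings use that convention.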
text/markdown
null
Tony Roberts <tony@pyxll.com>
null
null
null
null
[ "Programming Language :: Python :: 3", "Operating System :: Microsoft :: Windows" ]
[]
null
null
null
[]
[]
[]
[ "pyxll>=5.1.0", "jupyter>=1.0.0", "jupyter-client>=6.0.0", "notebook>=6.0.0", "packaging; python_version >= \"3.10\"", "PySide2; python_version < \"3.10\"", "PySide6!=6.4.2; python_version >= \"3.10\"", "pywin32>=301" ]
[]
[]
[]
[ "Repository, https://github.com/pyxll/pyxll-jupyter", "Issues, https://github.com/pyxll/pyxll-jupyter/issues" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:11:45.019802
pyxll_jupyter-0.7.1.tar.gz
46,267
e3/1f/e313a8d7296a91c9d59c77e78717ec93af18a27538a5c1c43c15daa7db01/pyxll_jupyter-0.7.1.tar.gz
source
sdist
null
false
fce662877ec872be135c443292876012
2fff0157cbc6efb8c6c26a2b6b8390a9d03ddab2a3acb7afc26d551015475d66
e31fe313a8d7296a91c9d59c77e78717ec93af18a27538a5c1c43c15daa7db01
MIT
[ "LICENSE.md" ]
220
2.4
usecortex-ai
0.5.8
The official Python SDK for the Cortex AI platform.
# Cortex AI Python SDK - [usecortex.ai](https://www.usecortex.ai/) The official Python SDK for the Cortex AI platform. Build powerful, context-aware AI features into your Python applications. **Cortex** is your plug-and-play memory infrastructure. It powers intelligent, context-aware retrieval for any AI app or agent, whether you’re building a customer support bot, a research copilot, or an internal knowledge assistant. [Learn more about the SDK from our docs](https://docs.usecortex.ai/) ## Core features * **Dynamic retrieval and querying** that always retrieves the most relevant context * **Built-in long-term memory** that evolves with every user interaction * **Personalization hooks** for user preferences, intent, and history * **Developer-first SDK** with the most flexible APIs and fine-grained controls ## Getting started ### Installation ```bash pip install usecortex-ai ``` ### Client setup We provide both synchronous and asynchronous clients. Use **`AsyncCortexAI`** when working with async/await patterns, and **`CortexAI`** for traditional synchronous workflows. Client initialization does not trigger any network requests, so you can safely create as many client instances as needed. Both clients expose the exact same set of methods. ```python import os from usecortex_ai import CortexAI, AsyncCortexAI api_key = os.environ["CORTEX_API_KEY"] # Read your Cortex API key from the CORTEX_API_KEY environment variable. # Sync client client = CortexAI(token=api_key) # Async client (for async/await usage) async_client = AsyncCortexAI(token=api_key) ``` ### Create a Tenant You can consider a `tenant` as a single database that can have internal isolated collections called `sub-tenants`. 
[Learn more about the concept of tenant here](https://docs.usecortex.ai/essentials/multi-tenant) ```python def create_tenant(): return client.tenant.create(tenant_id="my-company") ``` ### Ingest Your Data When you index your data, you make it ready for retrieval from Cortex using natural language. ```python # Ingest into your knowledge base with open("a.pdf", 'rb') as f1, open("b.pdf", 'rb') as f2: files = [ ("a.pdf", f1), ("b.pdf", f2) ] upload_result = client.upload.knowledge( tenant_id="tenant_123", files=files, file_metadata=[ { "id": "doc_a", "tenant_metadata": {"dept": "sales"}, "document_metadata": {"author": "Alice"} }, { "id": "doc_b", "tenant_metadata": {"dept": "marketing"}, "document_metadata": {"author": "Bob"} } ] ) # Ingest user memories (reusing the client created above) # Simple text memory result = client.user_memory.add( memories=[ { "text": "User prefers detailed explanations and dark mode", "infer": True, "user_name": "John" } ], tenant_id="tenant-01", sub_tenant_id="", upsert=True ) # Markdown content markdown_result = client.user_memory.add( memories=[ { "text": "# Meeting Notes\n\n## Key Points\n- Budget approved", "is_markdown": True, "infer": False, "title": "Meeting Notes" } ], tenant_id="tenant-01", sub_tenant_id="", upsert=True ) # User-assistant pairs with inference conversation_result = client.user_memory.add( memories=[ { "user_assistant_pairs": [ {"user": "What are my preferences?", "assistant": "You prefer dark mode."}, {"user": "How do I like reports?", "assistant": "Weekly summaries with charts."} ], "infer": True, "user_name": "John", "custom_instructions": "Extract user preferences" } ], tenant_id="tenant-01", sub_tenant_id="", upsert=True ) ``` **For a more detailed explanation** of document upload, including supported file formats, processing pipeline, metadata handling, and advanced configuration options, refer to the [Ingest Knowledge 
endpoint](https://docs.usecortex.ai/api-reference/endpoint/add-knowledge-memories). ### Search ```python # Semantic Recall results = client.recall.full_recall( query="Which mode does user prefer", tenant_id="tenant_1234", sub_tenant_id="sub_tenant_4567", alpha=0.8, recency_bias=0 ) # Get ingested data (memories + knowledge base) all_sources = client.data.list_data( tenant_id="tenant_1234", sub_tenant_id="sub_tenant_4567" ) ``` **For a more detailed explanation** of search and retrieval, including query parameters, scoring mechanisms, result structure, and advanced search features, refer to the [Search endpoint documentation](https://docs.usecortex.ai/api-reference/endpoint/search). ## SDK Method Structure & Type Safety Our SDKs follow a predictable pattern that mirrors the API structure while providing full type safety. > **Method Mapping** : `client.<group>.<function_name>` mirrors `api.usecortex.ai/<group>/<function_name>` > > For example: `client.upload.upload_text()` corresponds to `POST /upload/upload_text` The SDKs provide exact type parity with the API specification: - **Request Parameters** : Every field documented in the API reference (required, optional, types, validation rules) is reflected in the SDK method signatures - **Response Objects** : Return types match the exact JSON schema documented for each endpoint - **Error Types** : Exception structures mirror the error response formats from the API - **Nested Objects** : Complex nested parameters and responses maintain their full structure and typing > This means you can rely on your IDE’s autocomplete and type checking. If a parameter is optional in the API docs, it’s optional in the SDK. If a response contains a specific field, your IDE will know about it. Our SDKs are built in such a way that your IDE will automatically provide **autocompletion, type-checking, inline documentation with examples, and compile time validation** for each and every method. 
> > Just hit **Cmd+Space/Ctrl+Space!** ## Links - **Homepage:** [usecortex.ai](https://www.usecortex.ai/) - **Documentation:** [docs.usecortex.ai](https://docs.usecortex.ai/) ## Our docs Please refer to our [API reference](https://docs.usecortex.ai/api-reference/introduction) for detailed explanations of every API endpoint, parameter options, and advanced use cases. ## Support If you have any questions or need help, please reach out to our support team at [founders@usecortex.ai](mailto:founders@usecortex.ai).
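The `client.<group>.<function_name>` convention described above maps one-to-one onto URL paths. A sketch of that mapping (the SDK performs this routing internally, so this helper is illustrative only):

```python
def endpoint_for(method_path: str, base_url: str = "https://api.usecortex.ai") -> str:
    """Translate a client.<group>.<function_name> method path into its REST URL.

    Illustrative sketch of the documented mapping convention; the SDK
    does this routing for you.
    """
    group, function_name = method_path.split(".")
    return f"{base_url}/{group}/{function_name}"


url = endpoint_for("upload.upload_text")
# -> https://api.usecortex.ai/upload/upload_text
```

Knowing this mapping makes it easy to find the matching page in the API reference for any SDK method.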
text/markdown
null
Soham Ratnaparkhi <soham@usecortex.ai>
null
null
Copyright (c) 2024 Cortex AI All Rights Reserved. PROPRIETARY AND CONFIDENTIAL This software is the proprietary and confidential property of Cortex AI ("the Company"). Permission is hereby granted to users to install and use this software as part of the Cortex AI service, subject to the terms and conditions of the service agreement entered into with the Company. You may not, without the express written permission of the Company: 1. Copy, modify, or create derivative works of the software. 2. Distribute, sell, rent, lease, sublicense, or otherwise transfer the software to any third party. 3. Reverse engineer, decompile, or disassemble the software, except and only to the extent that such activity is expressly permitted by applicable law notwithstanding this limitation. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
cortex, ai, sdk, api, generative ai, rag
[ "Development Status :: 5 - Production/Stable", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Intended Audience :: Developers", "Topic :: Software Development :: Libraries :: Python Modules", "Typing :: Typed" ]
[]
null
null
>=3.10
[]
[]
[]
[ "httpx>=0.24", "pydantic<3,>=1.10" ]
[]
[]
[]
[ "Homepage, https://www.usecortex.ai/", "Documentation, https://docs.usecortex.ai/" ]
twine/6.2.0 CPython/3.11.14
2026-02-20T10:11:08.171961
usecortex_ai-0.5.8.tar.gz
64,696
9d/8d/3c888c705924463323ad14fb5355df04cf8bb8c19e49284f34406aef4f5d/usecortex_ai-0.5.8.tar.gz
source
sdist
null
false
c51cbecaa24d7fdd8ad1b5aba71d2032
6f5cbff40f23b7b4a1542a48f0522735fd87ac211573479babf691c8a66f8de3
9d8d3c888c705924463323ad14fb5355df04cf8bb8c19e49284f34406aef4f5d
null
[ "LICENSE" ]
212
2.4
ccs-digitalmarketplace-utils
79.14.0
Common utils for Digital Marketplace apps.
Digital Marketplace utils ========================= ![Python 3.11](https://img.shields.io/badge/python-3.11-blue.svg) ![Python 3.12](https://img.shields.io/badge/python-3.12-blue.svg) ![Python 3.13](https://img.shields.io/badge/python-3.13-blue.svg) ![Python 3.14](https://img.shields.io/badge/python-3.14-blue.svg) [![PyPI version](https://badge.fury.io/py/ccs-digitalmarketplace-utils.svg)](https://badge.fury.io/py/ccs-digitalmarketplace-utils) ## What's in here? * Digital Marketplace API clients * Formatting utilities for Digital Marketplace * Digital Marketplace logging for Flask using JSON Logging * Utility functions/libraries for Amazon S3, Mailchimp, Notify, Cloudwatch * Helper code for Flask configuration * A forked version of Flask Feature Flags ## Logging from applications When logging from applications, you should write your message as a [format string](https://docs.python.org/3/library/string.html#format-string-syntax) and pass any required arguments to the log method in the `extra` named argument. This allows our logging to use them as separate fields in our JSON logs, making it much easier to search and aggregate on them. ```python logger.info("the user {user_id} did the thing '{thing}'", extra={ 'user_id': user_id, 'thing': thing }) ``` Note that apart from not getting the benefit, passing the formatted message can be dangerous. User-generated content may be passed, unescaped, to the `.format` method. ## Versioning Releases of this project follow [semantic versioning](http://semver.org/), i.e. > Given a version number MAJOR.MINOR.PATCH, increment the: > > - MAJOR version when you make incompatible API changes, > - MINOR version when you add functionality in a backwards-compatible manner, and > - PATCH version when you make backwards-compatible bug fixes. 
To make a new version: - update the version in the `dmutils/__init__.py` file - if you are making a major change, also update the change log; When the pull request is merged a GitHub Action will tag the new version. ## Pre-commit hooks This project has a [pre-commit hook][pre-commit hook] to do some general file checks and check the `pyproject.toml`. Follow the [Quick start][pre-commit quick start] to see how to set this up in your local checkout of this project. ## Licence Unless stated otherwise, the codebase is released under [the MIT License][mit]. This covers both the codebase and any sample code in the documentation. The documentation is [&copy; Crown copyright][copyright] and available under the terms of the [Open Government 3.0][ogl] licence. [mit]: LICENCE [copyright]: http://www.nationalarchives.gov.uk/information-management/re-using-public-sector-information/uk-government-licensing-framework/crown-copyright/ [ogl]: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/ [pre-commit hook]: https://pre-commit.com/ [pre-commit quick start]: https://pre-commit.com/#quick-start
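The `extra` pattern in the logging section works because each key in `extra` becomes an attribute on the `LogRecord`, which is where a JSON formatter picks it up as a separate field. A small self-contained demonstration (the capturing handler below is just for illustration, not part of dmutils):

```python
import logging


class LastRecordHandler(logging.Handler):
    """Keeps the most recent record so we can inspect its attributes."""

    def emit(self, record: logging.LogRecord) -> None:
        self.last = record


logger = logging.getLogger("dmutils-example")
handler = LastRecordHandler()
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("the user {user_id} did the thing '{thing}'",
            extra={"user_id": 42, "thing": "login"})

# Each extra key is now a separate attribute on the record, so a JSON
# formatter can emit it as its own field, while the message string
# itself is stored unformatted.
record = handler.last
```

This is also why passing a pre-formatted message loses the benefit: the individual values never reach the record as structured fields.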
text/markdown
GDS Developers, CCS Developers
null
null
null
null
null
[]
[]
null
null
<3.15,>=3.11
[]
[]
[]
[ "Flask-WTF>=1.2.1", "Flask<3.2,>=3.0", "Flask-gzip>=0.2", "Flask-Login>=0.6.3", "Flask-Session<0.9.0,>=0.6.0", "boto3<2,>=1.7.83", "contextlib2>=21.6.0", "cryptography>=41.0.4", "ccs-digitalmarketplace-apiclient>=37.4.1", "mailchimp3==3.0.21", "requests<3,>=2.22.0", "redis>=5.0.1", "filetype<2,>=1.2.0", "notifications-python-client<11.0.0,>=8.1.0", "odfpy>=1.4.1", "python-json-logger<5.0.0,>=4.0.0", "pytz", "unicodecsv>=0.14.1", "urllib3<3", "werkzeug<3.2,>=3.0", "workdays>=1.4", "ruff==0.15.1; extra == \"dev\"", "freezegun==1.5.5; extra == \"dev\"", "hypothesis==6.151.5; extra == \"dev\"", "moto==5.1.21; extra == \"dev\"", "mypy==1.19.1; extra == \"dev\"", "pytest==9.0.2; extra == \"dev\"", "pytest-cov==7.0.0; extra == \"dev\"", "pytest-datadir==1.8.0; extra == \"dev\"", "requests-mock==1.12.1; extra == \"dev\"", "testfixtures==10.0.0; extra == \"dev\"", "ccs-digitalmarketplace-test-utils==7.7.0; extra == \"dev\"", "types-python-dateutil; extra == \"dev\"", "types-pytz; extra == \"dev\"", "types-redis; extra == \"dev\"", "types-requests; extra == \"dev\"", "pre-commit==4.5.1; extra == \"dev\"" ]
[]
[]
[]
[ "Homepage, https://github.com/Crown-Commercial-Service/ccs-digitalmarketplace-utils", "Repository, https://github.com/Crown-Commercial-Service/ccs-digitalmarketplace-utils.git", "Issues, https://github.com/Crown-Commercial-Service/ccs-digitalmarketplace-utils/issues", "Changelog, https://github.com/Crown-Commercial-Service/ccs-digitalmarketplace-utils/CHANGELOG.md" ]
twine/6.2.0 CPython/3.11.14
2026-02-20T10:10:50.959058
ccs_digitalmarketplace_utils-79.14.0.tar.gz
84,648
fe/c2/3370e7ffa64ae5e7ecde3b8f43d95df0749b3026c27962e52d667758e947/ccs_digitalmarketplace_utils-79.14.0.tar.gz
source
sdist
null
false
af2793f536d73db11beceb4908347da9
70d35d6739069cc68d81247891af556fb8f9b84678d3d325355250a925309cc9
fec23370e7ffa64ae5e7ecde3b8f43d95df0749b3026c27962e52d667758e947
null
[ "LICENCE" ]
211
2.4
mycli
1.55.0
CLI for MySQL Database. With auto-completion and syntax highlighting.
# mycli [![Build Status](https://github.com/dbcli/mycli/workflows/mycli/badge.svg)](https://github.com/dbcli/mycli/actions?query=workflow%3Amycli) A command line client for MySQL that can do auto-completion and syntax highlighting. Homepage: [http://mycli.net](http://mycli.net) Documentation: [http://mycli.net/docs](http://mycli.net/docs) ![Completion](screenshots/tables.png) ![CompletionGif](screenshots/main.gif) Postgres Equivalent: [http://pgcli.com](http://pgcli.com) Quick Start ----------- If you already know how to install Python packages, you can install it via `pip` (you might need `sudo` on Linux): ```bash pip install -U 'mycli[all]' ``` or ```bash brew update && brew install mycli # Only on macOS ``` or ```bash sudo apt-get install mycli # Only on Debian or Ubuntu ``` ### Usage See ```bash mycli --help ``` Features -------- `mycli` is written using [prompt_toolkit](https://github.com/jonathanslenders/python-prompt-toolkit/). * Auto-completion as you type for SQL keywords as well as tables, views and columns in the database. * Fuzzy history search using [fzf](https://github.com/junegunn/fzf). * Syntax highlighting using Pygments. * Smart-completion (enabled by default) will suggest context-sensitive completion. - `SELECT * FROM <tab>` will only show table names. - `SELECT * FROM users WHERE <tab>` will only show column names. * Support for multiline queries. * Favorite queries with optional positional parameters. Save a query using `\fs <alias> <query>` and execute it with `\f <alias>`. * Timing of SQL statements and table rendering. * Log every query and its results to a file (disabled by default). * Pretty print tabular data (with colors!). * Support for SSL connections. * Shell-style trailing redirects with `$>`, `$>>` and `$|` operators. * Support for querying LLMs with context derived from your schema. * Support for storing passwords in the system keyring. 
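Favorite queries with positional parameters boil down to simple placeholder substitution before the query is sent. An illustrative sketch of the idea using `$1`-style placeholders (mycli's own implementation may differ in detail):

```python
import re


def substitute_params(query: str, args: list[str]) -> str:
    """Replace $1, $2, ... placeholders with positional arguments.

    Sketch of the positional-parameter idea behind favorite queries;
    not mycli's actual code.
    """
    return re.sub(r"\$(\d+)", lambda m: args[int(m.group(1)) - 1], query)


sql = substitute_params("SELECT * FROM users WHERE id = $1 AND status = $2",
                        ["7", "'active'"])
```

Saving a parameterized favorite once and invoking it with different arguments avoids retyping long queries.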
Mycli creates a config file `~/.myclirc` on first run; you can use the options in that file to configure the features above, and more. Some features are only exposed as [key bindings](doc/key_bindings.rst).

Contributions:
--------------

If you're interested in contributing to this project, first of all I would like to extend my heartfelt gratitude. I've written a small doc to describe how to get this running in a development setup.

https://github.com/dbcli/mycli/blob/main/CONTRIBUTING.md

## Additional Install Instructions:

These are alternative ways to install mycli that are not managed by our team but provided by OS package maintainers. These packages may be slightly out of date and take time to pick up the latest version.

### Arch, Manjaro

You can install the mycli package available in the AUR:

```
yay -S mycli
```

### Debian, Ubuntu

On Debian and Ubuntu, you can install the mycli package using apt:

```
sudo apt-get install mycli
```

### Fedora

Fedora has a package available for mycli; install it using dnf:

```
sudo dnf install mycli
```

### Windows

#### Option 1: Native Windows

Install the `less` pager, for example with `scoop install less`, then follow the instructions in this blog post: http://web.archive.org/web/20221006045208/https://www.codewall.co.uk/installing-using-mycli-on-windows/

**Mycli is not tested on Windows**, but the libraries used in the app are Windows-compatible. This means it should work without any modifications, but it isn't supported. PRs to add native Windows testing to mycli's CI would be welcome!

#### Option 2: WSL

Everything should work as expected in WSL. This is a good option for using mycli on Windows.

### Thanks:

This project was funded through Kickstarter. My thanks to the [backers](http://mycli.net/sponsors) who supported the project.
A special thanks to [Jonathan Slenders](https://twitter.com/jonathan_s) for creating [Python Prompt Toolkit](http://github.com/jonathanslenders/python-prompt-toolkit), which is quite literally the backbone library that made this app possible. Jonathan has also provided valuable feedback and support during the development of this app.

[Click](http://click.pocoo.org/) is used for command line option parsing and printing error messages.

Thanks to [PyMySQL](https://github.com/PyMySQL/PyMySQL) for a pure-Python adapter to the MySQL database.

### Compatibility

Mycli is tested on macOS and Linux, and requires Python 3.10 or newer.

To connect to MySQL versions earlier than 5.5, you may need to set the following in `~/.myclirc`:

```
# character set for connections without --charset being set at the CLI
default_character_set = utf8
```

or pass `--charset=utf8` when invoking mycli.

### Configuration and Usage

For more information on using and configuring mycli, [check out our documentation](http://mycli.net/docs).

Common topics include:

- [Configuring mycli](http://mycli.net/config)
- [Using/Disabling the pager](http://mycli.net/pager)
- [Syntax colors](http://mycli.net/syntax)
text/markdown
null
Mycli Core Team <mycli-dev@googlegroups.com>
null
null
null
null
[]
[]
null
null
>=3.10
[]
[]
[]
[ "click~=8.3.1", "cryptography~=46.0.5", "Pygments~=2.19.2", "prompt_toolkit<4.0.0,>=3.0.6", "PyMySQL~=1.1.2", "sqlparse<0.6.0,>=0.3.0", "sqlglot[rs]==27.*", "configobj~=5.0.9", "cli_helpers[styles]~=2.10.1", "pyperclip~=1.11.0", "pycryptodomex~=3.23.0", "pyfzf~=0.3.1", "rapidfuzz~=3.14.3", "keyring~=25.7.0", "paramiko~=3.5.1; extra == \"ssh\"", "sshtunnel~=0.4.0; extra == \"ssh\"", "llm~=0.28.0; extra == \"llm\"", "setuptools==82.*; extra == \"llm\"", "pip==26.*; extra == \"llm\"", "mycli[ssh]; extra == \"all\"", "mycli[llm]; extra == \"all\"", "behave~=1.3.3; extra == \"dev\"", "coverage~=7.13.4; extra == \"dev\"", "mypy~=1.19.1; extra == \"dev\"", "pexpect~=4.9.0; extra == \"dev\"", "pytest~=9.0.2; extra == \"dev\"", "pytest-cov~=7.0.0; extra == \"dev\"", "tox~=4.35.0; extra == \"dev\"", "pdbpp~=0.11.7; extra == \"dev\"", "paramiko~=3.5.1; extra == \"dev\"", "sshtunnel~=0.4.0; extra == \"dev\"", "llm~=0.28.0; extra == \"dev\"", "setuptools==82.*; extra == \"dev\"", "pip==26.*; extra == \"dev\"", "ruff~=0.15.0; extra == \"dev\"" ]
[]
[]
[]
[ "homepage, http://mycli.net" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:09:43.342285
mycli-1.55.0.tar.gz
342,165
28/1c/6f423310b346703ac0e711035c619fc8233375201f2f4865c87cdd0633ec/mycli-1.55.0.tar.gz
source
sdist
null
false
b971e8cdbfc7a3597e6c19de222c6f61
6d295cfc0817a7f95a43d91b5fc3b04a23fccdd7d970b67d51ad62d75af38cd3
281c6f423310b346703ac0e711035c619fc8233375201f2f4865c87cdd0633ec
BSD-3-Clause
[ "LICENSE.txt", "AUTHORS.rst" ]
801
2.4
jsonschema-path
0.4.1
JSONSchema Spec with object-oriented paths
***************
JSONSchema Path
***************

.. image:: https://img.shields.io/pypi/v/jsonschema-path.svg
   :target: https://pypi.python.org/pypi/jsonschema-path
.. image:: https://travis-ci.org/p1c2u/jsonschema-path.svg?branch=master
   :target: https://travis-ci.org/p1c2u/jsonschema-path
.. image:: https://img.shields.io/codecov/c/github/p1c2u/jsonschema-path/master.svg?style=flat
   :target: https://codecov.io/github/p1c2u/jsonschema-path?branch=master
.. image:: https://img.shields.io/pypi/pyversions/jsonschema-path.svg
   :target: https://pypi.python.org/pypi/jsonschema-path
.. image:: https://img.shields.io/pypi/format/jsonschema-path.svg
   :target: https://pypi.python.org/pypi/jsonschema-path
.. image:: https://img.shields.io/pypi/status/jsonschema-path.svg
   :target: https://pypi.python.org/pypi/jsonschema-path

About
#####

Object-oriented JSONSchema

Key features
############

* Traverse schema like paths
* Access schema on demand with a separate dereferencing accessor layer

Installation
############

.. code-block:: console

   pip install jsonschema-path

Alternatively you can download the code and install from the repository:

.. code-block:: console

   pip install -e git+https://github.com/p1c2u/jsonschema-path.git#egg=jsonschema_path

Usage
#####

.. code-block:: python

   >>> from jsonschema_path import SchemaPath

   >>> d = {
   ...     "properties": {
   ...         "info": {
   ...             "$ref": "#/$defs/Info",
   ...         },
   ...     },
   ...     "$defs": {
   ...         "Info": {
   ...             "properties": {
   ...                 "title": {
   ...                     "$ref": "http://example.com",
   ...                 },
   ...                 "version": {
   ...                     "type": "string",
   ...                     "default": "1.0",
   ...                 },
   ...             },
   ...         },
   ...     },
   ... }

   >>> path = SchemaPath.from_dict(d)

   >>> # Stat keys
   >>> "properties" in path
   True

   >>> # Concatenate paths with /
   >>> info_path = path / "properties" / "info"

   >>> # Stat keys with implicit dereferencing
   >>> "properties" in info_path
   True

   >>> # Concatenate paths with implicit dereferencing
   >>> version_path = info_path / "properties" / "version"

   >>> # Open content with implicit dereferencing
   >>> with version_path.open() as contents:
   ...     print(contents)
   {'type': 'string', 'default': '1.0'}

Benchmarks
##########

Benchmarks mirror the lightweight (dependency-free) JSON output format used in `pathable`. Run locally with Poetry:

.. code-block:: console

   poetry run python -m tests.benchmarks.bench_parse --output reports/bench-parse.json
   poetry run python -m tests.benchmarks.bench_lookup --output reports/bench-lookup.json

For a quick smoke run:

.. code-block:: console

   poetry run python -m tests.benchmarks.bench_parse --output reports/bench-parse.quick.json --quick
   poetry run python -m tests.benchmarks.bench_lookup --output reports/bench-lookup.quick.json --quick

You can also control repeats and warmup via environment variables:

.. code-block:: console

   export JSONSCHEMA_PATH_BENCH_REPEATS=5
   export JSONSCHEMA_PATH_BENCH_WARMUP=1

Compare two results:

.. code-block:: console

   poetry run python -m tests.benchmarks.compare_results \
       --baseline reports/bench-lookup-master.json \
       --candidate reports/bench-lookup.json \
       --tolerance 0.20

Related projects
################

* `openapi-core <https://github.com/p1c2u/openapi-core>`__
   Python library that adds client-side and server-side support for OpenAPI.
* `openapi-spec-validator <https://github.com/p1c2u/openapi-spec-validator>`__
   Python library that validates OpenAPI specs against the OpenAPI 2.0 (aka Swagger) and OpenAPI 3.0 specifications.
* `openapi-schema-validator <https://github.com/p1c2u/openapi-schema-validator>`__
   Python library that validates schemas against the OpenAPI Schema Specification v3.0.
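The implicit dereferencing used in the Usage example above can be sketched with a stdlib-only toy resolver. This is an illustration of the idea only, not jsonschema-path's implementation, and it handles only local `#/`-style refs:

```python
# Toy illustration of path traversal with implicit local-$ref
# dereferencing. NOT jsonschema-path's implementation; stdlib only.

def deref(doc, node):
    """Follow a local '#/...' $ref, if present, within the same document."""
    while isinstance(node, dict) and isinstance(node.get("$ref"), str) \
            and node["$ref"].startswith("#/"):
        target = doc
        for part in node["$ref"][2:].split("/"):
            target = target[part]
        node = target
    return node

def lookup(doc, *parts):
    """Walk `parts` from the document root, dereferencing at each step."""
    node = deref(doc, doc)
    for part in parts:
        node = deref(doc, node[part])
    return node

d = {
    "properties": {"info": {"$ref": "#/$defs/Info"}},
    "$defs": {
        "Info": {
            "properties": {"version": {"type": "string", "default": "1.0"}},
        },
    },
}

print(lookup(d, "properties", "info", "properties", "version"))
# {'type': 'string', 'default': '1.0'}
```

The step through `"info"` hits `{"$ref": "#/$defs/Info"}` and transparently continues inside `$defs/Info`, which is the same behavior `info_path / "properties" / "version"` exhibits in the library.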
License
#######

Copyright (c) 2017-2025, Artur Maciag. All rights reserved.

Apache-2.0
text/x-rst
Artur Maciag
maciag.artur@gmail.com
null
null
Apache-2.0
jsonschema, swagger, spec
[ "Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: Apache Software License", "Operating System :: OS Independent", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: 3.14", "Topic :: Software Development :: Libraries", "Topic :: Software Development :: Libraries :: Python Modules" ]
[]
null
null
<4.0.0,>=3.10
[]
[]
[]
[ "PyYAML>=5.1", "pathable<0.6.0,>=0.5.0", "referencing<0.38.0", "requests<3.0.0,>=2.31.0; extra == \"requests\"" ]
[]
[]
[]
[ "Repository, https://github.com/p1c2u/jsonschema-path" ]
twine/6.1.0 CPython/3.13.7
2026-02-20T10:09:41.611056
jsonschema_path-0.4.1.tar.gz
13,450
a8/8d/4b2e648cf643d19e1f76260d9cb002d242e38b4298d6da110bd3c3d8d0d2/jsonschema_path-0.4.1.tar.gz
source
sdist
null
false
b256fb2abe76a59f7f74186a0388e6bd
ffca3bd37f66364ae3afeaa2804d6078a9ab3b9359ade4dd9923aabbbd475e71
a88d4b2e648cf643d19e1f76260d9cb002d242e38b4298d6da110bd3c3d8d0d2
null
[ "LICENSE" ]
1,255,135