metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | snappi-ixnetwork | 1.46.0 | The Snappi IxNetwork Open Traffic Generator Python Package | # snappi Extension for IxNetwork
This extension allows executing test scripts written using [snappi](https://github.com/open-traffic-generator/snappi) against
IxNetwork, one of Keysight's implementations of the [Open Traffic Generator](https://github.com/open-traffic-generator/models/releases) API.
> The repository is under active development.
To start contributing, please see [contributing.md](contributing.md).
## Install on a client
```sh
python -m pip install --upgrade "snappi[ixnetwork]"
```
## Start scripting
```python
"""
Configure a raw TCP flow with,
- tx port as source to rx port as destination
- frame count 10000, each of size 128 bytes
- transmit rate of 1000 packets per second
Validate,
- frames transmitted and received for configured flow is as expected
"""
import snappi
# host is IxNetwork API Server
api = snappi.api(location='https://localhost:443', ext='ixnetwork')
# new config
config = api.config()
# port location is chassis-ip;card-id;port-id
tx, rx = (
    config.ports
    .port(name='tx', location='192.168.0.1;2;1')
    .port(name='rx', location='192.168.0.1;2;2')
)
# configure layer 1 properties
ly, = config.layer1.layer1(name='ly')
ly.port_names = [tx.name, rx.name]
ly.speed = ly.SPEED_10_GBPS
ly.media = ly.FIBER
# configure flow properties
flw, = config.flows.flow(name='flw')
# flow endpoints
flw.tx_rx.port.tx_name = tx.name
flw.tx_rx.port.rx_name = rx.name
# enable flow metrics
flw.metrics.enable = True
# configure rate, size, frame count
flw.size.fixed = 128
flw.rate.pps = 1000
flw.duration.fixed_packets.packets = 10000
# configure protocol headers with default fields
flw.packet.ethernet().vlan().ipv4().tcp()
# push configuration
api.set_config(config)
# start transmitting configured flows
control_state = api.control_state()
control_state.choice = control_state.TRAFFIC
control_state.traffic.choice = control_state.traffic.FLOW_TRANSMIT
control_state.traffic.flow_transmit.state = control_state.traffic.flow_transmit.START # noqa
res = api.set_control_state(control_state)
if len(res.warnings) > 0:
    print("Warnings: {}".format(res.warnings))
# create a query for flow metrics
req = api.metrics_request()
req.flow.flow_names = [flw.name]
# wait for flow metrics to be as expected
import time
while True:
    res = api.get_metrics(req)
    if all(m.frames_tx == 10000 == m.frames_rx for m in res.flow_metrics):
        break
    time.sleep(1)  # poll periodically instead of busy-waiting
```
| text/markdown | null | Keysight Technologies <andy.balogh@keysight.com> | null | null | null | snappi, ixnetwork, testing, open traffic generator, automation | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Software Development :: Testing :: Traffic Generation",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: ... | [] | null | null | <4,>=3.7 | [] | [] | [] | [
"ipaddr==2.2.0",
"netaddr==0.8.0",
"ipaddress==1.0.23",
"flake8",
"dpkt",
"black",
"pytest-cov",
"allure-pytest; python_version > \"3.6\"",
"ixnetwork-restpy>=1.7.0",
"build",
"snappi==1.46.0; extra == \"testing\"",
"pytest; extra == \"testing\"",
"mock; extra == \"testing\"",
"dpkt==1.9.4... | [] | [] | [] | [
"Repository, https://github.com/open-traffic-generator/snappi-ixnetwork"
] | twine/6.2.0 CPython/3.10.12 | 2026-02-19T13:50:52.307979 | snappi_ixnetwork-1.46.0.tar.gz | 111,243 | a4/77/8dddb4ba18b999ee9792c9b85c776953956635391545a1a9e3a1fca28eb2/snappi_ixnetwork-1.46.0.tar.gz | source | sdist | null | false | b140f46fceacd45c92021d1f63ffbf67 | cb2a9a0b2583191cf4d6e3f44b906cf9ac426d370384c6abe1bb662092ab7625 | a4778dddb4ba18b999ee9792c9b85c776953956635391545a1a9e3a1fca28eb2 | MIT | [] | 2,739 |
2.4 | binaryrain-helper-data-processing | 1.1.1 | Aims to simplify and help with commonly used functions in the data processing areas. | # Binary Rain Helper Toolkit: Data Processing
`binaryrain_helper_data_processing` is a Python package that aims to simplify and help with common functions in data processing areas. It builds on top of the `pandas` library and provides additional functionality to make data processing easier, reduce boilerplate code, and provide clear error messages.
For further details, please refer to the [Binary Rain Helper Toolkit documentation](https://binaryrain-net.github.io/Binary-Rain-Helper-Toolkit/toolkits/dataframe/).
## Benefits
- Consistent interface for different file formats
- Simplified error handling with clear messages
- Optional format-specific configurations
- Built on pandas for robust data processing
- Type hints for better IDE support
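The "consistent interface for different file formats" idea can be sketched with plain `pandas`. This is a hypothetical illustration of the pattern, not the package's actual API; the names `load_dataframe` and `_READERS` are invented for the example:

```python
import pandas as pd
from pathlib import Path

# Hypothetical sketch: map file extensions to pandas readers so callers get
# one entry point regardless of format. Real helper names may differ.
_READERS = {
    ".csv": pd.read_csv,
    ".parquet": pd.read_parquet,  # backed by the pyarrow dependency
    ".json": pd.read_json,
}

def load_dataframe(path: str, **format_options) -> pd.DataFrame:
    """Load a file into a DataFrame, dispatching on its extension."""
    suffix = Path(path).suffix.lower()
    reader = _READERS.get(suffix)
    if reader is None:
        # clear error message instead of a cryptic pandas traceback
        raise ValueError(
            f"Unsupported file format '{suffix}'; expected one of {sorted(_READERS)}"
        )
    return reader(path, **format_options)
```

Format-specific options (`sep`, `columns`, and so on) pass straight through to the underlying pandas reader.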
| text/markdown | Binary Rain, Marcel T.O | null | null | null | null | binary rain, common, help, functions, data processing, dataframe | [
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"pandas>=3.0.1",
"pyarrow>=23.0.1"
] | [] | [] | [] | [
"Homepage, https://binaryrain-net.github.io/Binary-Rain-Helper-Toolkit/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T13:50:46.777859 | binaryrain_helper_data_processing-1.1.1.tar.gz | 7,158 | 71/f5/3b71cbe7378c34159e7c2eb7c47293f06c846905102666c39b0d1cc3248f/binaryrain_helper_data_processing-1.1.1.tar.gz | source | sdist | null | false | e63733e68346f1d5522216bcf1242072 | 9969402633c0c6fd9585f130ceaccf91d150348122dd18dbeb2d20379380d425 | 71f53b71cbe7378c34159e7c2eb7c47293f06c846905102666c39b0d1cc3248f | null | [
"LICENSE"
] | 239 |
2.4 | binaryrain-helper-cloud-aws | 1.0.13 | Aims to simplify and help with commonly used functions in the aws cloud data areas. | # Binary Rain Helper Toolkit: AWS Cloud
`binaryrain_helper_cloud_aws` is a Python package that aims to simplify and help with common functions in AWS Cloud areas. It builds on top of the `boto3` and `aws-lambda-powertools` libraries and provides additional functionality to make working with AWS Cloud easier, reduce boilerplate code, and provide clear error messages.
For further details, please refer to the [Binary Rain Helper Toolkit documentation](https://binaryrain-net.github.io/Binary-Rain-Helper-Toolkit/toolkits/aws/).
## Benefits
- Consistent error handling with clear messages
- Input validation for all function parameters
- Simplified authentication and access to AWS services
- Secure handling of secrets and sensitive information
- Type hints for better IDE support
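As an illustration of the input-validation and secrets-handling pattern listed above, here is a hedged sketch. The helper name `get_secret_value` is hypothetical, not the package's actual API; in production the injected client would be a real `boto3.client("secretsmanager")`:

```python
import json

# Hypothetical sketch of a validated secrets helper; real helper names in
# binaryrain_helper_cloud_aws may differ. The client is injected so it can be
# a boto3 Secrets Manager client in production or a stub in tests.
def get_secret_value(client, secret_name: str) -> dict:
    """Fetch a JSON secret and return it as a dict, with clear errors."""
    if not isinstance(secret_name, str) or not secret_name.strip():
        raise ValueError("secret_name must be a non-empty string")
    # boto3 Secrets Manager API: get_secret_value(SecretId=...)
    response = client.get_secret_value(SecretId=secret_name)
    secret_string = response.get("SecretString")
    if secret_string is None:
        raise ValueError(f"Secret '{secret_name}' has no SecretString payload")
    return json.loads(secret_string)
```

Injecting the client keeps the validation logic testable without AWS credentials.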
| text/markdown | Binary Rain, Marcel T.O | null | null | null | null | binary rain, common, help, functions, aws | [
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"aws-lambda-powertools[tracer]>=3.24.0",
"urllib3>=2.3.0",
"boto3>=1.42.51"
] | [] | [] | [] | [
"Homepage, https://binaryrain-net.github.io/Binary-Rain-Helper-Toolkit/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T13:50:45.458272 | binaryrain_helper_cloud_aws-1.0.13-py3-none-any.whl | 5,040 | a4/a7/9511c4dd6c2db959755789190a7ee204afe475e643ffd23f50732827a675/binaryrain_helper_cloud_aws-1.0.13-py3-none-any.whl | py3 | bdist_wheel | null | false | ad26df15fd04fcf4ba7288c55df64602 | da428f6468ead5e4e12e95e780a228384bc0fe291e7176d07bb857847708e033 | a4a79511c4dd6c2db959755789190a7ee204afe475e643ffd23f50732827a675 | null | [
"LICENSE"
] | 227 |
2.4 | binaryrain-helper-cloud-dataverse | 1.0.3 | Aims to simplify and help with connecting to and using Microsoft Dataverse. | # Binary Rain Helper Toolkit: Dataverse Helper
`binaryrain_helper_cloud_dataverse` is a python package that aims to simplify and help with connecting to and using Microsoft Dataverse. It handles common operations like retrieving, creating, updating, and deleting records in a Dataverse environment. With the help of sessions it maintains a consistent connection to the Dataverse API, ensuring efficient and reliable data operations without the need for repetitive code when pagination is required.
For further details, please refer to the [Binary Rain Helper Toolkit documentation](https://binaryrain-net.github.io/Binary-Rain-Helper-Toolkit/toolkits/dataverse/).
## Benefits
- **Simplified API Interaction**: Provides a straightforward interface for common Dataverse operations.
- **Session Management**: Automatically handles session creation and management, reducing boilerplate code.
- **Error Handling**: Includes basic error handling for common issues like missing data or request failures.
- **Pagination Support**: Automatically handles pagination for GET requests, making it easier to work with large datasets.
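The pagination support described above can be sketched as follows. This is a hypothetical illustration, not the package's actual API: the Dataverse Web API signals further pages of a GET request via an `@odata.nextLink` URL, and the helper simply follows that link until it is absent:

```python
# Hypothetical sketch of the session + pagination pattern the package
# automates; real helper names may differ. `session` is typically a
# requests.Session with authentication headers already set.
def get_all_records(session, url: str) -> list:
    """Follow @odata.nextLink until every page of records is collected."""
    records = []
    while url:
        response = session.get(url)
        response.raise_for_status()
        payload = response.json()
        records.extend(payload.get("value", []))
        url = payload.get("@odata.nextLink")  # absent on the last page
    return records
```

Reusing one session across all page requests avoids repeated TLS handshakes and keeps authentication consistent.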
| text/markdown | Binary Rain, Marcel T.O | null | null | null | null | binary rain, common, help, dataverse, microsoft dataverse | [
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"requests>=2.32.5"
] | [] | [] | [] | [
"Homepage, https://binaryrain-net.github.io/Binary-Rain-Helper-Toolkit/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T13:50:43.331774 | binaryrain_helper_cloud_dataverse-1.0.3-py3-none-any.whl | 4,735 | 24/7a/d63cb042ea9f79498328bf8e8b52d00ef0db575f48ca86167627860a2d9a/binaryrain_helper_cloud_dataverse-1.0.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 3945184fa8c977da4b64f43a7eb39436 | 9820bf96170707f0391eadf3a09f6f376f7e4af9a4c62f47d71847c3821aec00 | 247ad63cb042ea9f79498328bf8e8b52d00ef0db575f48ca86167627860a2d9a | null | [
"LICENSE"
] | 229 |
2.4 | binaryrain-helper-cloud-azure | 1.0.10 | Aims to simplify and help with commonly used functions in the azure cloud data areas. | # Binary Rain Helper Toolkit: Azure Cloud
`binaryrain_helper_cloud_azure` is a Python package that aims to simplify and help with common functions in Azure Cloud areas. It builds on top of the `azure` libraries and provides additional functionality to make working with Azure Cloud easier, reduce boilerplate code, and provide clear error messages.
For further information, please refer to the [Binary Rain Helper Toolkit documentation](https://binaryrain-net.github.io/Binary-Rain-Helper-Toolkit/toolkits/azure/).
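As a rough sketch of the kind of wrapper such a helper package provides (hypothetical names, not the package's actual API), a Key Vault secret reader built on the `azure-identity` and `azure-keyvault-secrets` dependencies might look like:

```python
# Hypothetical sketch; real helper names in binaryrain_helper_cloud_azure may
# differ. In production the injected client would be:
#   from azure.identity import DefaultAzureCredential
#   from azure.keyvault.secrets import SecretClient
#   client = SecretClient(vault_url="https://<vault>.vault.azure.net",
#                         credential=DefaultAzureCredential())
def read_secret(client, secret_name: str) -> str:
    """Read a Key Vault secret value with a clear error message on failure."""
    if not secret_name:
        raise ValueError("secret_name must be a non-empty string")
    try:
        # azure-keyvault-secrets API: get_secret(name) -> KeyVaultSecret
        return client.get_secret(secret_name).value
    except Exception as exc:
        raise RuntimeError(
            f"Could not read secret '{secret_name}' from Key Vault"
        ) from exc
```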
| text/markdown | Binary Rain, Marcel T.O | null | null | null | null | binary rain, common, help, functions, azure | [
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"azure-functions>=1.24.0",
"azure-identity>=1.25.2",
"azure-storage-blob>=12.28.0",
"azure-keyvault-secrets>=4.10.0",
"azure-mgmt-datafactory>=9.2.0"
] | [] | [] | [] | [
"Homepage, https://binaryrain-net.github.io/Binary-Rain-Helper-Toolkit/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T13:50:42.293895 | binaryrain_helper_cloud_azure-1.0.10-py3-none-any.whl | 5,434 | b1/2a/9ad27e613e3f8ec1dd48db5001616a8da29c4b03bd86ec919aa13ca01479/binaryrain_helper_cloud_azure-1.0.10-py3-none-any.whl | py3 | bdist_wheel | null | false | c61925c1e96bba331b55340b818e2479 | d30487b82df9df0d426413dd204330f0d6da18841fa59ae95597dc706acc2c43 | b12a9ad27e613e3f8ec1dd48db5001616a8da29c4b03bd86ec919aa13ca01479 | null | [
"LICENSE"
] | 230 |
2.4 | mama | 0.9.23 | A modular C++ build tool even your mama can use | # Mama Build Tool
Mama - A modular C++ build tool even your mama can use
The main goal of this project is to provide extremely convenient in-source builds
for cross platform projects. Building is as simple as `mama build windows` - no ceremony~!
CMake projects with trivial configurations and no dependencies can be handled
automatically by Mama. This makes header-only libraries or stand-alone C libraries
extremely easy to link.
Adding projects with an already configured `mamafile.py` is trivial and lets you manage
large-scale projects in a modular way. Dependencies are added and configured through mamafiles.
Each mama build target exports CMake `${ProjectName}_INCLUDES` and `${ProjectName}_LIBS`. All exports
are gathered in correct linker order inside `MAMA_INCLUDES` and `MAMA_LIBS`. This ensures the least
amount of friction for developers - everything just works.
There is no central package repository, all packages are pulled and updated from public or
private git repositories. Package versioning is done through git tags or branches.
Custom build systems are also supported. For additional documentation explore: [build_target.py](mama/build_target.py)
## Who is this FOR?
Anyone who develops cross-platform C++ libraries or applications which
target any combination of [Windows, Linux, macOS, iOS, Android, Raspberry, Oclea, Xilinx, MIPS].
And anyone who is not satisfied with system-wide dependencies and linker
bugs caused by incompatible system-wide libraries on Linux.
If you require an easy to use, reproducible project/namespace scoped package+build system, this is for you.
Your builds will not rely on hard to setup system packages, all you need to do is type `mama build`.
### Supported platforms ###
- Windows (64-bit x86_64, 32-bit x86, 64-bit arm64, 32-bit armv7) default is latest MSVC
- Linux (Ubuntu) (64-bit x86_64, 32-bit x86) both GCC and Clang
- MacOS (64-bit x86_64, 64-bit arm64) via config.macos_version
- iOS (64-bit arm64) via config.ios_version
- Android (64-bit arm64, 32-bit armv7) via env ANDROID_NDK_HOME or ANDROID_HOME
- Raspberry (32-bit armv7) via env RASPI_HOME
- Oclea (64-bit arm64) via config.set_oclea_toolchain()
- MIPS (mips mipsel, mips64, mips64el) via config.set_mips_toolchain()
- Xilinx (64-bit arm64 Zynq UltraScale+ MPSoC) via config.set_xilinx_toolchain() or env XILINX_HOME
## Who is this NOT for?
Single platform projects with platform specific build configuration and system wide dependency management
such as Linux exclusive G++ projects using apt-get libraries or iOS-only apps using cocoapods.
## Artifactory
Provides a mechanism to upload pre-built packages to a private artifactory server through `mama upload mypackage`. These packages will be automatically used if a git:package commit hash matches.
## Setup For Users
1. Get Python 3.6 or later and pip
2. `$ pip install mama --upgrade`
3. `$ cd yourproject`
4. `$ mama init` which creates a `mamafile.py` and patches your CMakeLists.txt
5. (optional) Manual setup: Create your own `mamafile.py` (examples below) and add this to your CMakeLists.txt:
```cmake
include(mama.cmake)
include_directories(${MAMA_INCLUDES})
target_link_libraries(YourProject PRIVATE ${MAMA_LIBS})
```
6. `$ mama build` and enjoy!
7. `$ mama open` to open your project in an IDE / VSCode
## Command examples
```
mama init Initialize a new project. Tries to create mamafile.py and CMakeLists.txt
mama build                Update and build main project only. Missing dependencies are cloned, but not updated!
mama build x86 opencv Cross compile build target opencv to x86 architecture
mama build android Cross compile to arm64 android NDK
mama build android-26 arm Cross compile to armv7 android NDK API level 26
mama update Update all dependencies by doing git pull and build.
mama clean Cleans main project only.
mama clean x86 opencv     Cleans target opencv for x86 architecture only.
mama clean all Cleans EVERYTHING in the dependency chain for current arch.
mama rebuild Cleans, update and build main project only.
mama build dep1 Update and build dep1 only.
mama update dep1 Update and build the specified target.
mama serve android Update, build and deploy for Android
mama wipe dep1 Wipe target dependency completely and clone again.
mama upload dep1 Deploys and uploads dependency to Artifactory server.
mama test Run tests on main project.
mama test=arg Run tests on main project with an argument.
mama test="arg1 arg2" Run tests on main project with multiple arguments.
mama test dep1 Run tests on target dependency project.
mama dep1 start=dbtool Call target project mamafile start() with args [`dbtool`].
```
Call `mama help` for more usage information.
## Mamafile examples
Project `AlphaGL/mamafile.py`
```py
import mama
class AlphaGL(mama.BuildTarget):
    # where to build intermediates
    workspace = 'build'  # for system-wide workspace, use: global_workspace = 'mycompany'

    # grab dependencies straight from git repositories
    # if the projects are trivial, then no extra configuration is needed
    def dependencies(self):
        # set artifactory package server for prebuilt packages
        # the credentials can be configured by env vars for CI, call `mama help`
        self.set_artifactory_ftp('artifacts.myftp.com', auth='store')
        # add packages
        self.add_git('ReCpp', 'https://github.com/RedFox20/ReCpp.git', branch='master')
        self.add_git('libpng', 'https://github.com/LuaDist/libpng.git')
        self.add_git('libjpeg', 'https://github.com/LuaDist/libjpeg.git')
        self.add_git('glfw', 'https://github.com/glfw/glfw.git')
        # add local packages from existing directory root:
        self.add_local('utils', 'libs/utils')
        # add a prebuilt package, use `mama upload myproject` to generate these:
        self.add_artifactory_pkg('opencv', version='df76b66')
        if self.linux:  # or do it conditionally for linux only:
            self.add_artifactory_pkg('opencv', fullname='opencv-linux-x64-release-df76b66')

    # optional: customize package exports if repository doesn't have `include` or `src`
    def package(self):
        self.export_libs('.', ['.lib', '.a'])  # export any .lib or .a from build folder
        self.export_includes(['AGL'])  # export AGL as include from source folder
        # platform specific system library exports:
        if self.ios:   self.export_syslib('-framework OpenGLES')
        if self.macos: self.export_syslib('-framework OpenGL')
        if self.linux: self.export_syslib('GL')

    def test(self, args):
        self.gdb(f'bin/AlphaGLTests {args}')
```
If a dependency is non-trivial (it has dependencies and configuration),
you can simply place a target mamafile at: `mama/{DependencyName}.py`
Example dependency config `AlphaGL/mama/libpng.py`
```py
import mama
class libpng_static(mama.BuildTarget):
    def dependencies(self):
        # custom mamafile can be passed explicitly:
        self.add_git('zlib', 'https://github.com/madler/zlib.git', mamafile='zlib.py')

    def configure(self):
        zinclude, zlibrary = self.get_target_products('zlib')
        self.add_cmake_options(f'ZLIB_INCLUDE_DIR={zinclude}')
        self.add_cmake_options(f'ZLIB_LIBRARY={zlibrary}')
        self.add_cmake_options('BUILD_SHARED_LIB=NO', 'PNG_TESTS=NO')

    def package(self):
        # libpng builds its stuff into `{build}/lib`
        self.export_libs('lib', ['.lib', '.a'])
        # export installed include path from build dir
        self.export_include('include', build_dir=True)
```
## Example output from Mama Build
```
$ mama build
========= Mama Build Tool ==========
- Target FaceOne BUILD [root target]
- Target dlib OK
- Target CppGuid OK
- Target opencv OK
- Target ReCpp OK
- Target NanoMesh OK
- Package ReCpp
<I> build/ReCpp/ReCpp
[L] build/ReCpp/windows/RelWithDebInfo/ReCpp.lib
- Package opencv
<I> build/opencv/windows/include
[L] build/opencv/windows/lib/Release/opencv_xphoto342.lib
[L] build/opencv/windows/lib/Release/opencv_features2d342.lib
[L] build/opencv/windows/lib/Release/opencv_imgcodecs342.lib
[L] build/opencv/windows/lib/Release/opencv_imgproc342.lib
[L] build/opencv/windows/lib/Release/opencv_core342.lib
[L] build/opencv/windows/3rdparty/lib/Release/libjpeg-turbo.lib
[L] build/opencv/windows/3rdparty/lib/Release/libpng.lib
[L] build/opencv/windows/3rdparty/lib/Release/zlib.lib
- Package dlib
<I> build/dlib/windows/include
[L] build/dlib/windows/lib/dlib19.15.99_relwithdebinfo_64bit_msvc1914.lib
- Package NanoMesh
<I> build/NanoMesh/NanoMesh
[L] build/NanoMesh/windows/RelWithDebInfo/NanoMesh.lib
- Package CppGuid
<I> build/CppGuid/CppGuid/include
[L] build/CppGuid/windows/RelWithDebInfo/CppGuid.lib
- Package FaceOne
<I> include
[L] bin/FaceOne.dll
[L] bin/FaceOne.lib
```
### Uploading packages ###
```python
def dependencies(self):
    self.set_artifactory_ftp('ftp.myartifactory.com', auth='store')
    self.add_git('googletest', 'git@github.com:RedFox20/googletest.git')
```
```
$ mama upload googletest
========= Mama Build Tool ==========
- Package googletest
<I> myworkspace/googletest/linux/include
[L] myworkspace/googletest/linux/lib/libgmock.a
[L] myworkspace/googletest/linux/lib/libgtest.a
- PAPA Deploy /home/XXX/myworkspace/googletest/linux/deploy/googletest
I (googletest) include
L (googletest) libgmock.a
L (googletest) libgtest.a
PAPA Deployed: 1 includes, 2 libs, 0 syslibs, 0 assets
- PAPA Upload googletest-linux-x64-release-ebb36f3 770.6KB
|==================================================>| 100 %
```
And then rebuilding with an artifactory package available
```
$ mama rebuild googletest
========= Mama Build Tool ==========
- Target googletest CLEAN linux
- Target googletest BUILD [cleaned target]
Artifactory fetch ftp.myartifactory.com/googletest-linux-x64-release-ebb36f3 770.6KB
|<==================================================| 100 %
Artifactory unzip googletest-linux-x64-release-ebb36f3
- Package googletest
<I> myworkspace/googletest/linux/include
[L] myworkspace/googletest/linux/libgmock.a
[L] myworkspace/googletest/linux/libgtest.a
```
## For Mama Contributors
We are open for any improvements and feedback via pull requests.
### Development Setup
The package `setuptools>=65.0` is required; verify your version with `pip3 show setuptools`.
You can set up local development with `$ pip3 install -e . --no-cache-dir`, but make sure you have setuptools >= 65.0 and pip3 >= 22.3, since this command fails with older toolchains.
### Running Tests
Install pytest and run all tests from the project root:
```bash
uv venv
uv pip install pytest
uv run pytest
```
Or to run a specific test:
```bash
pytest tests/test_git_pinning/
```
### Publishing
Uploading a source distribution:
1. Get dependencies: `pip3 install build twine`
2. Build sdist: `python -m build`
3. Upload with twine: `twine upload --skip-existing dist/*`
It will prompt for Username and Password, unless you set up ~/.pypirc file:
```
[distutils]
index-servers = pypi
[pypi]
username=__token__
password=<pypi-api-token>
```
Quick build & upload using Python 3.9: `./deploy.sh`
| text/markdown | null | Jorma Rebane <jorma.rebane@gmail.com> | null | null | null | mama, build, mamabuild, c, c++, tool, cmake, simple, easy, package, manager, cross-platform | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",
"Operating System :: OS Independent"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"distro",
"keyring",
"keyrings.cryptfile",
"termcolor",
"colorama",
"python-dateutil",
"psutil"
] | [] | [] | [] | [
"Homepage, https://github.com/RedFox20/Mama",
"Bug Tracker, https://github.com/RedFox20/Mama/issues"
] | twine/6.1.0 CPython/3.12.3 | 2026-02-19T13:49:44.288441 | mama-0.9.23.tar.gz | 89,169 | 7a/aa/f1087486379c67bf5ee6e357979e9355908c3dd78100e0b944da21d4bfe0/mama-0.9.23.tar.gz | source | sdist | null | false | 70b62e740c3aae9fc2af052928f8e77f | 74fc627adf1113ccc452e049b1b41099bd2e5b887212379c1c85961927311379 | 7aaaf1087486379c67bf5ee6e357979e9355908c3dd78100e0b944da21d4bfe0 | MIT | [
"LICENSE"
] | 251 |
2.4 | scibite-toolkit | 1.3.0 | scibite-toolkit - python library for calling SciBite applications: TERMite, TExpress, SciBite Search, CENtree and Workbench. The library also enables processing of the JSON results from such requests | # SciBite Toolkit
Python library for making API calls to [SciBite](https://www.scibite.com/)'s suite of products and processing the JSON responses.
## Supported Products
- **TERMite** - Entity recognition and semantic enrichment (version 6.x)
- **TERMite 7** - Next-generation entity recognition with modern OAuth2 authentication
- **TExpress** - Pattern-based entity relationship extraction
- **CENtree** - Ontology management, navigation, and integration
- **SciBite Search** - Semantic search, document and entity analytics
- **Workbench** - Dataset annotation and management
## Installation
```bash
pip install scibite-toolkit
```
See versions on [PyPI](https://pypi.org/project/scibite-toolkit/)
## Quick Start Examples
- [TERMite 7](#termite-7-examples) - Modern client with OAuth2
- [TERMite 6](#termite-6-examples) - Legacy client
- [TExpress](#texpress-examples) - Pattern matching
- [SciBite Search](#scibite-search-example)
- [CENtree](#centree-examples) - Ontology navigation
- [Workbench](#workbench-example)
---
## TERMite 7 Examples
TERMite 7 is the modern version with enhanced OAuth2 authentication and improved API.
### OAuth2 Client Credentials (SaaS - Recommended)
For modern SaaS deployments using a separate authentication server:
```python
from scibite_toolkit import termite7

# Initialize with context manager for automatic cleanup
with termite7.Termite7RequestBuilder() as t:
    # Set URLs
    t.set_url('https://termite.saas.scibite.com')
    t.set_token_url('https://auth.saas.scibite.com')
    # Authenticate with OAuth2 client credentials
    if not t.set_oauth2('your_client_id', 'your_client_secret'):
        print("Authentication failed!")
        exit(1)
    # Annotate text
    t.set_entities('DRUG,INDICATION')
    t.set_subsume(True)
    t.set_text('Aspirin is used to treat headaches and reduce inflammation.')
    response = t.annotate_text()
    # Process the response
    df = termite7.process_annotation_output(response)
    print(df.head())
```
### OAuth2 Password Grant (Legacy)
For on-premise deployments using username/password authentication:
```python
from scibite_toolkit import termite7
t = termite7.Termite7RequestBuilder()
# Set main TERMite URL and token URL (same server for legacy)
t.set_url('https://termite.example.com')
t.set_token_url('https://termite.example.com')
# Authenticate with username and password
if not t.set_oauth2_legacy('client_id', 'username', 'password'):
    print("Authentication failed!")
    exit(1)
# Annotate a document
t.set_entities('INDICATION,DRUG')
t.set_parser_id('generic')
t.set_file('path/to/document.pdf')
response = t.annotate_document()
# Process the response
df = termite7.process_annotation_output(response)
print(df)
# Clean up file handles
t.close()
```
### Get System Status
```python
from scibite_toolkit import termite7
t = termite7.Termite7RequestBuilder()
t.set_url('https://termite.example.com')
t.set_token_url('https://auth.example.com')
t.set_oauth2('client_id', 'client_secret')
# Get system status
status = termite7.get_system_status(t.url, t.headers)
print(f"Server Version: {status['data']['serverVersion']}")
# Get available vocabularies
vocabs = termite7.get_vocabs(t.url, t.headers)
print(f"Available vocabularies: {len(vocabs['data'])}")
# Get runtime options
rtos = termite7.get_runtime_options(t.url, t.headers)
print(rtos)
```
---
## TERMite 6 Examples
For legacy TERMite 6.x deployments.
### SciBite Hosted (SaaS)
```python
from scibite_toolkit import termite
# Initialize
t = termite.TermiteRequestBuilder()
# Configure
t.set_url('https://termite.saas.scibite.com')
t.set_saas_login_url('https://login.saas.scibite.com')
# Authenticate
t.set_auth_saas('username', 'password')
# Set runtime options
t.set_entities('INDICATION')
t.set_input_format('medline.xml')
t.set_output_format('json')
t.set_binary_content('path/to/file.xml')
t.set_subsume(True)
# Execute and process
response = t.execute()
df = termite.get_termite_dataframe(response)
print(df.head(3))
```
### Local Instance (Customer Hosted)
```python
from scibite_toolkit import termite
t = termite.TermiteRequestBuilder()
t.set_url('https://termite.local.example.com')
# Basic authentication for local instances
t.set_basic_auth('username', 'password')
# Configure and execute
t.set_entities('INDICATION')
t.set_input_format('medline.xml')
t.set_output_format('json')
t.set_binary_content('path/to/file.xml')
t.set_subsume(True)
response = t.execute()
df = termite.get_termite_dataframe(response)
print(df.head(3))
```
---
## TExpress Examples
Pattern-based entity relationship extraction.
### SciBite Hosted
```python
from scibite_toolkit import texpress
t = texpress.TexpressRequestBuilder()
t.set_url('https://texpress.saas.scibite.com')
t.set_saas_login_url('https://login.saas.scibite.com')
t.set_auth_saas('username', 'password')
# Set pattern to find relationships
t.set_entities('INDICATION,DRUG')
t.set_pattern(':(DRUG):{0,5}:(INDICATION)') # Find DRUG within 5 words of INDICATION
t.set_input_format('medline.xml')
t.set_output_format('json')
t.set_binary_content('path/to/file.xml')
response = t.execute()
df = texpress.get_texpress_dataframe(response)
print(df.head())
```
### Local Instance
```python
from scibite_toolkit import texpress
t = texpress.TexpressRequestBuilder()
t.set_url('https://texpress.local.example.com')
t.set_basic_auth('username', 'password')
t.set_entities('INDICATION,DRUG')
t.set_pattern(':(INDICATION):{0,5}:(INDICATION)')
t.set_input_format('pdf')
t.set_output_format('json')
t.set_binary_content('/path/to/file.pdf')
response = t.execute()
df = texpress.get_texpress_dataframe(response)
print(df.head())
```
---
## SciBite Search Example
Semantic search with entity-based queries and aggregations.
```python
from scibite_toolkit import scibite_search
# Configure
s = scibite_search.SBSRequestBuilder()
s.set_url('https://yourdomain-search.saas.scibite.com/')
s.set_auth_url('https://yourdomain.saas.scibite.com/')
# Authenticate with OAuth2
s.set_oauth2('your_client_id', 'your_client_secret')
# Search documents
query = 'schema_id="clinical_trial" AND (title~INDICATION$D011565 AND DRUG$*)'
# Preferred: request specific fields using the new 'fields' parameter (legacy: 'additional_fields')
response = s.get_docs(query=query, markup=True, limit=100, fields=['*'])
# Get co-occurrence aggregations
# Find top 50 genes co-occurring with psoriasis
response = s.get_aggregates(
query='INDICATION$D011565',
vocabs=['HGNCGENE'],
limit=50
)
```
> **Note:** Preferred parameter name is `fields`. The legacy `additional_fields` is still supported for backward compatibility. When both are provided, `fields` takes precedence.
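The precedence rule can be sketched as a small helper (illustrative only — `resolve_fields` is not part of the toolkit's API):

```python
def resolve_fields(fields=None, additional_fields=None):
    """Return the field list to send: 'fields' wins when both are given."""
    if fields is not None:
        return fields
    return additional_fields  # legacy fallback; may be None
```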
---
## CENtree Examples
Ontology navigation and search.
### Modern Client (Recommended)
The modern `centree_clients` module provides better error handling, retries, and context manager support.
```python
from scibite_toolkit.centree_clients import CENtreeReaderClient
# Use context manager for automatic cleanup
with CENtreeReaderClient(
base_url="https://centree.example.com",
bearer_token="your_token",
timeout=(3.0, None) # Quick connect, unlimited read
) as reader:
# Search by exact label
hits = reader.get_classes_by_exact_label("efo", "neuron")
print(f"Found {len(hits)} matches")
# Get ontology roots
roots = reader.get_root_entities("efo", "classes", size=10)
# Get paths from root to target (great for LLM grounding)
paths = reader.get_paths_from_root("efo", "MONDO_0007739", as_="labels")
for path in paths:
print(" → ".join(path))
# Or authenticate with OAuth2
from scibite_toolkit.centree_clients import CENtreeReaderClient
reader = CENtreeReaderClient(base_url="https://centree.example.com")
if reader.set_oauth2(client_id="...", client_secret="..."):
hits = reader.get_classes_by_exact_label("efo", "lung")
print(hits)
```
---
## Workbench Example
Dataset management and annotation.
```python
from scibite_toolkit import workbench
# Initialize
wb = workbench.WorkbenchRequestBuilder()
wb.set_url('https://workbench.example.com')
# Authenticate
wb.set_oauth2('client_id', 'username', 'password')
# Create dataset
wb.set_dataset_name('My Analysis Dataset')
wb.set_dataset_desc('Dataset for clinical trial analysis')
wb.create_dataset()
# Upload file
wb.set_file_input('path/to/data.xlsx')
wb.upload_file_to_dataset()
# Configure and run annotation
vocabs = [[5, 6], [8, 9]] # Vocabulary IDs
attrs = [200, 201] # Attribute IDs
wb.set_termite_config('', vocabs, attrs)
wb.auto_annotate_dataset()
```
---
## Key Features
### Context Manager Support (TERMite 7, CENtree Clients)
Modern clients support context managers for automatic resource cleanup:
```python
from scibite_toolkit import termite7

with termite7.Termite7RequestBuilder() as t:
t.set_url('...')
# ... work with client ...
# File handles automatically closed
```
### Error Handling
All OAuth2 methods return boolean status for easy error handling:
```python
if not t.set_oauth2(client_id, client_secret):
print("Authentication failed - check credentials")
exit(1)
```
### Logging
Enable detailed logging for debugging:
```python
import logging
logging.basicConfig(level=logging.DEBUG)
# Or set per-client
t = termite7.Termite7RequestBuilder(log_level='DEBUG')
```
### Session Management
All clients use `requests.Session()` for efficient connection pooling and automatic retry handling.
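The pattern that `requests.Session()` enables looks roughly like this (an illustrative sketch — the toolkit's exact retry policy and pool sizes are internal details):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry transient failures up to 3 times with exponential backoff.
retry = Retry(total=3, backoff_factor=0.5,
              status_forcelist=[429, 500, 502, 503, 504])

session = requests.Session()
adapter = HTTPAdapter(max_retries=retry, pool_maxsize=10)
session.mount("https://", adapter)
session.mount("http://", adapter)
# All requests made through `session` now reuse pooled connections
# and retry automatically on the listed status codes.
```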
---
## License
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
| text/markdown | null | SciBite <help@scibite.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"beautifulsoup4>=4.9.0",
"pandas<3.0,>=1.0.0",
"requests>=2.25.0",
"pytest; extra == \"dev\"",
"coverage; extra == \"dev\"",
"sphinx; extra == \"dev\"",
"sphinx-js; extra == \"dev\"",
"rst2pdf; extra == \"dev\"",
"openpyxl>=3.0.0; extra == \"workbench\"",
"openpyxl>=3.0.0; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/elsevier-health/scibite-toolkit",
"Repository, https://github.com/elsevier-health/scibite-toolkit",
"Issues, https://github.com/elsevier-health/scibite-toolkit/issues"
] | twine/6.2.0 CPython/3.10.11 | 2026-02-19T13:47:56.311109 | scibite_toolkit-1.3.0.tar.gz | 101,826 | 96/80/61cd2a2fbe6606d0f4c1a00a86d342500a65010a1127a16d09c928e92881/scibite_toolkit-1.3.0.tar.gz | source | sdist | null | false | 3a7fc7dcfaeca158775c81c9cd97008e | 71ab5a50b9c7fa85488f78d96bd27d6dd4db1cbf93f1dcc4b244f0ef8b07494c | 968061cd2a2fbe6606d0f4c1a00a86d342500a65010a1127a16d09c928e92881 | CC-BY-NC-SA-4.0 | [
"LICENSE.txt"
] | 245 |
2.4 | django-treebeard | 5.0.5 | Efficient tree implementations for Django | # django-treebeard
**django-treebeard** is a library that implements efficient tree implementations for the Django Web Framework.
It was written by Gustavo Picón and licensed under the Apache License 2.0.
## Status
[](https://django-treebeard.readthedocs.io/en/latest/?badge=latest)
[](https://pypi.org/project/django-treebeard/)
## Features
django-treebeard is:
- **Flexible**: Includes 3 different tree implementations with the
same API:
1. Adjacency List
2. Materialized Path
3. Nested Sets
4. PostgreSQL ltree (experimental)
- **Fast**: Optimized non-naive tree operations
- **Easy**: Uses Django Model Inheritance with abstract classes to
define your own models.
- **Clean**: Testable and well tested code base. Code/branch test
coverage is above 96%.
You can find the documentation at <https://django-treebeard.readthedocs.io/en/latest/>
### Supported versions
**django-treebeard** officially supports
- Django 5.2 and higher
- Python 3.10 and higher
- PostgreSQL, MySQL, MSSQL, SQLite database back-ends.
| text/markdown | null | Gustavo Picon <tabo@tabo.pe> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Environment :: Web Environment",
"Framework :: Django",
"Framework :: Django :: 5.2",
"Framework :: Django :: 6.0",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"django>=5.2",
"pytest-django<5.0,>=4.0; extra == \"test\"",
"pytest-pythonpath<1.0,>=0.7; extra == \"test\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:47:17.211902 | django_treebeard-5.0.5.tar.gz | 299,223 | 19/31/d9380569d2e185458240497c948591d9183bd5c77d11800441b1ccee6933/django_treebeard-5.0.5.tar.gz | source | sdist | null | false | ad364ad56ae44d1a3db42b8fb4d23915 | edca8ef8f92a92d607a1310838fe02806414d0678187316b43b48677aff2f51c | 1931d9380569d2e185458240497c948591d9183bd5c77d11800441b1ccee6933 | Apache-2.0 | [
"LICENSE",
"AUTHORS"
] | 4,978 |
2.4 | laituri | 0.5.0 | Docker Toolkit for Python | # laituri — Docker Toolkit for Python
[](https://github.com/valohai/laituri/actions/workflows/ci.yml)
[](https://codecov.io/gh/valohai/laituri)
`laituri` is a set of Docker-related Python snippets used at [Valohai](https://valohai.com/).
You can use it with Python >= 3.8.
## Usage
### Configuration
You can configure your used Docker command if it is not the default `docker`, using laituri settings.
_Example:_
```python
laituri.settings.DOCKER_COMMAND = 'docker'
```
### Docker Credential Manager
Laituri contains a docker credentials manager which can be used for example when pulling images.
It logs in and out using the Docker CLI.
_Example:_
```
from laituri.docker.credential_manager import get_credential_manager
my_credentials = {
'username': 'SmolShark1',
'password': 'sharksWithLazers',
}
with get_credential_manager(
image='python:latest',
registry_credentials=my_credentials,
log_status=print # Any callable
):
# Do your docker things!
```
## Development
Install an editable version of the library into the current virtual environment:
```bash
# install this package and all development dependencies
pip install -e .[dev] pre-commit && pre-commit install
# manually run lint and type checks
pre-commit run --all-files
# manually run tests
pytest --cov
# verify the install in a Python REPL
python
>>> import laituri; print(laituri.__version__)
```
## Making a Release
A new release build is released by the CI when a new tag is pushed to the repository:
```bash
# bump version number in "laituri/__init__.py"
vim laituri/__init__.py
# pushing a new tag will trigger a new release build
git add .
git commit -m "Become X.Y.Z"
git tag -a vX.Y.Z -m "Version X.Y.Z"
git push --follow-tags
```
If a manual release is needed, you can follow up the above steps with:
```bash
pip install build twine
git clean -fdx -e .idea/
python -m build .
twine upload dist/*
```
| text/markdown | null | Valohai <hait@valohai.com> | null | null | null | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests<3,>=2.23",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"pytest-mock; extra == \"dev\"",
"requests-mock; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/valohai/laituri"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:46:52.114759 | laituri-0.5.0-py3-none-any.whl | 11,317 | e3/8a/d79a80a102150b76b7ebe4bf03c1e40a58368461475c3d96511403432efd/laituri-0.5.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 67b88dc3c2b9ec39b63b0d63c2bee795 | ae7cd8ed93bbb8e48f51cd0c3b364bedd22f5acfad716b3ea78e115ab6eb94a6 | e38ad79a80a102150b76b7ebe4bf03c1e40a58368461475c3d96511403432efd | MIT | [
"LICENSE"
] | 138 |
2.4 | chad-ai | 0.11.0 | Chad: YOLO AI | # Chad: YOLO AI
Coding agents need hand holding to implement complex features, but no one holds Chad's hand.
Add one or more OpenAI Codex, Claude Code, Google Gemini, Alibaba Qwen, Mistral Vibe, Moonshot Kimi, or OpenCode coding
agents, decide what happens when you reach a limit (wait for the reset and continue, switch provider), ask for a coding
task, and Chad will ralph loop to deliver a one-shot result.
<p style="text-align: center;">
<img src="docs/Chad.png" alt="Chad Code" width="80">
</p>
**The First Warning:** Chad was developed with... Chad. Yes, this material writes itself. No, high quality robust code
this is not.
**World Warning II:** Chad is a risk-taker who knows no limits. Chad runs agents in YOLO mode and has access to
everything on your hard drive and your internet connection.
### Blah blah how do I run it?
```bash
pip install chad-ai
chad
```
### How is this better than $Grug?
Chad provides a Gradio UI to switch between coding agents (tokens encrypted with a master password you create and
provide for each session), monitors usage quotas, switches between providers, can communicate via Slack,
and runs multiple tasks in parallel with result merging from their worktrees:
<details open>
<summary><b>Screenshots</b></summary>
#### Select coding and verification agents for a task
<img src="docs/screenshot-task-input.png" width="800" alt="Task input panel">
#### Monitor provider accounts with usage tracking
<img src="docs/screenshot-providers.png" width="800" alt="Providers tab with usage">
#### Configure rules to switch providers or wait for usage resets
<img src="docs/screenshot-settings.png" width="800" alt="Action rules configuration">
</details>
### Is this satire? What are you even doing here?
¯\_(ツ)_/¯
| text/markdown | Team Chad: AI Police | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language... | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.31.0",
"bcrypt>=4.0.0",
"cryptography>=41.0.0",
"gradio>=4.0.0",
"pexpect>=4.9.0; platform_system != \"Windows\"",
"cffi<3.0.0,>=2.0.0",
"hf-xet==1.1.0",
"huggingface-hub<1.0.0,>=0.16.4",
"filelock>=3.0.0",
"fastapi>=0.100.0",
"uvicorn[standard]>=0.20.0",
"websockets>=11.0",
"ht... | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.5 | 2026-02-19T13:46:22.690148 | chad_ai-0.11.0.tar.gz | 488,602 | d4/d8/742a81110b6bc30d50f50447e4ca13360709b315a01d4788139d1d2f9991/chad_ai-0.11.0.tar.gz | source | sdist | null | false | 9ec389f8d76d5e2b8e40536db2e776e0 | 8467841279c8b2b69087980e7ef2e118646cda9dd8ed608fad0a3473d9c1bbef | d4d8742a81110b6bc30d50f50447e4ca13360709b315a01d4788139d1d2f9991 | null | [] | 253 |
2.4 | wagtail-content-import | 0.13.2 | A module for Wagtail that provides functionality for importing page content from third-party sources. | 
[](https://opensource.org/licenses/BSD-3-Clause)
[](https://github.com/astral-sh/ruff)
[](https://pypi.org/project/wagtail-content-import)
[](https://github.com/torchbox/wagtail-content-import/actions)
Wagtail Content Import is a module for importing page content into Wagtail from third-party sources.
Page content is imported into a StreamField, using a set of customisable mappings.
Currently, it supports:
### As sources:
- Google Docs
- OneDrive/SharePoint
### As files:
- Google Docs documents with:
- Rich text
- Tables
- Images
- Headings
- Docx files with:
- Text with bold and italics
- Headings
### Requirements:
* Python >= 3.9
* Django >= 4.2
* Wagtail >= 6.3
For the full documentation, see: https://torchbox.github.io/wagtail-content-import/
### Note for Google Import
If using Google Docs import, for users to authenticate with Google they must either allow third party cookies or add `accounts.google.com` to their allowed domains ([Settings/Privacy and Security/Cookies and other site data in Chrome](chrome://settings/cookies) or [Preferences/Privacy & Security in Firefox](about:preferences#privacy)).
| text/markdown | null | Samir Shah <solaris.smoke@gmail.com>, Emily Topp-Mugglestone <emilytm@torchbox.com>, Karl Hobley <karl@kaed.uk>, Matthew Westcott <matthew.westcott@torchbox.com> | null | Emily Topp-Mugglestone <emilytm@torchbox.com> | null | Wagtail, Django, content import, Google Docs, OneDrive, SharePoint, docx | [
"Development Status :: 3 - Alpha",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Topic :: Internet :: WWW/HTTP :: Site Management",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: ... | [] | null | null | >=3.9 | [] | [] | [] | [
"Wagtail>=6.3",
"python-docx>=1.2.0",
"mock>=5.2.0; extra == \"testing\"",
"coverage>=7.11.3; extra == \"testing\"",
"tox>=4.32.0; extra == \"testing\"",
"dj-database-url<4,>=3; extra == \"testing\""
] | [] | [] | [] | [
"Changelog, https://github.com/torchbox/wagtail-content-import/blob/main/docs/release_notes.md",
"Documentation, https://torchbox.github.io/wagtail-content-import/index.html",
"Source, https://github.com/torchbox/wagtail-content-import"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:46:15.622160 | wagtail_content_import-0.13.2.tar.gz | 20,487 | 42/a2/7e9ccb11baaea17762eb1de356addb84698d0f2ac2322a7a60f172103f95/wagtail_content_import-0.13.2.tar.gz | source | sdist | null | false | 46fdb324e6e6004d344adab970ba67ba | 405d17d2b48d7c38a16f00a8ef93cb013c967a2fb91e664ab7cddc79d93db4ed | 42a27e9ccb11baaea17762eb1de356addb84698d0f2ac2322a7a60f172103f95 | BSD-3-Clause | [
"LICENSE"
] | 243 |
2.3 | openai-http-proxy | 3.0.2 | OpenAI HTTP Proxy is an OpenAI-compatible http proxy server for inferencing various LLMs capable of working with Google, Anthropic, OpenAI APIs, local PyTorch inference, etc. | <h1 align="center"><a href="#">OpenAI HTTP Proxy</a></h1>
<p align="center">
<b>Lightweight, OpenAI-compatible HTTP proxy server / gateway</b><br>unifying access to multiple <b>Large Language Model providers</b> and local inference <br>through a single, standardized API endpoint.
</p>
<p align="center">
<a href="https://pypi.org/project/lm-proxy/"><img src="https://img.shields.io/pypi/v/lm-proxy?color=blue" alt="PyPI"></a>
<a href="https://github.com/Nayjest/lm-proxy/actions/workflows/tests.yml"><img src="https://github.com/Nayjest/lm-proxy/actions/workflows/tests.yml/badge.svg" alt="Tests"></a>
<a href="https://github.com/Nayjest/lm-proxy/actions/workflows/code-style.yml"><img src="https://github.com/Nayjest/lm-proxy/actions/workflows/code-style.yml/badge.svg" alt="Code Style"></a>
<img src="https://raw.githubusercontent.com/Nayjest/lm-proxy/main/coverage.svg" alt="Code Coverage">
<a href="https://www.bestpractices.dev/projects/11364"><img src="https://www.bestpractices.dev/projects/11364/badge"></a>
<a href="https://github.com/Nayjest/lm-proxy/blob/main/LICENSE"><img src="https://img.shields.io/github/license/Nayjest/lm-proxy?color=d08aff" alt="License"></a>
</p>
Built with Python, FastAPI and [MicroCore](https://github.com/Nayjest/ai-microcore), **OpenAI HTTP Proxy** seamlessly integrates cloud providers like Google, Anthropic, and OpenAI, as well as local PyTorch-based inference, while maintaining full compatibility with OpenAI's API format.
It works as a drop-in replacement for OpenAI's API, allowing you to switch between cloud providers and local models without modifying your existing client code.
**OpenAI HTTP Proxy** supports **real-time token streaming**, **secure Virtual API key management**, and can be used both as an importable Python library and as a standalone HTTP service. Whether you're building production applications or experimenting with different models, OpenAI HTTP Proxy eliminates integration complexity and keeps your codebase **provider-agnostic**.
## Table of Contents
- [Overview](#openai-http-proxy)
- [Features](#-features)
- [Getting Started](#-getting-started)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Configuration](#-configuration)
- [Basic Structure](#basic-structure)
- [Environment Variables](#environment-variables)
- [Proxy API Keys vs. Provider API Keys](#-proxy-api-keys-vs-provider-api-keys)
- [API Usage](#-api-usage)
- [Chat Completions Endpoint](#chat-completions-endpoint)
- [Models List Endpoint](#models-list-endpoint)
- [User Groups Configuration](#-user-groups-configuration)
- [Basic Group Definition](#basic-group-definition)
- [Group-based Access Control](#group-based-access-control)
- [Connection Restrictions](#connection-restrictions)
- [Virtual API Key Validation](#virtual-api-key-validation)
- [Advanced Usage](#%EF%B8%8F-advanced-usage)
- [Dynamic Model Routing](#dynamic-model-routing)
- [Load Balancing Example](#load-balancing-example)
- [Google Vertex AI Example](#google-vertex-ai-configuration-example)
- [Using Tokens from OIDC Provider as Virtual/Client API Keys](#using-tokens-from-oidc-provider-as-virtualclient-api-keys)
- [Add-on Components](#-add-on-components)
- [Database Connector](#database-connector)
- [Request Handlers (Middleware)](#-request-handlers--middleware)
- [Guides & Reference](#-guides--reference)
- [Known Limitations](#-known-limitations)
- [Debugging](#-debugging)
- [Contributing](#-contributing)
- [License](#-license)
<a href="#" align="center"><img alt="OpenAI HTTP Proxy / Gateway" src="https://raw.githubusercontent.com/Nayjest/lm-proxy/main/press-kit/assets/lm-proxy_1_hacker_1600x672.png"></a>
## ✨ Features<a id="-features"></a>
- **Provider Agnostic**: Connect to OpenAI, Anthropic, Google AI, local models, and more using a single API
- **Unified Interface**: Access all models through the standard OpenAI API format
- **Dynamic Routing**: Route requests to different LLM providers based on model name patterns
- **Stream Support**: Full streaming support for real-time responses
- **API Key Management**: Configurable API key validation and access control
- **Easy Configuration**: Simple TOML/YAML/JSON/Python configuration files for setup
- **Extensible by Design**: Minimal core with clearly defined extension points, enabling seamless customization and expansion without modifying the core system.
## 🚀 Getting Started<a id="-getting-started"></a>
### Requirements
Python 3.11 | 3.12 | 3.13
### Installation<a id="installation"></a>
```bash
pip install openai-http-proxy
```
For proxying to the Anthropic API, or to Google Gemini via Vertex AI or Google AI Studio, install the optional dependencies:
```bash
pip install openai-http-proxy[anthropic,google]
```
or
```bash
pip install openai-http-proxy[all]
```
### Quick Start<a id="quick-start"></a>
#### 1. Create a `config.toml` file:
```toml
host = "0.0.0.0"
port = 8000
[connections]
[connections.openai]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.anthropic]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
[routing]
"gpt*" = "openai.*"
"claude*" = "anthropic.*"
"*" = "openai.gpt-3.5-turbo"
[groups.default]
api_keys = ["YOUR_API_KEY_HERE"]
```
> **Note** ℹ️
> To enhance security, consider storing upstream API keys in operating system environment variables rather than embedding them directly in the configuration file. You can reference these variables in the configuration using the `env:<VAR_NAME>` syntax.
#### 2. Start the server:
```bash
openai-http-proxy
```
Alternatively, run it as a Python module:
```bash
python -m lm_proxy
```
#### 3. Use it with any OpenAI-compatible client:
```python
from openai import OpenAI
client = OpenAI(
api_key="YOUR_API_KEY_HERE",
base_url="http://localhost:8000/v1"
)
completion = client.chat.completions.create(
model="gpt-5", # This will be routed to OpenAI based on config
messages=[{"role": "user", "content": "Hello, world!"}]
)
print(completion.choices[0].message.content)
```
Or use the same endpoint with Claude models:
```python
completion = client.chat.completions.create(
model="claude-opus-4-1-20250805", # This will be routed to Anthropic based on config
messages=[{"role": "user", "content": "Hello, world!"}]
)
```
## 📝 Configuration<a id="-configuration"></a>
OpenAI HTTP Proxy is configured through a TOML/YAML/JSON/Python file that specifies connections, routing rules, and access control.
### Basic Structure<a id="basic-structure"></a>
```toml
host = "0.0.0.0" # Interface to bind to
port = 8000 # Port to listen on
dev_autoreload = false # Enable for development
# API key validation function (optional)
api_key_check = "lm_proxy.api_key_check.check_api_key_in_config"
# LLM Provider Connections
[connections]
[connections.openai]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.google]
api_type = "google"
api_key = "env:GOOGLE_API_KEY"
[connections.anthropic]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
# Routing rules (model_pattern = "connection.model")
[routing]
"gpt*" = "openai.*" # Route all GPT models to OpenAI
"claude*" = "anthropic.*" # Route all Claude models to Anthropic
"gemini*" = "google.*" # Route all Gemini models to Google
"*" = "openai.gpt-3.5-turbo" # Default fallback
# Access control groups
[groups.default]
api_keys = [
"KEY1",
"KEY2"
]
# optional
[[loggers]]
class = 'lm_proxy.loggers.BaseLogger'
[loggers.log_writer]
class = 'lm_proxy.loggers.log_writers.JsonLogWriter'
file_name = 'storage/json.log'
[loggers.entry_transformer]
class = 'lm_proxy.loggers.LogEntryTransformer'
completion_tokens = "response.usage.completion_tokens"
prompt_tokens = "response.usage.prompt_tokens"
prompt = "request.messages"
response = "response"
group = "group"
connection = "connection"
api_key_id = "api_key_id"
remote_addr = "remote_addr"
created_at = "created_at"
duration = "duration"
```
### Environment Variables<a id="environment-variables"></a>
You can reference environment variables in your configuration file by prefixing values with `env:`.
For example:
```toml
[connections.openai]
api_key = "env:OPENAI_API_KEY"
```
At runtime, OpenAI HTTP Proxy automatically retrieves the value of the referenced variable (`OPENAI_API_KEY` in this example) from your operating system's environment, or from a `.env` file if present.
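The resolution rule is simple enough to sketch (illustrative only — `resolve_config_value` is not lm_proxy's actual code, and in this sketch a missing variable resolves to an empty string):

```python
import os

def resolve_config_value(value):
    """Expand 'env:VAR' references in config values; pass others through."""
    if isinstance(value, str) and value.startswith("env:"):
        return os.environ.get(value[4:], "")
    return value
```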
### .env Files
By default, OpenAI HTTP Proxy looks for a `.env` file in the current working directory
and loads environment variables from it.
You can refer to the [.env.template](https://github.com/Nayjest/lm-proxy/blob/main/.env.template)
file for an example:
```dotenv
OPENAI_API_KEY=sk-u........
GOOGLE_API_KEY=AI........
ANTHROPIC_API_KEY=sk-ant-api03--vE........
# "1", "TRUE", "YES", "ON", "ENABLED", "Y", "+" are true, case-insensitive.
# See https://github.com/Nayjest/ai-microcore/blob/v4.4.3/microcore/configuration.py#L36
LM_PROXY_DEBUG=no
```
You can also control `.env` file usage with the `--env` command-line option:
```bash
# Use a custom .env file path
openai-http-proxy --env="path/to/your/.env"
# Disable .env loading
openai-http-proxy --env=""
```
## 🔑 Proxy API Keys vs. Provider API Keys<a id="-proxy-api-keys-vs-provider-api-keys"></a>
OpenAI HTTP Proxy utilizes two distinct types of API keys to facilitate secure and efficient request handling.
- **Proxy API Key (Virtual API Key, Client API Key):**
A unique key generated and managed within OpenAI HTTP Proxy.
Clients use these keys to authenticate their requests to the proxy's API endpoints.
Each Client API Key is associated with a specific group, which defines the scope of access and permissions for the client's requests.
These keys allow users to securely interact with the proxy without direct access to external service credentials.
- **Provider API Key (Upstream API Key):**
A key provided by external LLM inference providers (e.g., OpenAI, Anthropic, Mistral, etc.) and configured within the OpenAI HTTP Proxy.
The proxy uses these keys to authenticate and forward validated client requests to the respective external services.
Provider API Keys remain hidden from end users, ensuring secure and transparent communication with provider APIs.
This distinction ensures a clear separation of concerns:
Virtual API Keys manage user authentication and access within the proxy,
while Upstream API Keys handle secure communication with external providers.
## 🔌 API Usage<a id="-api-usage"></a>
OpenAI HTTP Proxy implements the OpenAI chat completions API endpoint. You can use any OpenAI-compatible client to interact with it.
### Chat Completions Endpoint<a id="chat-completions-endpoint"></a>
```http
POST /v1/chat/completions
```
#### Request Format
```json
{
"model": "gpt-3.5-turbo",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is the capital of France?"}
],
"temperature": 0.7,
"stream": false
}
```
#### Response Format
```json
{
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "The capital of France is Paris."
},
"finish_reason": "stop"
}
]
}
```
### Models List Endpoint<a id="models-list-endpoint"></a>
List and describe all models available through the API.
```http
GET /v1/models
```
The **OpenAI HTTP Proxy** dynamically builds the models list based on routing rules defined in `config.routing`.
Routing keys can reference both **exact model names** and **model name patterns** (e.g., `"gpt*"`, `"claude*"`, etc.).
By default, wildcard patterns are displayed as-is in the models list (e.g., `"gpt*"`, `"claude*"`).
This behavior can be customized via the `model_listing_mode` configuration option:
```
model_listing_mode = "as_is" | "ignore_wildcards" | "expand_wildcards"
```
Available modes:
- **`as_is`** *(default)* — Lists all entries exactly as defined in the routing configuration, including wildcard patterns.
- **`ignore_wildcards`** — Excludes wildcard patterns, showing only explicitly defined model names.
- **`expand_wildcards`** — Expands wildcard patterns by querying each connected backend for available models *(feature not yet implemented)*.
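A sketch of how these modes could filter the routing keys (illustrative; the shipped implementation may differ):

```python
def list_models(routing_keys, mode="as_is"):
    """Return the model ids advertised by /v1/models for a listing mode."""
    if mode == "as_is":
        return list(routing_keys)
    if mode == "ignore_wildcards":
        return [key for key in routing_keys if "*" not in key]
    # expand_wildcards would query each backend for its models
    raise NotImplementedError("expand_wildcards is not yet implemented")
```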
To obtain a complete and accurate model list in the current implementation,
all supported models must be explicitly defined in the routing configuration, for example:
```toml
[routing]
"gpt-4" = "my_openai_connection.*"
"gpt-5" = "my_openai_connection.*"
"gpt-8" = "my_openai_connection.gpt-3.5-turbo"
"claude-4.5-sonnet" = "my_anthropic_connection.claude-sonnet-4-5-20250929"
"claude-4.1-opus" = "my_anthropic_connection.claude-opus-4-1-20250805"
[connections]
[connections.my_openai_connection]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.my_anthropic_connection]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
```
#### Response Format
```json
{
"object": "list",
"data": [
{
"id": "gpt-4",
"object": "model",
"created": 1686935002,
"owned_by": "organization-owner"
},
{
"id": "claude-4.5-sonnet",
"object": "model",
"created": 1686935002,
"owned_by": "organization-owner"
}
]
}
```
## 🔒 User Groups Configuration<a id="-user-groups-configuration"></a>
The `[groups]` section in the configuration defines access control rules for different user groups.
Each group can have its own set of virtual API keys and permitted connections.
### Basic Group Definition<a id="basic-group-definition"></a>
```toml
[groups.default]
api_keys = ["KEY1", "KEY2"]
allowed_connections = "*" # Allow access to all connections
```
### Group-based Access Control<a id="group-based-access-control"></a>
You can create multiple groups to segment your users and control their access:
```toml
# Admin group with full access
[groups.admin]
api_keys = ["ADMIN_KEY_1", "ADMIN_KEY_2"]
allowed_connections = "*" # Access to all connections
# Regular users with limited access
[groups.users]
api_keys = ["USER_KEY_1", "USER_KEY_2"]
allowed_connections = "openai,anthropic" # Only allowed to use specific connections
# Free tier with minimal access
[groups.free]
api_keys = ["FREE_KEY_1", "FREE_KEY_2"]
allowed_connections = "openai" # Only allowed to use OpenAI connection
```
### Connection Restrictions<a id="connection-restrictions"></a>
The `allowed_connections` parameter controls which upstream providers a group can access:
- `"*"` - Group can use all configured connections
- `"openai,anthropic"` - Comma-separated list of specific connections the group can use
This allows fine-grained control over which users can access which AI providers, enabling features like:
- Restricting expensive models to premium users
- Creating specialized access tiers for different user groups
- Implementing usage quotas per group
- Billing and cost allocation by user group
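The check could be expressed as follows (an illustrative helper, not lm_proxy's internal API):

```python
def connection_allowed(allowed, connection):
    """True if a group's allowed_connections setting permits the connection."""
    if allowed.strip() == "*":
        return True
    permitted = {name.strip() for name in allowed.split(",")}
    return connection in permitted
```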
### Virtual API Key Validation<a id="virtual-api-key-validation"></a>
#### Overview
OpenAI HTTP Proxy includes two built-in methods for validating Virtual API keys:
- `lm_proxy.api_key_check.check_api_key_in_config` - verifies API keys against those defined in the config file; used by default
- `lm_proxy.api_key_check.CheckAPIKeyWithRequest` - validates API keys via an external HTTP service
The API key check method can be configured using the `api_key_check` configuration key.
Its value can be either a reference to a Python function in the format `my_module.sub_module1.sub_module2.fn_name`,
or an object containing parameters for a class-based validator.
In the .py config representation, the validator function can be passed directly as a callable.
#### Example configuration for external API key validation using HTTP request to Keycloak / OpenID Connect
This example shows how to validate API keys against an external service (e.g., Keycloak):
```toml
[api_key_check]
class = "lm_proxy.api_key_check.CheckAPIKeyWithRequest"
method = "POST"
url = "http://keycloak:8080/realms/master/protocol/openid-connect/userinfo"
response_as_user_info = true # interpret response JSON as user info object for further processing / logging
use_cache = true # requires installing cachetools if True: pip install cachetools
cache_ttl = 60 # Cache duration in seconds
[api_key_check.headers]
Authorization = "Bearer {api_key}"
```
#### Custom API Key Validation / Extending functionality
For more advanced authentication needs,
you can implement a custom validator function:
```python
# my_validators.py
def validate_api_key(api_key: str) -> str | None:
"""
Validate an API key and return the group name if valid.
Args:
api_key: The API key to validate
Returns:
The name of the group if valid, None otherwise
"""
if api_key == "secret-key":
return "admin"
elif api_key.startswith("user-"):
return "users"
return None
```
Then reference it in your config:
```toml
api_key_check = "my_validators.validate_api_key"
```
> **NOTE**
> In this case, the `api_keys` lists in groups are ignored, and the custom function is responsible for all validation logic.
## 🛠️ Advanced Usage<a id="-advanced-usage"></a>
### Dynamic Model Routing<a id="dynamic-model-routing"></a>
The routing section allows flexible pattern matching with wildcards:
```toml
[routing]
"gpt-4*" = "openai.gpt-4" # Route gpt-4 requests to OpenAI GPT-4
"gpt-3.5*" = "openai.gpt-3.5-turbo" # Route gpt-3.5 requests to OpenAI
"claude*" = "anthropic.*" # Pass model name as-is to Anthropic
"gemini*" = "google.*" # Pass model name as-is to Google
"custom*" = "local.llama-7b" # Map any "custom*" to a specific local model
"*" = "openai.gpt-3.5-turbo" # Default fallback for unmatched models
```
Keys are model name patterns (with `*` wildcard support), and values are connection/model mappings.
Connection names reference those defined in the `[connections]` section.
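To make the resolution rules concrete, here is a minimal, hypothetical sketch of how a wildcard pattern could be matched and its right-hand side expanded. This is illustrative only and not lm_proxy's internal code; the `resolve` helper and the behavior of iterating patterns in declaration order are assumptions.

```python
from fnmatch import fnmatch

# Hypothetical illustration of wildcard routing resolution.
# Patterns are checked in declaration order; the first match wins.
routing = {
    "gpt-4*": "openai.gpt-4",
    "claude*": "anthropic.*",
    "*": "openai.gpt-3.5-turbo",
}

def resolve(model: str) -> tuple[str, str]:
    for pattern, target in routing.items():
        if fnmatch(model, pattern):
            connection, _, upstream = target.partition(".")
            # A "*" on the right-hand side passes the model name through as-is
            return connection, model if upstream == "*" else upstream
    raise ValueError(f"no route for {model!r}")

print(resolve("gpt-4-turbo"))     # ('openai', 'gpt-4')
print(resolve("claude-3-haiku"))  # ('anthropic', 'claude-3-haiku')
```

Note that the catch-all `"*"` pattern is declared last, so it only applies when no earlier pattern matched.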
### Load Balancing Example<a id="load-balancing-example"></a>
- [Simple load-balancer configuration](https://github.com/Nayjest/lm-proxy/blob/main/examples/load_balancer_config.py)
This example demonstrates how to set up a load balancer that randomly distributes requests across multiple language model servers using lm_proxy.
### Google Vertex AI Configuration Example<a id="google-vertex-ai-configuration-example"></a>
- [vertex-ai.toml](https://github.com/Nayjest/lm-proxy/blob/main/examples/vertex-ai.toml)
This example demonstrates how to connect OpenAI HTTP Proxy to Google Gemini models via the Vertex AI API.
### Using Tokens from OIDC Provider as Virtual/Client API Keys<a id="using-tokens-from-oidc-provider-as-virtualclient-api-keys"></a>
You can configure OpenAI HTTP Proxy to validate tokens from OpenID Connect (OIDC) providers like Keycloak, Auth0, or Okta as API keys.
The following configuration validates Keycloak access tokens by calling the userinfo endpoint:
```toml
[api_key_check]
class = "lm_proxy.api_key_check.CheckAPIKeyWithRequest"
method = "POST"
url = "http://keycloak:8080/realms/master/protocol/openid-connect/userinfo"
response_as_user_info = true
use_cache = true
cache_ttl = 60
[api_key_check.headers]
Authorization = "Bearer {api_key}"
```
**Configuration Parameters:**
- `class` - The API key validation handler class ([lm_proxy.api_key_check.CheckAPIKeyWithRequest](https://github.com/Nayjest/lm-proxy/blob/main/lm_proxy/api_key_check/with_request.py))
- `method` - HTTP method for the validation request (typically `POST` or `GET`)
- `url` - The OIDC provider's userinfo endpoint URL
- `response_as_user_info` - Parse the response as user information for further usage in OpenAI HTTP Proxy (extend logged info, determine user group, etc.)
- `use_cache` - Enable caching of validation results (requires installing the `cachetools` package if enabled: `pip install cachetools`)
- `cache_ttl` - Cache time-to-live in seconds (reduces load on identity provider)
- `headers` - Dictionary of headers to send with the validation request
> **Note**: The `{api_key}` placeholder can be used in headers or in the URL. OpenAI HTTP Proxy substitutes it with the API key from the client to perform the check.
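A simplified sketch of how the `{api_key}` substitution could work (illustrative only; the `render` helper is hypothetical and the actual implementation may differ):

```python
# Hypothetical placeholder substitution for validation-request templates.
def render(template: str, api_key: str) -> str:
    return template.replace("{api_key}", api_key)

# Works for headers and URLs alike:
headers = {"Authorization": render("Bearer {api_key}", "tok-123")}
url = render("http://auth.local/introspect?token={api_key}", "tok-123")
```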
**Usage:**
Clients pass their OIDC access token as the API key when making requests to OpenAI HTTP Proxy.
## 🪝 Request Handlers (Middleware)<a id="-request-handlers--middleware"></a>
Handlers intercept and modify requests *before* they reach the upstream LLM provider. They enable cross-cutting concerns such as rate limiting, logging, auditing, and header manipulation.
Handlers are defined in the `before` list within the configuration file and execute sequentially in the order specified.
### Built-in Handlers
OpenAI HTTP Proxy includes several built-in handlers for common operational needs.
#### Rate Limiter
The `RateLimiter` protects upstream credentials and manages traffic load using a sliding window algorithm.
**Parameters:**
| Parameter | Type | Description |
|-----------|------|-------------|
| `max_requests` | int | Maximum number of requests allowed per window |
| `window_seconds` | int | Duration of the sliding window in seconds |
| `per` | string | Scope of the limit: `api_key`, `ip`, `connection`, `group`, or `global` |
**Configuration:**
```toml
[[before]]
class = "lm_proxy.handlers.RateLimiter"
max_requests = 10
window_seconds = 60
per = "api_key"
[[before]]
class = "lm_proxy.handlers.RateLimiter"
max_requests = 1000
window_seconds = 300
per = "global"
```
#### HTTP Headers Forwarder
The `HTTPHeadersForwarder` passes specific headers from incoming client requests to the upstream provider—useful for distributed tracing or tenant context propagation.
Sensitive headers (`Authorization`, `Host`, `Content-Length`) are stripped by default to prevent protocol corruption and credential leaks.
```toml
[[before]]
class = "lm_proxy.handlers.HTTPHeadersForwarder"
white_list_headers = ["x-trace-id", "x-correlation-id", "x-tenant-id"]
```
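The filtering logic can be sketched as a whitelist intersection with the stripped set always taking precedence (illustrative only; the real `HTTPHeadersForwarder` may differ in detail):

```python
# Headers stripped by default, even if whitelisted, to avoid
# protocol corruption and credential leaks.
STRIPPED = {"authorization", "host", "content-length"}

def forward_headers(incoming: dict[str, str], white_list: list[str]) -> dict[str, str]:
    """Keep only whitelisted headers, matched case-insensitively."""
    allowed = {h.lower() for h in white_list} - STRIPPED
    return {k: v for k, v in incoming.items() if k.lower() in allowed}
```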
See also [HTTP Header Management](https://github.com/Nayjest/lm-proxy/blob/main/doc/http_headers.md).
### Custom Handlers
Extend functionality by implementing custom handlers in Python. A handler is any callable (function or class instance) that accepts a `RequestContext`.
#### Interface
```python
from lm_proxy.base_types import RequestContext
async def my_custom_handler(ctx: RequestContext) -> None:
# Implementation here
pass
```
#### Example: Audit Logger
```python
# my_extensions.py
import logging
from lm_proxy.base_types import RequestContext
class AuditLogger:
def __init__(self, prefix: str = "AUDIT"):
self.prefix = prefix
async def __call__(self, ctx: RequestContext) -> None:
user = ctx.user_info.get("name", "anonymous")
logging.info(f"[{self.prefix}] User '{user}' requested model '{ctx.model}'")
```
**Registration:**
```toml
[[before]]
class = "my_extensions.AuditLogger"
prefix = "SECURITY_AUDIT"
```
## 🧩 Add-on Components<a id="-add-on-components"></a>
### Database Connector<a id="database-connector"></a>
[openai-http-proxy-db-connector](https://github.com/nayjest/lm-proxy-db-connector) is a lightweight SQLAlchemy-based connector that enables OpenAI HTTP Proxy to work with relational databases including PostgreSQL, MySQL/MariaDB, SQLite, Oracle, Microsoft SQL Server, and many others.
**Key Features:**
- Configure database connections directly through OpenAI HTTP Proxy configuration
- Share database connections across components, extensions, and custom functions
- Built-in database logger for structured logging of AI request data
## 📚 Guides & Reference<a id="-guides--reference"></a>
For more detailed information, check out these articles:
- [HTTP Header Management](https://github.com/Nayjest/lm-proxy/blob/main/doc/http_headers.md)
## 🚧 Known Limitations<a id="-known-limitations"></a>
- **Multiple generations (n > 1):** When proxying requests to Google or Anthropic APIs, only the first generation is returned. Multi-generation support is tracked in [#35](https://github.com/Nayjest/lm-proxy/issues/35).
- **Model listing with wildcards / forwarding actual model metadata:** The `/v1/models` endpoint does not query upstream providers to expand wildcard patterns (e.g., `gpt*`) or fetch model metadata. Only explicitly defined model names are listed [#36](https://github.com/Nayjest/lm-proxy/issues/36).
## 🔍 Debugging<a id="-debugging"></a>
### Overview
When **debugging mode** is enabled,
OpenAI HTTP Proxy provides detailed logging information to help diagnose issues:
- Stack traces for exceptions are shown in the console
- Logging level is set to DEBUG instead of INFO
> **Warning** ⚠️
> Never enable debugging mode in production environments, as it may expose sensitive information to the application logs.
### Enabling Debugging Mode
To enable debugging, set the `LM_PROXY_DEBUG` environment variable to a truthy value (e.g., "1", "true", "yes").
> **Tip** 💡
> Environment variables can also be defined in a `.env` file.
Alternatively, you can enable or disable debugging via the command-line arguments:
- `--debug` to enable debugging
- `--no-debug` to disable debugging
> **Note** ℹ️
> CLI arguments override environment variable settings.
## 🤝 Contributing<a id="-contributing"></a>
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## 📄 License<a id="-license"></a>
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
© 2025–2026 [Vitalii Stepanenko](mailto:mail@vitaliy.in)
| text/markdown | Vitalii Stepanenko | mail@vitaliy.in | Vitalii Stepanenko | mail@vitaliy.in | MIT License
Copyright (c) 2025–2026 Vitalii Stepanenko
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | llm, large language models, ai, gpt, openai, proxy, http, proxy-server, llm gateway, openai, anthropic, google genai | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: MIT License",
"Operating System :: O... | [] | null | null | <4,>=3.11 | [] | [] | [] | [
"ai-microcore<6,>=5.1.2",
"anthropic<1,>=0.77; extra == \"all\"",
"anthropic<1,>=0.77; extra == \"anthropic\"",
"fastapi<1,>=0.121.3",
"google-genai<2,>=1.62.0; extra == \"all\"",
"google-genai<2,>=1.62.0; extra == \"google\"",
"pydantic<2.13.0,>=2.12.5",
"pytest<8.5.0,>=8.4.2; extra == \"test\"",
"... | [] | [] | [] | [
"Bug Tracker, https://github.com/Nayjest/lm-proxy/issues",
"Source Code, https://github.com/Nayjest/lm-proxy"
] | twine/6.2.0 CPython/3.13.1 | 2026-02-19T13:46:02.547110 | openai_http_proxy-3.0.2.tar.gz | 27,979 | 61/2e/6bc2ef6da9a4324b2bb09eeb9831f33e3e55c16af65c86e57475da9327a3/openai_http_proxy-3.0.2.tar.gz | source | sdist | null | false | 9d003844724b2ade95fe8054d08e0443 | 8f16cc8f46816c4059373a3e5b4219ce6dca9ed9a3e3c4900c80f09283658a2f | 612e6bc2ef6da9a4324b2bb09eeb9831f33e3e55c16af65c86e57475da9327a3 | null | [] | 222 |
2.3 | oai-proxy | 3.0.2 | OAI Proxy is an OpenAI-compatible http proxy server for inferencing various LLMs capable of working with Google, Anthropic, OpenAI APIs, local PyTorch inference, etc. | <h1 align="center"><a href="#">OAI Proxy</a></h1>
<p align="center">
<b>Lightweight, OpenAI-compatible HTTP proxy server / gateway</b><br>unifying access to multiple <b>Large Language Model providers</b> and local inference <br>through a single, standardized API endpoint.
</p>
<p align="center">
<a href="https://pypi.org/project/lm-proxy/"><img src="https://img.shields.io/pypi/v/lm-proxy?color=blue" alt="PyPI"></a>
<a href="https://github.com/Nayjest/lm-proxy/actions/workflows/tests.yml"><img src="https://github.com/Nayjest/lm-proxy/actions/workflows/tests.yml/badge.svg" alt="Tests"></a>
<a href="https://github.com/Nayjest/lm-proxy/actions/workflows/code-style.yml"><img src="https://github.com/Nayjest/lm-proxy/actions/workflows/code-style.yml/badge.svg" alt="Code Style"></a>
<img src="https://raw.githubusercontent.com/Nayjest/lm-proxy/main/coverage.svg" alt="Code Coverage">
<a href="https://www.bestpractices.dev/projects/11364"><img src="https://www.bestpractices.dev/projects/11364/badge"></a>
<a href="https://github.com/Nayjest/lm-proxy/blob/main/LICENSE"><img src="https://img.shields.io/github/license/Nayjest/lm-proxy?color=d08aff" alt="License"></a>
</p>
Built with Python, FastAPI and [MicroCore](https://github.com/Nayjest/ai-microcore), **OAI Proxy** seamlessly integrates cloud providers like Google, Anthropic, and OpenAI, as well as local PyTorch-based inference, while maintaining full compatibility with OpenAI's API format.
It works as a drop-in replacement for OpenAI's API, allowing you to switch between cloud providers and local models without modifying your existing client code.
**OAI Proxy** supports **real-time token streaming**, **secure Virtual API key management**, and can be used both as an importable Python library and as a standalone HTTP service. Whether you're building production applications or experimenting with different models, OAI Proxy eliminates integration complexity and keeps your codebase **provider-agnostic**.
## Table of Contents
- [Overview](#oai-proxy)
- [Features](#-features)
- [Getting Started](#-getting-started)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Configuration](#-configuration)
- [Basic Structure](#basic-structure)
- [Environment Variables](#environment-variables)
- [Proxy API Keys vs. Provider API Keys](#-proxy-api-keys-vs-provider-api-keys)
- [API Usage](#-api-usage)
- [Chat Completions Endpoint](#chat-completions-endpoint)
- [Models List Endpoint](#models-list-endpoint)
- [User Groups Configuration](#-user-groups-configuration)
- [Basic Group Definition](#basic-group-definition)
- [Group-based Access Control](#group-based-access-control)
- [Connection Restrictions](#connection-restrictions)
- [Virtual API Key Validation](#virtual-api-key-validation)
- [Advanced Usage](#%EF%B8%8F-advanced-usage)
- [Dynamic Model Routing](#dynamic-model-routing)
- [Load Balancing Example](#load-balancing-example)
- [Google Vertex AI Example](#google-vertex-ai-configuration-example)
- [Using Tokens from OIDC Provider as Virtual/Client API Keys](#using-tokens-from-oidc-provider-as-virtualclient-api-keys)
- [Add-on Components](#-add-on-components)
- [Database Connector](#database-connector)
- [Request Handlers (Middleware)](#-request-handlers--middleware)
- [Guides & Reference](#-guides--reference)
- [Known Limitations](#-known-limitations)
- [Debugging](#-debugging)
- [Contributing](#-contributing)
- [License](#-license)
<a href="#" align="center"><img alt="OAI Proxy / Gateway" src="https://raw.githubusercontent.com/Nayjest/lm-proxy/main/press-kit/assets/lm-proxy_1_hacker_1600x672.png"></a>
## ✨ Features<a id="-features"></a>
- **Provider Agnostic**: Connect to OpenAI, Anthropic, Google AI, local models, and more using a single API
- **Unified Interface**: Access all models through the standard OpenAI API format
- **Dynamic Routing**: Route requests to different LLM providers based on model name patterns
- **Stream Support**: Full streaming support for real-time responses
- **API Key Management**: Configurable API key validation and access control
- **Easy Configuration**: Simple TOML/YAML/JSON/Python configuration files for setup
- **Extensible by Design**: Minimal core with clearly defined extension points, enabling seamless customization and expansion without modifying the core system.
## 🚀 Getting Started<a id="-getting-started"></a>
### Requirements
Python 3.11 | 3.12 | 3.13
### Installation<a id="installation"></a>
```bash
pip install oai-proxy
```
For proxying to Anthropic API or Google Gemini via Vertex AI or Google AI Studio, install optional dependencies:
```
pip install oai-proxy[anthropic,google]
```
or
```
pip install oai-proxy[all]
```
### Quick Start<a id="quick-start"></a>
#### 1. Create a `config.toml` file:
```toml
host = "0.0.0.0"
port = 8000
[connections]
[connections.openai]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.anthropic]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
[routing]
"gpt*" = "openai.*"
"claude*" = "anthropic.*"
"*" = "openai.gpt-3.5-turbo"
[groups.default]
api_keys = ["YOUR_API_KEY_HERE"]
```
> **Note** ℹ️
> To enhance security, consider storing upstream API keys in operating system environment variables rather than embedding them directly in the configuration file. You can reference these variables in the configuration using the `env:<VAR_NAME>` syntax.
#### 2. Start the server:
```bash
oai-proxy
```
Alternatively, run it as a Python module:
```bash
python -m lm_proxy
```
#### 3. Use it with any OpenAI-compatible client:
```python
from openai import OpenAI
client = OpenAI(
api_key="YOUR_API_KEY_HERE",
base_url="http://localhost:8000/v1"
)
completion = client.chat.completions.create(
model="gpt-5", # This will be routed to OpenAI based on config
messages=[{"role": "user", "content": "Hello, world!"}]
)
print(completion.choices[0].message.content)
```
Or use the same endpoint with Claude models:
```python
completion = client.chat.completions.create(
model="claude-opus-4-1-20250805", # This will be routed to Anthropic based on config
messages=[{"role": "user", "content": "Hello, world!"}]
)
```
## 📝 Configuration<a id="-configuration"></a>
OAI Proxy is configured through a TOML/YAML/JSON/Python file that specifies connections, routing rules, and access control.
### Basic Structure<a id="basic-structure"></a>
```toml
host = "0.0.0.0" # Interface to bind to
port = 8000 # Port to listen on
dev_autoreload = false # Enable for development
# API key validation function (optional)
api_key_check = "lm_proxy.api_key_check.check_api_key_in_config"
# LLM Provider Connections
[connections]
[connections.openai]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.google]
api_type = "google"
api_key = "env:GOOGLE_API_KEY"
[connections.anthropic]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
# Routing rules (model_pattern = "connection.model")
[routing]
"gpt*" = "openai.*" # Route all GPT models to OpenAI
"claude*" = "anthropic.*" # Route all Claude models to Anthropic
"gemini*" = "google.*" # Route all Gemini models to Google
"*" = "openai.gpt-3.5-turbo" # Default fallback
# Access control groups
[groups.default]
api_keys = [
"KEY1",
"KEY2"
]
# optional
[[loggers]]
class = 'lm_proxy.loggers.BaseLogger'
[loggers.log_writer]
class = 'lm_proxy.loggers.log_writers.JsonLogWriter'
file_name = 'storage/json.log'
[loggers.entry_transformer]
class = 'lm_proxy.loggers.LogEntryTransformer'
completion_tokens = "response.usage.completion_tokens"
prompt_tokens = "response.usage.prompt_tokens"
prompt = "request.messages"
response = "response"
group = "group"
connection = "connection"
api_key_id = "api_key_id"
remote_addr = "remote_addr"
created_at = "created_at"
duration = "duration"
```
### Environment Variables<a id="environment-variables"></a>
You can reference environment variables in your configuration file by prefixing values with `env:`.
For example:
```toml
[connections.openai]
api_key = "env:OPENAI_API_KEY"
```
At runtime, OAI Proxy automatically retrieves the value of the target variable
(OPENAI_API_KEY) from your operating system's environment or from a .env file, if present.
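The lookup amounts to stripping the `env:` prefix and reading the remainder from the environment. A minimal sketch of the idea (the `resolve_config_value` helper is hypothetical, not OAI Proxy's actual code):

```python
import os

def resolve_config_value(value: str) -> str:
    """Values prefixed with 'env:' are read from the environment; others pass through."""
    if value.startswith("env:"):
        return os.environ[value[len("env:"):]]
    return value
```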
### .env Files
By default, OAI Proxy looks for a `.env` file in the current working directory
and loads environment variables from it.
You can refer to the [.env.template](https://github.com/Nayjest/lm-proxy/blob/main/.env.template)
file for an example:
```dotenv
OPENAI_API_KEY=sk-u........
GOOGLE_API_KEY=AI........
ANTHROPIC_API_KEY=sk-ant-api03--vE........
# "1", "TRUE", "YES", "ON", "ENABLED", "Y", "+" are true, case-insensitive.
# See https://github.com/Nayjest/ai-microcore/blob/v4.4.3/microcore/configuration.py#L36
LM_PROXY_DEBUG=no
```
You can also control `.env` file usage with the `--env` command-line option:
```bash
# Use a custom .env file path
oai-proxy --env="path/to/your/.env"
# Disable .env loading
oai-proxy --env=""
```
## 🔑 Proxy API Keys vs. Provider API Keys<a id="-proxy-api-keys-vs-provider-api-keys"></a>
OAI Proxy utilizes two distinct types of API keys to facilitate secure and efficient request handling.
- **Proxy API Key (Virtual API Key, Client API Key):**
A unique key generated and managed within OAI Proxy.
Clients use these keys to authenticate their requests to the proxy's API endpoints.
Each Client API Key is associated with a specific group, which defines the scope of access and permissions for the client's requests.
These keys allow users to securely interact with the proxy without direct access to external service credentials.
- **Provider API Key (Upstream API Key):**
A key provided by external LLM inference providers (e.g., OpenAI, Anthropic, Mistral, etc.) and configured within the OAI Proxy.
The proxy uses these keys to authenticate and forward validated client requests to the respective external services.
Provider API Keys remain hidden from end users, ensuring secure and transparent communication with provider APIs.
This distinction ensures a clear separation of concerns:
Virtual API Keys manage user authentication and access within the proxy,
while Upstream API Keys handle secure communication with external providers.
## 🔌 API Usage<a id="-api-usage"></a>
OAI Proxy implements the OpenAI chat completions API endpoint. You can use any OpenAI-compatible client to interact with it.
### Chat Completions Endpoint<a id="chat-completions-endpoint"></a>
```http
POST /v1/chat/completions
```
#### Request Format
```json
{
"model": "gpt-3.5-turbo",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is the capital of France?"}
],
"temperature": 0.7,
"stream": false
}
```
#### Response Format
```json
{
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "The capital of France is Paris."
},
"finish_reason": "stop"
}
]
}
```
### Models List Endpoint<a id="models-list-endpoint"></a>
List and describe all models available through the API.
```http
GET /v1/models
```
The **OAI Proxy** dynamically builds the models list based on routing rules defined in `config.routing`.
Routing keys can reference both **exact model names** and **model name patterns** (e.g., `"gpt*"`, `"claude*"`, etc.).
By default, wildcard patterns are displayed as-is in the models list (e.g., `"gpt*"`, `"claude*"`).
This behavior can be customized via the `model_listing_mode` configuration option:
```
model_listing_mode = "as_is" | "ignore_wildcards" | "expand_wildcards"
```
Available modes:
- **`as_is`** *(default)* — Lists all entries exactly as defined in the routing configuration, including wildcard patterns.
- **`ignore_wildcards`** — Excludes wildcard patterns, showing only explicitly defined model names.
- **`expand_wildcards`** — Expands wildcard patterns by querying each connected backend for available models *(feature not yet implemented)*.
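The difference between the first two modes can be sketched as a simple filter over the routing keys (a hypothetical helper for illustration; `expand_wildcards` is omitted because it is not yet implemented upstream):

```python
# Hypothetical sketch of the model-listing modes.
def list_models(routing_keys: list[str], mode: str = "as_is") -> list[str]:
    if mode == "ignore_wildcards":
        # Keep only explicitly named models
        return [k for k in routing_keys if "*" not in k]
    return list(routing_keys)  # "as_is": patterns are listed verbatim
```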
To obtain a complete and accurate model list in the current implementation,
all supported models must be explicitly defined in the routing configuration, for example:
```toml
[routing]
"gpt-4" = "my_openai_connection.*"
"gpt-5" = "my_openai_connection.*"
"gpt-8"= "my_openai_connection.gpt-3.5-turbo"
"claude-4.5-sonnet" = "my_anthropic_connection.claude-sonnet-4-5-20250929"
"claude-4.1-opus" = "my_anthropic_connection.claude-opus-4-1-20250805"
[connections]
[connections.my_openai_connection]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.my_anthropic_connection]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
```
#### Response Format
```json
{
"object": "list",
"data": [
{
"id": "gpt-6",
"object": "model",
"created": 1686935002,
"owned_by": "organization-owner"
},
{
"id": "claude-5-sonnet",
"object": "model",
"created": 1686935002,
"owned_by": "organization-owner"
}
]
}
```
## 🔒 User Groups Configuration<a id="-user-groups-configuration"></a>
The `[groups]` section in the configuration defines access control rules for different user groups.
Each group can have its own set of virtual API keys and permitted connections.
### Basic Group Definition<a id="basic-group-definition"></a>
```toml
[groups.default]
api_keys = ["KEY1", "KEY2"]
allowed_connections = "*" # Allow access to all connections
```
### Group-based Access Control<a id="group-based-access-control"></a>
You can create multiple groups to segment your users and control their access:
```toml
# Admin group with full access
[groups.admin]
api_keys = ["ADMIN_KEY_1", "ADMIN_KEY_2"]
allowed_connections = "*" # Access to all connections
# Regular users with limited access
[groups.users]
api_keys = ["USER_KEY_1", "USER_KEY_2"]
allowed_connections = "openai,anthropic" # Only allowed to use specific connections
# Free tier with minimal access
[groups.free]
api_keys = ["FREE_KEY_1", "FREE_KEY_2"]
allowed_connections = "openai" # Only allowed to use OpenAI connection
```
### Connection Restrictions<a id="connection-restrictions"></a>
The `allowed_connections` parameter controls which upstream providers a group can access:
- `"*"` - Group can use all configured connections
- `"openai,anthropic"` - Comma-separated list of specific connections the group can use
This allows fine-grained control over which users can access which AI providers, enabling features like:
- Restricting expensive models to premium users
- Creating specialized access tiers for different user groups
- Implementing usage quotas per group
- Billing and cost allocation by user group
### Virtual API Key Validation<a id="virtual-api-key-validation"></a>
#### Overview
OAI Proxy includes two built-in methods for validating Virtual API keys:
- `lm_proxy.api_key_check.check_api_key_in_config` - verifies API keys against those defined in the config file; used by default
- `lm_proxy.api_key_check.CheckAPIKeyWithRequest` - validates API keys via an external HTTP service
The API key check method can be configured using the `api_key_check` configuration key.
Its value can be either a reference to a Python function in the format `my_module.sub_module1.sub_module2.fn_name`,
or an object containing parameters for a class-based validator.
When using a Python (`.py`) config file, the validator function can be passed directly as a callable.
#### Example configuration for external API key validation using HTTP request to Keycloak / OpenID Connect
This example shows how to validate API keys against an external service (e.g., Keycloak):
```toml
[api_key_check]
class = "lm_proxy.api_key_check.CheckAPIKeyWithRequest"
method = "POST"
url = "http://keycloak:8080/realms/master/protocol/openid-connect/userinfo"
response_as_user_info = true # interpret response JSON as user info object for further processing / logging
use_cache = true # requires installing cachetools if True: pip install cachetools
cache_ttl = 60 # Cache duration in seconds
[api_key_check.headers]
Authorization = "Bearer {api_key}"
```
#### Custom API Key Validation / Extending functionality
For more advanced authentication needs,
you can implement a custom validator function:
```python
# my_validators.py
def validate_api_key(api_key: str) -> str | None:
"""
Validate an API key and return the group name if valid.
Args:
api_key: The API key to validate
Returns:
The name of the group if valid, None otherwise
"""
if api_key == "secret-key":
return "admin"
elif api_key.startswith("user-"):
return "users"
return None
```
Then reference it in your config:
```toml
api_key_check = "my_validators.validate_api_key"
```
> **NOTE**
> In this case, the `api_keys` lists in groups are ignored, and the custom function is responsible for all validation logic.
## 🛠️ Advanced Usage<a id="-advanced-usage"></a>
### Dynamic Model Routing<a id="dynamic-model-routing"></a>
The routing section allows flexible pattern matching with wildcards:
```toml
[routing]
"gpt-4*" = "openai.gpt-4" # Route gpt-4 requests to OpenAI GPT-4
"gpt-3.5*" = "openai.gpt-3.5-turbo" # Route gpt-3.5 requests to OpenAI
"claude*" = "anthropic.*" # Pass model name as-is to Anthropic
"gemini*" = "google.*" # Pass model name as-is to Google
"custom*" = "local.llama-7b" # Map any "custom*" to a specific local model
"*" = "openai.gpt-3.5-turbo" # Default fallback for unmatched models
```
Keys are model name patterns (with `*` wildcard support), and values are connection/model mappings.
Connection names reference those defined in the `[connections]` section.
### Load Balancing Example<a id="load-balancing-example"></a>
- [Simple load-balancer configuration](https://github.com/Nayjest/lm-proxy/blob/main/examples/load_balancer_config.py)
This example demonstrates how to set up a load balancer that randomly distributes requests across multiple language model servers using lm_proxy.
### Google Vertex AI Configuration Example<a id="google-vertex-ai-configuration-example"></a>
- [vertex-ai.toml](https://github.com/Nayjest/lm-proxy/blob/main/examples/vertex-ai.toml)
This example demonstrates how to connect OAI Proxy to Google Gemini models via the Vertex AI API.
### Using Tokens from OIDC Provider as Virtual/Client API Keys<a id="using-tokens-from-oidc-provider-as-virtualclient-api-keys"></a>
You can configure OAI Proxy to validate tokens from OpenID Connect (OIDC) providers like Keycloak, Auth0, or Okta as API keys.
The following configuration validates Keycloak access tokens by calling the userinfo endpoint:
```toml
[api_key_check]
class = "lm_proxy.api_key_check.CheckAPIKeyWithRequest"
method = "POST"
url = "http://keycloak:8080/realms/master/protocol/openid-connect/userinfo"
response_as_user_info = true
use_cache = true
cache_ttl = 60
[api_key_check.headers]
Authorization = "Bearer {api_key}"
```
**Configuration Parameters:**
- `class` - The API key validation handler class ([lm_proxy.api_key_check.CheckAPIKeyWithRequest](https://github.com/Nayjest/lm-proxy/blob/main/lm_proxy/api_key_check/with_request.py))
- `method` - HTTP method for the validation request (typically `POST` or `GET`)
- `url` - The OIDC provider's userinfo endpoint URL
- `response_as_user_info` - Parse the response as user information for further usage in OAI Proxy (extend logged info, determine user group, etc.)
- `use_cache` - Enable caching of validation results (requires installing the `cachetools` package if enabled: `pip install cachetools`)
- `cache_ttl` - Cache time-to-live in seconds (reduces load on identity provider)
- `headers` - Dictionary of headers to send with the validation request
> **Note**: The `{api_key}` placeholder can be used in headers or in the URL. OAI Proxy substitutes it with the API key from the client to perform the check.
**Usage:**
Clients pass their OIDC access token as the API key when making requests to OAI Proxy.
## 🪝 Request Handlers (Middleware)<a id="-request-handlers--middleware"></a>
Handlers intercept and modify requests *before* they reach the upstream LLM provider. They enable cross-cutting concerns such as rate limiting, logging, auditing, and header manipulation.
Handlers are defined in the `before` list within the configuration file and execute sequentially in the order specified.
### Built-in Handlers
OAI Proxy includes several built-in handlers for common operational needs.
#### Rate Limiter
The `RateLimiter` protects upstream credentials and manages traffic load using a sliding window algorithm.
**Parameters:**
| Parameter | Type | Description |
|-----------|------|-------------|
| `max_requests` | int | Maximum number of requests allowed per window |
| `window_seconds` | int | Duration of the sliding window in seconds |
| `per` | string | Scope of the limit: `api_key`, `ip`, `connection`, `group`, or `global` |
**Configuration:**
```toml
[[before]]
class = "lm_proxy.handlers.RateLimiter"
max_requests = 10
window_seconds = 60
per = "api_key"
[[before]]
class = "lm_proxy.handlers.RateLimiter"
max_requests = 1000
window_seconds = 300
per = "global"
```
#### HTTP Headers Forwarder
The `HTTPHeadersForwarder` passes specific headers from incoming client requests to the upstream provider—useful for distributed tracing or tenant context propagation.
Sensitive headers (`Authorization`, `Host`, `Content-Length`) are stripped by default to prevent protocol corruption and credential leaks.
```toml
[[before]]
class = "lm_proxy.handlers.HTTPHeadersForwarder"
white_list_headers = ["x-trace-id", "x-correlation-id", "x-tenant-id"]
```
See also [HTTP Header Management](https://github.com/Nayjest/lm-proxy/blob/main/doc/http_headers.md).
### Custom Handlers
Extend functionality by implementing custom handlers in Python. A handler is any callable (function or class instance) that accepts a `RequestContext`.
#### Interface
```python
from lm_proxy.base_types import RequestContext
async def my_custom_handler(ctx: RequestContext) -> None:
# Implementation here
pass
```
#### Example: Audit Logger
```python
# my_extensions.py
import logging
from lm_proxy.base_types import RequestContext
class AuditLogger:
def __init__(self, prefix: str = "AUDIT"):
self.prefix = prefix
async def __call__(self, ctx: RequestContext) -> None:
user = ctx.user_info.get("name", "anonymous")
logging.info(f"[{self.prefix}] User '{user}' requested model '{ctx.model}'")
```
**Registration:**
```toml
[[before]]
class = "my_extensions.AuditLogger"
prefix = "SECURITY_AUDIT"
```
## 🧩 Add-on Components<a id="-add-on-components"></a>
### Database Connector<a id="database-connector"></a>
[oai-proxy-db-connector](https://github.com/nayjest/lm-proxy-db-connector) is a lightweight SQLAlchemy-based connector that enables OAI Proxy to work with relational databases including PostgreSQL, MySQL/MariaDB, SQLite, Oracle, Microsoft SQL Server, and many others.
**Key Features:**
- Configure database connections directly through OAI Proxy configuration
- Share database connections across components, extensions, and custom functions
- Built-in database logger for structured logging of AI request data
## 📚 Guides & Reference<a id="-guides--reference"></a>
For more detailed information, check out these articles:
- [HTTP Header Management](https://github.com/Nayjest/lm-proxy/blob/main/doc/http_headers.md)
## 🚧 Known Limitations<a id="-known-limitations"></a>
- **Multiple generations (n > 1):** When proxying requests to Google or Anthropic APIs, only the first generation is returned. Multi-generation support is tracked in [#35](https://github.com/Nayjest/lm-proxy/issues/35).
- **Model listing with wildcards / forwarding actual model metadata:** The `/v1/models` endpoint does not query upstream providers to expand wildcard patterns (e.g., `gpt*`) or fetch model metadata. Only explicitly defined model names are listed [#36](https://github.com/Nayjest/lm-proxy/issues/36).
## 🔍 Debugging<a id="-debugging"></a>
### Overview
When **debugging mode** is enabled,
OAI Proxy provides detailed logging information to help diagnose issues:
- Stack traces for exceptions are shown in the console
- Logging level is set to DEBUG instead of INFO
> **Warning** ⚠️
> Never enable debugging mode in production environments, as it may expose sensitive information in the application logs.
### Enabling Debugging Mode
To enable debugging, set the `LM_PROXY_DEBUG` environment variable to a truthy value (e.g., "1", "true", "yes").
> **Tip** 💡
> Environment variables can also be defined in a `.env` file.
Alternatively, you can enable or disable debugging via the command-line arguments:
- `--debug` to enable debugging
- `--no-debug` to disable debugging
> **Note** ℹ️
> CLI arguments override environment variable settings.
## 🤝 Contributing<a id="-contributing"></a>
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## 📄 License<a id="-license"></a>
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
© 2025–2026 [Vitalii Stepanenko](mailto:mail@vitaliy.in)
| text/markdown | Vitalii Stepanenko | mail@vitaliy.in | Vitalii Stepanenko | mail@vitaliy.in | MIT License
Copyright (c) 2025–2026 Vitalii Stepanenko
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | llm, large language models, ai, gpt, openai, proxy, http, proxy-server, llm gateway, openai, anthropic, google genai | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: MIT License",
"Operating System :: O... | [] | null | null | <4,>=3.11 | [] | [] | [] | [
"ai-microcore<6,>=5.1.2",
"anthropic<1,>=0.77; extra == \"all\"",
"anthropic<1,>=0.77; extra == \"anthropic\"",
"fastapi<1,>=0.121.3",
"google-genai<2,>=1.62.0; extra == \"all\"",
"google-genai<2,>=1.62.0; extra == \"google\"",
"pydantic<2.13.0,>=2.12.5",
"pytest<8.5.0,>=8.4.2; extra == \"test\"",
"... | [] | [] | [] | [
"Bug Tracker, https://github.com/Nayjest/lm-proxy/issues",
"Source Code, https://github.com/Nayjest/lm-proxy"
] | twine/6.2.0 CPython/3.13.1 | 2026-02-19T13:46:01.294445 | oai_proxy-3.0.2.tar.gz | 27,798 | 18/6c/d8872b27e678ade2e2e3412e4c2438c93eedea4c5b9d9b3e4b5162ca0485/oai_proxy-3.0.2.tar.gz | source | sdist | null | false | 42caba940f1cf575b1148c248acd06ce | 41583f73cc95c5c644c834014cdc912c41547c85fac9ebea134926319858b4f3 | 186cd8872b27e678ade2e2e3412e4c2438c93eedea4c5b9d9b3e4b5162ca0485 | null | [] | 229 |
2.3 | pytest-playwright-artifacts | 0.2.0 | Capture screenshots, HTML, and console logs on Playwright test failures | # Capture debugging artifacts on Playwright test failures
[](https://github.com/iloveitaly/pytest-playwright-artifacts/releases)
[](https://pepy.tech/project/pytest-playwright-artifacts)

[](https://opensource.org/licenses/MIT)
When your Playwright tests fail, you need to see what went wrong. This pytest plugin does a few things to make it easier to debug and build Playwright tests:
1. Automatically captures HTML, screenshots, console logs, and failure summaries the moment a test fails and dumps them into a per-test folder for easy debugging.
2. Allows you to assert that no console errors were logged during a test.
3. Automatically retries tests that fail due to Playwright flakiness
No more guessing what the page looked like, what JavaScript errors occurred, or what the actual DOM content was.
## Installation
```bash
uv add --dev pytest-playwright-artifacts
```
The plugin activates automatically once installed. No configuration needed.
## Usage
### Artifacts on failure
```python
def test_my_page(page):
page.goto("https://example.com")
assert page.title() == "Example"
```
When this test fails, you'll find artifacts in `test-results/<test-name>/`:
- `failure.html` - Rendered DOM content at the moment of failure
- `screenshot.png` - Full-page screenshot
- `failure.txt` - Failure summary with traceback
- `console_logs.log` - All captured console messages
### Fail tests on console errors
```python
from pytest_playwright_artifacts import assert_no_console_errors
def test_no_console_errors(page, request):
page.goto("https://example.com")
assert_no_console_errors(request)
```
This fails the test if any `console.error()` messages were logged during the test.
### Retry on Playwright timeouts
Playwright tests can flake due to network latency or slow animations. The plugin can retry a test automatically when it fails with a `TimeoutError`. Only `playwright._impl._errors.TimeoutError` triggers a retry — assertion failures and other errors fail immediately. Retried attempts show as `R` / `RERUN` in pytest output.
#### Per-test
```python
@pytest.mark.playwright_timeout_retries(2)
def test_checkout(page):
page.goto("https://example.com/checkout")
page.click("#pay-button")
expect(page.locator(".success")).to_be_visible()
```
#### Per-folder
Add a `pytestmark` to `conftest.py` in the folder you want to cover:
```python
# tests/e2e/conftest.py
import pytest
pytestmark = [pytest.mark.playwright_timeout_retries(2)]
```
#### Global default
Set a default for the entire suite in `pyproject.toml`. A marker on an individual test or folder always takes precedence.
```toml
[tool.pytest.ini_options]
playwright_timeout_retries = 2
```
## Configuration
### Filter noisy console messages
Use regex patterns to ignore known noisy messages:
**pyproject.toml:**
```toml
[tool.pytest.ini_options]
playwright_console_ignore = [
"Invalid Sentry Dsn:.*",
"Radar SDK: initialized.*",
"\\[Meta Pixel\\].*",
]
```
Patterns match against both the raw console text and the formatted log line.
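The filtering described above could look roughly like this. It is a sketch of the idea, not the plugin's implementation:

```python
import re

# Sample ignore patterns, as they might appear in pyproject.toml.
IGNORE_PATTERNS = [r"Invalid Sentry Dsn:.*", r"\[Meta Pixel\].*"]

def is_ignored(message: str) -> bool:
    # A console message is suppressed if any configured regex matches it.
    return any(re.search(pattern, message) for pattern in IGNORE_PATTERNS)
```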
### Change artifact output directory
By default, artifacts are saved to `test-results/`. You can customize this:
**Command line:**
```bash
pytest --playwright-artifacts-output=my-artifacts
```
**pyproject.toml:**
```toml
[tool.pytest.ini_options]
playwright_artifacts_output = "my-artifacts"
```
## Related Projects
- [pytest-playwright-visual-snapshot](https://github.com/iloveitaly/pytest-playwright-visual-snapshot): Adds visual regression testing capabilities to your Playwright and pytest suite.
- [playwright-trace-analyzer](https://github.com/iloveitaly/playwright-trace-analyzer): Provides a CLI for inspecting Playwright trace files without needing the full browser viewer.
- [pytest-plugin-utils](https://github.com/iloveitaly/pytest-plugin-utils): Offers reusable logic for managing artifacts and configurations when building other pytest plugins.
- [gh-clean-artifacts](https://github.com/iloveitaly/gh-clean-artifacts): Helps manage storage costs by pruning old or large GitHub Actions artifacts.
- [pytest-line-runner](https://github.com/iloveitaly/pytest-line-runner): Simplifies test execution by allowing you to run pytest tests using file line numbers.
- [pytest-celery-utils](https://github.com/iloveitaly/pytest-celery-utils): Enables inspection of Celery task queues in Redis directly from your pytest environment.
- [python-package-prompts](https://github.com/iloveitaly/python-package-prompts): Contains LLM instructions for maintaining Python standards across projects using pytest and other libraries.
## [MIT License](LICENSE.md)
---
*This project was created from [iloveitaly/python-package-template](https://github.com/iloveitaly/python-package-template)*
| text/markdown | Michael Bianco | Michael Bianco <mike@mikebian.co> | null | null | null | pytest-plugin, playwright, test-artifacts, debugging | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"playwright>=1.40.0",
"pytest-plugin-utils>=0.1.0",
"structlog>=25.5.0"
] | [] | [] | [] | [
"Repository, https://github.com/iloveitaly/pytest-playwright-artifacts"
] | uv/0.9.27 {"installer":{"name":"uv","version":"0.9.27","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T13:46:00.548316 | pytest_playwright_artifacts-0.2.0-py3-none-any.whl | 9,311 | d5/8b/63a1d3af8d05f8cdc8264ce698c32903fa140fdfec08c9fe91b74c7aaf02/pytest_playwright_artifacts-0.2.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 18e9fa72adfe89631ac9458209cd011b | fa788ed52b6ed9f706962c819a737bfa01465dc55d03471907f9eed09579a5b8 | d58b63a1d3af8d05f8cdc8264ce698c32903fa140fdfec08c9fe91b74c7aaf02 | null | [] | 235 |
2.3 | lm-proxy-server | 3.0.2 | LM Proxy Server is an OpenAI-compatible http proxy server for inferencing various LLMs capable of working with Google, Anthropic, OpenAI APIs, local PyTorch inference, etc. | <h1 align="center"><a href="#">LM Proxy Server</a></h1>
<p align="center">
<b>Lightweight, OpenAI-compatible HTTP proxy server / gateway</b><br>unifying access to multiple <b>Large Language Model providers</b> and local inference <br>through a single, standardized API endpoint.
</p>
<p align="center">
<a href="https://pypi.org/project/lm-proxy/"><img src="https://img.shields.io/pypi/v/lm-proxy?color=blue" alt="PyPI"></a>
<a href="https://github.com/Nayjest/lm-proxy/actions/workflows/tests.yml"><img src="https://github.com/Nayjest/lm-proxy/actions/workflows/tests.yml/badge.svg" alt="Tests"></a>
<a href="https://github.com/Nayjest/lm-proxy/actions/workflows/code-style.yml"><img src="https://github.com/Nayjest/lm-proxy/actions/workflows/code-style.yml/badge.svg" alt="Code Style"></a>
<img src="https://raw.githubusercontent.com/Nayjest/lm-proxy/main/coverage.svg" alt="Code Coverage">
<a href="https://www.bestpractices.dev/projects/11364"><img src="https://www.bestpractices.dev/projects/11364/badge"></a>
<a href="https://github.com/Nayjest/lm-proxy/blob/main/LICENSE"><img src="https://img.shields.io/github/license/Nayjest/lm-proxy?color=d08aff" alt="License"></a>
</p>
Built with Python, FastAPI and [MicroCore](https://github.com/Nayjest/ai-microcore), **LM Proxy Server** seamlessly integrates cloud providers like Google, Anthropic, and OpenAI, as well as local PyTorch-based inference, while maintaining full compatibility with OpenAI's API format.
It works as a drop-in replacement for OpenAI's API, allowing you to switch between cloud providers and local models without modifying your existing client code.
**LM Proxy Server** supports **real-time token streaming**, **secure Virtual API key management**, and can be used both as an importable Python library and as a standalone HTTP service. Whether you're building production applications or experimenting with different models, LM Proxy Server eliminates integration complexity and keeps your codebase **provider-agnostic**.
## Table of Contents
- [Overview](#lm-proxy-server)
- [Features](#-features)
- [Getting Started](#-getting-started)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Configuration](#-configuration)
- [Basic Structure](#basic-structure)
- [Environment Variables](#environment-variables)
- [Proxy API Keys vs. Provider API Keys](#-proxy-api-keys-vs-provider-api-keys)
- [API Usage](#-api-usage)
- [Chat Completions Endpoint](#chat-completions-endpoint)
- [Models List Endpoint](#models-list-endpoint)
- [User Groups Configuration](#-user-groups-configuration)
- [Basic Group Definition](#basic-group-definition)
- [Group-based Access Control](#group-based-access-control)
- [Connection Restrictions](#connection-restrictions)
- [Virtual API Key Validation](#virtual-api-key-validation)
- [Advanced Usage](#%EF%B8%8F-advanced-usage)
- [Dynamic Model Routing](#dynamic-model-routing)
- [Load Balancing Example](#load-balancing-example)
- [Google Vertex AI Example](#google-vertex-ai-configuration-example)
- [Using Tokens from OIDC Provider as Virtual/Client API Keys](#using-tokens-from-oidc-provider-as-virtualclient-api-keys)
- [Add-on Components](#-add-on-components)
- [Database Connector](#database-connector)
- [Request Handlers (Middleware)](#-request-handlers--middleware)
- [Guides & Reference](#-guides--reference)
- [Known Limitations](#-known-limitations)
- [Debugging](#-debugging)
- [Contributing](#-contributing)
- [License](#-license)
<a href="#" align="center"><img alt="LM Proxy Server / Gateway" src="https://raw.githubusercontent.com/Nayjest/lm-proxy/main/press-kit/assets/lm-proxy_1_hacker_1600x672.png"></a>
## ✨ Features<a id="-features"></a>
- **Provider Agnostic**: Connect to OpenAI, Anthropic, Google AI, local models, and more using a single API
- **Unified Interface**: Access all models through the standard OpenAI API format
- **Dynamic Routing**: Route requests to different LLM providers based on model name patterns
- **Stream Support**: Full streaming support for real-time responses
- **API Key Management**: Configurable API key validation and access control
- **Easy Configuration**: Simple TOML/YAML/JSON/Python configuration files for setup
- **Extensible by Design**: Minimal core with clearly defined extension points, enabling customization and expansion without modifying the core system
## 🚀 Getting Started<a id="-getting-started"></a>
### Requirements
Python 3.11 | 3.12 | 3.13
### Installation<a id="installation"></a>
```bash
pip install lm-proxy-server
```
For proxying to Anthropic API or Google Gemini via Vertex AI or Google AI Studio, install optional dependencies:
```bash
pip install lm-proxy-server[anthropic,google]
```
or
```bash
pip install lm-proxy-server[all]
```
### Quick Start<a id="quick-start"></a>
#### 1. Create a `config.toml` file:
```toml
host = "0.0.0.0"
port = 8000
[connections]
[connections.openai]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.anthropic]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
[routing]
"gpt*" = "openai.*"
"claude*" = "anthropic.*"
"*" = "openai.gpt-3.5-turbo"
[groups.default]
api_keys = ["YOUR_API_KEY_HERE"]
```
> **Note** ℹ️
> To enhance security, consider storing upstream API keys in operating-system environment variables rather than embedding them directly in the configuration file. You can reference these variables in the configuration using the `env:<VAR_NAME>` syntax.
#### 2. Start the server:
```bash
lm-proxy-server
```
Alternatively, run it as a Python module:
```bash
python -m lm_proxy
```
#### 3. Use it with any OpenAI-compatible client:
```python
from openai import OpenAI
client = OpenAI(
api_key="YOUR_API_KEY_HERE",
base_url="http://localhost:8000/v1"
)
completion = client.chat.completions.create(
model="gpt-5", # This will be routed to OpenAI based on config
messages=[{"role": "user", "content": "Hello, world!"}]
)
print(completion.choices[0].message.content)
```
Or use the same endpoint with Claude models:
```python
completion = client.chat.completions.create(
model="claude-opus-4-1-20250805", # This will be routed to Anthropic based on config
messages=[{"role": "user", "content": "Hello, world!"}]
)
```
## 📝 Configuration<a id="-configuration"></a>
LM Proxy Server is configured through a TOML/YAML/JSON/Python file that specifies connections, routing rules, and access control.
### Basic Structure<a id="basic-structure"></a>
```toml
host = "0.0.0.0" # Interface to bind to
port = 8000 # Port to listen on
dev_autoreload = false # Enable for development
# API key validation function (optional)
api_key_check = "lm_proxy.api_key_check.check_api_key_in_config"
# LLM Provider Connections
[connections]
[connections.openai]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.google]
api_type = "google"
api_key = "env:GOOGLE_API_KEY"
[connections.anthropic]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
# Routing rules (model_pattern = "connection.model")
[routing]
"gpt*" = "openai.*" # Route all GPT models to OpenAI
"claude*" = "anthropic.*" # Route all Claude models to Anthropic
"gemini*" = "google.*" # Route all Gemini models to Google
"*" = "openai.gpt-3.5-turbo" # Default fallback
# Access control groups
[groups.default]
api_keys = [
"KEY1",
"KEY2"
]
# optional
[[loggers]]
class = 'lm_proxy.loggers.BaseLogger'
[loggers.log_writer]
class = 'lm_proxy.loggers.log_writers.JsonLogWriter'
file_name = 'storage/json.log'
[loggers.entry_transformer]
class = 'lm_proxy.loggers.LogEntryTransformer'
completion_tokens = "response.usage.completion_tokens"
prompt_tokens = "response.usage.prompt_tokens"
prompt = "request.messages"
response = "response"
group = "group"
connection = "connection"
api_key_id = "api_key_id"
remote_addr = "remote_addr"
created_at = "created_at"
duration = "duration"
```
### Environment Variables<a id="environment-variables"></a>
You can reference environment variables in your configuration file by prefixing values with `env:`.
For example:
```toml
[connections.openai]
api_key = "env:OPENAI_API_KEY"
```
At runtime, LM Proxy Server automatically retrieves the value of the referenced variable
(`OPENAI_API_KEY`) from the operating system environment, or from a `.env` file if present.
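A minimal sketch of how such `env:` resolution might work (the function name here is hypothetical, not part of LM Proxy Server's API):

```python
import os

def resolve_value(value: str) -> str:
    # Values prefixed with "env:" are looked up in the process environment;
    # anything else passes through unchanged.
    if value.startswith("env:"):
        return os.environ.get(value[len("env:"):], "")
    return value

os.environ["OPENAI_API_KEY"] = "sk-demo-123"
```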
### .env Files
By default, LM Proxy Server looks for a `.env` file in the current working directory
and loads environment variables from it.
You can refer to the [.env.template](https://github.com/Nayjest/lm-proxy/blob/main/.env.template)
file for an example:
```dotenv
OPENAI_API_KEY=sk-u........
GOOGLE_API_KEY=AI........
ANTHROPIC_API_KEY=sk-ant-api03--vE........
# "1", "TRUE", "YES", "ON", "ENABLED", "Y", "+" are true, case-insensitive.
# See https://github.com/Nayjest/ai-microcore/blob/v4.4.3/microcore/configuration.py#L36
LM_PROXY_DEBUG=no
```
You can also control `.env` file usage with the `--env` command-line option:
```bash
# Use a custom .env file path
lm-proxy-server --env="path/to/your/.env"
# Disable .env loading
lm-proxy-server --env=""
```
## 🔑 Proxy API Keys vs. Provider API Keys<a id="-proxy-api-keys-vs-provider-api-keys"></a>
LM Proxy Server uses two distinct types of API keys for secure and efficient request handling.
- **Proxy API Key (Virtual API Key, Client API Key):**
A unique key generated and managed within LM Proxy Server.
Clients use these keys to authenticate their requests to the proxy's API endpoints.
Each Client API Key is associated with a specific group, which defines the scope of access and permissions for the client's requests.
These keys allow users to securely interact with the proxy without direct access to external service credentials.
- **Provider API Key (Upstream API Key):**
A key provided by external LLM inference providers (e.g., OpenAI, Anthropic, Mistral, etc.) and configured within the LM Proxy Server.
The proxy uses these keys to authenticate and forward validated client requests to the respective external services.
Provider API Keys remain hidden from end users, ensuring secure and transparent communication with provider APIs.
This distinction ensures a clear separation of concerns:
Virtual API Keys manage user authentication and access within the proxy,
while Upstream API Keys handle secure communication with external providers.
## 🔌 API Usage<a id="-api-usage"></a>
LM Proxy Server implements the OpenAI chat completions API endpoint. You can use any OpenAI-compatible client to interact with it.
### Chat Completions Endpoint<a id="chat-completions-endpoint"></a>
```http
POST /v1/chat/completions
```
#### Request Format
```json
{
"model": "gpt-3.5-turbo",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is the capital of France?"}
],
"temperature": 0.7,
"stream": false
}
```
#### Response Format
```json
{
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "The capital of France is Paris."
},
"finish_reason": "stop"
}
]
}
```
### Models List Endpoint<a id="models-list-endpoint"></a>
List and describe all models available through the API.
```http
GET /v1/models
```
The **LM Proxy Server** dynamically builds the models list based on routing rules defined in `config.routing`.
Routing keys can reference both **exact model names** and **model name patterns** (e.g., `"gpt*"`, `"claude*"`, etc.).
By default, wildcard patterns are displayed as-is in the models list (e.g., `"gpt*"`, `"claude*"`).
This behavior can be customized via the `model_listing_mode` configuration option:
```
model_listing_mode = "as_is" | "ignore_wildcards" | "expand_wildcards"
```
Available modes:
- **`as_is`** *(default)* — Lists all entries exactly as defined in the routing configuration, including wildcard patterns.
- **`ignore_wildcards`** — Excludes wildcard patterns, showing only explicitly defined model names.
- **`expand_wildcards`** — Expands wildcard patterns by querying each connected backend for available models *(feature not yet implemented)*.
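The first two modes could be implemented roughly as follows (a sketch assuming routing keys are plain strings; `expand_wildcards` is not yet implemented and is omitted):

```python
def listed_models(routing: dict[str, str], mode: str = "as_is") -> list[str]:
    # "as_is" returns every routing key, wildcards included;
    # "ignore_wildcards" keeps only explicitly named models.
    names = list(routing)
    if mode == "ignore_wildcards":
        names = [name for name in names if "*" not in name]
    return names

routing = {"gpt-4": "openai.*", "gpt*": "openai.*", "*": "openai.gpt-3.5-turbo"}
```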
To obtain a complete and accurate model list in the current implementation,
all supported models must be explicitly defined in the routing configuration, for example:
```toml
[routing]
"gpt-4" = "my_openai_connection.*"
"gpt-5" = "my_openai_connection.*"
"gpt-8"= "my_openai_connection.gpt-3.5-turbo"
"claude-4.5-sonnet" = "my_anthropic_connection.claude-sonnet-4-5-20250929"
"claude-4.1-opus" = "my_anthropic_connection.claude-opus-4-1-20250805"
[connections]
[connections.my_openai_connection]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.my_anthropic_connection]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
```
#### Response Format
```json
{
"object": "list",
"data": [
{
"id": "gpt-6",
"object": "model",
"created": 1686935002,
"owned_by": "organization-owner"
},
{
"id": "claude-5-sonnet",
"object": "model",
"created": 1686935002,
"owned_by": "organization-owner"
}
]
}
```
## 🔒 User Groups Configuration<a id="-user-groups-configuration"></a>
The `[groups]` section in the configuration defines access control rules for different user groups.
Each group can have its own set of virtual API keys and permitted connections.
### Basic Group Definition<a id="basic-group-definition"></a>
```toml
[groups.default]
api_keys = ["KEY1", "KEY2"]
allowed_connections = "*" # Allow access to all connections
```
### Group-based Access Control<a id="group-based-access-control"></a>
You can create multiple groups to segment your users and control their access:
```toml
# Admin group with full access
[groups.admin]
api_keys = ["ADMIN_KEY_1", "ADMIN_KEY_2"]
allowed_connections = "*" # Access to all connections
# Regular users with limited access
[groups.users]
api_keys = ["USER_KEY_1", "USER_KEY_2"]
allowed_connections = "openai,anthropic" # Only allowed to use specific connections
# Free tier with minimal access
[groups.free]
api_keys = ["FREE_KEY_1", "FREE_KEY_2"]
allowed_connections = "openai" # Only allowed to use OpenAI connection
```
### Connection Restrictions<a id="connection-restrictions"></a>
The `allowed_connections` parameter controls which upstream providers a group can access:
- `"*"` - Group can use all configured connections
- `"openai,anthropic"` - Comma-separated list of specific connections the group can use
This allows fine-grained control over which users can access which AI providers, enabling features like:
- Restricting expensive models to premium users
- Creating specialized access tiers for different user groups
- Implementing usage quotas per group
- Billing and cost allocation by user group
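The `allowed_connections` check could be sketched as follows (a hypothetical helper, not LM Proxy Server's code):

```python
def connection_allowed(allowed: str, connection: str) -> bool:
    # "*" permits every configured connection; otherwise the group's
    # comma-separated list is consulted.
    if allowed == "*":
        return True
    return connection in (name.strip() for name in allowed.split(","))
```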
### Virtual API Key Validation<a id="virtual-api-key-validation"></a>
#### Overview
LM Proxy Server includes two built-in methods for validating Virtual API keys:
- `lm_proxy.api_key_check.check_api_key_in_config` - verifies API keys against those defined in the config file; used by default
- `lm_proxy.api_key_check.CheckAPIKeyWithRequest` - validates API keys via an external HTTP service
The API key check method can be configured using the `api_key_check` configuration key.
Its value can be either a reference to a Python function in the format `my_module.sub_module1.sub_module2.fn_name`,
or an object containing parameters for a class-based validator.
In a Python (`.py`) config, the validator function can be passed directly as a callable.
#### Example configuration for external API key validation using HTTP request to Keycloak / OpenID Connect
This example shows how to validate API keys against an external service (e.g., Keycloak):
```toml
[api_key_check]
class = "lm_proxy.api_key_check.CheckAPIKeyWithRequest"
method = "POST"
url = "http://keycloak:8080/realms/master/protocol/openid-connect/userinfo"
response_as_user_info = true # interpret response JSON as user info object for further processing / logging
use_cache = true # requires installing cachetools if True: pip install cachetools
cache_ttl = 60 # Cache duration in seconds
[api_key_check.headers]
Authorization = "Bearer {api_key}"
```
#### Custom API Key Validation / Extending Functionality
For more advanced authentication needs,
you can implement a custom validator function:
```python
# my_validators.py
def validate_api_key(api_key: str) -> str | None:
"""
Validate an API key and return the group name if valid.
Args:
api_key: The API key to validate
Returns:
The name of the group if valid, None otherwise
"""
if api_key == "secret-key":
return "admin"
elif api_key.startswith("user-"):
return "users"
return None
```
Then reference it in your config:
```toml
api_key_check = "my_validators.validate_api_key"
```
> **NOTE**
> In this case, the `api_keys` lists in groups are ignored, and the custom function is responsible for all validation logic.
## 🛠️ Advanced Usage<a id="-advanced-usage"></a>
### Dynamic Model Routing<a id="dynamic-model-routing"></a>
The routing section allows flexible pattern matching with wildcards:
```toml
[routing]
"gpt-4*" = "openai.gpt-4" # Route gpt-4 requests to OpenAI GPT-4
"gpt-3.5*" = "openai.gpt-3.5-turbo" # Route gpt-3.5 requests to OpenAI
"claude*" = "anthropic.*" # Pass model name as-is to Anthropic
"gemini*" = "google.*" # Pass model name as-is to Google
"custom*" = "local.llama-7b" # Map any "custom*" to a specific local model
"*" = "openai.gpt-3.5-turbo" # Default fallback for unmatched models
```
Keys are model name patterns (with `*` wildcard support), and values are connection/model mappings.
Connection names reference those defined in the `[connections]` section.
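Pattern resolution of this kind can be sketched with Python's `fnmatch`. First-match-wins ordering and the helper name are assumptions for illustration, not LM Proxy Server's actual code:

```python
from fnmatch import fnmatch

ROUTING = {
    "gpt-4*": "openai.gpt-4",
    "claude*": "anthropic.*",
    "*": "openai.gpt-3.5-turbo",
}

def resolve_route(model: str) -> tuple[str, str]:
    for pattern, target in ROUTING.items():
        if fnmatch(model, pattern):
            connection, _, target_model = target.partition(".")
            # A "*" on the right-hand side forwards the client's model name as-is.
            return connection, model if target_model == "*" else target_model
    raise ValueError(f"no route for {model!r}")
```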
### Load Balancing Example<a id="load-balancing-example"></a>
- [Simple load-balancer configuration](https://github.com/Nayjest/lm-proxy/blob/main/examples/load_balancer_config.py)
This example demonstrates how to set up a load balancer that randomly
distributes requests across multiple language model servers using the lm_proxy.
### Google Vertex AI Configuration Example<a id="google-vertex-ai-configuration-example"></a>
- [vertex-ai.toml](https://github.com/Nayjest/lm-proxy/blob/main/examples/vertex-ai.toml)
This example demonstrates how to connect LM Proxy Server to Google Gemini model via Vertex AI API
### Using Tokens from OIDC Provider as Virtual/Client API Keys<a id="using-tokens-from-oidc-provider-as-virtualclient-api-keys"></a>
You can configure LM Proxy Server to validate tokens from OpenID Connect (OIDC) providers like Keycloak, Auth0, or Okta as API keys.
The following configuration validates Keycloak access tokens by calling the userinfo endpoint:
```toml
[api_key_check]
class = "lm_proxy.api_key_check.CheckAPIKeyWithRequest"
method = "POST"
url = "http://keycloak:8080/realms/master/protocol/openid-connect/userinfo"
response_as_user_info = true
use_cache = true
cache_ttl = 60
[api_key_check.headers]
Authorization = "Bearer {api_key}"
```
**Configuration Parameters:**
- `class` - The API key validation handler class ([lm_proxy.api_key_check.CheckAPIKeyWithRequest](https://github.com/Nayjest/lm-proxy/blob/main/lm_proxy/api_key_check/with_request.py))
- `method` - HTTP method for the validation request (typically `POST` or `GET`)
- `url` - The OIDC provider's userinfo endpoint URL
- `response_as_user_info` - Parse the response as user information for further usage in LM Proxy Server (extend logged info, determine user group, etc.)
- `use_cache` - Enable caching of validation results (requires installing the `cachetools` package if enabled: `pip install cachetools`)
- `cache_ttl` - Cache time-to-live in seconds (reduces load on identity provider)
- `headers` - Dictionary of headers to send with the validation request
> **Note**: The `{api_key}` placeholder can be used in headers or in the URL. LM Proxy Server substitutes it with the API key from the client to perform the check.
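The substitution itself is plain string templating. As an illustration (a hypothetical helper, not LM Proxy Server's actual code), rendering the configured headers per request could look like this:

```python
# Hypothetical helper for illustration; not LM Proxy Server's actual code.
def render_placeholders(template: dict, api_key: str) -> dict:
    """Substitute the {api_key} placeholder into each configured header value."""
    return {name: value.replace("{api_key}", api_key) for name, value in template.items()}

headers = render_placeholders({"Authorization": "Bearer {api_key}"}, "client-token")
```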
**Usage:**
Clients pass their OIDC access token as the API key when making requests to LM Proxy Server.
## 🪝 Request Handlers (Middleware)<a id="-request-handlers--middleware"></a>
Handlers intercept and modify requests *before* they reach the upstream LLM provider. They enable cross-cutting concerns such as rate limiting, logging, auditing, and header manipulation.
Handlers are defined in the `before` list within the configuration file and execute sequentially in the order specified.
### Built-in Handlers
LM Proxy Server includes several built-in handlers for common operational needs.
#### Rate Limiter
The `RateLimiter` protects upstream credentials and manages traffic load using a sliding window algorithm.
**Parameters:**
| Parameter | Type | Description |
|-----------|------|-------------|
| `max_requests` | int | Maximum number of requests allowed per window |
| `window_seconds` | int | Duration of the sliding window in seconds |
| `per` | string | Scope of the limit: `api_key`, `ip`, `connection`, `group`, or `global` |
**Configuration:**
```toml
[[before]]
class = "lm_proxy.handlers.RateLimiter"
max_requests = 10
window_seconds = 60
per = "api_key"
[[before]]
class = "lm_proxy.handlers.RateLimiter"
max_requests = 1000
window_seconds = 300
per = "global"
```
#### HTTP Headers Forwarder
The `HTTPHeadersForwarder` passes specific headers from incoming client requests to the upstream provider—useful for distributed tracing or tenant context propagation.
Sensitive headers (`Authorization`, `Host`, `Content-Length`) are stripped by default to prevent protocol corruption and credential leaks.
```toml
[[before]]
class = "lm_proxy.handlers.HTTPHeadersForwarder"
white_list_headers = ["x-trace-id", "x-correlation-id", "x-tenant-id"]
```
See also [HTTP Header Management](https://github.com/Nayjest/lm-proxy/blob/main/doc/http_headers.md).
### Custom Handlers
Extend functionality by implementing custom handlers in Python. A handler is any callable (function or class instance) that accepts a `RequestContext`.
#### Interface
```python
from lm_proxy.base_types import RequestContext
async def my_custom_handler(ctx: RequestContext) -> None:
# Implementation here
pass
```
#### Example: Audit Logger
```python
# my_extensions.py
import logging
from lm_proxy.base_types import RequestContext
class AuditLogger:
def __init__(self, prefix: str = "AUDIT"):
self.prefix = prefix
async def __call__(self, ctx: RequestContext) -> None:
user = ctx.user_info.get("name", "anonymous")
logging.info(f"[{self.prefix}] User '{user}' requested model '{ctx.model}'")
```
**Registration:**
```toml
[[before]]
class = "my_extensions.AuditLogger"
prefix = "SECURITY_AUDIT"
```
## 🧩 Add-on Components<a id="-add-on-components"></a>
### Database Connector<a id="database-connector"></a>
[lm-proxy-server-db-connector](https://github.com/nayjest/lm-proxy-db-connector) is a lightweight SQLAlchemy-based connector that enables LM Proxy Server to work with relational databases including PostgreSQL, MySQL/MariaDB, SQLite, Oracle, Microsoft SQL Server, and many others.
**Key Features:**
- Configure database connections directly through LM Proxy Server configuration
- Share database connections across components, extensions, and custom functions
- Built-in database logger for structured logging of AI request data
## 📚 Guides & Reference<a id="-guides--reference"></a>
For more detailed information, check out these articles:
- [HTTP Header Management](https://github.com/Nayjest/lm-proxy/blob/main/doc/http_headers.md)
## 🚧 Known Limitations<a id="-known-limitations"></a>
- **Multiple generations (n > 1):** When proxying requests to Google or Anthropic APIs, only the first generation is returned. Multi-generation support is tracked in [#35](https://github.com/Nayjest/lm-proxy/issues/35).
- **Model listing with wildcards / forwarding actual model metadata:** The `/v1/models` endpoint does not query upstream providers to expand wildcard patterns (e.g., `gpt*`) or fetch model metadata. Only explicitly defined model names are listed; see [#36](https://github.com/Nayjest/lm-proxy/issues/36).
## 🔍 Debugging<a id="-debugging"></a>
### Overview
When **debugging mode** is enabled,
LM Proxy Server provides detailed logging information to help diagnose issues:
- Stack traces for exceptions are shown in the console
- Logging level is set to DEBUG instead of INFO
> **Warning** ⚠️
> Never enable debugging mode in production environments, as it may expose sensitive information in the application logs.
### Enabling Debugging Mode
To enable debugging, set the `LM_PROXY_DEBUG` environment variable to a truthy value (e.g., "1", "true", "yes").
> **Tip** 💡
> Environment variables can also be defined in a `.env` file.
Alternatively, you can enable or disable debugging via the command-line arguments:
- `--debug` to enable debugging
- `--no-debug` to disable debugging
> **Note** ℹ️
> CLI arguments override environment variable settings.
## 🤝 Contributing<a id="-contributing"></a>
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## 📄 License<a id="-license"></a>
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
© 2025–2026 [Vitalii Stepanenko](mailto:mail@vitaliy.in)
| text/markdown | Vitalii Stepanenko | mail@vitaliy.in | Vitalii Stepanenko | mail@vitaliy.in | MIT License
Copyright (c) 2025–2026 Vitalii Stepanenko
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | llm, large language models, ai, gpt, openai, proxy, http, proxy-server, llm gateway, openai, anthropic, google genai | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: MIT License",
"Operating System :: O... | [] | null | null | <4,>=3.11 | [] | [] | [] | [
"ai-microcore<6,>=5.1.2",
"anthropic<1,>=0.77; extra == \"all\"",
"anthropic<1,>=0.77; extra == \"anthropic\"",
"fastapi<1,>=0.121.3",
"google-genai<2,>=1.62.0; extra == \"all\"",
"google-genai<2,>=1.62.0; extra == \"google\"",
"pydantic<2.13.0,>=2.12.5",
"pytest<8.5.0,>=8.4.2; extra == \"test\"",
"... | [] | [] | [] | [
"Bug Tracker, https://github.com/Nayjest/lm-proxy/issues",
"Source Code, https://github.com/Nayjest/lm-proxy"
] | twine/6.2.0 CPython/3.13.1 | 2026-02-19T13:45:59.674472 | lm_proxy_server-3.0.2.tar.gz | 27,982 | d6/51/d5ece17327ae0b3c71d264b5abc151de81aa17fbbbe2acc4abfd48e7131b/lm_proxy_server-3.0.2.tar.gz | source | sdist | null | false | 143a0357164705a14c90218066ca3697 | bf46e1d7aefe1f3aaac32fd4f4eab0b403330feb2f1a29656b0d672fc27703de | d651d5ece17327ae0b3c71d264b5abc151de81aa17fbbbe2acc4abfd48e7131b | null | [] | 231 |
2.3 | lm-proxy | 3.0.2 | LM-Proxy is an OpenAI-compatible http proxy server for inferencing various LLMs capable of working with Google, Anthropic, OpenAI APIs, local PyTorch inference, etc. | <h1 align="center"><a href="#">LM-Proxy</a></h1>
<p align="center">
<b>Lightweight, OpenAI-compatible HTTP proxy server / gateway</b><br>unifying access to multiple <b>Large Language Model providers</b> and local inference <br>through a single, standardized API endpoint.
</p>
<p align="center">
<a href="https://pypi.org/project/lm-proxy/"><img src="https://img.shields.io/pypi/v/lm-proxy?color=blue" alt="PyPI"></a>
<a href="https://github.com/Nayjest/lm-proxy/actions/workflows/tests.yml"><img src="https://github.com/Nayjest/lm-proxy/actions/workflows/tests.yml/badge.svg" alt="Tests"></a>
<a href="https://github.com/Nayjest/lm-proxy/actions/workflows/code-style.yml"><img src="https://github.com/Nayjest/lm-proxy/actions/workflows/code-style.yml/badge.svg" alt="Code Style"></a>
<img src="https://raw.githubusercontent.com/Nayjest/lm-proxy/main/coverage.svg" alt="Code Coverage">
<a href="https://www.bestpractices.dev/projects/11364"><img src="https://www.bestpractices.dev/projects/11364/badge"></a>
<a href="https://github.com/Nayjest/lm-proxy/blob/main/LICENSE"><img src="https://img.shields.io/github/license/Nayjest/lm-proxy?color=d08aff" alt="License"></a>
</p>
Built with Python, FastAPI and [MicroCore](https://github.com/Nayjest/ai-microcore), **LM-Proxy** seamlessly integrates cloud providers like Google, Anthropic, and OpenAI, as well as local PyTorch-based inference, while maintaining full compatibility with OpenAI's API format.
It works as a drop-in replacement for OpenAI's API, allowing you to switch between cloud providers and local models without modifying your existing client code.
**LM-Proxy** supports **real-time token streaming**, **secure Virtual API key management**, and can be used both as an importable Python library and as a standalone HTTP service. Whether you're building production applications or experimenting with different models, LM-Proxy eliminates integration complexity and keeps your codebase **provider-agnostic**.
## Table of Contents
- [Overview](#lm-proxy)
- [Features](#-features)
- [Getting Started](#-getting-started)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Configuration](#-configuration)
- [Basic Structure](#basic-structure)
- [Environment Variables](#environment-variables)
- [Proxy API Keys vs. Provider API Keys](#-proxy-api-keys-vs-provider-api-keys)
- [API Usage](#-api-usage)
- [Chat Completions Endpoint](#chat-completions-endpoint)
- [Models List Endpoint](#models-list-endpoint)
- [User Groups Configuration](#-user-groups-configuration)
- [Basic Group Definition](#basic-group-definition)
- [Group-based Access Control](#group-based-access-control)
- [Connection Restrictions](#connection-restrictions)
- [Virtual API Key Validation](#virtual-api-key-validation)
- [Advanced Usage](#%EF%B8%8F-advanced-usage)
- [Dynamic Model Routing](#dynamic-model-routing)
- [Load Balancing Example](#load-balancing-example)
- [Google Vertex AI Example](#google-vertex-ai-configuration-example)
- [Using Tokens from OIDC Provider as Virtual/Client API Keys](#using-tokens-from-oidc-provider-as-virtualclient-api-keys)
- [Add-on Components](#-add-on-components)
- [Database Connector](#database-connector)
- [Request Handlers (Middleware)](#-request-handlers--middleware)
- [Guides & Reference](#-guides--reference)
- [Known Limitations](#-known-limitations)
- [Debugging](#-debugging)
- [Contributing](#-contributing)
- [License](#-license)
<a href="#" align="center"><img alt="LM-Proxy / Gateway" src="https://raw.githubusercontent.com/Nayjest/lm-proxy/main/press-kit/assets/lm-proxy_1_hacker_1600x672.png"></a>
## ✨ Features<a id="-features"></a>
- **Provider Agnostic**: Connect to OpenAI, Anthropic, Google AI, local models, and more using a single API
- **Unified Interface**: Access all models through the standard OpenAI API format
- **Dynamic Routing**: Route requests to different LLM providers based on model name patterns
- **Stream Support**: Full streaming support for real-time responses
- **API Key Management**: Configurable API key validation and access control
- **Easy Configuration**: Simple TOML/YAML/JSON/Python configuration files for setup
- **Extensible by Design**: Minimal core with clearly defined extension points, enabling seamless customization and expansion without modifying the core system.
## 🚀 Getting Started<a id="-getting-started"></a>
### Requirements
Python 3.11 | 3.12 | 3.13
### Installation<a id="installation"></a>
```bash
pip install lm-proxy
```
For proxying to Anthropic API or Google Gemini via Vertex AI or Google AI Studio, install optional dependencies:
```bash
pip install "lm-proxy[anthropic,google]"
```
or
```bash
pip install "lm-proxy[all]"
```
### Quick Start<a id="quick-start"></a>
#### 1. Create a `config.toml` file:
```toml
host = "0.0.0.0"
port = 8000
[connections]
[connections.openai]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.anthropic]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
[routing]
"gpt*" = "openai.*"
"claude*" = "anthropic.*"
"*" = "openai.gpt-3.5-turbo"
[groups.default]
api_keys = ["YOUR_API_KEY_HERE"]
```
> **Note** ℹ️
> To enhance security, consider storing upstream API keys in operating system environment variables rather than embedding them directly in the configuration file. You can reference these variables in the configuration using the `env:<VAR_NAME>` syntax.
#### 2. Start the server:
```bash
lm-proxy
```
Alternatively, run it as a Python module:
```bash
python -m lm_proxy
```
#### 3. Use it with any OpenAI-compatible client:
```python
from openai import OpenAI
client = OpenAI(
api_key="YOUR_API_KEY_HERE",
base_url="http://localhost:8000/v1"
)
completion = client.chat.completions.create(
model="gpt-5", # This will be routed to OpenAI based on config
messages=[{"role": "user", "content": "Hello, world!"}]
)
print(completion.choices[0].message.content)
```
Or use the same endpoint with Claude models:
```python
completion = client.chat.completions.create(
model="claude-opus-4-1-20250805", # This will be routed to Anthropic based on config
messages=[{"role": "user", "content": "Hello, world!"}]
)
```
## 📝 Configuration<a id="-configuration"></a>
LM-Proxy is configured through a TOML/YAML/JSON/Python file that specifies connections, routing rules, and access control.
### Basic Structure<a id="basic-structure"></a>
```toml
host = "0.0.0.0" # Interface to bind to
port = 8000 # Port to listen on
dev_autoreload = false # Enable for development
# API key validation function (optional)
api_key_check = "lm_proxy.api_key_check.check_api_key_in_config"
# LLM Provider Connections
[connections]
[connections.openai]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.google]
api_type = "google"
api_key = "env:GOOGLE_API_KEY"
[connections.anthropic]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
# Routing rules (model_pattern = "connection.model")
[routing]
"gpt*" = "openai.*" # Route all GPT models to OpenAI
"claude*" = "anthropic.*" # Route all Claude models to Anthropic
"gemini*" = "google.*" # Route all Gemini models to Google
"*" = "openai.gpt-3.5-turbo" # Default fallback
# Access control groups
[groups.default]
api_keys = [
"KEY1",
"KEY2"
]
# optional
[[loggers]]
class = 'lm_proxy.loggers.BaseLogger'
[loggers.log_writer]
class = 'lm_proxy.loggers.log_writers.JsonLogWriter'
file_name = 'storage/json.log'
[loggers.entry_transformer]
class = 'lm_proxy.loggers.LogEntryTransformer'
completion_tokens = "response.usage.completion_tokens"
prompt_tokens = "response.usage.prompt_tokens"
prompt = "request.messages"
response = "response"
group = "group"
connection = "connection"
api_key_id = "api_key_id"
remote_addr = "remote_addr"
created_at = "created_at"
duration = "duration"
```
### Environment Variables<a id="environment-variables"></a>
You can reference environment variables in your configuration file by prefixing values with `env:`.
For example:
```toml
[connections.openai]
api_key = "env:OPENAI_API_KEY"
```
At runtime, LM-Proxy automatically retrieves the value of the referenced variable
(`OPENAI_API_KEY`) from your operating system's environment, or from a `.env` file if present.
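Conceptually, resolution is a simple prefix check. A minimal sketch, assuming a hypothetical `EXAMPLE_API_KEY` variable (this is not LM-Proxy's actual implementation):

```python
import os

def resolve_config_value(value: str) -> str:
    """Resolve `env:`-prefixed config values against the process environment."""
    if value.startswith("env:"):
        return os.environ[value[len("env:"):]]  # KeyError if the variable is unset
    return value

os.environ["EXAMPLE_API_KEY"] = "sk-demo"  # hypothetical variable for the demo
```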
### .env Files
By default, LM-Proxy looks for a `.env` file in the current working directory
and loads environment variables from it.
You can refer to the [.env.template](https://github.com/Nayjest/lm-proxy/blob/main/.env.template)
file for an example:
```dotenv
OPENAI_API_KEY=sk-u........
GOOGLE_API_KEY=AI........
ANTHROPIC_API_KEY=sk-ant-api03--vE........
# "1", "TRUE", "YES", "ON", "ENABLED", "Y", "+" are true, case-insensitive.
# See https://github.com/Nayjest/ai-microcore/blob/v4.4.3/microcore/configuration.py#L36
LM_PROXY_DEBUG=no
```
You can also control `.env` file usage with the `--env` command-line option:
```bash
# Use a custom .env file path
lm-proxy --env="path/to/your/.env"
# Disable .env loading
lm-proxy --env=""
```
## 🔑 Proxy API Keys vs. Provider API Keys<a id="-proxy-api-keys-vs-provider-api-keys"></a>
LM-Proxy utilizes two distinct types of API keys to facilitate secure and efficient request handling.
- **Proxy API Key (Virtual API Key, Client API Key):**
A unique key generated and managed within LM-Proxy.
Clients use these keys to authenticate their requests to the proxy's API endpoints.
Each Client API Key is associated with a specific group, which defines the scope of access and permissions for the client's requests.
These keys allow users to securely interact with the proxy without direct access to external service credentials.
- **Provider API Key (Upstream API Key):**
A key provided by external LLM inference providers (e.g., OpenAI, Anthropic, Mistral, etc.) and configured within the LM-Proxy.
The proxy uses these keys to authenticate and forward validated client requests to the respective external services.
Provider API Keys remain hidden from end users, ensuring secure and transparent communication with provider APIs.
This distinction ensures a clear separation of concerns:
Virtual API Keys manage user authentication and access within the proxy,
while Upstream API Keys handle secure communication with external providers.
## 🔌 API Usage<a id="-api-usage"></a>
LM-Proxy implements the OpenAI chat completions API endpoint. You can use any OpenAI-compatible client to interact with it.
### Chat Completions Endpoint<a id="chat-completions-endpoint"></a>
```http
POST /v1/chat/completions
```
#### Request Format
```json
{
"model": "gpt-3.5-turbo",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is the capital of France?"}
],
"temperature": 0.7,
"stream": false
}
```
#### Response Format
```json
{
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "The capital of France is Paris."
},
"finish_reason": "stop"
}
]
}
```
### Models List Endpoint<a id="models-list-endpoint"></a>
List and describe all models available through the API.
```http
GET /v1/models
```
The **LM-Proxy** dynamically builds the models list based on routing rules defined in `config.routing`.
Routing keys can reference both **exact model names** and **model name patterns** (e.g., `"gpt*"`, `"claude*"`, etc.).
By default, wildcard patterns are displayed as-is in the models list (e.g., `"gpt*"`, `"claude*"`).
This behavior can be customized via the `model_listing_mode` configuration option:
```
model_listing_mode = "as_is" | "ignore_wildcards" | "expand_wildcards"
```
Available modes:
- **`as_is`** *(default)* — Lists all entries exactly as defined in the routing configuration, including wildcard patterns.
- **`ignore_wildcards`** — Excludes wildcard patterns, showing only explicitly defined model names.
- **`expand_wildcards`** — Expands wildcard patterns by querying each connected backend for available models *(feature not yet implemented)*.
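The effect of the two implemented modes on a routing table can be sketched as follows (an illustration, not the actual code):

```python
def list_models(routing: dict, mode: str = "as_is") -> list:
    """Build the /v1/models id list from routing keys (illustrative sketch)."""
    if mode == "as_is":
        return list(routing)
    if mode == "ignore_wildcards":
        return [pattern for pattern in routing if "*" not in pattern]
    # "expand_wildcards" would query each backend; not yet implemented upstream
    raise NotImplementedError(f"model_listing_mode={mode!r}")

routing = {
    "gpt*": "openai.*",
    "claude-4.1-opus": "anthropic.claude-opus-4-1-20250805",
    "*": "openai.gpt-3.5-turbo",
}
```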
To obtain a complete and accurate model list in the current implementation,
all supported models must be explicitly defined in the routing configuration, for example:
```toml
[routing]
"gpt-4" = "my_openai_connection.*"
"gpt-5" = "my_openai_connection.*"
"gpt-8"= "my_openai_connection.gpt-3.5-turbo"
"claude-4.5-sonnet" = "my_anthropic_connection.claude-sonnet-4-5-20250929"
"claude-4.1-opus" = "my_anthropic_connection.claude-opus-4-1-20250805"
[connections]
[connections.my_openai_connection]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.my_anthropic_connection]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
```
#### Response Format
```json
{
"object": "list",
"data": [
{
"id": "gpt-6",
"object": "model",
"created": 1686935002,
"owned_by": "organization-owner"
},
{
"id": "claude-5-sonnet",
"object": "model",
"created": 1686935002,
"owned_by": "organization-owner"
}
]
}
```
## 🔒 User Groups Configuration<a id="-user-groups-configuration"></a>
The `[groups]` section in the configuration defines access control rules for different user groups.
Each group can have its own set of virtual API keys and permitted connections.
### Basic Group Definition<a id="basic-group-definition"></a>
```toml
[groups.default]
api_keys = ["KEY1", "KEY2"]
allowed_connections = "*" # Allow access to all connections
```
### Group-based Access Control<a id="group-based-access-control"></a>
You can create multiple groups to segment your users and control their access:
```toml
# Admin group with full access
[groups.admin]
api_keys = ["ADMIN_KEY_1", "ADMIN_KEY_2"]
allowed_connections = "*" # Access to all connections
# Regular users with limited access
[groups.users]
api_keys = ["USER_KEY_1", "USER_KEY_2"]
allowed_connections = "openai,anthropic" # Only allowed to use specific connections
# Free tier with minimal access
[groups.free]
api_keys = ["FREE_KEY_1", "FREE_KEY_2"]
allowed_connections = "openai" # Only allowed to use OpenAI connection
```
### Connection Restrictions<a id="connection-restrictions"></a>
The `allowed_connections` parameter controls which upstream providers a group can access:
- `"*"` - Group can use all configured connections
- `"openai,anthropic"` - Comma-separated list of specific connections the group can use
This allows fine-grained control over which users can access which AI providers, enabling features like:
- Restricting expensive models to premium users
- Creating specialized access tiers for different user groups
- Implementing usage quotas per group
- Billing and cost allocation by user group
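The `allowed_connections` check boils down to a wildcard-or-membership test. A minimal sketch (hypothetical helper, not LM-Proxy's code):

```python
def connection_allowed(allowed_connections: str, connection: str) -> bool:
    """True if a group's `allowed_connections` setting permits the connection."""
    if allowed_connections == "*":
        return True
    allowed = {name.strip() for name in allowed_connections.split(",")}
    return connection in allowed
```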
### Virtual API Key Validation<a id="virtual-api-key-validation"></a>
#### Overview
LM-Proxy includes two built-in methods for validating Virtual API keys:
- `lm_proxy.api_key_check.check_api_key_in_config` - verifies API keys against those defined in the config file; used by default
- `lm_proxy.api_key_check.CheckAPIKeyWithRequest` - validates API keys via an external HTTP service
The API key check method can be configured using the `api_key_check` configuration key.
Its value can be either a reference to a Python function in the format `my_module.sub_module1.sub_module2.fn_name`,
or an object containing parameters for a class-based validator.
In a Python (`.py`) config, the validator function can be passed directly as a callable.
#### Example configuration for external API key validation using HTTP request to Keycloak / OpenID Connect
This example shows how to validate API keys against an external service (e.g., Keycloak):
```toml
[api_key_check]
class = "lm_proxy.api_key_check.CheckAPIKeyWithRequest"
method = "POST"
url = "http://keycloak:8080/realms/master/protocol/openid-connect/userinfo"
response_as_user_info = true # interpret response JSON as user info object for further processing / logging
use_cache = true # requires installing cachetools if True: pip install cachetools
cache_ttl = 60 # Cache duration in seconds
[api_key_check.headers]
Authorization = "Bearer {api_key}"
```
#### Custom API Key Validation / Extending functionality
For more advanced authentication needs,
you can implement a custom validator function:
```python
# my_validators.py
def validate_api_key(api_key: str) -> str | None:
"""
Validate an API key and return the group name if valid.
Args:
api_key: The API key to validate
Returns:
The name of the group if valid, None otherwise
"""
if api_key == "secret-key":
return "admin"
elif api_key.startswith("user-"):
return "users"
return None
```
Then reference it in your config:
```toml
api_key_check = "my_validators.validate_api_key"
```
> **Note** ℹ️
> In this case, the `api_keys` lists in groups are ignored, and the custom function is responsible for all validation logic.
## 🛠️ Advanced Usage<a id="-advanced-usage"></a>
### Dynamic Model Routing<a id="dynamic-model-routing"></a>
The routing section allows flexible pattern matching with wildcards:
```toml
[routing]
"gpt-4*" = "openai.gpt-4" # Route gpt-4 requests to OpenAI GPT-4
"gpt-3.5*" = "openai.gpt-3.5-turbo" # Route gpt-3.5 requests to OpenAI
"claude*" = "anthropic.*" # Pass model name as-is to Anthropic
"gemini*" = "google.*" # Pass model name as-is to Google
"custom*" = "local.llama-7b" # Map any "custom*" to a specific local model
"*" = "openai.gpt-3.5-turbo" # Default fallback for unmatched models
```
Keys are model name patterns (with `*` wildcard support), and values are connection/model mappings.
Connection names reference those defined in the `[connections]` section.
### Load Balancing Example<a id="load-balancing-example"></a>
- [Simple load-balancer configuration](https://github.com/Nayjest/lm-proxy/blob/main/examples/load_balancer_config.py)
This example demonstrates how to set up a load balancer that randomly distributes requests across multiple language model servers using lm_proxy.
### Google Vertex AI Configuration Example<a id="google-vertex-ai-configuration-example"></a>
- [vertex-ai.toml](https://github.com/Nayjest/lm-proxy/blob/main/examples/vertex-ai.toml)
This example demonstrates how to connect LM-Proxy to a Google Gemini model via the Vertex AI API.
### Using Tokens from OIDC Provider as Virtual/Client API Keys<a id="using-tokens-from-oidc-provider-as-virtualclient-api-keys"></a>
You can configure LM-Proxy to validate tokens from OpenID Connect (OIDC) providers like Keycloak, Auth0, or Okta as API keys.
The following configuration validates Keycloak access tokens by calling the userinfo endpoint:
```toml
[api_key_check]
class = "lm_proxy.api_key_check.CheckAPIKeyWithRequest"
method = "POST"
url = "http://keycloak:8080/realms/master/protocol/openid-connect/userinfo"
response_as_user_info = true
use_cache = true
cache_ttl = 60
[api_key_check.headers]
Authorization = "Bearer {api_key}"
```
**Configuration Parameters:**
- `class` - The API key validation handler class ([lm_proxy.api_key_check.CheckAPIKeyWithRequest](https://github.com/Nayjest/lm-proxy/blob/main/lm_proxy/api_key_check/with_request.py))
- `method` - HTTP method for the validation request (typically `POST` or `GET`)
- `url` - The OIDC provider's userinfo endpoint URL
- `response_as_user_info` - Parse the response as user information for further usage in LM-Proxy (extend logged info, determine user group, etc.)
- `use_cache` - Enable caching of validation results (requires installing the `cachetools` package if enabled: `pip install cachetools`)
- `cache_ttl` - Cache time-to-live in seconds (reduces load on identity provider)
- `headers` - Dictionary of headers to send with the validation request
> **Note**: The `{api_key}` placeholder can be used in headers or in the URL. LM-Proxy substitutes it with the API key from the client to perform the check.
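The substitution is plain string templating over both places the placeholder may appear. A minimal sketch (hypothetical helper, not LM-Proxy's actual code):

```python
# Hypothetical helper for illustration; not LM-Proxy's actual code.
def render_request(url: str, headers: dict, api_key: str) -> tuple:
    """Substitute {api_key} wherever it appears in the URL or header values."""
    rendered_headers = {k: v.replace("{api_key}", api_key) for k, v in headers.items()}
    return url.replace("{api_key}", api_key), rendered_headers
```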
**Usage:**
Clients pass their OIDC access token as the API key when making requests to LM-Proxy.
## 🪝 Request Handlers (Middleware)<a id="-request-handlers--middleware"></a>
Handlers intercept and modify requests *before* they reach the upstream LLM provider. They enable cross-cutting concerns such as rate limiting, logging, auditing, and header manipulation.
Handlers are defined in the `before` list within the configuration file and execute sequentially in the order specified.
### Built-in Handlers
LM-Proxy includes several built-in handlers for common operational needs.
#### Rate Limiter
The `RateLimiter` protects upstream credentials and manages traffic load using a sliding window algorithm.
**Parameters:**
| Parameter | Type | Description |
|-----------|------|-------------|
| `max_requests` | int | Maximum number of requests allowed per window |
| `window_seconds` | int | Duration of the sliding window in seconds |
| `per` | string | Scope of the limit: `api_key`, `ip`, `connection`, `group`, or `global` |
**Configuration:**
```toml
[[before]]
class = "lm_proxy.handlers.RateLimiter"
max_requests = 10
window_seconds = 60
per = "api_key"
[[before]]
class = "lm_proxy.handlers.RateLimiter"
max_requests = 1000
window_seconds = 300
per = "global"
```
#### HTTP Headers Forwarder
The `HTTPHeadersForwarder` passes specific headers from incoming client requests to the upstream provider—useful for distributed tracing or tenant context propagation.
Sensitive headers (`Authorization`, `Host`, `Content-Length`) are stripped by default to prevent protocol corruption and credential leaks.
```toml
[[before]]
class = "lm_proxy.handlers.HTTPHeadersForwarder"
white_list_headers = ["x-trace-id", "x-correlation-id", "x-tenant-id"]
```
See also [HTTP Header Management](https://github.com/Nayjest/lm-proxy/blob/main/doc/http_headers.md).
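The forwarding rule is effectively "intersect with the whitelist, but never forward credential or hop-by-hop headers". A simplified sketch (illustration only, not the `HTTPHeadersForwarder` implementation):

```python
# Headers stripped by default to avoid credential leaks and protocol corruption.
STRIPPED = {"authorization", "host", "content-length"}

def forward_headers(incoming: dict, white_list_headers: list) -> dict:
    """Keep only whitelisted, non-sensitive headers (case-insensitive match)."""
    allowed = {h.lower() for h in white_list_headers} - STRIPPED
    return {name: value for name, value in incoming.items() if name.lower() in allowed}
```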
### Custom Handlers
Extend functionality by implementing custom handlers in Python. A handler is any callable (function or class instance) that accepts a `RequestContext`.
#### Interface
```python
from lm_proxy.base_types import RequestContext
async def my_custom_handler(ctx: RequestContext) -> None:
# Implementation here
pass
```
#### Example: Audit Logger
```python
# my_extensions.py
import logging
from lm_proxy.base_types import RequestContext
class AuditLogger:
def __init__(self, prefix: str = "AUDIT"):
self.prefix = prefix
async def __call__(self, ctx: RequestContext) -> None:
user = ctx.user_info.get("name", "anonymous")
logging.info(f"[{self.prefix}] User '{user}' requested model '{ctx.model}'")
```
**Registration:**
```toml
[[before]]
class = "my_extensions.AuditLogger"
prefix = "SECURITY_AUDIT"
```
## 🧩 Add-on Components<a id="-add-on-components"></a>
### Database Connector<a id="database-connector"></a>
[lm-proxy-db-connector](https://github.com/nayjest/lm-proxy-db-connector) is a lightweight SQLAlchemy-based connector that enables LM-Proxy to work with relational databases including PostgreSQL, MySQL/MariaDB, SQLite, Oracle, Microsoft SQL Server, and many others.
**Key Features:**
- Configure database connections directly through LM-Proxy configuration
- Share database connections across components, extensions, and custom functions
- Built-in database logger for structured logging of AI request data
## 📚 Guides & Reference<a id="-guides--reference"></a>
For more detailed information, check out these articles:
- [HTTP Header Management](https://github.com/Nayjest/lm-proxy/blob/main/doc/http_headers.md)
## 🚧 Known Limitations<a id="-known-limitations"></a>
- **Multiple generations (n > 1):** When proxying requests to Google or Anthropic APIs, only the first generation is returned. Multi-generation support is tracked in [#35](https://github.com/Nayjest/lm-proxy/issues/35).
- **Model listing with wildcards / forwarding actual model metadata:** The `/v1/models` endpoint does not query upstream providers to expand wildcard patterns (e.g., `gpt*`) or fetch model metadata. Only explicitly defined model names are listed (see [#36](https://github.com/Nayjest/lm-proxy/issues/36)).
## 🔍 Debugging<a id="-debugging"></a>
### Overview
When **debugging mode** is enabled,
LM-Proxy provides detailed logging information to help diagnose issues:
- Stack traces for exceptions are shown in the console
- Logging level is set to DEBUG instead of INFO
> **Warning** ⚠️
> Never enable debugging mode in production environments, as it may expose sensitive information to the application logs.
### Enabling Debugging Mode
To enable debugging, set the `LM_PROXY_DEBUG` environment variable to a truthy value (e.g., "1", "true", "yes").
> **Tip** 💡
> Environment variables can also be defined in a `.env` file.
Alternatively, you can enable or disable debugging via the command-line arguments:
- `--debug` to enable debugging
- `--no-debug` to disable debugging
> **Note** ℹ️
> CLI arguments override environment variable settings.
## 🤝 Contributing<a id="-contributing"></a>
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## 📄 License<a id="-license"></a>
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
© 2025–2026 [Vitalii Stepanenko](mailto:mail@vitaliy.in)
| text/markdown | Vitalii Stepanenko | mail@vitaliy.in | Vitalii Stepanenko | mail@vitaliy.in | MIT License
Copyright (c) 2025–2026 Vitalii Stepanenko
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | llm, large language models, ai, gpt, openai, proxy, http, proxy-server, llm gateway, openai, anthropic, google genai | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: MIT License",
"Operating System :: O... | [] | null | null | <4,>=3.11 | [] | [] | [] | [
"ai-microcore<6,>=5.1.2",
"anthropic<1,>=0.77; extra == \"all\"",
"anthropic<1,>=0.77; extra == \"anthropic\"",
"fastapi<1,>=0.121.3",
"google-genai<2,>=1.62.0; extra == \"all\"",
"google-genai<2,>=1.62.0; extra == \"google\"",
"pydantic<2.13.0,>=2.12.5",
"pytest<8.5.0,>=8.4.2; extra == \"test\"",
"... | [] | [] | [] | [
"Bug Tracker, https://github.com/Nayjest/lm-proxy/issues",
"Source Code, https://github.com/Nayjest/lm-proxy"
] | twine/6.2.0 CPython/3.13.1 | 2026-02-19T13:45:58.390043 | lm_proxy-3.0.2.tar.gz | 27,778 | db/b3/95ff6d8b2bf6d57b06eb370cd0d77ac6dfc6958b2cd12752b16fe2e2f573/lm_proxy-3.0.2.tar.gz | source | sdist | null | false | 38f813a87ea3a1b9fc35a750dd9f96fb | 22473f96f8ee07028ee7741940461322fb6585288396bc02c7f70891b8fe52fb | dbb395ff6d8b2bf6d57b06eb370cd0d77ac6dfc6958b2cd12752b16fe2e2f573 | null | [] | 233 |
2.3 | llm-proxy-server | 3.0.2 | LLM Proxy Server is an OpenAI-compatible http proxy server for inferencing various LLMs capable of working with Google, Anthropic, OpenAI APIs, local PyTorch inference, etc. | <h1 align="center"><a href="#">LLM Proxy Server</a></h1>
<p align="center">
<b>Lightweight, OpenAI-compatible HTTP proxy server / gateway</b><br>unifying access to multiple <b>Large Language Model providers</b> and local inference <br>through a single, standardized API endpoint.
</p>
<p align="center">
<a href="https://pypi.org/project/lm-proxy/"><img src="https://img.shields.io/pypi/v/lm-proxy?color=blue" alt="PyPI"></a>
<a href="https://github.com/Nayjest/lm-proxy/actions/workflows/tests.yml"><img src="https://github.com/Nayjest/lm-proxy/actions/workflows/tests.yml/badge.svg" alt="Tests"></a>
<a href="https://github.com/Nayjest/lm-proxy/actions/workflows/code-style.yml"><img src="https://github.com/Nayjest/lm-proxy/actions/workflows/code-style.yml/badge.svg" alt="Code Style"></a>
<img src="https://raw.githubusercontent.com/Nayjest/lm-proxy/main/coverage.svg" alt="Code Coverage">
<a href="https://www.bestpractices.dev/projects/11364"><img src="https://www.bestpractices.dev/projects/11364/badge"></a>
<a href="https://github.com/Nayjest/lm-proxy/blob/main/LICENSE"><img src="https://img.shields.io/github/license/Nayjest/lm-proxy?color=d08aff" alt="License"></a>
</p>
Built with Python, FastAPI and [MicroCore](https://github.com/Nayjest/ai-microcore), **LLM Proxy Server** seamlessly integrates cloud providers like Google, Anthropic, and OpenAI, as well as local PyTorch-based inference, while maintaining full compatibility with OpenAI's API format.
It works as a drop-in replacement for OpenAI's API, allowing you to switch between cloud providers and local models without modifying your existing client code.
**LLM Proxy Server** supports **real-time token streaming**, **secure Virtual API key management**, and can be used both as an importable Python library and as a standalone HTTP service. Whether you're building production applications or experimenting with different models, LLM Proxy Server eliminates integration complexity and keeps your codebase **provider-agnostic**.
## Table of Contents
- [Overview](#llm-proxy-server)
- [Features](#-features)
- [Getting Started](#-getting-started)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Configuration](#-configuration)
- [Basic Structure](#basic-structure)
- [Environment Variables](#environment-variables)
- [Proxy API Keys vs. Provider API Keys](#-proxy-api-keys-vs-provider-api-keys)
- [API Usage](#-api-usage)
- [Chat Completions Endpoint](#chat-completions-endpoint)
- [Models List Endpoint](#models-list-endpoint)
- [User Groups Configuration](#-user-groups-configuration)
- [Basic Group Definition](#basic-group-definition)
- [Group-based Access Control](#group-based-access-control)
- [Connection Restrictions](#connection-restrictions)
- [Virtual API Key Validation](#virtual-api-key-validation)
- [Advanced Usage](#%EF%B8%8F-advanced-usage)
- [Dynamic Model Routing](#dynamic-model-routing)
- [Load Balancing Example](#load-balancing-example)
- [Google Vertex AI Example](#google-vertex-ai-configuration-example)
- [Using Tokens from OIDC Provider as Virtual/Client API Keys](#using-tokens-from-oidc-provider-as-virtualclient-api-keys)
- [Add-on Components](#-add-on-components)
- [Database Connector](#database-connector)
- [Request Handlers (Middleware)](#-request-handlers--middleware)
- [Guides & Reference](#-guides--reference)
- [Known Limitations](#-known-limitations)
- [Debugging](#-debugging)
- [Contributing](#-contributing)
- [License](#-license)
<a href="#" align="center"><img alt="LLM Proxy Server / Gateway" src="https://raw.githubusercontent.com/Nayjest/lm-proxy/main/press-kit/assets/lm-proxy_1_hacker_1600x672.png"></a>
## ✨ Features<a id="-features"></a>
- **Provider Agnostic**: Connect to OpenAI, Anthropic, Google AI, local models, and more using a single API
- **Unified Interface**: Access all models through the standard OpenAI API format
- **Dynamic Routing**: Route requests to different LLM providers based on model name patterns
- **Stream Support**: Full streaming support for real-time responses
- **API Key Management**: Configurable API key validation and access control
- **Easy Configuration**: Simple TOML/YAML/JSON/Python configuration files for setup
- **Extensible by Design**: Minimal core with clearly defined extension points, enabling seamless customization and expansion without modifying the core system.
## 🚀 Getting Started<a id="-getting-started"></a>
### Requirements
Python 3.11 | 3.12 | 3.13
### Installation<a id="installation"></a>
```bash
pip install llm-proxy-server
```
For proxying to Anthropic API or Google Gemini via Vertex AI or Google AI Studio, install optional dependencies:
```bash
pip install "llm-proxy-server[anthropic,google]"
```
or
```bash
pip install "llm-proxy-server[all]"
```
### Quick Start<a id="quick-start"></a>
#### 1. Create a `config.toml` file:
```toml
host = "0.0.0.0"
port = 8000
[connections]
[connections.openai]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.anthropic]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
[routing]
"gpt*" = "openai.*"
"claude*" = "anthropic.*"
"*" = "openai.gpt-3.5-turbo"
[groups.default]
api_keys = ["YOUR_API_KEY_HERE"]
```
> **Note** ℹ️
> To enhance security, consider storing upstream API keys in operating system environment variables rather than embedding them directly in the configuration file. You can reference these variables in the configuration using the `env:<VAR_NAME>` syntax.
#### 2. Start the server:
```bash
llm-proxy-server
```
Alternatively, run it as a Python module:
```bash
python -m lm_proxy
```
#### 3. Use it with any OpenAI-compatible client:
```python
from openai import OpenAI
client = OpenAI(
    api_key="YOUR_API_KEY_HERE",
    base_url="http://localhost:8000/v1"
)
completion = client.chat.completions.create(
    model="gpt-5",  # This will be routed to OpenAI based on config
    messages=[{"role": "user", "content": "Hello, world!"}]
)
print(completion.choices[0].message.content)
```
Or use the same endpoint with Claude models:
```python
completion = client.chat.completions.create(
    model="claude-opus-4-1-20250805",  # This will be routed to Anthropic based on config
    messages=[{"role": "user", "content": "Hello, world!"}]
)
```
## 📝 Configuration<a id="-configuration"></a>
LLM Proxy Server is configured through a TOML/YAML/JSON/Python file that specifies connections, routing rules, and access control.
### Basic Structure<a id="basic-structure"></a>
```toml
host = "0.0.0.0" # Interface to bind to
port = 8000 # Port to listen on
dev_autoreload = false # Enable for development
# API key validation function (optional)
api_key_check = "lm_proxy.api_key_check.check_api_key_in_config"
# LLM Provider Connections
[connections]
[connections.openai]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.google]
api_type = "google"
api_key = "env:GOOGLE_API_KEY"
[connections.anthropic]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
# Routing rules (model_pattern = "connection.model")
[routing]
"gpt*" = "openai.*" # Route all GPT models to OpenAI
"claude*" = "anthropic.*" # Route all Claude models to Anthropic
"gemini*" = "google.*" # Route all Gemini models to Google
"*" = "openai.gpt-3.5-turbo" # Default fallback
# Access control groups
[groups.default]
api_keys = [
"KEY1",
"KEY2"
]
# optional
[[loggers]]
class = 'lm_proxy.loggers.BaseLogger'
[loggers.log_writer]
class = 'lm_proxy.loggers.log_writers.JsonLogWriter'
file_name = 'storage/json.log'
[loggers.entry_transformer]
class = 'lm_proxy.loggers.LogEntryTransformer'
completion_tokens = "response.usage.completion_tokens"
prompt_tokens = "response.usage.prompt_tokens"
prompt = "request.messages"
response = "response"
group = "group"
connection = "connection"
api_key_id = "api_key_id"
remote_addr = "remote_addr"
created_at = "created_at"
duration = "duration"
```
### Environment Variables<a id="environment-variables"></a>
You can reference environment variables in your configuration file by prefixing values with `env:`.
For example:
```toml
[connections.openai]
api_key = "env:OPENAI_API_KEY"
```
At runtime, LLM Proxy Server automatically retrieves the value of the target variable
(OPENAI_API_KEY) from your operating system's environment or from a .env file, if present.
### .env Files
By default, LLM Proxy Server looks for a `.env` file in the current working directory
and loads environment variables from it.
You can refer to the [.env.template](https://github.com/Nayjest/lm-proxy/blob/main/.env.template)
file for an example:
```dotenv
OPENAI_API_KEY=sk-u........
GOOGLE_API_KEY=AI........
ANTHROPIC_API_KEY=sk-ant-api03--vE........
# "1", "TRUE", "YES", "ON", "ENABLED", "Y", "+" are true, case-insensitive.
# See https://github.com/Nayjest/ai-microcore/blob/v4.4.3/microcore/configuration.py#L36
LM_PROXY_DEBUG=no
```
You can also control `.env` file usage with the `--env` command-line option:
```bash
# Use a custom .env file path
llm-proxy-server --env="path/to/your/.env"
# Disable .env loading
llm-proxy-server --env=""
```
## 🔑 Proxy API Keys vs. Provider API Keys<a id="-proxy-api-keys-vs-provider-api-keys"></a>
LLM Proxy Server utilizes two distinct types of API keys to facilitate secure and efficient request handling.
- **Proxy API Key (Virtual API Key, Client API Key):**
A unique key generated and managed within LLM Proxy Server.
Clients use these keys to authenticate their requests to the proxy's API endpoints.
Each Client API Key is associated with a specific group, which defines the scope of access and permissions for the client's requests.
These keys allow users to securely interact with the proxy without direct access to external service credentials.
- **Provider API Key (Upstream API Key):**
A key provided by external LLM inference providers (e.g., OpenAI, Anthropic, Mistral, etc.) and configured within the LLM Proxy Server.
The proxy uses these keys to authenticate and forward validated client requests to the respective external services.
Provider API Keys remain hidden from end users, ensuring secure and transparent communication with provider APIs.
This distinction ensures a clear separation of concerns:
Virtual API Keys manage user authentication and access within the proxy,
while Upstream API Keys handle secure communication with external providers.
## 🔌 API Usage<a id="-api-usage"></a>
LLM Proxy Server implements the OpenAI chat completions API endpoint. You can use any OpenAI-compatible client to interact with it.
### Chat Completions Endpoint<a id="chat-completions-endpoint"></a>
```http
POST /v1/chat/completions
```
#### Request Format
```json
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"}
  ],
  "temperature": 0.7,
  "stream": false
}
```
#### Response Format
```json
{
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ]
}
```
### Models List Endpoint<a id="models-list-endpoint"></a>
List and describe all models available through the API.
```http
GET /v1/models
```
The **LLM Proxy Server** dynamically builds the models list based on routing rules defined in `config.routing`.
Routing keys can reference both **exact model names** and **model name patterns** (e.g., `"gpt*"`, `"claude*"`, etc.).
By default, wildcard patterns are displayed as-is in the models list (e.g., `"gpt*"`, `"claude*"`).
This behavior can be customized via the `model_listing_mode` configuration option:
```
model_listing_mode = "as_is" | "ignore_wildcards" | "expand_wildcards"
```
Available modes:
- **`as_is`** *(default)* — Lists all entries exactly as defined in the routing configuration, including wildcard patterns.
- **`ignore_wildcards`** — Excludes wildcard patterns, showing only explicitly defined model names.
- **`expand_wildcards`** — Expands wildcard patterns by querying each connected backend for available models *(feature not yet implemented)*.
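The difference between `as_is` and `ignore_wildcards` amounts to a simple filter over the routing keys. The helper below is hypothetical and for illustration only (`expand_wildcards` would additionally query each backend):

```python
def list_models(routing: dict[str, str], mode: str = "as_is") -> list[str]:
    """Derive the /v1/models entries from routing keys (illustrative sketch)."""
    names = list(routing)  # routing keys in declaration order
    if mode == "ignore_wildcards":
        names = [n for n in names if "*" not in n]
    return names

routing = {"gpt*": "openai.*", "gpt-5": "openai.*", "*": "openai.gpt-3.5-turbo"}
print(list_models(routing))                      # → ['gpt*', 'gpt-5', '*']
print(list_models(routing, "ignore_wildcards"))  # → ['gpt-5']
```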
To obtain a complete and accurate model list in the current implementation,
all supported models must be explicitly defined in the routing configuration, for example:
```toml
[routing]
"gpt-4" = "my_openai_connection.*"
"gpt-5" = "my_openai_connection.*"
"gpt-8"= "my_openai_connection.gpt-3.5-turbo"
"claude-4.5-sonnet" = "my_anthropic_connection.claude-sonnet-4-5-20250929"
"claude-4.1-opus" = "my_anthropic_connection.claude-opus-4-1-20250805"
[connections]
[connections.my_openai_connection]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.my_anthropic_connection]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
```
#### Response Format
```json
{
  "object": "list",
  "data": [
    {
      "id": "gpt-6",
      "object": "model",
      "created": 1686935002,
      "owned_by": "organization-owner"
    },
    {
      "id": "claude-5-sonnet",
      "object": "model",
      "created": 1686935002,
      "owned_by": "organization-owner"
    }
  ]
}
```
## 🔒 User Groups Configuration<a id="-user-groups-configuration"></a>
The `[groups]` section in the configuration defines access control rules for different user groups.
Each group can have its own set of virtual API keys and permitted connections.
### Basic Group Definition<a id="basic-group-definition"></a>
```toml
[groups.default]
api_keys = ["KEY1", "KEY2"]
allowed_connections = "*" # Allow access to all connections
```
### Group-based Access Control<a id="group-based-access-control"></a>
You can create multiple groups to segment your users and control their access:
```toml
# Admin group with full access
[groups.admin]
api_keys = ["ADMIN_KEY_1", "ADMIN_KEY_2"]
allowed_connections = "*" # Access to all connections
# Regular users with limited access
[groups.users]
api_keys = ["USER_KEY_1", "USER_KEY_2"]
allowed_connections = "openai,anthropic" # Only allowed to use specific connections
# Free tier with minimal access
[groups.free]
api_keys = ["FREE_KEY_1", "FREE_KEY_2"]
allowed_connections = "openai" # Only allowed to use OpenAI connection
```
### Connection Restrictions<a id="connection-restrictions"></a>
The `allowed_connections` parameter controls which upstream providers a group can access:
- `"*"` - Group can use all configured connections
- `"openai,anthropic"` - Comma-separated list of specific connections the group can use
This allows fine-grained control over which users can access which AI providers, enabling features like:
- Restricting expensive models to premium users
- Creating specialized access tiers for different user groups
- Implementing usage quotas per group
- Billing and cost allocation by user group
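The `allowed_connections` check reduces to a small membership test. This is an illustrative sketch, not the library's code:

```python
def connection_allowed(allowed_connections: str, connection: str) -> bool:
    """Return True if a group may use the given connection."""
    if allowed_connections.strip() == "*":
        return True  # wildcard: every configured connection is permitted
    permitted = {c.strip() for c in allowed_connections.split(",")}
    return connection in permitted

print(connection_allowed("*", "anthropic"))              # → True
print(connection_allowed("openai,anthropic", "google"))  # → False
```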
### Virtual API Key Validation<a id="virtual-api-key-validation"></a>
#### Overview
LLM Proxy Server includes two built-in methods for validating Virtual API keys:
- `lm_proxy.api_key_check.check_api_key_in_config` - verifies API keys against those defined in the config file; used by default
- `lm_proxy.api_key_check.CheckAPIKeyWithRequest` - validates API keys via an external HTTP service
The API key check method can be configured using the `api_key_check` configuration key.
Its value can be either a reference to a Python function in the format `my_module.sub_module1.sub_module2.fn_name`,
or an object containing parameters for a class-based validator.
In the .py config representation, the validator function can be passed directly as a callable.
#### Example configuration for external API key validation using HTTP request to Keycloak / OpenID Connect
This example shows how to validate API keys against an external service (e.g., Keycloak):
```toml
[api_key_check]
class = "lm_proxy.api_key_check.CheckAPIKeyWithRequest"
method = "POST"
url = "http://keycloak:8080/realms/master/protocol/openid-connect/userinfo"
response_as_user_info = true # interpret response JSON as user info object for further processing / logging
use_cache = true # requires installing cachetools if True: pip install cachetools
cache_ttl = 60 # Cache duration in seconds
[api_key_check.headers]
Authorization = "Bearer {api_key}"
```
#### Custom API Key Validation / Extending functionality
For more advanced authentication needs,
you can implement a custom validator function:
```python
# my_validators.py
def validate_api_key(api_key: str) -> str | None:
"""
Validate an API key and return the group name if valid.
Args:
api_key: The API key to validate
Returns:
The name of the group if valid, None otherwise
"""
if api_key == "secret-key":
return "admin"
elif api_key.startswith("user-"):
return "users"
return None
```
Then reference it in your config:
```toml
api_key_check = "my_validators.validate_api_key"
```
> **Note** ℹ️
> In this case, the `api_keys` lists in groups are ignored, and the custom function is responsible for all validation logic.
## 🛠️ Advanced Usage<a id="-advanced-usage"></a>
### Dynamic Model Routing<a id="dynamic-model-routing"></a>
The routing section allows flexible pattern matching with wildcards:
```toml
[routing]
"gpt-4*" = "openai.gpt-4" # Route gpt-4 requests to OpenAI GPT-4
"gpt-3.5*" = "openai.gpt-3.5-turbo" # Route gpt-3.5 requests to OpenAI
"claude*" = "anthropic.*" # Pass model name as-is to Anthropic
"gemini*" = "google.*" # Pass model name as-is to Google
"custom*" = "local.llama-7b" # Map any "custom*" to a specific local model
"*" = "openai.gpt-3.5-turbo" # Default fallback for unmatched models
```
Keys are model name patterns (with `*` wildcard support), and values are connection/model mappings.
Connection names reference those defined in the `[connections]` section.
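A resolution step of this kind can be sketched with `fnmatch`-style matching. This is illustrative only; the library's exact matching rules (e.g., pattern ordering) may differ:

```python
from fnmatch import fnmatch

def resolve_route(routing: dict[str, str], model: str) -> tuple[str, str]:
    """Return (connection, upstream_model) for a requested model name."""
    for pattern, target in routing.items():  # first matching pattern wins
        if fnmatch(model, pattern):
            connection, upstream = target.split(".", 1)
            # "*" on the right-hand side passes the client's model name through
            return connection, model if upstream == "*" else upstream
    raise ValueError(f"no route for model {model!r}")

routing = {"gpt-4*": "openai.gpt-4", "claude*": "anthropic.*", "*": "openai.gpt-3.5-turbo"}
print(resolve_route(routing, "claude-3-opus"))  # → ('anthropic', 'claude-3-opus')
print(resolve_route(routing, "mystery-model"))  # → ('openai', 'gpt-3.5-turbo')
```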
### Load Balancing Example<a id="load-balancing-example"></a>
- [Simple load-balancer configuration](https://github.com/Nayjest/lm-proxy/blob/main/examples/load_balancer_config.py)
This example demonstrates how to set up a load balancer that randomly
distributes requests across multiple language model servers using lm_proxy.
### Google Vertex AI Configuration Example<a id="google-vertex-ai-configuration-example"></a>
- [vertex-ai.toml](https://github.com/Nayjest/lm-proxy/blob/main/examples/vertex-ai.toml)
This example demonstrates how to connect LLM Proxy Server to a Google Gemini model via the Vertex AI API.
### Using Tokens from OIDC Provider as Virtual/Client API Keys<a id="using-tokens-from-oidc-provider-as-virtualclient-api-keys"></a>
You can configure LLM Proxy Server to validate tokens from OpenID Connect (OIDC) providers like Keycloak, Auth0, or Okta as API keys.
The following configuration validates Keycloak access tokens by calling the userinfo endpoint:
```toml
[api_key_check]
class = "lm_proxy.api_key_check.CheckAPIKeyWithRequest"
method = "POST"
url = "http://keycloak:8080/realms/master/protocol/openid-connect/userinfo"
response_as_user_info = true
use_cache = true
cache_ttl = 60
[api_key_check.headers]
Authorization = "Bearer {api_key}"
```
**Configuration Parameters:**
- `class` - The API key validation handler class ([lm_proxy.api_key_check.CheckAPIKeyWithRequest](https://github.com/Nayjest/lm-proxy/blob/main/lm_proxy/api_key_check/with_request.py))
- `method` - HTTP method for the validation request (typically `POST` or `GET`)
- `url` - The OIDC provider's userinfo endpoint URL
- `response_as_user_info` - Parse the response as user information for further usage in LLM Proxy Server (extend logged info, determine user group, etc.)
- `use_cache` - Enable caching of validation results (requires installing the `cachetools` package if enabled: `pip install cachetools`)
- `cache_ttl` - Cache time-to-live in seconds (reduces load on identity provider)
- `headers` - Dictionary of headers to send with the validation request
> **Note**: The `{api_key}` placeholder can be used in headers or in the URL. LLM Proxy Server substitutes it with the API key from the client to perform the check.
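The placeholder substitution amounts to ordinary string formatting, as this illustrative sketch (not the library's code) shows:

```python
def render(template: str, api_key: str) -> str:
    """Substitute the {api_key} placeholder in a header value or URL."""
    return template.format(api_key=api_key)

print(render("Bearer {api_key}", "eyJhbGciOiJSUzI1..."))
# → Bearer eyJhbGciOiJSUzI1...
```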
**Usage:**
Clients pass their OIDC access token as the API key when making requests to LLM Proxy Server.
## 🪝 Request Handlers (Middleware)<a id="-request-handlers--middleware"></a>
Handlers intercept and modify requests *before* they reach the upstream LLM provider. They enable cross-cutting concerns such as rate limiting, logging, auditing, and header manipulation.
Handlers are defined in the `before` list within the configuration file and execute sequentially in the order specified.
### Built-in Handlers
LLM Proxy Server includes several built-in handlers for common operational needs.
#### Rate Limiter
The `RateLimiter` protects upstream credentials and manages traffic load using a sliding window algorithm.
**Parameters:**
| Parameter | Type | Description |
|-----------|------|-------------|
| `max_requests` | int | Maximum number of requests allowed per window |
| `window_seconds` | int | Duration of the sliding window in seconds |
| `per` | string | Scope of the limit: `api_key`, `ip`, `connection`, `group`, or `global` |
**Configuration:**
```toml
[[before]]
class = "lm_proxy.handlers.RateLimiter"
max_requests = 10
window_seconds = 60
per = "api_key"
[[before]]
class = "lm_proxy.handlers.RateLimiter"
max_requests = 1000
window_seconds = 300
per = "global"
```
#### HTTP Headers Forwarder
The `HTTPHeadersForwarder` passes specific headers from incoming client requests to the upstream provider—useful for distributed tracing or tenant context propagation.
Sensitive headers (`Authorization`, `Host`, `Content-Length`) are stripped by default to prevent protocol corruption and credential leaks.
```toml
[[before]]
class = "lm_proxy.handlers.HTTPHeadersForwarder"
white_list_headers = ["x-trace-id", "x-correlation-id", "x-tenant-id"]
```
See also [HTTP Header Management](https://github.com/Nayjest/lm-proxy/blob/main/doc/http_headers.md).
### Custom Handlers
Extend functionality by implementing custom handlers in Python. A handler is any callable (function or class instance) that accepts a `RequestContext`.
#### Interface
```python
from lm_proxy.base_types import RequestContext
async def my_custom_handler(ctx: RequestContext) -> None:
    # Implementation here
    pass
```
#### Example: Audit Logger
```python
# my_extensions.py
import logging
from lm_proxy.base_types import RequestContext
class AuditLogger:
    def __init__(self, prefix: str = "AUDIT"):
        self.prefix = prefix

    async def __call__(self, ctx: RequestContext) -> None:
        user = ctx.user_info.get("name", "anonymous")
        logging.info(f"[{self.prefix}] User '{user}' requested model '{ctx.model}'")
```
**Registration:**
```toml
[[before]]
class = "my_extensions.AuditLogger"
prefix = "SECURITY_AUDIT"
```
## 🧩 Add-on Components<a id="-add-on-components"></a>
### Database Connector<a id="database-connector"></a>
[llm-proxy-server-db-connector](https://github.com/nayjest/lm-proxy-db-connector) is a lightweight SQLAlchemy-based connector that enables LLM Proxy Server to work with relational databases including PostgreSQL, MySQL/MariaDB, SQLite, Oracle, Microsoft SQL Server, and many others.
**Key Features:**
- Configure database connections directly through LLM Proxy Server configuration
- Share database connections across components, extensions, and custom functions
- Built-in database logger for structured logging of AI request data
## 📚 Guides & Reference<a id="-guides--reference"></a>
For more detailed information, check out these articles:
- [HTTP Header Management](https://github.com/Nayjest/lm-proxy/blob/main/doc/http_headers.md)
## 🚧 Known Limitations<a id="-known-limitations"></a>
- **Multiple generations (n > 1):** When proxying requests to Google or Anthropic APIs, only the first generation is returned. Multi-generation support is tracked in [#35](https://github.com/Nayjest/lm-proxy/issues/35).
- **Model listing with wildcards / forwarding actual model metadata:** The `/v1/models` endpoint does not query upstream providers to expand wildcard patterns (e.g., `gpt*`) or fetch model metadata. Only explicitly defined model names are listed (see [#36](https://github.com/Nayjest/lm-proxy/issues/36)).
## 🔍 Debugging<a id="-debugging"></a>
### Overview
When **debugging mode** is enabled,
LLM Proxy Server provides detailed logging information to help diagnose issues:
- Stack traces for exceptions are shown in the console
- Logging level is set to DEBUG instead of INFO
> **Warning** ⚠️
> Never enable debugging mode in production environments, as it may expose sensitive information to the application logs.
### Enabling Debugging Mode
To enable debugging, set the `LM_PROXY_DEBUG` environment variable to a truthy value (e.g., "1", "true", "yes").
> **Tip** 💡
> Environment variables can also be defined in a `.env` file.
Alternatively, you can enable or disable debugging via the command-line arguments:
- `--debug` to enable debugging
- `--no-debug` to disable debugging
> **Note** ℹ️
> CLI arguments override environment variable settings.
## 🤝 Contributing<a id="-contributing"></a>
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## 📄 License<a id="-license"></a>
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
© 2025–2026 [Vitalii Stepanenko](mailto:mail@vitaliy.in)
| text/markdown | Vitalii Stepanenko | mail@vitaliy.in | Vitalii Stepanenko | mail@vitaliy.in | MIT License
Copyright (c) 2025–2026 Vitalii Stepanenko
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | llm, large language models, ai, gpt, openai, proxy, http, proxy-server, llm gateway, openai, anthropic, google genai | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: MIT License",
"Operating System :: O... | [] | null | null | <4,>=3.11 | [] | [] | [] | [
"ai-microcore<6,>=5.1.2",
"anthropic<1,>=0.77; extra == \"all\"",
"anthropic<1,>=0.77; extra == \"anthropic\"",
"fastapi<1,>=0.121.3",
"google-genai<2,>=1.62.0; extra == \"all\"",
"google-genai<2,>=1.62.0; extra == \"google\"",
"pydantic<2.13.0,>=2.12.5",
"pytest<8.5.0,>=8.4.2; extra == \"test\"",
"... | [] | [] | [] | [
"Bug Tracker, https://github.com/Nayjest/lm-proxy/issues",
"Source Code, https://github.com/Nayjest/lm-proxy"
] | twine/6.2.0 CPython/3.13.1 | 2026-02-19T13:45:57.239843 | llm_proxy_server-3.0.2.tar.gz | 27,975 | 19/f3/df7bcb04a715c5c9d1f77cc6c5af6cf079bf0d267f58e2b5400f64836940/llm_proxy_server-3.0.2.tar.gz | source | sdist | null | false | 7b8ff3ddf2edf82e9d310e0a1fcda115 | f8fee513e3fb72c0a08b6fafe9d73926807d9c143aaef7427c54ca36148d8b55 | 19f3df7bcb04a715c5c9d1f77cc6c5af6cf079bf0d267f58e2b5400f64836940 | null | [] | 226 |
2.3 | inference-proxy | 3.0.2 | Inference Proxy is an OpenAI-compatible http proxy server for inferencing various LLMs capable of working with Google, Anthropic, OpenAI APIs, local PyTorch inference, etc. | <h1 align="center"><a href="#">Inference Proxy</a></h1>
<p align="center">
<b>Lightweight, OpenAI-compatible HTTP proxy server / gateway</b><br>unifying access to multiple <b>Large Language Model providers</b> and local inference <br>through a single, standardized API endpoint.
</p>
<p align="center">
<a href="https://pypi.org/project/lm-proxy/"><img src="https://img.shields.io/pypi/v/lm-proxy?color=blue" alt="PyPI"></a>
<a href="https://github.com/Nayjest/lm-proxy/actions/workflows/tests.yml"><img src="https://github.com/Nayjest/lm-proxy/actions/workflows/tests.yml/badge.svg" alt="Tests"></a>
<a href="https://github.com/Nayjest/lm-proxy/actions/workflows/code-style.yml"><img src="https://github.com/Nayjest/lm-proxy/actions/workflows/code-style.yml/badge.svg" alt="Code Style"></a>
<img src="https://raw.githubusercontent.com/Nayjest/lm-proxy/main/coverage.svg" alt="Code Coverage">
<a href="https://www.bestpractices.dev/projects/11364"><img src="https://www.bestpractices.dev/projects/11364/badge"></a>
<a href="https://github.com/Nayjest/lm-proxy/blob/main/LICENSE"><img src="https://img.shields.io/github/license/Nayjest/lm-proxy?color=d08aff" alt="License"></a>
</p>
Built with Python, FastAPI and [MicroCore](https://github.com/Nayjest/ai-microcore), **Inference Proxy** seamlessly integrates cloud providers like Google, Anthropic, and OpenAI, as well as local PyTorch-based inference, while maintaining full compatibility with OpenAI's API format.
It works as a drop-in replacement for OpenAI's API, allowing you to switch between cloud providers and local models without modifying your existing client code.
**Inference Proxy** supports **real-time token streaming**, **secure Virtual API key management**, and can be used both as an importable Python library and as a standalone HTTP service. Whether you're building production applications or experimenting with different models, Inference Proxy eliminates integration complexity and keeps your codebase **provider-agnostic**.
## Table of Contents
- [Overview](#inference-proxy)
- [Features](#-features)
- [Getting Started](#-getting-started)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Configuration](#-configuration)
- [Basic Structure](#basic-structure)
- [Environment Variables](#environment-variables)
- [Proxy API Keys vs. Provider API Keys](#-proxy-api-keys-vs-provider-api-keys)
- [API Usage](#-api-usage)
- [Chat Completions Endpoint](#chat-completions-endpoint)
- [Models List Endpoint](#models-list-endpoint)
- [User Groups Configuration](#-user-groups-configuration)
- [Basic Group Definition](#basic-group-definition)
- [Group-based Access Control](#group-based-access-control)
- [Connection Restrictions](#connection-restrictions)
- [Virtual API Key Validation](#virtual-api-key-validation)
- [Advanced Usage](#%EF%B8%8F-advanced-usage)
- [Dynamic Model Routing](#dynamic-model-routing)
- [Load Balancing Example](#load-balancing-example)
- [Google Vertex AI Example](#google-vertex-ai-configuration-example)
- [Using Tokens from OIDC Provider as Virtual/Client API Keys](#using-tokens-from-oidc-provider-as-virtualclient-api-keys)
- [Add-on Components](#-add-on-components)
- [Database Connector](#database-connector)
- [Request Handlers (Middleware)](#-request-handlers--middleware)
- [Guides & Reference](#-guides--reference)
- [Known Limitations](#-known-limitations)
- [Debugging](#-debugging)
- [Contributing](#-contributing)
- [License](#-license)
<a href="#" align="center"><img alt="Inference Proxy / Gateway" src="https://raw.githubusercontent.com/Nayjest/lm-proxy/main/press-kit/assets/lm-proxy_1_hacker_1600x672.png"></a>
## ✨ Features<a id="-features"></a>
- **Provider Agnostic**: Connect to OpenAI, Anthropic, Google AI, local models, and more using a single API
- **Unified Interface**: Access all models through the standard OpenAI API format
- **Dynamic Routing**: Route requests to different LLM providers based on model name patterns
- **Stream Support**: Full streaming support for real-time responses
- **API Key Management**: Configurable API key validation and access control
- **Easy Configuration**: Simple TOML/YAML/JSON/Python configuration files for setup
- **Extensible by Design**: Minimal core with clearly defined extension points, enabling seamless customization and expansion without modifying the core system.
## 🚀 Getting Started<a id="-getting-started"></a>
### Requirements
Python 3.11 | 3.12 | 3.13
### Installation<a id="installation"></a>
```bash
pip install inference-proxy
```
For proxying to the Anthropic API, or to Google Gemini via Vertex AI or Google AI Studio, install the optional dependencies:
```bash
pip install inference-proxy[anthropic,google]
```
or
```bash
pip install inference-proxy[all]
```
### Quick Start<a id="quick-start"></a>
#### 1. Create a `config.toml` file:
```toml
host = "0.0.0.0"
port = 8000
[connections]
[connections.openai]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.anthropic]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
[routing]
"gpt*" = "openai.*"
"claude*" = "anthropic.*"
"*" = "openai.gpt-3.5-turbo"
[groups.default]
api_keys = ["YOUR_API_KEY_HERE"]
```
> **Note** ℹ️
> To enhance security, store upstream API keys in operating system environment variables rather than embedding them directly in the configuration file. You can reference these variables in the configuration using the `env:<VAR_NAME>` syntax.
#### 2. Start the server:
```bash
inference-proxy
```
Alternatively, run it as a Python module:
```bash
python -m lm_proxy
```
#### 3. Use it with any OpenAI-compatible client:
```python
from openai import OpenAI
client = OpenAI(
api_key="YOUR_API_KEY_HERE",
base_url="http://localhost:8000/v1"
)
completion = client.chat.completions.create(
model="gpt-5", # This will be routed to OpenAI based on config
messages=[{"role": "user", "content": "Hello, world!"}]
)
print(completion.choices[0].message.content)
```
Or use the same endpoint with Claude models:
```python
completion = client.chat.completions.create(
model="claude-opus-4-1-20250805", # This will be routed to Anthropic based on config
messages=[{"role": "user", "content": "Hello, world!"}]
)
```
## 📝 Configuration<a id="-configuration"></a>
Inference Proxy is configured through a TOML/YAML/JSON/Python file that specifies connections, routing rules, and access control.
### Basic Structure<a id="basic-structure"></a>
```toml
host = "0.0.0.0" # Interface to bind to
port = 8000 # Port to listen on
dev_autoreload = false # Enable for development
# API key validation function (optional)
api_key_check = "lm_proxy.api_key_check.check_api_key_in_config"
# LLM Provider Connections
[connections]
[connections.openai]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.google]
api_type = "google"
api_key = "env:GOOGLE_API_KEY"
[connections.anthropic]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
# Routing rules (model_pattern = "connection.model")
[routing]
"gpt*" = "openai.*" # Route all GPT models to OpenAI
"claude*" = "anthropic.*" # Route all Claude models to Anthropic
"gemini*" = "google.*" # Route all Gemini models to Google
"*" = "openai.gpt-3.5-turbo" # Default fallback
# Access control groups
[groups.default]
api_keys = [
"KEY1",
"KEY2"
]
# optional
[[loggers]]
class = 'lm_proxy.loggers.BaseLogger'
[loggers.log_writer]
class = 'lm_proxy.loggers.log_writers.JsonLogWriter'
file_name = 'storage/json.log'
[loggers.entry_transformer]
class = 'lm_proxy.loggers.LogEntryTransformer'
completion_tokens = "response.usage.completion_tokens"
prompt_tokens = "response.usage.prompt_tokens"
prompt = "request.messages"
response = "response"
group = "group"
connection = "connection"
api_key_id = "api_key_id"
remote_addr = "remote_addr"
created_at = "created_at"
duration = "duration"
```
### Environment Variables<a id="environment-variables"></a>
You can reference environment variables in your configuration file by prefixing values with `env:`.
For example:
```toml
[connections.openai]
api_key = "env:OPENAI_API_KEY"
```
At runtime, Inference Proxy automatically retrieves the value of the target variable
(OPENAI_API_KEY) from your operating system's environment or from a .env file, if present.
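Conceptually, the `env:` expansion is a small lookup step performed while loading the config. The sketch below is illustrative only (the function name `resolve_value` is hypothetical, not lm_proxy's actual API):

```python
import os

def resolve_value(value: str) -> str:
    """Expand the `env:` prefix from the environment; other values pass through."""
    if value.startswith("env:"):
        var_name = value[len("env:"):]
        resolved = os.environ.get(var_name)
        if resolved is None:
            raise KeyError(f"Environment variable {var_name!r} is not set")
        return resolved
    return value

os.environ["OPENAI_API_KEY"] = "sk-demo"  # placeholder value, for illustration only
print(resolve_value("env:OPENAI_API_KEY"))          # → sk-demo
print(resolve_value("https://api.openai.com/v1/"))  # passes through unchanged
```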
### .env Files
By default, Inference Proxy looks for a `.env` file in the current working directory
and loads environment variables from it.
You can refer to the [.env.template](https://github.com/Nayjest/lm-proxy/blob/main/.env.template)
file for an example:
```dotenv
OPENAI_API_KEY=sk-u........
GOOGLE_API_KEY=AI........
ANTHROPIC_API_KEY=sk-ant-api03--vE........
# "1", "TRUE", "YES", "ON", "ENABLED", "Y", "+" are true, case-insensitive.
# See https://github.com/Nayjest/ai-microcore/blob/v4.4.3/microcore/configuration.py#L36
LM_PROXY_DEBUG=no
```
You can also control `.env` file usage with the `--env` command-line option:
```bash
# Use a custom .env file path
inference-proxy --env="path/to/your/.env"
# Disable .env loading
inference-proxy --env=""
```
## 🔑 Proxy API Keys vs. Provider API Keys<a id="-proxy-api-keys-vs-provider-api-keys"></a>
Inference Proxy uses two distinct types of API keys for secure and efficient request handling.
- **Proxy API Key (Virtual API Key, Client API Key):**
A unique key generated and managed within Inference Proxy.
Clients use these keys to authenticate their requests to the proxy's API endpoints.
Each Client API Key is associated with a specific group, which defines the scope of access and permissions for the client's requests.
These keys allow users to securely interact with the proxy without direct access to external service credentials.
- **Provider API Key (Upstream API Key):**
A key provided by external LLM inference providers (e.g., OpenAI, Anthropic, Mistral, etc.) and configured within the Inference Proxy.
The proxy uses these keys to authenticate and forward validated client requests to the respective external services.
Provider API Keys remain hidden from end users, ensuring secure and transparent communication with provider APIs.
This distinction ensures a clear separation of concerns:
Virtual API Keys manage user authentication and access within the proxy,
while Upstream API Keys handle secure communication with external providers.
## 🔌 API Usage<a id="-api-usage"></a>
Inference Proxy implements the OpenAI chat completions API endpoint. You can use any OpenAI-compatible client to interact with it.
### Chat Completions Endpoint<a id="chat-completions-endpoint"></a>
```http
POST /v1/chat/completions
```
#### Request Format
```json
{
"model": "gpt-3.5-turbo",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is the capital of France?"}
],
"temperature": 0.7,
"stream": false
}
```
#### Response Format
```json
{
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "The capital of France is Paris."
},
"finish_reason": "stop"
}
]
}
```
### Models List Endpoint<a id="models-list-endpoint"></a>
List and describe all models available through the API.
```http
GET /v1/models
```
The **Inference Proxy** dynamically builds the models list based on routing rules defined in `config.routing`.
Routing keys can reference both **exact model names** and **model name patterns** (e.g., `"gpt*"`, `"claude*"`, etc.).
By default, wildcard patterns are displayed as-is in the models list (e.g., `"gpt*"`, `"claude*"`).
This behavior can be customized via the `model_listing_mode` configuration option:
```toml
model_listing_mode = "as_is"  # or "ignore_wildcards" / "expand_wildcards"
```
Available modes:
- **`as_is`** *(default)* — Lists all entries exactly as defined in the routing configuration, including wildcard patterns.
- **`ignore_wildcards`** — Excludes wildcard patterns, showing only explicitly defined model names.
- **`expand_wildcards`** — Expands wildcard patterns by querying each connected backend for available models *(feature not yet implemented)*.
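The effect of the first two modes can be pictured as a simple filter over the routing keys. This is a hedged sketch, not lm_proxy's implementation (the helper name `models_for_listing` is invented for illustration):

```python
def models_for_listing(routing_keys: list[str], mode: str = "as_is") -> list[str]:
    """Build the /v1/models id list from routing keys (illustrative sketch)."""
    if mode == "as_is":
        return list(routing_keys)
    if mode == "ignore_wildcards":
        # Keep only explicitly named models; drop wildcard patterns.
        return [key for key in routing_keys if "*" not in key]
    raise NotImplementedError("expand_wildcards is not yet implemented")

keys = ["gpt*", "claude-4.5-sonnet", "gemini*"]
print(models_for_listing(keys))                      # → ['gpt*', 'claude-4.5-sonnet', 'gemini*']
print(models_for_listing(keys, "ignore_wildcards"))  # → ['claude-4.5-sonnet']
```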
To obtain a complete and accurate model list in the current implementation,
all supported models must be explicitly defined in the routing configuration, for example:
```toml
[routing]
"gpt-4" = "my_openai_connection.*"
"gpt-5" = "my_openai_connection.*"
"gpt-8" = "my_openai_connection.gpt-3.5-turbo"
"claude-4.5-sonnet" = "my_anthropic_connection.claude-sonnet-4-5-20250929"
"claude-4.1-opus" = "my_anthropic_connection.claude-opus-4-1-20250805"
[connections]
[connections.my_openai_connection]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.my_anthropic_connection]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
```
#### Response Format
```json
{
"object": "list",
"data": [
{
"id": "gpt-6",
"object": "model",
"created": 1686935002,
"owned_by": "organization-owner"
},
{
"id": "claude-5-sonnet",
"object": "model",
"created": 1686935002,
"owned_by": "organization-owner"
}
]
}
```
## 🔒 User Groups Configuration<a id="-user-groups-configuration"></a>
The `[groups]` section in the configuration defines access control rules for different user groups.
Each group can have its own set of virtual API keys and permitted connections.
### Basic Group Definition<a id="basic-group-definition"></a>
```toml
[groups.default]
api_keys = ["KEY1", "KEY2"]
allowed_connections = "*" # Allow access to all connections
```
### Group-based Access Control<a id="group-based-access-control"></a>
You can create multiple groups to segment your users and control their access:
```toml
# Admin group with full access
[groups.admin]
api_keys = ["ADMIN_KEY_1", "ADMIN_KEY_2"]
allowed_connections = "*" # Access to all connections
# Regular users with limited access
[groups.users]
api_keys = ["USER_KEY_1", "USER_KEY_2"]
allowed_connections = "openai,anthropic" # Only allowed to use specific connections
# Free tier with minimal access
[groups.free]
api_keys = ["FREE_KEY_1", "FREE_KEY_2"]
allowed_connections = "openai" # Only allowed to use OpenAI connection
```
### Connection Restrictions<a id="connection-restrictions"></a>
The `allowed_connections` parameter controls which upstream providers a group can access:
- `"*"` - Group can use all configured connections
- `"openai,anthropic"` - Comma-separated list of specific connections the group can use
This allows fine-grained control over which users can access which AI providers, enabling features like:
- Restricting expensive models to premium users
- Creating specialized access tiers for different user groups
- Implementing usage quotas per group
- Billing and cost allocation by user group
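The `allowed_connections` check itself reduces to a membership test against either the `"*"` wildcard or the comma-separated list. The following sketch assumes the config format shown above; the function name `connection_allowed` is hypothetical:

```python
def connection_allowed(allowed_connections: str, connection: str) -> bool:
    """Check whether a group may use the given connection name."""
    if allowed_connections.strip() == "*":
        return True
    # Split the comma-separated list, tolerating whitespace around names.
    allowed = {name.strip() for name in allowed_connections.split(",")}
    return connection in allowed

print(connection_allowed("*", "anthropic"))              # → True
print(connection_allowed("openai,anthropic", "google"))  # → False
```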
### Virtual API Key Validation<a id="virtual-api-key-validation"></a>
#### Overview
Inference Proxy includes two built-in methods for validating Virtual API keys:
- `lm_proxy.api_key_check.check_api_key_in_config` - verifies API keys against those defined in the config file; used by default
- `lm_proxy.api_key_check.CheckAPIKeyWithRequest` - validates API keys via an external HTTP service
The API key check method can be configured using the `api_key_check` configuration key.
Its value can be either a reference to a Python function in the format `my_module.sub_module1.sub_module2.fn_name`,
or an object containing parameters for a class-based validator.
In a Python (`.py`) config, the validator function can be passed directly as a callable.
#### Example configuration for external API key validation using HTTP request to Keycloak / OpenID Connect
This example shows how to validate API keys against an external service (e.g., Keycloak):
```toml
[api_key_check]
class = "lm_proxy.api_key_check.CheckAPIKeyWithRequest"
method = "POST"
url = "http://keycloak:8080/realms/master/protocol/openid-connect/userinfo"
response_as_user_info = true # interpret response JSON as user info object for further processing / logging
use_cache = true # requires installing cachetools if True: pip install cachetools
cache_ttl = 60 # Cache duration in seconds
[api_key_check.headers]
Authorization = "Bearer {api_key}"
```
#### Custom API Key Validation / Extending functionality
For more advanced authentication needs,
you can implement a custom validator function:
```python
# my_validators.py
def validate_api_key(api_key: str) -> str | None:
"""
Validate an API key and return the group name if valid.
Args:
api_key: The API key to validate
Returns:
The name of the group if valid, None otherwise
"""
if api_key == "secret-key":
return "admin"
elif api_key.startswith("user-"):
return "users"
return None
```
Then reference it in your config:
```toml
api_key_check = "my_validators.validate_api_key"
```
> **NOTE**
> In this case, the `api_keys` lists in groups are ignored, and the custom function is responsible for all validation logic.
## 🛠️ Advanced Usage<a id="-advanced-usage"></a>
### Dynamic Model Routing<a id="dynamic-model-routing"></a>
The routing section allows flexible pattern matching with wildcards:
```toml
[routing]
"gpt-4*" = "openai.gpt-4" # Route gpt-4 requests to OpenAI GPT-4
"gpt-3.5*" = "openai.gpt-3.5-turbo" # Route gpt-3.5 requests to OpenAI
"claude*" = "anthropic.*" # Pass model name as-is to Anthropic
"gemini*" = "google.*" # Pass model name as-is to Google
"custom*" = "local.llama-7b" # Map any "custom*" to a specific local model
"*" = "openai.gpt-3.5-turbo" # Default fallback for unmatched models
```
Keys are model name patterns (with `*` wildcard support), and values are connection/model mappings.
Connection names reference those defined in the `[connections]` section.
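The matching behavior can be sketched with Python's `fnmatch`. This is an illustrative reimplementation, not lm_proxy's actual routing code, and first-match-wins ordering is an assumption here:

```python
from fnmatch import fnmatch

ROUTING = {
    "gpt-4*": "openai.gpt-4",
    "claude*": "anthropic.*",
    "*": "openai.gpt-3.5-turbo",
}

def resolve_route(model: str) -> tuple[str, str]:
    """Return (connection, upstream_model) for a requested model name.

    A `*` on the right-hand side forwards the client's model name unchanged.
    """
    for pattern, target in ROUTING.items():
        if fnmatch(model, pattern):
            connection, _, upstream = target.partition(".")
            return connection, (model if upstream == "*" else upstream)
    raise ValueError(f"no route for {model!r}")

print(resolve_route("claude-3-haiku"))  # → ('anthropic', 'claude-3-haiku')
print(resolve_route("gpt-4o"))          # → ('openai', 'gpt-4')
print(resolve_route("mistral-large"))   # → ('openai', 'gpt-3.5-turbo')
```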
### Load Balancing Example<a id="load-balancing-example"></a>
- [Simple load-balancer configuration](https://github.com/Nayjest/lm-proxy/blob/main/examples/load_balancer_config.py)
This example demonstrates how to set up a load balancer that randomly distributes requests across multiple language-model servers using lm_proxy.
### Google Vertex AI Configuration Example<a id="google-vertex-ai-configuration-example"></a>
- [vertex-ai.toml](https://github.com/Nayjest/lm-proxy/blob/main/examples/vertex-ai.toml)
This example demonstrates how to connect Inference Proxy to a Google Gemini model via the Vertex AI API.
### Using Tokens from OIDC Provider as Virtual/Client API Keys<a id="using-tokens-from-oidc-provider-as-virtualclient-api-keys"></a>
You can configure Inference Proxy to validate tokens from OpenID Connect (OIDC) providers like Keycloak, Auth0, or Okta as API keys.
The following configuration validates Keycloak access tokens by calling the userinfo endpoint:
```toml
[api_key_check]
class = "lm_proxy.api_key_check.CheckAPIKeyWithRequest"
method = "POST"
url = "http://keycloak:8080/realms/master/protocol/openid-connect/userinfo"
response_as_user_info = true
use_cache = true
cache_ttl = 60
[api_key_check.headers]
Authorization = "Bearer {api_key}"
```
**Configuration Parameters:**
- `class` - The API key validation handler class ([lm_proxy.api_key_check.CheckAPIKeyWithRequest](https://github.com/Nayjest/lm-proxy/blob/main/lm_proxy/api_key_check/with_request.py))
- `method` - HTTP method for the validation request (typically `POST` or `GET`)
- `url` - The OIDC provider's userinfo endpoint URL
- `response_as_user_info` - Parse the response as user information for further usage in Inference Proxy (extend logged info, determine user group, etc.)
- `use_cache` - Enable caching of validation results (requires installing the `cachetools` package if enabled: `pip install cachetools`)
- `cache_ttl` - Cache time-to-live in seconds (reduces load on identity provider)
- `headers` - Dictionary of headers to send with the validation request
> **Note**: The `{api_key}` placeholder can be used in headers or in the URL. Inference Proxy substitutes it with the API key from the client to perform the check.
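The placeholder substitution amounts to a simple string-formatting pass over the configured headers. A minimal sketch, assuming `str.format`-style templating (the helper name `render_headers` is invented for illustration):

```python
def render_headers(header_templates: dict[str, str], api_key: str) -> dict[str, str]:
    """Substitute the {api_key} placeholder in configured validation headers."""
    return {name: value.format(api_key=api_key) for name, value in header_templates.items()}

templates = {"Authorization": "Bearer {api_key}"}
print(render_headers(templates, "sk-client-123"))  # → {'Authorization': 'Bearer sk-client-123'}
```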
**Usage:**
Clients pass their OIDC access token as the API key when making requests to Inference Proxy.
## 🪝 Request Handlers (Middleware)<a id="-request-handlers--middleware"></a>
Handlers intercept and modify requests *before* they reach the upstream LLM provider. They enable cross-cutting concerns such as rate limiting, logging, auditing, and header manipulation.
Handlers are defined in the `before` list within the configuration file and execute sequentially in the order specified.
### Built-in Handlers
Inference Proxy includes several built-in handlers for common operational needs.
#### Rate Limiter
The `RateLimiter` protects upstream credentials and manages traffic load using a sliding window algorithm.
**Parameters:**
| Parameter | Type | Description |
|-----------|------|-------------|
| `max_requests` | int | Maximum number of requests allowed per window |
| `window_seconds` | int | Duration of the sliding window in seconds |
| `per` | string | Scope of the limit: `api_key`, `ip`, `connection`, `group`, or `global` |
**Configuration:**
```toml
[[before]]
class = "lm_proxy.handlers.RateLimiter"
max_requests = 10
window_seconds = 60
per = "api_key"
[[before]]
class = "lm_proxy.handlers.RateLimiter"
max_requests = 1000
window_seconds = 300
per = "global"
```
#### HTTP Headers Forwarder
The `HTTPHeadersForwarder` passes specific headers from incoming client requests to the upstream provider—useful for distributed tracing or tenant context propagation.
Sensitive headers (`Authorization`, `Host`, `Content-Length`) are stripped by default to prevent protocol corruption and credential leaks.
```toml
[[before]]
class = "lm_proxy.handlers.HTTPHeadersForwarder"
white_list_headers = ["x-trace-id", "x-correlation-id", "x-tenant-id"]
```
See also [HTTP Header Management](https://github.com/Nayjest/lm-proxy/blob/main/doc/http_headers.md).
### Custom Handlers
Extend functionality by implementing custom handlers in Python. A handler is any callable (function or class instance) that accepts a `RequestContext`.
#### Interface
```python
from lm_proxy.base_types import RequestContext
async def my_custom_handler(ctx: RequestContext) -> None:
# Implementation here
pass
```
#### Example: Audit Logger
```python
# my_extensions.py
import logging
from lm_proxy.base_types import RequestContext
class AuditLogger:
def __init__(self, prefix: str = "AUDIT"):
self.prefix = prefix
async def __call__(self, ctx: RequestContext) -> None:
user = ctx.user_info.get("name", "anonymous")
logging.info(f"[{self.prefix}] User '{user}' requested model '{ctx.model}'")
```
**Registration:**
```toml
[[before]]
class = "my_extensions.AuditLogger"
prefix = "SECURITY_AUDIT"
```
## 🧩 Add-on Components<a id="-add-on-components"></a>
### Database Connector<a id="database-connector"></a>
[inference-proxy-db-connector](https://github.com/nayjest/lm-proxy-db-connector) is a lightweight SQLAlchemy-based connector that enables Inference Proxy to work with relational databases including PostgreSQL, MySQL/MariaDB, SQLite, Oracle, Microsoft SQL Server, and many others.
**Key Features:**
- Configure database connections directly through Inference Proxy configuration
- Share database connections across components, extensions, and custom functions
- Built-in database logger for structured logging of AI request data
## 📚 Guides & Reference<a id="-guides--reference"></a>
For more detailed information, check out these articles:
- [HTTP Header Management](https://github.com/Nayjest/lm-proxy/blob/main/doc/http_headers.md)
## 🚧 Known Limitations<a id="-known-limitations"></a>
- **Multiple generations (n > 1):** When proxying requests to Google or Anthropic APIs, only the first generation is returned. Multi-generation support is tracked in [#35](https://github.com/Nayjest/lm-proxy/issues/35).
- **Model listing with wildcards / forwarding actual model metadata:** The `/v1/models` endpoint does not query upstream providers to expand wildcard patterns (e.g., `gpt*`) or fetch model metadata. Only explicitly defined model names are listed [#36](https://github.com/Nayjest/lm-proxy/issues/36).
## 🔍 Debugging<a id="-debugging"></a>
### Overview
When **debugging mode** is enabled,
Inference Proxy provides detailed logging information to help diagnose issues:
- Stack traces for exceptions are shown in the console
- Logging level is set to DEBUG instead of INFO
> **Warning** ⚠️
> Never enable debugging mode in production environments, as it may expose sensitive information to the application logs.
### Enabling Debugging Mode
To enable debugging, set the `LM_PROXY_DEBUG` environment variable to a truthy value (e.g., "1", "true", "yes").
> **Tip** 💡
> Environment variables can also be defined in a `.env` file.
Alternatively, you can enable or disable debugging via the command-line arguments:
- `--debug` to enable debugging
- `--no-debug` to disable debugging
> **Note** ℹ️
> CLI arguments override environment variable settings.
## 🤝 Contributing<a id="-contributing"></a>
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## 📄 License<a id="-license"></a>
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
© 2025–2026 [Vitalii Stepanenko](mailto:mail@vitaliy.in)
| text/markdown | Vitalii Stepanenko | mail@vitaliy.in | Vitalii Stepanenko | mail@vitaliy.in | MIT License
Copyright (c) 2025–2026 Vitalii Stepanenko
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | llm, large language models, ai, gpt, openai, proxy, http, proxy-server, llm gateway, openai, anthropic, google genai | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: MIT License",
"Operating System :: O... | [] | null | null | <4,>=3.11 | [] | [] | [] | [
"ai-microcore<6,>=5.1.2",
"anthropic<1,>=0.77; extra == \"all\"",
"anthropic<1,>=0.77; extra == \"anthropic\"",
"fastapi<1,>=0.121.3",
"google-genai<2,>=1.62.0; extra == \"all\"",
"google-genai<2,>=1.62.0; extra == \"google\"",
"pydantic<2.13.0,>=2.12.5",
"pytest<8.5.0,>=8.4.2; extra == \"test\"",
"... | [] | [] | [] | [
"Bug Tracker, https://github.com/Nayjest/lm-proxy/issues",
"Source Code, https://github.com/Nayjest/lm-proxy"
] | twine/6.2.0 CPython/3.13.1 | 2026-02-19T13:45:55.566336 | inference_proxy-3.0.2.tar.gz | 27,974 | 33/b6/eeaca405e97d0a8a51c8421b542e6aca821723d18d75f90f294e1ef3a01d/inference_proxy-3.0.2.tar.gz | source | sdist | null | false | f9a927d75f5aaa653282cf2efc230155 | 725092977cfc72879113a6961ce83c01e64d934c3cd8984340d63d0787f03e1c | 33b6eeaca405e97d0a8a51c8421b542e6aca821723d18d75f90f294e1ef3a01d | null | [] | 233 |
2.3 | ai-proxy-server | 3.0.2 | AI Proxy Server is an OpenAI-compatible http proxy server for inferencing various LLMs capable of working with Google, Anthropic, OpenAI APIs, local PyTorch inference, etc. | <h1 align="center"><a href="#">AI Proxy Server</a></h1>
<p align="center">
<b>Lightweight, OpenAI-compatible HTTP proxy server / gateway</b><br>unifying access to multiple <b>Large Language Model providers</b> and local inference <br>through a single, standardized API endpoint.
</p>
<p align="center">
<a href="https://pypi.org/project/lm-proxy/"><img src="https://img.shields.io/pypi/v/lm-proxy?color=blue" alt="PyPI"></a>
<a href="https://github.com/Nayjest/lm-proxy/actions/workflows/tests.yml"><img src="https://github.com/Nayjest/lm-proxy/actions/workflows/tests.yml/badge.svg" alt="Tests"></a>
<a href="https://github.com/Nayjest/lm-proxy/actions/workflows/code-style.yml"><img src="https://github.com/Nayjest/lm-proxy/actions/workflows/code-style.yml/badge.svg" alt="Code Style"></a>
<img src="https://raw.githubusercontent.com/Nayjest/lm-proxy/main/coverage.svg" alt="Code Coverage">
<a href="https://www.bestpractices.dev/projects/11364"><img src="https://www.bestpractices.dev/projects/11364/badge"></a>
<a href="https://github.com/Nayjest/lm-proxy/blob/main/LICENSE"><img src="https://img.shields.io/github/license/Nayjest/lm-proxy?color=d08aff" alt="License"></a>
</p>
Built with Python, FastAPI and [MicroCore](https://github.com/Nayjest/ai-microcore), **AI Proxy Server** seamlessly integrates cloud providers like Google, Anthropic, and OpenAI, as well as local PyTorch-based inference, while maintaining full compatibility with OpenAI's API format.
It works as a drop-in replacement for OpenAI's API, allowing you to switch between cloud providers and local models without modifying your existing client code.
**AI Proxy Server** supports **real-time token streaming**, **secure Virtual API key management**, and can be used both as an importable Python library and as a standalone HTTP service. Whether you're building production applications or experimenting with different models, AI Proxy Server eliminates integration complexity and keeps your codebase **provider-agnostic**.
## Table of Contents
- [Overview](#ai-proxy-server)
- [Features](#-features)
- [Getting Started](#-getting-started)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Configuration](#-configuration)
- [Basic Structure](#basic-structure)
- [Environment Variables](#environment-variables)
- [Proxy API Keys vs. Provider API Keys](#-proxy-api-keys-vs-provider-api-keys)
- [API Usage](#-api-usage)
- [Chat Completions Endpoint](#chat-completions-endpoint)
- [Models List Endpoint](#models-list-endpoint)
- [User Groups Configuration](#-user-groups-configuration)
- [Basic Group Definition](#basic-group-definition)
- [Group-based Access Control](#group-based-access-control)
- [Connection Restrictions](#connection-restrictions)
- [Virtual API Key Validation](#virtual-api-key-validation)
- [Advanced Usage](#%EF%B8%8F-advanced-usage)
- [Dynamic Model Routing](#dynamic-model-routing)
- [Load Balancing Example](#load-balancing-example)
- [Google Vertex AI Example](#google-vertex-ai-configuration-example)
- [Using Tokens from OIDC Provider as Virtual/Client API Keys](#using-tokens-from-oidc-provider-as-virtualclient-api-keys)
- [Add-on Components](#-add-on-components)
- [Database Connector](#database-connector)
- [Request Handlers (Middleware)](#-request-handlers--middleware)
- [Guides & Reference](#-guides--reference)
- [Known Limitations](#-known-limitations)
- [Debugging](#-debugging)
- [Contributing](#-contributing)
- [License](#-license)
<a href="#" align="center"><img alt="AI Proxy Server / Gateway" src="https://raw.githubusercontent.com/Nayjest/lm-proxy/main/press-kit/assets/lm-proxy_1_hacker_1600x672.png"></a>
## ✨ Features<a id="-features"></a>
- **Provider Agnostic**: Connect to OpenAI, Anthropic, Google AI, local models, and more using a single API
- **Unified Interface**: Access all models through the standard OpenAI API format
- **Dynamic Routing**: Route requests to different LLM providers based on model name patterns
- **Stream Support**: Full streaming support for real-time responses
- **API Key Management**: Configurable API key validation and access control
- **Easy Configuration**: Simple TOML/YAML/JSON/Python configuration files for setup
- **Extensible by Design**: Minimal core with clearly defined extension points, enabling seamless customization and expansion without modifying the core system.
## 🚀 Getting Started<a id="-getting-started"></a>
### Requirements
Python 3.11 | 3.12 | 3.13
### Installation<a id="installation"></a>
```bash
pip install ai-proxy-server
```
For proxying to Anthropic API or Google Gemini via Vertex AI or Google AI Studio, install optional dependencies:
```bash
pip install ai-proxy-server[anthropic,google]
```
or
```bash
pip install ai-proxy-server[all]
```
### Quick Start<a id="quick-start"></a>
#### 1. Create a `config.toml` file:
```toml
host = "0.0.0.0"
port = 8000
[connections]
[connections.openai]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.anthropic]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
[routing]
"gpt*" = "openai.*"
"claude*" = "anthropic.*"
"*" = "openai.gpt-3.5-turbo"
[groups.default]
api_keys = ["YOUR_API_KEY_HERE"]
```
> **Note** ℹ️
> To enhance security, consider storing upstream API keys in operating system environment variables rather than embedding them directly in the configuration file. You can reference these variables in the configuration using the `env:<VAR_NAME>` syntax.
#### 2. Start the server:
```bash
ai-proxy-server
```
Alternatively, run it as a Python module:
```bash
python -m lm_proxy
```
#### 3. Use it with any OpenAI-compatible client:
```python
from openai import OpenAI
client = OpenAI(
api_key="YOUR_API_KEY_HERE",
base_url="http://localhost:8000/v1"
)
completion = client.chat.completions.create(
model="gpt-5", # This will be routed to OpenAI based on config
messages=[{"role": "user", "content": "Hello, world!"}]
)
print(completion.choices[0].message.content)
```
Or use the same endpoint with Claude models:
```python
completion = client.chat.completions.create(
model="claude-opus-4-1-20250805", # This will be routed to Anthropic based on config
messages=[{"role": "user", "content": "Hello, world!"}]
)
```
## 📝 Configuration<a id="-configuration"></a>
AI Proxy Server is configured through a TOML/YAML/JSON/Python file that specifies connections, routing rules, and access control.
### Basic Structure<a id="basic-structure"></a>
```toml
host = "0.0.0.0" # Interface to bind to
port = 8000 # Port to listen on
dev_autoreload = false # Enable for development
# API key validation function (optional)
api_key_check = "lm_proxy.api_key_check.check_api_key_in_config"
# LLM Provider Connections
[connections]
[connections.openai]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.google]
api_type = "google"
api_key = "env:GOOGLE_API_KEY"
[connections.anthropic]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
# Routing rules (model_pattern = "connection.model")
[routing]
"gpt*" = "openai.*" # Route all GPT models to OpenAI
"claude*" = "anthropic.*" # Route all Claude models to Anthropic
"gemini*" = "google.*" # Route all Gemini models to Google
"*" = "openai.gpt-3.5-turbo" # Default fallback
# Access control groups
[groups.default]
api_keys = [
"KEY1",
"KEY2"
]
# optional
[[loggers]]
class = 'lm_proxy.loggers.BaseLogger'
[loggers.log_writer]
class = 'lm_proxy.loggers.log_writers.JsonLogWriter'
file_name = 'storage/json.log'
[loggers.entry_transformer]
class = 'lm_proxy.loggers.LogEntryTransformer'
completion_tokens = "response.usage.completion_tokens"
prompt_tokens = "response.usage.prompt_tokens"
prompt = "request.messages"
response = "response"
group = "group"
connection = "connection"
api_key_id = "api_key_id"
remote_addr = "remote_addr"
created_at = "created_at"
duration = "duration"
```
### Environment Variables<a id="environment-variables"></a>
You can reference environment variables in your configuration file by prefixing values with `env:`.
For example:
```toml
[connections.openai]
api_key = "env:OPENAI_API_KEY"
```
At runtime, AI Proxy Server automatically retrieves the value of the target variable
(`OPENAI_API_KEY`) from your operating system's environment or from a `.env` file, if present.
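The `env:` prefix mechanism can be sketched in a few lines of Python. This is an illustration of the behaviour described above, not lm_proxy's actual resolution code (which also handles `.env` loading):

```python
import os

def resolve_config_value(value: str) -> str:
    """Resolve an 'env:<VAR_NAME>' reference to the variable's value.

    Illustrative sketch only: plain values pass through unchanged,
    while 'env:'-prefixed values are looked up in the environment.
    """
    if value.startswith("env:"):
        var_name = value[len("env:"):]
        resolved = os.environ.get(var_name)
        if resolved is None:
            raise KeyError(f"Environment variable {var_name!r} is not set")
        return resolved
    return value
```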
### .env Files
By default, AI Proxy Server looks for a `.env` file in the current working directory
and loads environment variables from it.
You can refer to the [.env.template](https://github.com/Nayjest/lm-proxy/blob/main/.env.template)
file for an example:
```dotenv
OPENAI_API_KEY=sk-u........
GOOGLE_API_KEY=AI........
ANTHROPIC_API_KEY=sk-ant-api03--vE........
# "1", "TRUE", "YES", "ON", "ENABLED", "Y", "+" are true, case-insensitive.
# See https://github.com/Nayjest/ai-microcore/blob/v4.4.3/microcore/configuration.py#L36
LM_PROXY_DEBUG=no
```
You can also control `.env` file usage with the `--env` command-line option:
```bash
# Use a custom .env file path
ai-proxy-server --env="path/to/your/.env"
# Disable .env loading
ai-proxy-server --env=""
```
## 🔑 Proxy API Keys vs. Provider API Keys<a id="-proxy-api-keys-vs-provider-api-keys"></a>
AI Proxy Server utilizes two distinct types of API keys to facilitate secure and efficient request handling.
- **Proxy API Key (Virtual API Key, Client API Key):**
A unique key generated and managed within AI Proxy Server.
Clients use these keys to authenticate their requests to the proxy's API endpoints.
Each Client API Key is associated with a specific group, which defines the scope of access and permissions for the client's requests.
These keys allow users to securely interact with the proxy without direct access to external service credentials.
- **Provider API Key (Upstream API Key):**
A key provided by external LLM inference providers (e.g., OpenAI, Anthropic, Mistral, etc.) and configured within the AI Proxy Server.
The proxy uses these keys to authenticate and forward validated client requests to the respective external services.
Provider API Keys remain hidden from end users, ensuring secure and transparent communication with provider APIs.
This distinction ensures a clear separation of concerns:
Virtual API Keys manage user authentication and access within the proxy,
while Upstream API Keys handle secure communication with external providers.
## 🔌 API Usage<a id="-api-usage"></a>
AI Proxy Server implements the OpenAI chat completions API endpoint. You can use any OpenAI-compatible client to interact with it.
### Chat Completions Endpoint<a id="chat-completions-endpoint"></a>
```http
POST /v1/chat/completions
```
#### Request Format
```json
{
"model": "gpt-3.5-turbo",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is the capital of France?"}
],
"temperature": 0.7,
"stream": false
}
```
#### Response Format
```json
{
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "The capital of France is Paris."
},
"finish_reason": "stop"
}
]
}
```
### Models List Endpoint<a id="models-list-endpoint"></a>
List and describe all models available through the API.
```http
GET /v1/models
```
The **AI Proxy Server** dynamically builds the models list based on routing rules defined in `config.routing`.
Routing keys can reference both **exact model names** and **model name patterns** (e.g., `"gpt*"`, `"claude*"`, etc.).
By default, wildcard patterns are displayed as-is in the models list (e.g., `"gpt*"`, `"claude*"`).
This behavior can be customized via the `model_listing_mode` configuration option:
```
model_listing_mode = "as_is" | "ignore_wildcards" | "expand_wildcards"
```
Available modes:
- **`as_is`** *(default)* — Lists all entries exactly as defined in the routing configuration, including wildcard patterns.
- **`ignore_wildcards`** — Excludes wildcard patterns, showing only explicitly defined model names.
- **`expand_wildcards`** — Expands wildcard patterns by querying each connected backend for available models *(feature not yet implemented)*.
To obtain a complete and accurate model list in the current implementation,
all supported models must be explicitly defined in the routing configuration, for example:
```toml
[routing]
"gpt-4" = "my_openai_connection.*"
"gpt-5" = "my_openai_connection.*"
"gpt-8" = "my_openai_connection.gpt-3.5-turbo"
"claude-4.5-sonnet" = "my_anthropic_connection.claude-sonnet-4-5-20250929"
"claude-4.1-opus" = "my_anthropic_connection.claude-opus-4-1-20250805"
[connections]
[connections.my_openai_connection]
api_type = "open_ai"
api_base = "https://api.openai.com/v1/"
api_key = "env:OPENAI_API_KEY"
[connections.my_anthropic_connection]
api_type = "anthropic"
api_key = "env:ANTHROPIC_API_KEY"
```
#### Response Format
```json
{
"object": "list",
"data": [
{
"id": "gpt-6",
"object": "model",
"created": 1686935002,
"owned_by": "organization-owner"
},
{
"id": "claude-5-sonnet",
"object": "model",
"created": 1686935002,
"owned_by": "organization-owner"
}
]
}
```
## 🔒 User Groups Configuration<a id="-user-groups-configuration"></a>
The `[groups]` section in the configuration defines access control rules for different user groups.
Each group can have its own set of virtual API keys and permitted connections.
### Basic Group Definition<a id="basic-group-definition"></a>
```toml
[groups.default]
api_keys = ["KEY1", "KEY2"]
allowed_connections = "*" # Allow access to all connections
```
### Group-based Access Control<a id="group-based-access-control"></a>
You can create multiple groups to segment your users and control their access:
```toml
# Admin group with full access
[groups.admin]
api_keys = ["ADMIN_KEY_1", "ADMIN_KEY_2"]
allowed_connections = "*" # Access to all connections
# Regular users with limited access
[groups.users]
api_keys = ["USER_KEY_1", "USER_KEY_2"]
allowed_connections = "openai,anthropic" # Only allowed to use specific connections
# Free tier with minimal access
[groups.free]
api_keys = ["FREE_KEY_1", "FREE_KEY_2"]
allowed_connections = "openai" # Only allowed to use OpenAI connection
```
### Connection Restrictions<a id="connection-restrictions"></a>
The `allowed_connections` parameter controls which upstream providers a group can access:
- `"*"` - Group can use all configured connections
- `"openai,anthropic"` - Comma-separated list of specific connections the group can use
This allows fine-grained control over which users can access which AI providers, enabling features like:
- Restricting expensive models to premium users
- Creating specialized access tiers for different user groups
- Implementing usage quotas per group
- Billing and cost allocation by user group
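The `allowed_connections` semantics can be expressed compactly. A hedged sketch of the check described above (illustrative; the function name and exact matching rules are assumptions, not lm_proxy internals):

```python
def is_connection_allowed(allowed_connections: str, connection: str) -> bool:
    """Check whether a group may use a given connection.

    Sketch of the documented semantics: "*" permits every connection,
    otherwise a comma-separated whitelist is matched by name.
    """
    if allowed_connections.strip() == "*":
        return True
    allowed = {name.strip() for name in allowed_connections.split(",")}
    return connection in allowed
```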
### Virtual API Key Validation<a id="virtual-api-key-validation"></a>
#### Overview
AI Proxy Server includes two built-in methods for validating Virtual API keys:
- `lm_proxy.api_key_check.check_api_key_in_config` - verifies API keys against those defined in the config file; used by default
- `lm_proxy.api_key_check.CheckAPIKeyWithRequest` - validates API keys via an external HTTP service
The API key check method can be configured using the `api_key_check` configuration key.
Its value can be either a reference to a Python function in the format `my_module.sub_module1.sub_module2.fn_name`,
or an object containing parameters for a class-based validator.
In the .py config representation, the validator function can be passed directly as a callable.
#### Example configuration for external API key validation using HTTP request to Keycloak / OpenID Connect
This example shows how to validate API keys against an external service (e.g., Keycloak):
```toml
[api_key_check]
class = "lm_proxy.api_key_check.CheckAPIKeyWithRequest"
method = "POST"
url = "http://keycloak:8080/realms/master/protocol/openid-connect/userinfo"
response_as_user_info = true # interpret response JSON as user info object for further processing / logging
use_cache = true # requires installing cachetools if True: pip install cachetools
cache_ttl = 60 # Cache duration in seconds
[api_key_check.headers]
Authorization = "Bearer {api_key}"
```
#### Custom API Key Validation / Extending functionality
For more advanced authentication needs,
you can implement a custom validator function:
```python
# my_validators.py
def validate_api_key(api_key: str) -> str | None:
"""
Validate an API key and return the group name if valid.
Args:
api_key: The API key to validate
Returns:
The name of the group if valid, None otherwise
"""
if api_key == "secret-key":
return "admin"
elif api_key.startswith("user-"):
return "users"
return None
```
Then reference it in your config:
```toml
api_key_check = "my_validators.validate_api_key"
```
> **NOTE**
> In this case, the `api_keys` lists in groups are ignored, and the custom function is responsible for all validation logic.
## 🛠️ Advanced Usage<a id="-advanced-usage"></a>
### Dynamic Model Routing<a id="dynamic-model-routing"></a>
The routing section allows flexible pattern matching with wildcards:
```toml
[routing]
"gpt-4*" = "openai.gpt-4" # Route gpt-4 requests to OpenAI GPT-4
"gpt-3.5*" = "openai.gpt-3.5-turbo" # Route gpt-3.5 requests to OpenAI
"claude*" = "anthropic.*" # Pass model name as-is to Anthropic
"gemini*" = "google.*" # Pass model name as-is to Google
"custom*" = "local.llama-7b" # Map any "custom*" to a specific local model
"*" = "openai.gpt-3.5-turbo" # Default fallback for unmatched models
```
Keys are model name patterns (with `*` wildcard support), and values are connection/model mappings.
Connection names reference those defined in the `[connections]` section.
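The pattern-to-connection resolution can be sketched with `fnmatch`-style wildcards. This is an illustration of the routing semantics described above, not lm_proxy's own matcher (pattern-ordering rules are an assumption here):

```python
from fnmatch import fnmatch

def route_model(routing: dict[str, str], model: str) -> tuple[str, str]:
    """Resolve a requested model name to (connection, upstream_model).

    Patterns are tried in declaration order; a "connection.*" target
    forwards the requested model name as-is, while an explicit target
    maps the request to a fixed upstream model.
    """
    for pattern, target in routing.items():
        if fnmatch(model, pattern):
            connection, _, upstream = target.partition(".")
            return connection, model if upstream == "*" else upstream
    raise ValueError(f"No routing rule matches model {model!r}")
```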
### Load Balancing Example<a id="load-balancing-example"></a>
- [Simple load-balancer configuration](https://github.com/Nayjest/lm-proxy/blob/main/examples/load_balancer_config.py)
This example demonstrates how to set up a load balancer that randomly
distributes requests across multiple language model servers using the lm_proxy.
### Google Vertex AI Configuration Example<a id="google-vertex-ai-configuration-example"></a>
- [vertex-ai.toml](https://github.com/Nayjest/lm-proxy/blob/main/examples/vertex-ai.toml)
This example demonstrates how to connect AI Proxy Server to a Google Gemini model via the Vertex AI API.
### Using Tokens from OIDC Provider as Virtual/Client API Keys<a id="using-tokens-from-oidc-provider-as-virtualclient-api-keys"></a>
You can configure AI Proxy Server to validate tokens from OpenID Connect (OIDC) providers like Keycloak, Auth0, or Okta as API keys.
The following configuration validates Keycloak access tokens by calling the userinfo endpoint:
```toml
[api_key_check]
class = "lm_proxy.api_key_check.CheckAPIKeyWithRequest"
method = "POST"
url = "http://keycloak:8080/realms/master/protocol/openid-connect/userinfo"
response_as_user_info = true
use_cache = true
cache_ttl = 60
[api_key_check.headers]
Authorization = "Bearer {api_key}"
```
**Configuration Parameters:**
- `class` - The API key validation handler class ([lm_proxy.api_key_check.CheckAPIKeyWithRequest](https://github.com/Nayjest/lm-proxy/blob/main/lm_proxy/api_key_check/with_request.py))
- `method` - HTTP method for the validation request (typically `POST` or `GET`)
- `url` - The OIDC provider's userinfo endpoint URL
- `response_as_user_info` - Parse the response as user information for further usage in AI Proxy Server (extend logged info, determine user group, etc.)
- `use_cache` - Enable caching of validation results (requires installing the `cachetools` package if enabled: `pip install cachetools`)
- `cache_ttl` - Cache time-to-live in seconds (reduces load on identity provider)
- `headers` - Dictionary of headers to send with the validation request
> **Note**: The `{api_key}` placeholder can be used in headers or in the URL. AI Proxy Server substitutes it with the API key from the client to perform the check.
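The placeholder substitution behaves like Python string formatting. A minimal sketch of the idea (illustrative; `render_check_request` is a hypothetical helper, not the actual `CheckAPIKeyWithRequest` implementation):

```python
def render_check_request(headers: dict[str, str], url: str, api_key: str):
    """Substitute the {api_key} placeholder in headers and in the URL
    before issuing the validation request."""
    rendered_headers = {k: v.format(api_key=api_key) for k, v in headers.items()}
    return rendered_headers, url.format(api_key=api_key)
```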
**Usage:**
Clients pass their OIDC access token as the API key when making requests to AI Proxy Server.
## 🪝 Request Handlers (Middleware)<a id="-request-handlers--middleware"></a>
Handlers intercept and modify requests *before* they reach the upstream LLM provider. They enable cross-cutting concerns such as rate limiting, logging, auditing, and header manipulation.
Handlers are defined in the `before` list within the configuration file and execute sequentially in the order specified.
### Built-in Handlers
AI Proxy Server includes several built-in handlers for common operational needs.
#### Rate Limiter
The `RateLimiter` protects upstream credentials and manages traffic load using a sliding window algorithm.
**Parameters:**
| Parameter | Type | Description |
|-----------|------|-------------|
| `max_requests` | int | Maximum number of requests allowed per window |
| `window_seconds` | int | Duration of the sliding window in seconds |
| `per` | string | Scope of the limit: `api_key`, `ip`, `connection`, `group`, or `global` |
**Configuration:**
```toml
[[before]]
class = "lm_proxy.handlers.RateLimiter"
max_requests = 10
window_seconds = 60
per = "api_key"
[[before]]
class = "lm_proxy.handlers.RateLimiter"
max_requests = 1000
window_seconds = 300
per = "global"
```
#### HTTP Headers Forwarder
The `HTTPHeadersForwarder` passes specific headers from incoming client requests to the upstream provider—useful for distributed tracing or tenant context propagation.
Sensitive headers (`Authorization`, `Host`, `Content-Length`) are stripped by default to prevent protocol corruption and credential leaks.
```toml
[[before]]
class = "lm_proxy.handlers.HTTPHeadersForwarder"
white_list_headers = ["x-trace-id", "x-correlation-id", "x-tenant-id"]
```
See also [HTTP Header Management](https://github.com/Nayjest/lm-proxy/blob/main/doc/http_headers.md).
### Custom Handlers
Extend functionality by implementing custom handlers in Python. A handler is any callable (function or class instance) that accepts a `RequestContext`.
#### Interface
```python
from lm_proxy.base_types import RequestContext
async def my_custom_handler(ctx: RequestContext) -> None:
# Implementation here
pass
```
#### Example: Audit Logger
```python
# my_extensions.py
import logging
from lm_proxy.base_types import RequestContext
class AuditLogger:
def __init__(self, prefix: str = "AUDIT"):
self.prefix = prefix
async def __call__(self, ctx: RequestContext) -> None:
user = ctx.user_info.get("name", "anonymous")
logging.info(f"[{self.prefix}] User '{user}' requested model '{ctx.model}'")
```
**Registration:**
```toml
[[before]]
class = "my_extensions.AuditLogger"
prefix = "SECURITY_AUDIT"
```
## 🧩 Add-on Components<a id="-add-on-components"></a>
### Database Connector<a id="database-connector"></a>
[ai-proxy-server-db-connector](https://github.com/nayjest/lm-proxy-db-connector) is a lightweight SQLAlchemy-based connector that enables AI Proxy Server to work with relational databases including PostgreSQL, MySQL/MariaDB, SQLite, Oracle, Microsoft SQL Server, and many others.
**Key Features:**
- Configure database connections directly through AI Proxy Server configuration
- Share database connections across components, extensions, and custom functions
- Built-in database logger for structured logging of AI request data
## 📚 Guides & Reference<a id="-guides--reference"></a>
For more detailed information, check out these articles:
- [HTTP Header Management](https://github.com/Nayjest/lm-proxy/blob/main/doc/http_headers.md)
## 🚧 Known Limitations<a id="-known-limitations"></a>
- **Multiple generations (n > 1):** When proxying requests to Google or Anthropic APIs, only the first generation is returned. Multi-generation support is tracked in [#35](https://github.com/Nayjest/lm-proxy/issues/35).
- **Model listing with wildcards / forwarding actual model metadata:** The `/v1/models` endpoint does not query upstream providers to expand wildcard patterns (e.g., `gpt*`) or fetch model metadata. Only explicitly defined model names are listed [#36](https://github.com/Nayjest/lm-proxy/issues/36).
## 🔍 Debugging<a id="-debugging"></a>
### Overview
When **debugging mode** is enabled,
AI Proxy Server provides detailed logging information to help diagnose issues:
- Stack traces for exceptions are shown in the console
- Logging level is set to DEBUG instead of INFO
> **Warning** ⚠️
> Never enable debugging mode in production environments, as it may expose sensitive information to the application logs.
### Enabling Debugging Mode
To enable debugging, set the `LM_PROXY_DEBUG` environment variable to a truthy value (e.g., "1", "true", "yes").
> **Tip** 💡
> Environment variables can also be defined in a `.env` file.
Alternatively, you can enable or disable debugging via the command-line arguments:
- `--debug` to enable debugging
- `--no-debug` to disable debugging
> **Note** ℹ️
> CLI arguments override environment variable settings.
## 🤝 Contributing<a id="-contributing"></a>
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## 📄 License<a id="-license"></a>
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
© 2025–2026 [Vitalii Stepanenko](mailto:mail@vitaliy.in)
# GNC Smoothie package
- [M-estimation](#m-estimation)
- [The Welsch influence function](#the-welsch-influence-function)
- [Iteratively reweighted least squares](#iteratively-reweighted-least-squares)
- [Supervised Gauss-Newton algorithm](#supervised-gauss-newton-algorithm)
- [GNC Smoothie software](#gnc-smoothie-software)
- [Designing your own GNC-Smoothie model classes](#designing-your-own-gnc-smoothie-model-classes)
Python library supporting M-estimation using two algorithms: the well-known Iteratively Reweighted Least Squares (IRLS)
and our custom Supervised Gauss-Newton algorithm. Author: Philip McLauchlan `philipmclauchlan6@gmail.com`.
First some introductory theory.
## M-estimation
M-estimation is a generalisation of maximum-likelihood estimation.
We assume a known population probability density function (PDF) $ f(.) $, parametrised by a vector of parameters $ {\bf x} $,
and a set of independent and identically distributed data $ {\bf z}_i $, $ i=1,...,n $ sampled from the population.
The general model for the observations is
$$
{\bf z}_i = {\bf h}_i({\bf x}) + \text{noise}
$$
for some observation model function $ {\bf h}_i({\bf x}) $.
The distribution of the noise is determined by the population PDF, which is evaluated as
$$
f({\bf h}_i({\bf x}) - {\bf z}_i) = f({\bf r}_i)
$$
defining the $ i $'th data error or "residual" vector $ {\bf r}_i $ as
$$
{\bf r}_i({\bf x}) = {\bf h}_i({\bf x}) - {\bf z}_i
$$
For instance, for normally distributed observation errors with standard deviation $ \sigma $ we would have (up to a normalising constant)
$$
f({\bf r}_i) = \,\mathrm{e}^{-\frac{|| {\bf r}_i ||^2}{2\sigma^2}}
$$
The maximum likelihood estimator of $ {\bf x} $ can be computed as
$$
\widehat{\bf x} = \underset{{\bf x}}{\text{arg}\,\text{max}} \left( \prod_{i=1}^n f({\bf r}_i({\bf x})) \right)
$$
or equivalently,
$$
\widehat{\bf x} = \underset{{\bf x}}{\text{arg}\,\text{min}} \left( \sum_{i=1}^n - \log f({\bf r}_i({\bf x})) \right)
$$
M-estimation generalises this method by substituting a different function into the above sum, so we instead compute
$$
\widehat{\bf x} = \underset{{\bf x}}{\text{arg}\,\text{min}} \left( \sum_{i=1}^n \rho(|| {\bf r}_i({\bf x}) ||) \right)
$$
for some function $ \rho(r_i) $ where
$$
r_i = || {\bf r}_i({\bf x}) ||
$$
We write the objective function above as
$$
F({\bf x}) = \sum_{i=1}^n \rho(|| {\bf r}_i({\bf x}) ||)
$$
or
$$
F({\bf x}) = \sum_{i=1}^n \rho(r_i({\bf x}))
$$
In the special case of normally distributed observation errors, this gives rise to standard least-squares,
$ \rho(r) \sim r^2 $, the squared error in the observations.
The development and popularisation of M-estimation was driven by the need to fit models to data with outliers, i.e.
data not sampled from the population pdf but from a distinct distribution or distributions.
When outliers are present the least-squares method breaks down because single outliers
can have a huge influence, leading to a wildly incorrect value for $ \widehat{\bf x} $.
To allow for outliers $ \rho(r) $ is shaped by reducing the value of $ \rho(r) $ for large $ r $ error values.
The choice of influence function $ \psi(r) = d\rho(r)/dr $ is driven by a trade-off between the desire to
provide a good accuracy in the resulting
estimate for $ \widehat{\bf x} $ while providing robustness to noise in the data.
For instance, instead of the quadratic function required for least-squares, the pseudo-Huber influence
function [1] is asymptotically linear in order to provide some level of robustness.
## The Welsch influence function
Redescending influence functions have the property that their gradient tends to zero at either end of the range.
This allows them to be robust to outliers with large errors. On the negative side, redescending influence functions
have the problem that the objective function above minimised by M-estimation may have multiple local minima.
It is difficult to ensure that the global minimum is reached. When the standard method "iteratively reweighted least-squares"
(IRLS) is used, the result will depend on the quality of the initial value of $ {\bf x} $
used for the iteration.
We start with the Welsch influence function [2]. This uses a negative Gaussian:
$$
\rho(r) = \frac{\sigma^2}{2} \left( 1 - \,\mathrm{e}^{-\frac{r^2}{2\sigma^2}} \right)
$$
$$
\psi(r) = \frac{d\rho(r)}{dr} = \frac{r}{2} \,\mathrm{e}^{-\frac{r^2}{2\sigma^2}}
$$
where the width $ \sigma $ of the Gaussian is known as the "wavelength" of the Welsch influence function.
Using a Gaussian influence function, whose gradient tends to zero for large errors, ensures robustness to large errors,
because their influence on the solution will be very small.
However in general it is presumed in the literature that solving M-estimation using redescending influence functions
requires a good initial estimate of the solution and comes with no guarantees of convergence. Recent work [3] in IRLS using
Graduated Non-Convexity [4] has clarified that this is not always the case, and for many practical problems we can
achieve the *global* optimum solution without any initial model estimate being provided.
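The Welsch $ \rho $ and $ \psi $ above translate directly into Python. A small numerical sketch (illustrative only, not part of the gnc-smoothie API):

```python
import math

def welsch_rho(r: float, sigma: float) -> float:
    """Welsch objective: bounded above by sigma**2 / 2, so a single
    gross outlier contributes at most a fixed cost."""
    return (sigma**2 / 2.0) * (1.0 - math.exp(-r**2 / (2.0 * sigma**2)))

def welsch_psi(r: float, sigma: float) -> float:
    """Welsch influence function d(rho)/dr: peaks near r = sigma and
    redescends towards zero for large |r|."""
    return (r / 2.0) * math.exp(-r**2 / (2.0 * sigma**2))
```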
## Iteratively reweighted least squares
IRLS is the standard technique used to solve the non-linear optimisation problems that arise in M-estimation using robust
influence functions. IRLS assumes that the non-robust least-squares solution
for $ {\bf x} $ is soluble in closed form, given some "weights" assigned
to the data points. In other words there is a (simple) algorithm that can be used to solve the optimisation problem
$$
\widehat{\bf x} = \underset{{\bf x}}{\text{arg}\,\text{min}} \left( \sum_{i=1}^n w_i || {\bf h}_i({\bf x}) - {\bf z}_i ||^2 \right)
$$
or
$$
\widehat{\bf x} = \underset{{\bf x}}{\text{arg}\,\text{min}} \left( F_{\text{LS}}({\bf x}) \right)
$$
where
$$
F_{\text{LS}}({\bf x}) = \sum_{i=1}^n w_i r_i({\bf x})^2 = \sum_{i=1}^n w_i ||{\bf h}_i({\bf x}) - {\bf z}_i||^2
$$
for weights $ w_i $.
IRLS is based on the observation that the solution $ \widehat{\bf x} $ must be a stationary point of the
objective function $ F_{\text{LS}} $ in the above equation, so we must have
$$
\frac{dF_{\text{LS}}}{d{\bf x}} = \sum_{i=1}^n w_i r_i \frac{dr_i}{d{\bf x}} = {\bf 0}
$$
The stationary point condition for solving the original optimisation problem for $ \widehat{\bf x} $ is the analogous equation
$$
\frac{dF}{d{\bf x}} = \sum_{i=1}^n \frac{d\rho(r_i)}{d{\bf x}} = \sum_{i=1}^n \frac{d\rho(r_i)}{dr_i}\frac{dr_i}{d{\bf x}} = \sum_{i=1}^n \psi(r_i)\frac{dr_i}{d{\bf x}} = {\bf 0}
$$
Comparing the equations involving $ \frac{dF_{\text{LS}}}{d{\bf x}} $ and $ \frac{dF}{d{\bf x}} $ above,
we see that the appropriate weight to use is
$$
w_i = \frac{1}{r_i} \psi(r_i)
$$
This choice will ensure that solving for $ \frac{dF_{\text{LS}}}{d{\bf x}} $ will also solve for $ \frac{dF}{d{\bf x}} $
to first order, and hopefully improve the solution.
IRLS repeats the following two steps to convergence, given an initial state estimate $ \widehat{\bf x} $:
1. Estimate weights using the above equation for $ w_i $ for each data item, to be used when calculating the next value
for $ \widehat{\bf x} $.
$ r_i $ and its derivative are evaluated at the current solution $ \widehat{\bf x} $.
1. Calculate the next estimate for $ \widehat{\bf x} $, using the updated weights $ w_i $ from the previous step.
IRLS normally requires a good initial estimate $ \widehat{\bf x} $ of $ {\bf x} $ to avoid local
minima in the objective function.
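To make the two steps concrete, here is a self-contained numpy sketch (not using this library) that applies IRLS with a Welsch-style weight to the simplest possible model, a robust location estimate. The function names and the one-parameter model are our own illustration:

```python
import numpy as np

def welsch_weight(r, sigma):
    # w_i = psi(r_i)/r_i for a Welsch-style influence function:
    # proportional to exp(-r^2 / (2 sigma^2)), so gross outliers get weight ~ 0
    return np.exp(-r**2 / (2.0 * sigma**2))

def irls_mean(z, sigma, n_iter=50):
    # Toy IRLS for the one-parameter model h(x) = x (a robust mean)
    x = z.mean()                        # non-robust least-squares start
    for _ in range(n_iter):
        r = z - x                       # residuals at current estimate
        w = welsch_weight(r, sigma)     # step 1: update weights
        x = np.sum(w * z) / np.sum(w)   # step 2: closed-form weighted LS solution
    return x

z = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 10.0])   # one gross outlier
print(irls_mean(z, sigma=0.5))                     # close to 1.0
```

The outlier at 10.0 receives a weight that is effectively zero, so the weighted least-squares step in each iteration fits only the inliers.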
## Supervised Gauss-Newton algorithm
Now we describe an alternative to IRLS, which we term Supervised Gauss-Newton or `Sup-GN`.
Beginning with the objective function $ F({\bf x}) $, let us assume that we have an existing estimate $ \widehat{\bf x}^{*} $
of $ {\bf x} $. We can then try to improve this estimate by solving
$$
\frac{dF({\bf x})}{d{\bf x}} = {\bf 0}
$$
We then build a first-order (Newton) approximation to this stationary-point condition and solve it for an improved estimate $ \widehat{\bf x} $:
$$
\frac{dF({\bf x})}{d{\bf x}} + \frac{d^2 F}{d{\bf x}^2} (\widehat{\bf x} - \widehat{\bf x}^{*}) = {\bf 0}
$$
where the derivatives are evaluated at $ {\bf x}=\widehat{\bf x}^{*} $, or
$$
\sum_{i=1}^n \psi(r_i) \frac{dr_i}{d{\bf x}} + \sum_{i=1}^n \left( \frac{d^2\rho(r_i)}{dr_i^2} \frac{d r_i}{d{\bf x}}^\intercal \frac{d r_i}{d{\bf x}} + \psi(r_i) \frac{d^2 r_i}{d{\bf x}^2} \right) (\widehat{\bf x} - \widehat{\bf x}^{*}) = {\bf 0}
$$
where $ r_i $ and the derivatives are again evaluated at $ {\bf x}=\widehat{\bf x}^{*} $.
Noting the equation for $ r_i $ in the section above, we can write
$$
\frac{dr_i}{d{\bf x}} = \frac{dr_i}{d{\bf r}_i} \frac{d{\bf r}_i}{d{\bf x}} = \frac{1}{r_i} {\bf r}_i^\intercal \frac{d{\bf r}_i}{d{\bf x}}
$$
and from this derive
$$
\frac{d^2 r_i}{d{\bf x}^2} = \frac{1}{r_i^3} \left( \frac{d{\bf r}_i}{d{\bf x}} \right)^{\intercal} \left( r_i^2 I - {\bf r}_i {\bf r}_i^\intercal \right) \frac{d{\bf r}_i}{d{\bf x}} + \frac{1}{r_i} {\bf r}_i^{\intercal} \frac{d^2{\bf r}_i}{d{\bf x}^2}
$$
We assume that the data error $ {\bf r}_{i} $ is a smooth function of $ {\bf x} $ and so ignore the second
derivative term involving $ d^2{\bf r}_i/d{\bf x}^2 $.
Substituting the above (without the second term) into the equation for $ \widehat{\bf x} - \widehat{\bf x}^{*} $ above
and combining with the equation for $ \frac{dr_i}{d{\bf x}} $ provides the result
$$
{\bf a} + (A + B) (\widehat{\bf x} - \widehat{\bf x}^{*}) = {\bf 0}
$$
where
$$
{\bf a} = \sum_{i=1}^n \frac{1}{r_i} \psi(r_i) {\bf r}_i^\intercal \frac{d{\bf r}_i}{d{\bf x}}
$$
$$
A = \sum_{i=1}^n \frac{1}{r_i} \psi(r_i) \left(\frac{d{\bf r}_i}{d{\bf x}}\right)^\intercal \frac{d{\bf r}_i}{d{\bf x}}
$$
$$
B = \sum_{i=1}^n \frac{1}{r_i^3} \left(r_i\frac{d^2\rho}{dr_i^2} - \psi(r_i) \right) \left(\frac{d{\bf r}_i}{d{\bf x}}\right)^\intercal {\bf r}_i {\bf r}_i^\intercal \frac{d{\bf r}_i}{d{\bf x}}
$$
For the Welsch influence function we obtain
$$
\frac{1}{r_i} \psi(r_i) = \frac{1}{2} \,\mathrm{e}^{-\frac{r_i^2}{2\sigma^2}}
$$
$$
\frac{1}{r_i^3} \left(r_i\frac{d^2\rho}{dr_i^2} - \psi(r_i) \right) = -\frac{1}{2\sigma^2} \,\mathrm{e}^{-\frac{r_i^2}{2\sigma^2}}
$$
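These two closed forms can be checked numerically against finite differences of the objective, assuming the negative-sense Welsch objective $ \rho(r) = -\frac{\sigma^2}{2}\,\mathrm{e}^{-r^2/(2\sigma^2)} $ (our assumed normalisation, chosen to match the expressions above):

```python
import numpy as np

SIGMA = 0.7  # Welsch scale parameter (arbitrary for the check)

def rho(r):
    # Assumed negative-sense Welsch objective: -(sigma^2/2) e^{-r^2/(2 sigma^2)}
    return -(SIGMA**2 / 2.0) * np.exp(-r**2 / (2.0 * SIGMA**2))

def weight(r):
    # Claimed closed form: psi(r)/r = (1/2) e^{-r^2/(2 sigma^2)}
    return 0.5 * np.exp(-r**2 / (2.0 * SIGMA**2))

def b_term(r):
    # Claimed closed form: (r rho''(r) - psi(r))/r^3 = -(1/(2 sigma^2)) e^{-r^2/(2 sigma^2)}
    return -np.exp(-r**2 / (2.0 * SIGMA**2)) / (2.0 * SIGMA**2)

# Central finite differences of rho at an arbitrary test point
r, h = 1.3, 1e-4
psi = (rho(r + h) - rho(r - h)) / (2.0 * h)               # rho'(r)
rho_pp = (rho(r + h) - 2.0 * rho(r) + rho(r - h)) / h**2  # rho''(r)
print(abs(psi / r - weight(r)))                    # ~ 0
print(abs((r * rho_pp - psi) / r**3 - b_term(r)))  # ~ 0
```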
We can solve the Gauss-Newton update equations to provide
updated parameters $ \widehat{\bf x} $ given residuals, derivatives and
hence matrices $ A $, $ B $ evaluated at the previous parameters $ \widehat{\bf x}^{*} $.
However a direct Gauss-Newton iteration gives no guarantee of
convergence. We propose the following "damped" Gauss-Newton updates, in the manner of Levenberg-Marquardt [5] damping:
$$
\sum_{i=1}^n \psi(r_i) \frac{dr_i}{d{\bf x}} + (A + \lambda B) (\widehat{\bf x} - \widehat{\bf x}^{*}) = {\bf 0}
$$
where $ \lambda $ in the range $ [0,1] $ is a damping factor. When $ \lambda=1 $ (no damping)
we have a pure Gauss-Newton update. When $ \lambda=0 $ (maximum damping), we apply an update that is
exactly equivalent to IRLS for linear data models (proof omitted).
As a result we can treat the extreme value $ \lambda=0 $ as a "safe" iteration
that will guarantee, at least for linear models, a convergent update.
The Sup-GN algorithm then proceeds as follows:
First initialize $ \widehat{\bf x}^{*} $ in the same way as IRLS (least-squares solution with weights $ w_i $ set to one), and
set $ \lambda=1 $. Then given a damping adjustment factor $ k<1 $:
1. Solve the damped Gauss-Newton update equation above to produce an updated estimate $ \widehat{\bf x} $.
1. Check the objective function $ F() $ evaluated at $ \widehat{\bf x}^{*} $ and $ \widehat{\bf x} $.
If we managed to improve the objective function, we can reduce the damping, otherwise we need to reject the new
estimate and increase the damping:
- If $ F(\widehat{\bf x}) < F(\widehat{\bf x}^{*}) $, set $ \lambda \leftarrow \min(1,\frac{\lambda}{k}) $ and $ \widehat{\bf x}^{*} \leftarrow \widehat{\bf x} $.
- Else set $ \lambda \leftarrow k\lambda $.
1. Iterate to convergence.
The advantage of this algorithm over IRLS is that it provides much faster convergence when we are near the solution.
It is well known that Gauss-Newton iterations can provide quadratic convergence [6], and we are taking advantage of this, whilst
still maintaining the option of pure IRLS iterations to guarantee convergence.
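Given the convention above that $ \lambda=1 $ means no damping and $ \lambda=0 $ means maximum (safe, IRLS-like) damping, the adjustment in step 2 can be sketched as a small helper; the function name is illustrative, not part of the library API:

```python
def update_lambda(improved: bool, lam: float, k: float) -> float:
    # Damping adjustment sketch, with adjustment factor k < 1.
    # An accepted step moves lambda towards 1 (less damping, closer to
    # pure Gauss-Newton); a rejected step moves it towards 0 (more
    # damping, closer to the safe IRLS-like update).
    return min(1.0, lam / k) if improved else k * lam

print(update_lambda(True, 0.1, 0.5))   # 0.2  (less damping after success)
print(update_lambda(False, 0.5, 0.5))  # 0.25 (more damping after failure)
```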
## GNC Smoothie software
- [IRLS class](#irls-class)
- [Supervised Gauss-Newton class](#supervised-gauss-newton-class)
- [Example code for the IRLS and Sup-GN classes](#example-code-for-the-irls-and-sup-gn-classes)
- [Base class for IRLS and Sup-GN algorithms](#base-class-for-irls-and-sup-gn-algorithms)
- [Accelerated Robust Linear regression](#accelerated-robust-linear-regression)
- [Alternative API for Robust Linear regression](#alternative-api-for-robust-linear-regression)
The Python library is based on `numpy` and contains the following top-level modules:
### IRLS class
Implementation in [irls.py](src/gnc_smoothie/irls.py)
Top-level `IRLS` class. Once you have constructed an instance of this class, call the `run()`
method to run it. This returns `True` on successful convergence, `False` on failure.
The final model and model reference (see below) are stored in `final_model` and `final_model_ref`,
whether the `run()` method succeeds or not.
Here are the parameters that need to be passed to the `IRLS` class constructor. Optional parameters follow.
- `param_instance` Defines the GNC schedule to be followed by IRLS. If GNC is not being used then
this can be a `GNC_NullParams` instance imported from [gnc_null_params.py](src/gnc_smoothie/gnc_null_params.py).
Should have an internal `influence_func_instance`
that specifies the IRLS influence function to be used. The influence_func_instance
should provide the following method:
- `summary(self) -> str`
Returns a string containing the values of the internal parameters.
`param_instance` itself should provide the following methods:
- `reset(self, init: bool = True) -> None`
Resets the internal influence_func_instance according to the stage of the
GNC schedule indicated by the init parameter. If init is `True`, reset to the
starting value to prepare for the GNC process to start. If init is `False`,
reset to the final stage of GNC.
- `n_steps(self) -> int:`
Returns the number of steps in the GNC schedule.
- `alpha(self) -> float`
Returns the stage reached in the GNC schedule, as a value between zero (start)
and one (end).
- `increment(self) -> None` Updates the influence_func_instance to the next step in the GNC schedule.
- `data` An array of data items. Each data item should itself be an array.
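As an illustration of the `param_instance` protocol above, a hypothetical schedule class might look like the following. The class name, the linear annealing rule and the writable `sigma` attribute on the influence function are our own inventions, not part of the library API (assumes `num_steps >= 2`):

```python
class LinearSigmaSchedule:
    # Hypothetical GNC schedule: anneal the influence function's sigma
    # linearly from sigma_limit down to sigma_base over num_steps stages.

    def __init__(self, influence_func_instance, sigma_base, sigma_limit, num_steps):
        self.influence_func_instance = influence_func_instance
        self._sigma_base = sigma_base
        self._sigma_limit = sigma_limit
        self._num_steps = num_steps
        self._step = 0
        self._apply()

    def reset(self, init: bool = True) -> None:
        # init=True: back to the start (widest sigma); init=False: final stage
        self._step = 0 if init else self._num_steps - 1
        self._apply()

    def n_steps(self) -> int:
        return self._num_steps

    def alpha(self) -> float:
        # 0.0 at the start of the schedule, 1.0 at the end
        return self._step / (self._num_steps - 1)

    def increment(self) -> None:
        self._step = min(self._step + 1, self._num_steps - 1)
        self._apply()

    def summary(self) -> str:
        return "sigma=%g (alpha=%.2f)" % (self.influence_func_instance.sigma, self.alpha())

    def _apply(self) -> None:
        a = self.alpha()
        self.influence_func_instance.sigma = (1.0 - a) * self._sigma_limit + a * self._sigma_base
```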
Now the optional parameters for the `IRLS` class constructor:
- `model_instance` A Python-based model being fitted to the data, an instance of a class you design
that provides at minimum the following methods:
- `cache_model(self, model, model_ref=None) -> None`
Use this method to cache the model, prior to the `residual()` method being called on each data item.
If the model contains reference parameters e.g. for estimating rotation, these are passed
as `model_ref`.
- `residual(self, data_item) -> np.array`
Calculates the residual (error) of the `data_item` given the model.
These methods may also be required, depending on the model design:
- `residual2(self, data_item) -> np.array`
Calculates the residual (error) of the `data_item` of the type of data passed in
the `data2` array.
- `residual3(self, data_item) -> np.array`
Calculates the residual (error) of the `data_item` of the type of data passed in
the `data3` array.
- `linear_model_size(self) -> int`
Returns the number of parameters in the model if the model is linear.
The `BaseIRLS` class uses an internal `weighted_fit()` method to fit a linear model
to the data with specified weights, so that the programmer does not have to
implement it. If the model is non-linear, omit this method.
- `weighted_fit(self, data, weight, scale) -> (np.array, np.array)`
If `linear_model_size()` is not provided, the model is not linear. If a closed-form
solution for the best model given the data with weights nevertheless exists,
implement it in your class using this method.
The `scale` array indicates that certain data items are less accurate and so have a
scale value > 1, indicating that the influence function for that data item
should be stretched by the given scale factor.
For non-linear problems with no closed-form solution, pass a suitable starting
point as the `model_start` (and optionally `model_ref_start`) parameters, see below.
In that case the `weighted_fit()` should be omitted.
If second and possibly third types of data item are being used, add arguments `data2`,
`weight2`, `scale2` and `data3`, `weight3`, `scale3` as appropriate.
- `evaluator_instance` A vectorised evaluator that provides a fast implementation of the model
specific to the influence function. It should be a class instance that provides at least
the following methods.
- `set_residual_size(self, residual_size: npt.ArrayLike) -> None`
Sets the residual size for each data type.
- `update_weights(self, model: npt.ArrayLike, model_ref, influence_func_instance,
data: npt.ArrayLike, weight: npt.ArrayLike, scale: npt.ArrayLike,
new_weight: npt.ArrayLike) -> None`
Update IRLS weights.
- `weighted_fit(self, data: npt.ArrayLike, weight: npt.ArrayLike, scale: npt.ArrayLike) -> np.array`
Return the model fitted to the data, taking the weights into account.
We expect that the implementation of the above vectorised functions will use [Cython](https://cython.org/).
- `weight` An array of float weight values for each data item.
If not provided, weights are initialised to one.
- `scale` An array of scale values, indicating that one or more data items are known to
have reduced accuracy, i.e. a wider influence function. The scale indicates the stretching
to apply to the influence function for that data item.
- `data2` A second array of data items. Each data item should itself be an array.
Use this if you have a second type of data item.
- `weight2` An array of float weight values for each of the second data items `data2`.
If not provided with `data2`, weights are initialised to one.
- `scale2` An array of scale values, indicating that one or more data items in `data2`
are known to have reduced accuracy, i.e. a wider influence function. The scale indicates the stretching
to apply to the influence function for that data item.
- `data3` A third array of data items. Each data item should itself be an array.
Use this if you have a third type of data item.
- `weight3` An array of float weight values for each of the third data items `data3`.
If not provided with `data3`, weights are initialised to one.
- `scale3` An array of scale values, indicating that one or more data items in `data3`
are known to have reduced accuracy, i.e. a wider influence function. The scale indicates the stretching
to apply to the influence function for that data item.
- `numeric_derivs_influence: bool` Whether to calculate derivatives of the influence function numerically
from a provided `rho()` method or directly using a provided `rhop()` method.
- `max_niterations: int` Maximum number of IRLS iterations to apply before aborting.
- `diff_thres: float` Terminate when a successful update changes the model parameters by less than this value.
- `messages_file: TextIO` File to print debugging information.
- `model_start` Starting value for the model parameters.
- `model_ref_start` Starting reference parameters for model, e.g. if optimising rotation
- `debug: bool` Whether to add extra debugging data to the `IRLS` class instance on exit:
- `debug_n_iterations` The number of iterations actually applied.
- `debug_model_list` A list of the model parameters at each iteration.
- `debug_diffs` A list of norm of model parameter changes applied at each iteration,
as a list of difference values.
- `debug_diff_alpha` A list of alpha values corresponding to the difference values `debug_diffs`, indicating
the GNC stage reached when each model change was applied. Alpha equal to zero indicates
the start of the GNC schedule, alpha equal to one indicates the final stage of the
GNC schedule.
- `debug_update_weights_time` The total time spent updating weights in seconds.
- `debug_weighted_fit_time` The total time spent fitting the model to data in seconds.
- `debug_total_time` The total time spent in the algorithm in seconds.
### Supervised Gauss-Newton class
Implementation at [sup_gauss_newton.py](src/gnc_smoothie/sup_gauss_newton.py)
Top-level `SupGaussNewton` class, an implementation of Supervised Gauss-Newton (Sup-GN).
Sup-GN is an alternative to IRLS most suitable for the two cases:
- Linear model where the data relates to the model via a linear function.
- Non-linear model where there is no closed-form solution to calculating the
model parameters from the weighted data.
Use the basic IRLS in the remaining case where a non-trivial closed-form solution for the model
is available, such as 3D point cloud registration (SVD solution). IRLS is not suitable for
non-linear problems where a closed-form solution is not available, but Sup-GN can be used in
such problems so long as a reasonable starting point for the model can be supplied (see the
`model_start` and `model_ref_start` parameters below). For linear models Sup-GN provides a simpler
model implementation than IRLS, since the closed-form solution for the model is calculated
internally. Also Sup-GN converges quadratically for linear models when close to the solution.
Once you have constructed an instance of the `SupGaussNewton` class, call the `run()`
method to run it. This returns `True` on successful convergence, `False` on failure.
The final model and model reference (see below) are stored in `final_model` and `final_model_ref`,
whether the `run()` method succeeds or not.
The parameters to the `SupGaussNewton` constructor are very similar to the `IRLS` class,
but there are some twists due to Sup-GN requiring differentiation of the model residual.
Here are the parameters you need to pass to the `SupGaussNewton` class:
- `param_instance` Defines the GNC schedule to be followed by IRLS. If GNC is not being used then
this can be a `GNC_NullParams` instance imported from [gnc_null_params.py](src/gnc_smoothie/gnc_null_params.py).
Should have an internal `influence_func_instance`
that specifies the IRLS influence function to be used. This `influence_func_instance`
should provide these methods:
- `objective_func_sign(self) -> float`
Returns either one or minus one depending on whether the objective function
increases for large residuals (one) or decreases (minus one). Typical IRLS
objective functions such as Huber and Geman-McClure increase for large residuals,
so most functions will return one. The version of Welsch we have implemented in
[gnc_welsch_params.py](src/gnc_smoothie/gnc_welsch_params.py)
uses a negative sense, which slightly simplifies the
implementation, because otherwise we would have to add one to the objective
function in order to keep it positive.
- `rho(self, rsqr: float, s: float) -> float`
The objective function given
- `rsqr` The square of the L2 norm of the residual vector
- `s` The scale of the data item indicating its known inaccuracy, so a value >= 1.
Returns the value of the objective function.
- `rhop(self, rsqr: float, s: float) -> float`
The influence function, which is equal to the derivative with respect to $ r $
of `rho(rsqr,s)` divided by $ r $, where $ r $ is the L2 norm of the residual vector.
If `numeric_derivs_influence` is set to `True` (see below) then the derivatives
are calculated numerically from `rho()` and `rhop()` should be omitted.
- `Bterm(self, rsqr: float, s: float) -> float`
Implements $ (r\,\rho''(r) - \rho'(r))/r^3 $, where $ ' $ indicates the derivative with respect to $ r $.
If `numeric_derivs_influence` is set to `True` (see below) then the derivatives
are calculated numerically from `rho()` and `Bterm()` should be omitted.
- `summary(self) -> str`
Returns a string containing the values of the internal parameters.
`param_instance` itself should provide the following methods:
- `reset(self, init: bool = True) -> None`
Resets the internal influence_func_instance according to the stage of the
GNC schedule indicated by the init parameter. If init is `True`, reset to the
starting value to prepare for the GNC process to start. If init is `False`,
reset to the final stage of GNC.
- `n_steps(self) -> int:`
Returns the number of steps in the GNC schedule.
- `alpha(self) -> float`
Returns the stage reached in the GNC schedule, as a value between zero (start)
and one (end).
- `increment(self) -> None` Updates the influence_func_instance to the next step in the GNC schedule.
- `data` An array of data items. Each data item should itself be an array.
Now the optional parameters for the `SupGaussNewton` class constructor:
- `model_instance` A Python-based model being fitted to the data, an instance of a class you design
that provides at minimum the following methods:
- `cache_model(self, model, model_ref=None)`
Use this method to cache the model, prior to `residual()` and `residual_gradient()`
methods being called on each data item. If the model
contains reference parameters e.g. for estimating rotation, these are passed
as `model_ref`.
- `residual(self, data_item) -> np.array`
Calculates the residual (error) of the data_item given the model.
These methods may also be required, depending on the model design:
- `model_is_valid(self, model, model_ref=None)`
If provided, this function can be used to reject model parameters by returning
`False` for the provided model. For instance, if a model parameter is restricted
to be positive, but is updated to have a negative value, you can flag this
problem in this method.
- `residual_gradient(self, data_item) -> np.array`
The Jacobian or derivative matrix of the residual vector with respect
to the model parameters. If the `numeric_derivs_model` parameter is set to `True`
(see below) then the derivatives are calculated numerically using the `residual()`
method. In that case omit this method.
- `residual2(self, data_item) -> np.array`
Calculates the residual (error) of the `data_item` of the type of data passed in
the `data2` array.
- `residual3(self, data_item) -> np.array`
Calculates the residual (error) of the `data_item` of the type of data passed in
the `data3` array.
- `residual_gradient2(self, data_item) -> np.array`
Calculates the Jacobian of the residual vector for a `data_item` of the
type of data passed in the `data2` array.
- `residual_gradient3(self, data_item) -> np.array`
Calculates the Jacobian of the residual vector for a `data_item` of the
type of data passed in the `data3` array.
- `linear_model_size(self) -> int`
Returns the number of parameters in the model if the model is linear.
The `BaseIRLS` class uses an internal `weighted_fit()` method to fit a linear model
to the data with specified weights, so that the programmer does not have to
implement it. If the model is non-linear, omit this method.
- `weighted_fit(self, data, weight, scale) -> (np.array, np.array)`
If `linear_model_size()` is not provided, the model is not linear. If a closed-form
solution for the best model given the data with weights nevertheless exists,
implement it in your class. The `scale`
array indicates that certain data items are less accurate and so have a
scale value > 1, indicating that the influence function for that data item
should be stretched by the given scale factor.
For non-linear problems with no closed-form solution, pass a suitable starting
point as the `model_start` (and optionally `model_ref_start`) parameters, see below.
In that case the `weighted_fit()` method should be omitted.
If second and possibly third types of data item are being used, add arguments `data2`,
`weight2`, `scale2` and `data3`, `weight3`, `scale3` as appropriate.
- `evaluator_instance` A vectorised evaluator that provides a fast implementation of the model
specific to the influence function. It should be a class instance that provides at least
the following methods.
- `set_residual_size(self, residual_size: npt.ArrayLike) -> None`
Sets the residual size for each data type.
- `objective_func(self, model: npt.ArrayLike, model_ref, influence_func_instance,
data: npt.ArrayLike, weight: npt.ArrayLike, scale: npt.ArrayLike) -> float`
Returns the total Sup-GN objective function evaluated over all data.
- `weighted_derivs(self, model: npt.ArrayLike, model_ref, influence_func_instance, lambda_val: float,
data: npt.ArrayLike, weight: npt.ArrayLike, scale: npt.ArrayLike) -> (np.array, np.array)`
Returns the sums of weighted derivatives used in the Sup-GN algorithm.
- `weighted_fit(self, data: npt.ArrayLike, weight: npt.ArrayLike, scale: npt.ArrayLike) -> np.array`
Return the model fitted to the data, taking the weights into account.
We expect that the implementation of the above vectorised functions will use [Cython](https://cython.org/).
- `weight` An array of float weight values for each data item.
If not provided, weights are initialised to one.
- `scale` An array of scale values, indicating that one or more data items are known to
have reduced accuracy, i.e. a wider influence function. The scale indicates the stretching
to apply to the influence function for that data item.
- `data2` A second array of data items. Each data item should itself be an array.
Use this if you have a second type of data item.
- `weight2` An array of float weight values for each of the second data items `data2`.
If not provided with `data2`, weights are initialised to one.
- `scale2` An array of scale values, indicating that one or more data items in `data2`
are known to have reduced accuracy, i.e. a wider influence function. The scale indicates the stretching
to apply to the influence function for that data item.
- `data3` A third array of data items. Each data item should itself be an array.
Use this if you have a third type of data item.
- `weight3` An array of float weight values for each of the third data items `data3`.
If not provided with `data3`, weights are initialised to one.
- `scale3` An array of scale values, indicating that one or more data items in `data3`
are known to have reduced accuracy, i.e. a wider influence function. The scale indicates the stretching
to apply to the influence function for that data item.
- `numeric_derivs_model: bool` Whether to calculate derivatives of the data residual vector with respect to the
model parameters numerically using a provided `residual()` method or directly
using a provided `residual_gradient()` method.
- `numeric_derivs_influence: bool` Whether to calculate derivatives of the influence function numerically
from a provided `rho()` method or directly using a provided `rhop()` method.
- `max_niterations: int` Maximum number of Sup-GN iterations to apply before aborting.
- `residual_tolerance: float` A tolerance used to terminate Sup-GN when the improvement in the
objective function value is smaller than this threshold.
- `lambda_start: float` Starting value for the Sup-GN damping factor $\lambda$, similar to Levenberg-Marquardt damping.
In Sup-GN the level of damping is high when $\lambda$ is small, so normally it is
best to start with an optimistic small value.
- `lambda_max: float` Maximum value for $\lambda$ in Sup-GN damping. This should be in the range [0,1].
- `lambda_scale: float` Scale factor to multiply $\lambda$ by when an iteration successfully reduces/increases
the objective function (depending on the +/- sign specified by
`param_instance.influence_func_instance.objective_func_sign()`, see above).
When the iteration is not successful, the model change is reverted and $ \lambda $ is divided
by this factor to increase the damping at the next iteration.
- `lambda_thres: float` Threshold for $\lambda$ below which the Sup-GN iteration switches to pure gradient-based updates.
- `diff_thres: float` Terminate when successful update changes the model parameters by less than this value.
- `messages_file: TextIO` File to print debugging information.
- `model_start` Starting value for model parameters.
- `model_ref_start` Starting reference parameters for model, e.g. if optimising rotation.
- `debug: bool` Whether to add extra debugging data to the `SupGaussNewton` class instance on exit:
- `debug_n_iterations` The number of iterations actually applied.
- `debug_model_list` A list of the model parameters at each iteration.
- `debug_diffs` A list of norm of model parameter changes applied at each iteration,
as a list of difference values.
- `debug_diff_alpha` A list of alpha values corresponding to the difference values `debug_diffs`, indicating
the GNC stage reached when each model change was applied. Alpha equal to zero indicates
the start of the GNC schedule, alpha equal to one indicates the final stage of the
GNC schedule.
- `debug_weighted_derivs_time` The total time spent calculating derivatives in seconds.
- `debug_solve_time` The total time spent solving for the model in seconds.
- `debug_total_time` The total time spent in the algorithm in seconds.
### Example code for the IRLS and Sup-GN classes
The simplest non-trivial example of an IRLS/Sup-GN model class
is to support fitting a straight line through 2D data, with the model $ y=ax+b $, where $ a $ is the gradient of the line and
$ b $ is the intercept. The model class for line fitting might look like this:
```python
import numpy as np

# Line model is y = a*x + b
class LineFit:
    def __init__(self):
        pass

    # copy model parameters and apply any internal calculations
    def cache_model(self, model, model_ref=None) -> None:
        self.__a = model[0]
        self.__b = model[1]

    # r = a*xi + b - yi
    def residual(self, data_item) -> np.array:
        x = data_item[0]
        y = data_item[1]
        return np.array([self.__a*x + self.__b - y])

    # dr/d(a b) = (x 1)
    def residual_gradient(self, data_item) -> np.array:
        x = data_item[0]
        return np.array([[x, 1.0]])

    # return number of parameters in model if the model is linear,
    # otherwise omit this method
    def linear_model_size(self) -> int:
        return 2  # a,b
```
In this case the data items will have two values each, for $ x,y $. So an example data array could be
```python
data = np.array([[0.0, 0.90], [0.1, 0.95], [0.2, 1.0], [0.3, 1.05], [0.4, 1.1]])
```
Then the code to build and run Sup-GN could look like this.
```python
from gnc_smoothie.sup_gauss_newton import SupGaussNewton
from gnc_smoothie.gnc_null_params import GNC_NullParams
from gnc_smoothie.welsch_influence_func import WelschInfluenceFunc

sigma = 0.2
param_instance = GNC_NullParams(WelschInfluenceFunc(sigma))
model_instance = LineFit()
optimiser_instance = SupGaussNewton(param_instance, data, model_instance=model_instance)
if optimiser_instance.run():
    model = optimiser_instance.final_model
    print("line a b:", model)
```
The correct line parameters should be printed:
```
line a b: [0.5 0.9]
```
To use IRLS instead, simply substitute `IRLS` (imported from `gnc_smoothie.irls`) for `SupGaussNewton` in the above code.
### Base class for IRLS and Sup-GN algorithms
Implementation in [base_irls.py](src/gnc_smoothie/base_irls.py)
Implements the many features in common between IRLS and Sup-GN. Should not be used directly in
your code.
### Accelerated Robust Linear regression
Implementation in [linear_regressor_welsch.py](src/gnc_smoothie/linear_model/linear_regressor_welsch.py).
We have implemented a vectorised Cython-based version of robust linear regression.
There is also a pure Python-based reference implementation at
[linear_regressor.py](src/gnc_smoothie/linear_model/linear_regressor.py), used mainly for
regression testing the Cython implementation. The residual model is
$$
{\bf r}_i({\bf x}) = \left( \begin{array}{c} {\bf x}_1.{\bf z}_{i1} + x_1 - z_{i1} \\ {\bf x}_2.{\bf z}_{i2} + x_2 - z_{i2} \\ ... \\ {\bf x}_m.{\bf z}_{im} + x_m - z_{im} \end{array} \right)
$$
where we organise the state estimate as a matrix
$$
X = \left( \begin{array}{cc} {\bf x}_1^\intercal & x_1 \\ {\bf x}_2^\intercal & x_2 \\ ... & ... \\ {\bf x}_m^\intercal & x_m \end{array} \right)
$$
and $m$ is the dimension of the residual vector. We can write the $i$'th observation in matrix form as
$$
Z_i = \left( \begin{array}{cc} {\bf z}_{i1}^\intercal & z_{i1} \\ {\bf z}_{i2}^\intercal & z_{i2} \\ ... & ... \\ {\bf z}_{im}^\intercal & z_{im} \end{array} \right)
$$
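Under this layout, component $ j $ of the residual reduces to a row-wise dot product plus an offset, which can be sketched in numpy as follows (the function name and the shapes are illustrative):

```python
import numpy as np

def residual(X, Z_i):
    # Row j of X is (x_j^T, x_j); row j of Z_i is (z_ij^T, z_ij).
    # Residual component j is  x_j . z_ij + x_j - z_ij, vectorised over j.
    return np.einsum('jk,jk->j', X[:, :-1], Z_i[:, :-1]) + X[:, -1] - Z_i[:, -1]

# Tiny check with m = 2 residual components and 3-dimensional z_ij vectors
rng = np.random.default_rng(0)
X = rng.normal(size=(2, 4))   # 3 vector columns plus 1 offset column per row
Z = rng.normal(size=(2, 4))
r_vec = residual(X, Z)
# Same residual computed row by row for comparison
r_loop = np.array([X[j, :3] @ Z[j, :3] + X[j, 3] - Z[j, 3] for j in range(2)])
print(np.allclose(r_vec, r_loop))  # True
```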
To use the vectorised linear regression,
incorporating our recommended Welsch influence function [2], you will need the following line:
```python
from gnc_smoothie.linear_model.linear_regressor_welsch import LinearRegressorWelsch
```
Then you will need to create a linear regression instance with a line like
```python
linear_regressor = LinearRegressorWelsch(1.0, sigma_limit=100.0, num_sigma_steps=20)
```
The only obligatory parameter is
- `sigma_base: float` The final small value of $\sigma$ in the [GNC Welsch influence function schedule](#gnc-welsch-schedule-class)
Optional parameters are:
- `sigma_limit: float` The initial high value of $\sigma$ in the [GNC Welsch influence function schedule](#gnc-welsch-schedule-class)
- `num_sigma_steps: int` The number of steps of $\sigma$ in the [GNC Welsch influence function schedule](#gnc-welsch-schedule-class)
- `max_niterations: int` Maximum number of Sup-GN iterations to apply before aborting.
- `lambda_start: float` Starting value for the Sup-GN damping factor $\lambda$, similar to Levenberg-Marquardt damping.
In Sup-GN the level of damping is high when $\lambda$ is small, so normally it is
best to start with an optimistic small value.
- `lambda_max: float` Maximum value for $\lambda$ in Sup-GN damping. This should be in the range [0,1].
- `lambda_scale: float` Scale factor to multiply $\lambda$ by when an iteration successfully reduces/increases
the objective function (depending on the +/- sign specified by
`param_instance.influence_func_instance.objective_func_sign()`, see above).
When the iteration is not successful, the model change is reverted and $ \lambda $ is divided
by this factor to increase the damping at the next iteration.
- `lambda_thres: float` Threshold for $\lambda$ below which the Sup-GN iteration switches to pure gradient-based updates.
- `diff_thres: float` Terminate when a successful update changes the model parameters by less than this value.
- `use_slow_version: bool` Whether to use the slower pure Python [implementation](src/gnc_smoothie/linear_model/linear_regressor.py).
- `messages_file: TextIO` File to print debugging information.
- `debug: bool` Whether to add extra debugging data to the `LinearRegressorWelsch` class instance on exit:
- `debug_n_iterations` The number of iterations actually applied.
- `debug_model_list` A list of the model parameters at each iteration.
```
## Usage
```bash
# Full context — metadata + transcript (default: markdown)
videocontext context "https://youtube.com/watch?v=abc123"
# Pipe directly to Claude Code
videocontext context "https://youtube.com/watch?v=abc123" | claude
# Transcript only
videocontext transcript "https://youtube.com/watch?v=abc123"
# Metadata only
videocontext metadata "https://youtube.com/watch?v=abc123"
# Extract key frames (scene-detect default)
videocontext frames "https://youtube.com/watch?v=abc123" --output-dir ./frames
# Extract frames at fixed interval
videocontext frames "https://youtube.com/watch?v=abc123" --interval 15 --max-frames 12
# Short alias
vc context "https://youtube.com/watch?v=abc123"
# JSON output
vc context "https://youtube.com/watch?v=abc123" -f json
# Save to file
vc context "https://youtube.com/watch?v=abc123" -o notes.md
```
## Commands
| Command | Description |
|---------|-------------|
| `context` | Full video context (metadata + transcript) |
| `transcript` | Transcript only |
| `metadata` | Metadata only (title, description, chapters) |
| `frames` | Key frame extraction (scene-detect or fixed-interval, requires `[vision]` extra) |
## No API Keys Required
VideoContext uses `youtube-transcript-api` and `yt-dlp` — no Google API key needed.
| text/markdown | Soloarch | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"youtube-transcript-api>=1.2.0",
"yt-dlp>=2025.1.0",
"click>=8.0.0",
"rich>=13.0.0",
"scenedetect[opencv]>=0.6.0; extra == \"vision\"",
"pytest>=8.0.0; extra == \"test\"",
"ruff>=0.11.0; extra == \"dev\"",
"pre-commit>=4.0.0; extra == \"dev\"",
"build>=1.2.0; extra == \"dev\"",
"twine>=6.0.0; extr... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:45:36.968558 | videocontext-0.1.0.tar.gz | 14,055 | 66/23/6255ec89d3a520c51ed0e68bff29a9941c5d5d5a9a400bc52a4ce6419b6f/videocontext-0.1.0.tar.gz | source | sdist | null | false | 2cffc03d264bff12f51d74f9d57522b1 | 5326471747fdde61eac50890c9ecd5cf9740c0e1f85f5eef9f9c836b6b51a7b8 | 66236255ec89d3a520c51ed0e68bff29a9941c5d5d5a9a400bc52a4ce6419b6f | MIT | [] | 251 |
2.1 | typedb-driver | 3.8.0a1 | TypeDB Driver for Python | # TypeDB Python Driver
## Driver Architecture
To learn about the mechanism that TypeDB drivers use to set up communication with databases running on the TypeDB
Server, refer to the [Drivers Overview](https://typedb.com/docs/core-concepts/drivers/overview).
## API Reference
To learn about the methods available for executing queries and retrieving their answers using Python, refer to
the [API Reference](https://typedb.com/docs/reference/typedb-grpc-drivers/python).
## Install TypeDB Python Driver through Pip
1. Install `typedb-driver` using `pip`:
```bash
pip install typedb-driver
```
2. If multiple Python versions are available, you may wish to use:
```bash
pip3 install typedb-driver
```
3. Make sure a [TypeDB Server](https://typedb.com/docs/home/install/) is
running.
4. In your python program, import from `typedb.driver` (see [Example usage](#example-usage) or `tests/integration` for examples):
```py
from typedb.driver import *
driver = TypeDB.driver(addresses=TypeDB.DEFAULT_ADDRESS, ...)
```
## Example usage
<!-- EXAMPLE_START_MARKER -->
```py
from typedb.driver import *
class TypeDBExample:
def typedb_example(self):
# Open a driver connection. Specify your parameters if needed
# The connection will be automatically closed on the "with" block exit
with TypeDB.driver(TypeDB.DEFAULT_ADDRESS, Credentials("admin", "password"),
DriverOptions(DriverTlsConfig.disabled())) as driver:
# Create a database
driver.databases.create("typedb")
database = driver.databases.get("typedb")
# Use "try" blocks to catch driver exceptions
try:
# Open transactions of 3 types
tx = driver.transaction(database.name, TransactionType.READ)
# Execute any TypeDB query using TypeQL. Wrong queries are rejected with an explicit exception
result_promise = tx.query("define entity i-cannot-be-defined-in-read-transactions;")
print("The result is still promised, so it needs resolving even in case of errors!")
result_promise.resolve()
except TypeDBDriverException as expected_exception:
print(f"Once the query's promise is resolved, the exception is revealed: {expected_exception}")
finally:
# Don't forget to close the transaction!
tx.close()
# Open a schema transaction to make schema changes
# Transactions can be opened with configurable options. This option limits its lifetime
options = TransactionOptions(transaction_timeout_millis=10_000)
# Use "with" blocks to forget about "close" operations (similarly to connections)
with driver.transaction(database.name, TransactionType.SCHEMA, options) as tx:
define_query = """
define
entity person, owns name, owns age;
attribute name, value string;
attribute age, value integer;
"""
answer = tx.query(define_query).resolve()
if answer.is_ok():
print(f"OK results do not give any extra interesting information, but they mean that the query "
f"is successfully executed!")
# Commit automatically closes the transaction. It can still be safely called inside "with" blocks
tx.commit()
# Open a read transaction to safely read anything without database modifications
with driver.transaction(database.name, TransactionType.READ) as tx:
answer = tx.query("match entity $x;").resolve()
# Collect concept rows that represent the answer as a table
rows = list(answer.as_concept_rows())
row = rows[0]
# Collect column names to get concepts by index if the variable names are lost
header = list(row.column_names())
column_name = header[0]
# Get concept by the variable name (column name)
concept_by_name = row.get(column_name)
# Get concept by the header's index
concept_by_index = row.get_index(0)
print(f"Getting concepts by variable names ({concept_by_name.get_label()}) and "
f"indexes ({concept_by_index.get_label()}) is equally correct. ")
# Check if it's an entity type before the conversion
if concept_by_name.is_entity_type():
print(f"Both represent the defined entity type: '{concept_by_name.as_entity_type().get_label()}' "
f"(in case of a doubt: '{concept_by_index.as_entity_type().get_label()}')")
# Continue querying in the same transaction if needed
answer = tx.query("match attribute $a;").resolve()
# Concept rows can be used as any other iterator
rows = [row for row in answer.as_concept_rows()]
for row in rows:
# Same for column names
column_names_iter = row.column_names()
column_name = next(column_names_iter)
concept_by_name = row.get(column_name)
# Check if it's an attribute type before the conversion
if concept_by_name.is_attribute_type():
attribute_type = concept_by_name.as_attribute_type()
print(f"Defined attribute type's label: '{attribute_type.get_label()}', "
f"value type: '{attribute_type.try_get_value_type()}'")
print(f"It is also possible to just print the concept itself: '{concept_by_name}'")
# Open a write transaction to insert data
with driver.transaction(database.name, TransactionType.WRITE) as tx:
insert_query = "insert $z isa person, has age 10; $x isa person, has age 20, has name \"John\";"
answer = tx.query(insert_query).resolve()
# Insert queries also return concept rows
rows = list(answer.as_concept_rows())
row = rows[0]
for column_name in row.column_names():
inserted_concept = row.get(column_name)
print(f"Successfully inserted ${column_name}: {inserted_concept}")
if inserted_concept.is_entity():
print("This time, it's an entity, not a type!")
# It is possible to ask for the column names again
header = [name for name in row.column_names()]
x = row.get_index(header.index("x"))
print(f"As we expect an entity instance, we can try to get its IID (unique identification): "
f"{x.try_get_iid()}. ")
if x.is_entity():
print(f"It can also be retrieved directly and safely after a cast: {x.as_entity().get_iid()}")
# Do not forget to commit if the changes should be persisted
print('CAUTION: Committing or closing (including leaving the "with" block) a transaction will '
'invalidate all its uncollected answer iterators')
tx.commit()
# Open another write transaction to try inserting even more data
with driver.transaction(database.name, TransactionType.WRITE) as tx:
# When loading a large dataset, it's often better not to resolve every query's promise immediately.
# Instead, collect promises and handle them later. Alternatively, if a commit is expected in the end,
# just call `commit`, which will wait for all ongoing operations to finish before executing.
queries = ["insert $a isa person, has name \"Alice\";", "insert $b isa person, has name \"Bob\";"]
for query in queries:
tx.query(query)
tx.commit()
with driver.transaction(database.name, TransactionType.WRITE) as tx:
# Commit will still fail if at least one of the queries produces an error.
queries = ["insert $c isa not-person, has name \"Chris\";", "insert $d isa person, has name \"David\";"]
promises = []
for query in queries:
promises.append(tx.query(query))
try:
tx.commit()
assert False, "TypeDBDriverException is expected"
except TypeDBDriverException as expected_exception:
print(f"Commit result will contain the unresolved query's error: {expected_exception}")
# Open a read transaction to verify that the previously inserted data is saved
with driver.transaction(database.name, TransactionType.READ) as tx:
# Queries can also be executed with configurable options. This option forces the database
# to include types of instance concepts in ConceptRows answers
options = QueryOptions(include_instance_types=True)
# A match query can be used for concept row outputs
var = "x"
answer = tx.query(f"match ${var} isa person;", options).resolve()
# Simple match queries always return concept rows
count = 0
for row in answer.as_concept_rows():
x = row.get(var)
x_type = x.as_entity().get_type().as_entity_type()
count += 1
print(f"Found a person {x} of type {x_type}")
print(f"Total persons found: {count}")
# A fetch query can be used for concept document outputs with flexible structure
fetch_query = """
match
$x isa! person, has $a;
$a isa! $t;
fetch {
"single attribute type": $t,
"single attribute": $a,
"all attributes": { $x.* },
};
"""
answer = tx.query(fetch_query).resolve()
# Fetch queries always return concept documents
count = 0
for document in answer.as_concept_documents():
count += 1
print(f"Fetched a document: {document}.")
print(f"This document contains an attribute of type: {document['single attribute type']['label']}")
print(f"Total documents fetched: {count}")
print("More examples can be found in the API reference and the documentation.\nWelcome to TypeDB!")
```
<!-- EXAMPLE_END_MARKER -->
| text/markdown | TypeDB Community | core@typedb.com | null | null | Apache-2.0 | typedb database graph knowledgebase knowledge-engineering | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Environment :: Console",
"Topic :: Database :: Front-Ends... | [] | https://github.com/typedb/typedb-driver | null | >0 | [] | [] | [] | [
"parse==1.18.0"
] | [] | [] | [] | [] | twine/4.0.2 CPython/3.9.20 | 2026-02-19T13:45:18.229663 | typedb_driver-3.8.0a1-py313-none-manylinux_2_17_aarch64.whl | 7,489,889 | 19/4a/f0d776cf9932378785e49fd7ba4ffd251424a0bf10c26093c1a25e54c28c/typedb_driver-3.8.0a1-py313-none-manylinux_2_17_aarch64.whl | py313 | bdist_wheel | null | false | 006fc13f7ad18e419f82d616114d3296 | 4115d8dc685ad8cc251cf91319464fd93634b711b2026a747d5a61ef248185e8 | 194af0d776cf9932378785e49fd7ba4ffd251424a0bf10c26093c1a25e54c28c | null | [] | 1,338 |
2.4 | spritefridge | 1.4.2 | A python toolbox for processing SPRITE-seq within the cooler universe | # spritefridge
A python toolbox for processing SPRITE-seq data
## Installation
To run everything correctly, a few prerequisites need to be installed, especially bedtools.
Furthermore, at the time of writing, some dependencies (`krbalancing`) refused to compile when installing with pip.
Installing these is easiest done using conda. For convenience, we provide an environment file (`env.yml`) with this package.
Installation thus works like this:
```
conda env create -f env.yml
conda activate sprite
pip install spritefridge
```
## Usage
`spritefridge` comprises five tools to process and annotate SPRITE-seq data and results. Below are some example commands. For more details please
refer to the generated help messages `spritefridge <subcommand> -h`
### extractbc
`extractbc` aims to extract barcodes from reads according to a list of used barcodes and barcode layouts (i.e. how the barcodes are arranged in the read sequence).
A typical command looks like this
```
spritefridge extractbc \
-r1 r1.fq.gz \
-r2 r2.fq.gz \
-bc barcodes.tsv \
-l1 DPM \
-l2 'Y|SPACER|ODD|SPACER|EVEN|SPACER|ODD' \
-m 'DPM:0,Y:0,EVEN:2,ODD:2' \
-o out.bcextract.fq.gz \
-p 4
```
This command will read in the barcodes and then try to find barcodes in the respective read sequence in the order given by the layouts, starting from the 5' end.
`-m` gives the allowed mismatches for the barcode identification. In addition to `out.bcextract.fq.gz` which contains reads with the extracted barcodes appended to their names, the tool also outputs statistics for how many reads were found with 1, 2, 3, ... barcodes. `-p` specifies the number of processes to use for extraction. `-l1` and `-l2` can also be left empty if barcodes are only to be extracted from one read.
### pairs
`pairs` identifies barcode clusters from aligned reads and writes them into pairs files for each cluster size
```
spritefridge pairs \
-b in.bam \
-o pairs/out \
-cl 2 \
-ch 1000 \
--separator '['
```
This command reads alignments from `in.bam` (which need to be filtered for multimappers and mapping quality beforehand), groups the reads by barcode, and then writes all possible pairs for each cluster of size between 2 and 1000 reads to a file named `pairs/out_<clustersize>.pairs`. This tool also outputs a dedicated bedfile containing all reads from each cluster, to be used to annotate the Cooler bins later on (see `annotate`). Additionally, one can specify a list of barcode name prefixes to ignore when generating the clusters via `--ignoreprefix`, e.g. when RPM and DPM sequences are present that should really be in the same cluster (`--ignoreprefix "RPM,DPM"`).
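Conceptually, each barcode cluster of n reads within the size bounds yields n*(n-1)/2 unordered read pairs. A minimal sketch under that assumption (illustrative names, not spritefridge's code):

```python
from itertools import combinations

def cluster_pairs(reads_by_barcode: dict[str, list[str]], cl: int, ch: int):
    """Yield (cluster_size, read_a, read_b) for every unordered pair of reads
    in each barcode cluster whose size falls within [cl, ch]."""
    for barcode, reads in reads_by_barcode.items():
        n = len(reads)
        if cl <= n <= ch:
            for a, b in combinations(reads, 2):
                yield n, a, b

clusters = {"bc1": ["r1", "r2", "r3"], "bc2": ["r4"]}  # bc2 is below the size cutoff
pairs = list(cluster_pairs(clusters, cl=2, ch=1000))
print(pairs)  # [(3, 'r1', 'r2'), (3, 'r1', 'r3'), (3, 'r2', 'r3')]
```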
### combine
`combine` merges cool files generated from cluster pairs files according to the SPRITE-seq recommendation by multiplying the counts of each Cooler by 2/n,
where n is the cluster size, before merging. The cluster size is inferred from the file name which needs to be of the pattern `<name>_<clustersize>.cool`
```
spritefridge combine \
-i coolers/* \
-o merged.cool \
--floatcounts
```
`--floatcounts` ensures that merged counts are stored as floats and not cast to int
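The 2/n downweighting can be sketched as follows, inferring n from the `<name>_<clustersize>.cool` filename pattern described above (illustrative helper, not spritefridge's implementation):

```python
import re

def cluster_weight(cool_path: str) -> float:
    """Infer the cluster size n from a '<name>_<clustersize>.cool' filename
    and return the SPRITE-seq downweighting factor 2/n."""
    match = re.search(r"_(\d+)\.cool$", cool_path)
    if match is None:
        raise ValueError(f"cannot infer cluster size from {cool_path!r}")
    return 2 / int(match.group(1))

print(cluster_weight("coolers/out_2.cool"))     # 1.0
print(cluster_weight("coolers/out_1000.cool"))  # 0.002
```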
### annotate
`annotate` takes in a bedfile (see `pairs`) and annotates each bin with the overlapping reads of each cluster.
```
spritefridge annotate \
-i merged.mcool \
-b clusters.bed
```
`merged.mcool` is a zoomified version of the `merged.cool` file
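Annotating a bin with the reads that overlap it boils down to a half-open interval overlap test, which can be sketched as follows (illustrative, not spritefridge's code):

```python
def overlaps(bin_start: int, bin_end: int, read_start: int, read_end: int) -> bool:
    """Half-open intervals [start, end) overlap iff each starts before the other ends."""
    return read_start < bin_end and bin_start < read_end

def reads_in_bin(bin_start: int, bin_end: int, cluster_reads: list) -> list:
    """Return the (start, end) reads of a cluster that overlap the given bin."""
    return [r for r in cluster_reads if overlaps(bin_start, bin_end, *r)]

reads = [(100, 150), (190, 260), (400, 450)]
print(reads_in_bin(0, 200, reads))  # [(100, 150), (190, 260)]
```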
### balance
`balance` is used to balance the contact matrices of the resulting mcool file using iterative correction and Knight-Ruiz matrix balancing
genomewide and per chromosome
```
spritefridge balance \
-m testdata/sprite.new.mcool \
-p 2 \
--overwrite
```
`-p` specifies the number of processes to use for iterative correction and `--overwrite` will overwrite any existing weights with the same name in the Cooler
| text/markdown | null | Daniel Malzl <daniel@menchelab.com> | null | null | MIT License Copyright (c) 2024 Daniel Malzl Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | bioinformatics, SPRITE sequencing, sequencing, NGS, cooler, barcode extraction, pairs file generation, 4DN, 3D genome | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"cooler>=0.10.2",
"pybedtools>=0.10.0",
"pysam>=0.22.1"
] | [] | [] | [] | [
"Homepage, https://github.com/dmalzl/spritefridge"
] | twine/6.0.1 CPython/3.12.8 | 2026-02-19T13:45:17.440346 | spritefridge-1.4.2.tar.gz | 70,331 | 85/ee/710200d648b5b602737ad39a186c50501e2721ac0270b2f5c0164a10a654/spritefridge-1.4.2.tar.gz | source | sdist | null | false | 2c6fe64f40f6d8332b28bb8481cd4f8e | 18f4a7d0403813e696ac49f170801c0fd22d742e7bbaf7c2373ca0ab75fe1f57 | 85ee710200d648b5b602737ad39a186c50501e2721ac0270b2f5c0164a10a654 | null | [] | 235 |
2.4 | euclidlib | 2026.2 | Unofficial package to read data from the Euclid mission | # euclidlib
[](https://pypi.org/project/euclidlib/)
[](https://github.com/euclidlib/euclidlib/actions/workflows/tests.yml)
[](https://pre-commit.com/)
[](https://docs.pytest.org/)
[](https://docs.astral.sh/ruff/)
[](https://prettier.io/)
[](https://mypy.readthedocs.io/)
[](#contributors)
## Table of Contents
- [Introduction](#introduction)
- [Installation](#installation)
- [Contributing](#contributing)
- [License](#license)
- [Contributors](#contributors)
## Introduction
`euclidlib` is an unofficial Python package designed to access official Euclid mission products provided by the Science Ground Segment. Its goal is to offer the Euclid community a user-friendly, ready-to-use library that enables immediate work with science-ready Euclid data.
The package is maintained on a best-effort basis by volunteers and contributors within the Euclid community. See the contributor list below.
## Installation
As simple as:
```sh
pip install euclidlib
```
### Prerequisites
- `python>3.7`
- `fitsio`
- `numpy`
## Structure and Format of `euclidlib`
The design of the `euclidlib` package closely follows the organisation of the [Euclid Data Product Description Documentation](http://st-dm.pages.euclid-sgs.uk/data-product-doc/dm10/) and reflects the structure of the Euclid Science Ground Segment.
```mermaid
graph TD
EUCLIDLIB[euclidlib]
LE3[le3]
PK_WL[pk_wl]
TWOPCF_WL[twopcf_wl]
PK_GC[pk_gc]
TWOPCF_GC[twopcf_gc]
PHZ[phz]
EUCLIDLIB --> LE3
EUCLIDLIB --> PHZ
LE3 --> PK_WL
LE3 --> TWOPCF_WL
LE3 --> PK_GC
LE3 --> TWOPCF_GC
```
`euclidlib` provides all data products in a unified, Pythonic format based on dataclasses, ensuring consistent, intuitive, and easy-to-use interfaces across all supported products. Please consult the full documentation for additional details.
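As a rough illustration of the dataclass-based format (the class and field names below are hypothetical, not euclidlib's actual API), a product container might look like:

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: names are illustrative, not euclidlib's real interface.
@dataclass(frozen=True)
class AngularPowerSpectrum:
    ell: list          # multipole values
    cl: list           # measured C_ell values
    meta: dict = field(default_factory=dict)  # optional product metadata

spectrum = AngularPowerSpectrum(ell=[2, 3, 4], cl=[1.0e-9, 8.0e-10, 6.0e-10])
print(spectrum.ell[0], len(spectrum.cl))  # 2 3
```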
## Contributing
If you would like to contribute, follow these steps:
1. Open an issue to let the `euclidlib` maintainers know about your contribution plans (new Euclid product? New feature? A suggestion?)
2. Create a new branch:
```sh
git checkout -b feature/your-feature-name
```
3. Commit your changes:
```sh
git commit -m 'Add some feature'
```
4. Push to the branch:
```sh
git push origin feature/your-feature-name
```
5. Open a pull request
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Contributors
This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind are welcome!
To discover the meaning of each icon, hover your mouse over it.
<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->
<table>
<tbody>
<tr>
<td align="center" valign="top" width="14.28%"><a href="http://gcanasherrera.com"><img src="https://avatars.githubusercontent.com/u/13239454?v=4?s=100" width="100px;" alt="Guadalupe Cañas-Herrera"/><br /><sub><b>Guadalupe Cañas-Herrera</b></sub></a><br /><a href="#code-gcanasherrera" title="Code">💻</a> <a href="#review-gcanasherrera" title="Reviewed Pull Requests">👀</a> <a href="#ideas-gcanasherrera" title="Ideas, Planning, & Feedback">🤔</a> <a href="#maintenance-gcanasherrera" title="Maintenance">🚧</a> <a href="#test-gcanasherrera" title="Tests">⚠️</a> <a href="#example-gcanasherrera" title="Examples">💡</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://ntessore.page"><img src="https://avatars.githubusercontent.com/u/3993688?v=4?s=100" width="100px;" alt="Nicolas Tessore"/><br /><sub><b>Nicolas Tessore</b></sub></a><br /><a href="#code-ntessore" title="Code">💻</a> <a href="#review-ntessore" title="Reviewed Pull Requests">👀</a> <a href="#ideas-ntessore" title="Ideas, Planning, & Feedback">🤔</a> <a href="#example-ntessore" title="Examples">💡</a> <a href="#maintenance-ntessore" title="Maintenance">🚧</a> <a href="#test-ntessore" title="Tests">⚠️</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/zahrabaghkhani"><img src="https://avatars.githubusercontent.com/u/47903409?v=4?s=100" width="100px;" alt="Zahra Baghkhani"/><br /><sub><b>Zahra Baghkhani</b></sub></a><br /><a href="#code-zahrabaghkhani" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://jaimeruizzapatero.net/"><img src="https://avatars.githubusercontent.com/u/39957598?v=4?s=100" width="100px;" alt="Jaime RZ"/><br /><sub><b>Jaime RZ</b></sub></a><br /><a href="#review-JaimeRZP" title="Reviewed Pull Requests">👀</a> <a href="#ideas-JaimeRZP" title="Ideas, Planning, & Feedback">🤔</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/itutusaus"><img src="https://avatars.githubusercontent.com/u/20775836?v=4?s=100" width="100px;" alt="itutusaus"/><br /><sub><b>itutusaus</b></sub></a><br /><a href="#review-itutusaus" title="Reviewed Pull Requests">👀</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/FelicitasKeil"><img src="https://avatars.githubusercontent.com/u/70713596?v=4?s=100" width="100px;" alt="Felicitas Keil"/><br /><sub><b>Felicitas Keil</b></sub></a><br /><a href="#code-FelicitasKeil" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/WillHartley"><img src="https://avatars.githubusercontent.com/u/6814229?v=4?s=100" width="100px;" alt="WillHartley"/><br /><sub><b>WillHartley</b></sub></a><br /><a href="#ideas-WillHartley" title="Ideas, Planning, & Feedback">🤔</a> <a href="#data-WillHartley" title="Data">🔣</a></td>
</tr>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/FlorianDubath"><img src="https://avatars.githubusercontent.com/u/9742907?v=4?s=100" width="100px;" alt="FlorianDubath"/><br /><sub><b>FlorianDubath</b></sub></a><br /><a href="#ideas-FlorianDubath" title="Ideas, Planning, & Feedback">🤔</a> <a href="#data-FlorianDubath" title="Data">🔣</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/jacopo-salvalaggio"><img src="https://avatars.githubusercontent.com/u/99494103?v=4?s=100" width="100px;" alt="Jacopo Salvalaggio"/><br /><sub><b>Jacopo Salvalaggio</b></sub></a><br /><a href="#code-jacopo-salvalaggio" title="Code">💻</a> <a href="#ideas-jacopo-salvalaggio" title="Ideas, Planning, & Feedback">🤔</a> <a href="#data-jacopo-salvalaggio" title="Data">🔣</a></td>
</tr>
</tbody>
</table>
<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- ALL-CONTRIBUTORS-LIST:END -->
| text/markdown | null | Nicolas Tessore <n.tessore@ucl.ac.uk>, Guadalupe Canas-Herrera <guadalupe.canasherrera@esa.int> | null | Nicolas Tessore <n.tessore@ucl.ac.uk>, Guadalupe Canas-Herrera <guadalupe.canasherrera@esa.int> | null | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"cosmolib",
"fitsio",
"numpy",
"pytest>=6.0; extra == \"test\""
] | [] | [] | [] | [
"Repository, https://github.com/euclidlib/euclidlib",
"Issues, https://github.com/euclidlib/euclidlib/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:45:15.103120 | euclidlib-2026.2.tar.gz | 2,826,346 | f1/ef/e468dc018bcc7a20479e0dc320e74129113dfc1e5f454b74a468a1638633/euclidlib-2026.2.tar.gz | source | sdist | null | false | b76a3e5d0b979fa63eaf3a5251f0d486 | 94205f2748970837e90ad0d0ee5ee3c30618d369b05ee68f0c1fc8cc4dd4fbf5 | f1efe468dc018bcc7a20479e0dc320e74129113dfc1e5f454b74a468a1638633 | MIT | [
"LICENSE"
] | 250 |
2.4 | pydantic-settings | 2.13.1 | Settings management using Pydantic | # pydantic-settings
[](https://github.com/pydantic/pydantic-settings/actions/workflows/ci.yml?query=branch%3Amain)
[](https://codecov.io/gh/pydantic/pydantic-settings)
[](https://pypi.python.org/pypi/pydantic-settings)
[](https://github.com/pydantic/pydantic-settings/blob/main/LICENSE)
[](https://pepy.tech/project/pydantic-settings)
[](https://github.com/pydantic/pydantic-settings)
Settings management using Pydantic.
See [documentation](https://docs.pydantic.dev/latest/concepts/pydantic_settings/) for more details.
| text/markdown | null | Samuel Colvin <s@muelcolvin.com>, Eric Jolibois <em.jolibois@gmail.com>, Hasan Ramezani <hasan.r67@gmail.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Environment :: MacOS X",
"Framework :: Pydantic",
"Framework :: Pydantic :: 2",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: System Administrators",
"License :: OSI Ap... | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic>=2.7.0",
"python-dotenv>=0.21.0",
"typing-inspection>=0.4.0",
"boto3-stubs[secretsmanager]; extra == \"aws-secrets-manager\"",
"boto3>=1.35.0; extra == \"aws-secrets-manager\"",
"azure-identity>=1.16.0; extra == \"azure-key-vault\"",
"azure-keyvault-secrets>=4.8.0; extra == \"azure-key-vault\"... | [] | [] | [] | [
"Homepage, https://github.com/pydantic/pydantic-settings",
"Funding, https://github.com/sponsors/samuelcolvin",
"Source, https://github.com/pydantic/pydantic-settings",
"Changelog, https://github.com/pydantic/pydantic-settings/releases",
"Documentation, https://docs.pydantic.dev/dev-v2/concepts/pydantic_set... | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:45:08.055663 | pydantic_settings-2.13.1.tar.gz | 223,826 | 52/6d/fffca34caecc4a3f97bda81b2098da5e8ab7efc9a66e819074a11955d87e/pydantic_settings-2.13.1.tar.gz | source | sdist | null | false | 11f7d886512c22a68c740c3e2b58651f | b4c11847b15237fb0171e1462bf540e294affb9b86db4d9aa5c01730bdbe4025 | 526dfffca34caecc4a3f97bda81b2098da5e8ab7efc9a66e819074a11955d87e | MIT | [
"LICENSE"
] | 8,998,024 |
2.4 | deen-api-client | 1.1.0 | Python client for Deen API from Imaniro- Access authentic hadith collections. | # Deen API Python Client
A Python client for the Deen API, providing easy access to Islamic resources including Hadith, Quran verses, and Duas.
## Installation
```bash
pip install deen-api-client
```
## Quick Start
```python
from deen_api import ImaniroDeenAPIClient
# Initialize client with your API key
client = ImaniroDeenAPIClient(api_key="sk_12345")
# Get hadiths from Sahih al-Bukhari
hadiths = client.get_hadiths(book="Sahih al-Bukhari", max_limits=5)
for hadith in hadiths:
print(f"Book: {hadith.book}")
print(f"Chapter: {hadith.chapter}")
print(f"Text: {hadith.text}")
print(f"Translation: {hadith.translation}")
print("---")
# Get Quran verses
verses = client.get_quran_verses(surah="Al-Fatiha", max_limits=3)
# Get Duas
duas = client.get_duas(category="morning", max_limits=5)
```
## Features
- Hadith Access: Retrieve hadiths from various books
- Quran Verses: Access Quranic verses with translations (under development)
- Islamic Duas: Get supplications for various occasions (under development)
- Error Handling: Comprehensive exception handling
## Error Handling
The client provides specific exception types:
```python
from deen_api import AuthenticationError, RateLimitError, NotFoundError
try:
hadiths = client.get_hadiths(book="Sahih al-Bukhari")
except AuthenticationError:
print("Invalid API key")
except RateLimitError:
print("Rate limit exceeded")
except NotFoundError:
print("Resource not found")
```
## Example Usage Files
### `examples/hadith_example.py`
```python
from deen_api import ImaniroDeenAPIClient
def hadith_example():
client = ImaniroDeenAPIClient(api_key="sk_12345")
try:
# Get hadiths from Sahih al-Bukhari
hadiths = client.get_hadiths(book="Sahih al-Bukhari", max_limits=3)
print("Hadiths from Sahih al-Bukhari:")
for i, hadith in enumerate(hadiths, 1):
print(f"\n{i}. {hadith.hadith}")
print(f"Translation: {hadith.translation}")
print("-" * 50)
except Exception as e:
print(f"Error: {e}")
if __name__ == "__main__":
hadith_example()
```
| text/markdown | Imaniro pvt ltd | info@imaniro.com | null | null | null | islamic, sunnah, bukhari, api, hadiths, quran, dua, muslim, daily hadiths | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python ... | [] | https://github.com/imaniro-tech/deen-api-python-client | null | >=3.7 | [] | [] | [] | [
"requests>=2.25.0",
"dataclasses>=0.6; python_version < \"3.7\"",
"pytest>=6.0.0; extra == \"dev\"",
"requests-mock>=1.9.0; extra == \"dev\"",
"pytest-cov>=2.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Documentation, https://deen.imaniro.com/documentation",
"Source, https://github.com/imaniro-tech/deen-api-python-client",
"Tracker, https://github.com/imaniro-tech/deen-api-python-client/issues"
] | twine/6.2.0 CPython/3.12.8 | 2026-02-19T13:44:15.333866 | deen_api_client-1.1.0.tar.gz | 6,413 | 4f/24/21268abf84f4df632e0a300ee5838f74547e1e07091ab075934c7ec10c85/deen_api_client-1.1.0.tar.gz | source | sdist | null | false | b2595c2afeacf951ad3459d4a29e26b1 | a49866258e8773ab09c7e78a04a76249a5fae29c88ce0b5bc6a6b1fd18173c3a | 4f2421268abf84f4df632e0a300ee5838f74547e1e07091ab075934c7ec10c85 | null | [
"LICENSE"
] | 256 |
2.4 | quokka-sharp | 2.7.2 | Quokka Sharp |
Quokka Sharp is a tool for simulating and equivalence checking
of quantum circuits based on weighted model counting.
Please install a weighted model counting tool first.
Please refer to https://github.com/System-Verification-Lab/Quokka-Sharp for more details.
| null | System Verification Lab | j.mei@liacs.leidenuniv.nl | null | null | null | python, quantum circuits | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux"
] | [] | null | null | null | [] | [] | [] | [
"numpy>=2.2"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T13:44:12.082022 | quokka_sharp-2.7.2.tar.gz | 33,684 | 68/bc/f58990ac401c079c20b14dd6ba077a1cf14b811cb8bbc3bf85ef0b770daf/quokka_sharp-2.7.2.tar.gz | source | sdist | null | false | 1a72f4cd54a5bcb17a0f98791e9ffbc1 | aa347b40fd9b1b15619f985684f27a155ed63791a3a7326d45d6d569492c4dfc | 68bcf58990ac401c079c20b14dd6ba077a1cf14b811cb8bbc3bf85ef0b770daf | null | [] | 251 |
2.4 | gcp-chirp | 0.1.0 | A premium CLI tool for Google Cloud Text-to-Speech Chirp 3 HD voices | # 🎙️ gcp-chirp
A premium CLI tool for working with **Google Cloud Text-to-Speech Chirp 3 HD** voices. Chirp 3 HD is a new generation of expressive, high-fidelity voices powered by Google's latest LLMs.
## 📋 Prerequisites
Before you begin, ensure you have the following installed and configured:
1. **FFmpeg**: Required for audio processing.
```bash
# macOS
brew install ffmpeg
```
2. **Google Cloud Project**:
- Enable the **Text-to-Speech API**.
- Set up **Application Default Credentials (ADC)**. Follow the [official documentation](https://docs.cloud.google.com/docs/authentication/application-default-credentials).
- *Note*: You may need to perform a one-time setup for the `gcloud` CLI:
```bash
gcloud auth application-default login
```
## 🚀 Features
- **Expressive Synthesis**: High-fidelity audio with natural intonation.
- **Voice Listing**: Easily list available Chirp 3 HD voices across languages.
- **Modern CLI**: Beautiful output powered by `rich` and `typer`.
- **Managed by uv**: Lightning fast dependency management.
## 📦 Installation
### Via PyPI (Recommended)
This is the easiest way to install the tool globally:
```bash
uv tool install gcp-chirp
```
### From Source
If you are developing or want to run it locally:
```bash
git clone https://github.com/msampathkumar/gcp-chirp
cd gcp-chirp
uv sync
```
## 🗑 Uninstallation
To remove the tool and its local data:
1. **Clean local configuration**:
```bash
gcp-chirp uninstall
```
2. **Remove the global tool**:
```bash
uv tool uninstall gcp-chirp
```
## 🔑 Authentication
The tool uses Google Cloud **Application Default Credentials (ADC)** by default.
1. **ADC (Recommended)**: Run the following command if you haven't already:
```bash
gcloud auth application-default login
```
2. **Service Account Key**: If using a service account JSON:
```bash
export GOOGLE_APPLICATION_CREDENTIALS="path/to/your/key.json"
```
3. **CLI Option**: Alternatively, pass the path directly via the `--creds` option in the `say` command.
## 🛠 Usage
### 🚀 Quick Start (One-time Setup)
Run the setup wizard to check dependencies (FFmpeg, gcloud), configure authentication, and set initial defaults:
```bash
uv run gcp-chirp setup
```
### Configuration Precedence
The tool follows a strict precedence for settings:
1. **CLI Flags**: (e.g., `--project`, `--voice`, `--play`) always win.
2. **Config File**: (`~/.gcp-chirp/settings.yaml`).
3. **Environment Variables**: (e.g., `GOOGLE_CLOUD_PROJECT`).
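The precedence rule above can be modeled as a layered dictionary merge, applying the lowest-priority source first so later layers override it. A minimal sketch of the idea (not the tool's actual implementation; the setting names are hypothetical):

```python
def resolve_settings(cli_flags, config_file, env_vars):
    """Merge settings so CLI flags beat the config file, which beats env vars."""
    merged = {}
    # Apply lowest priority first; later updates win.
    for layer in (env_vars, config_file, cli_flags):
        merged.update({k: v for k, v in layer.items() if v is not None})
    return merged

settings = resolve_settings(
    cli_flags={"project": "cli-project", "voice": None},  # --voice not passed
    config_file={"voice": "en-US-Chirp3-HD-Charon", "project": "file-project"},
    env_vars={"project": "env-project"},
)
print(settings)  # {'project': 'cli-project', 'voice': 'en-US-Chirp3-HD-Charon'}
```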
### ⚙️ Configuration Commands
#### Interactive Setup
Set your default project, voice, and output preferences:
```bash
uv run gcp-chirp config
```
#### View Current Configuration
```bash
uv run gcp-chirp config --show
```
#### Reset Configuration
Reset all settings to factory defaults:
```bash
uv run gcp-chirp config-reset
```
### 📋 Action Commands
#### List Voices
List available voices for a language (uses default language if omitted):
```bash
uv run gcp-chirp list --project my-temporary-project
```
#### Synthesize Speech
Convert text to audio with overrides:
```bash
uv run gcp-chirp say "Hello, I am synthesized using a specific project and voice!" --voice en-US-Chirp3-HD-Charon --project my-project --play
```
*Note: Files are saved in the configured `output_dir` with a timestamped filename unless `--output` is provided.*
## 🏗 Track Status
Managed via `conductor/tracks.md`.
---
*Built with ❤️ for the Empire.*
| text/markdown | null | Sampath M <sampathm@google.com> | null | null | null | ai, chirp, chirp-3-hd, cli, google-cloud, text-to-speech, tts | [
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Multimedia :: Sound/Audio :: Speech"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"google-cloud-texttospeech>=2.34.0",
"python-dotenv>=1.2.1",
"pyyaml>=6.0.3",
"rich>=14.3.2",
"typer>=0.24.0"
] | [] | [] | [] | [
"Homepage, https://github.com/msampathkumar/gcp-chirp",
"Repository, https://github.com/msampathkumar/gcp-chirp"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T13:43:57.262044 | gcp_chirp-0.1.0.tar.gz | 40,880 | d5/9f/e602da1bfa73c3d38af246ac665604c6d028181ed7999b03cf9a8b9896a8/gcp_chirp-0.1.0.tar.gz | source | sdist | null | false | e529d5ce1d54938ba12aa1ad413f8fd4 | 145ca52df8f0a3b72990ecc8012b20759f4605e3818c91a07228d4057a842ae3 | d59fe602da1bfa73c3d38af246ac665604c6d028181ed7999b03cf9a8b9896a8 | null | [
"LICENSE"
] | 244 |
2.4 | pym2v | 0.2.0 | Python wrapper to interact with the Eurogard m2v IoT platform | # pym2v



Python wrapper to interact with [m2v][1] industrial IoT platform from [Eurogard][2].
## Prerequisites
- Python 3.12+
- Programmatic access to the Eurogard API
## Installation
pym2v is available as a Python package and can be installed via pip or [uv][3].
### Via pip
1. Create a virtual environment: `python3 -m venv .venv`
1. Activate the virtual environment: `source .venv/bin/activate`
1. Install pym2v via pip: `pip install pym2v`
### Via uv
1. Install pym2v via uv: `uv add pym2v`
## Configuration
To authenticate with the Eurogard API, you need to provide the following credentials:
- Username
- Password
- Client ID
- Client Secret
You can do this either by using an `.env` file or by setting environment variables directly.
### Using an .env file
Rename the `.env.example` at the root of the project to `.env`, and replace the placeholder values with your actual credentials.
```
EUROGARD_BASEURL=https://eurogard.cloud
EUROGARD_USERNAME=your_username_here
EUROGARD_PASSWORD=your_password_here
EUROGARD_CLIENT_ID=your_client_id_here
EUROGARD_CLIENT_SECRET=your_client_secret_here
```
## Usage
Import the `EurogardAPI` object and create an instance of it
```python
from datetime import datetime, timedelta
from pym2v.api import EurogardAPI
api = EurogardAPI()
```
Retrieve a list of machines
```python
machines = api.get_machines()
```
Get the UUID of the machine you are interested in
```python
MACHINE_NAME = "1337Machine"
machine_uuid = api.get_machine_uuid(MACHINE_NAME, machines)
```
Get the names of the measurements for which you'd like to pull data
```python
result = api.get_machine_measurement_names(machine_uuid)
```
Turn the data returned by the API into a DataFrame for easier handling
```python
import polars as pl
measurement_names_df = pl.DataFrame(result["entities"])
```
Get actual data
```python
START_DATE = datetime(2025, 1, 1)
END_DATE = datetime(2025, 1, 13)
INTERVAL = timedelta(seconds=60)
MAX_FRAME_LENGTH = timedelta(days=30)
NAMES = [col.strip() for col in measurement_names_df.get_column("name").to_list()]
data_df = api.get_long_frame_from_names(
machine_uuid=machine_uuid,
names=NAMES,
start=START_DATE,
end=END_DATE,
interval=INTERVAL,
max_frame_length=MAX_FRAME_LENGTH,
)
```
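The `max_frame_length` parameter caps how much data is requested at a time; conceptually, the requested range is split into consecutive windows no longer than that limit. A standard-library sketch of that windowing idea (an assumption about the mechanism, not pym2v's actual code):

```python
from datetime import datetime, timedelta

def split_range(start, end, max_len):
    """Split [start, end) into consecutive windows of at most max_len each."""
    windows = []
    cursor = start
    while cursor < end:
        window_end = min(cursor + max_len, end)
        windows.append((cursor, window_end))
        cursor = window_end
    return windows

# A 73-day request split into 30-day windows yields three API-sized chunks.
windows = split_range(datetime(2025, 1, 1), datetime(2025, 3, 15), timedelta(days=30))
print(len(windows))  # -> 3
```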
## Contributing
Check out [CONTRIBUTING.md](CONTRIBUTING.md) for further information.
[1]: https://eurogard.de
[2]: https://eurogard.de/software/m2v/
[3]: https://docs.astral.sh/uv/
| text/markdown | Stefan Langenbach | Stefan Langenbach <stefan.langenbach@bytecare.tech> | Stefan Langenbach | Stefan Langenbach <stefan.langenbach@bytecare.tech> | null | eurogard, m2v, iot, iiot, api | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Programming Language :: Python :: 3",
"Topic :: Software Development",
"Typing :: Typed"
] | [] | null | null | <3.14,>=3.12 | [] | [] | [] | [
"httpx>=0.28.0",
"httpx-auth>=0.23.0",
"polars>=1.0.0",
"pydantic-settings>=2.6.1",
"tenacity>=9.0.0",
"tqdm>=4.67.1"
] | [] | [] | [] | [
"Homepage, https://www.bytecare.tech",
"Documentation, https://www.bytecare.tech/pym2v",
"Repository, https://github.com/bytecaretech/pym2v"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T13:43:17.192783 | pym2v-0.2.0.tar.gz | 20,052 | ec/7d/76811c5231f7ef8794698e7f64bf967c52173251957751ab9350e0839861/pym2v-0.2.0.tar.gz | source | sdist | null | false | 5474c10ae3897df7290464a5ae6dc787 | 48fbd12ca18d6cead890e8ca5d73384102cb6827cde4e8b8af5c333af4ba0d27 | ec7d76811c5231f7ef8794698e7f64bf967c52173251957751ab9350e0839861 | AGPL-3.0-only | [
"LICENSE"
] | 233 |
2.4 | layoutir | 1.0.4 | Production-grade Document Ingestion & Canonicalization Engine | # Document IR - Production Document Ingestion Engine
**An IR-first, extensible document compiler for AI systems.**
This is NOT a PDF-to-Markdown script. It is a production-grade document ingestion and canonicalization engine designed with compiler-like architecture: Input → IR → Backends.
## Architecture
### Design Philosophy
Think like a compiler engineer:
- **Input Layer**: Format-specific parsers (currently PDF via Docling)
- **AST/IR**: Canonical intermediate representation with strict schema
- **Backends**: Multiple export formats (Markdown, Text, Parquet)
### Layer Separation (Non-Negotiable)
```
┌─────────────────────────────────────────┐
│ Input Adapter Layer │
│ Format-specific parsing only │
└────────────────────┬────────────────────┘
│
┌────────────────────▼────────────────────┐
│ Extraction Layer │
│ Extract raw structural elements │
└────────────────────┬────────────────────┘
│
┌────────────────────▼────────────────────┐
│ Normalization Layer │
│ Convert to canonical IR with hashing │
└────────────────────┬────────────────────┘
│
┌────────────────────▼────────────────────┐
│ Canonical IR Layer │
│ Typed schema, stable IDs, relationships│
└────────────────────┬────────────────────┘
│
┌────────────────────▼────────────────────┐
│ Export Layer │
│ Markdown, Text, Parquet, Assets │
└─────────────────────────────────────────┘
```
## Key Features
### ✅ Deterministic & Idempotent
- Hash-based stable IDs (document, block, table, image, chunk)
- Running pipeline twice produces identical output
- No UUIDs, no randomness
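Hash-based IDs can be derived from stable content plus position, so reprocessing the same document always reproduces the same IDs. A minimal sketch of that idea using `hashlib` (layoutir's actual ID scheme may differ in its exact fields and truncation):

```python
import hashlib

def block_id(document_id: str, page: int, index: int, content: str) -> str:
    """Deterministic block ID: identical inputs always hash to the same ID."""
    payload = f"{document_id}|{page}|{index}|{content}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:16]

a = block_id("doc-abc", 1, 0, "Introduction")
b = block_id("doc-abc", 1, 0, "Introduction")
assert a == b  # idempotent: rerunning the pipeline yields the identical ID
```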
### ✅ Canonical IR Schema
```python
Document
├── document_id: str (hash-based)
├── schema_version: str
├── parser_version: str
├── metadata: DocumentMetadata
├── blocks: List[Block]
│ ├── block_id: str (deterministic)
│ ├── type: BlockType (heading, paragraph, table, image, etc.)
│ ├── content: str
│ ├── page_number: int
│ ├── bbox: BoundingBox
│ └── metadata: dict
└── relationships: List[Relationship]
```
### ✅ Pluggable Chunking
- `SemanticSectionChunker`: Section-based (headings)
- `TokenWindowChunker`: Fixed token windows with overlap
- `LayoutAwareChunker`: Layout-aware (stub)
All chunking operates on IR, not raw text.
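The token-window strategy with overlap can be sketched in a few lines over a block's token sequence; this illustrates the windowing idea only, not the library's `TokenWindowChunker`:

```python
def token_windows(tokens, size, overlap):
    """Return fixed-size windows that overlap by `overlap` tokens."""
    step = size - overlap
    return [tokens[i:i + size] for i in range(0, max(len(tokens) - overlap, 1), step)]

tokens = list(range(10))
chunks = token_windows(tokens, size=4, overlap=1)
print(chunks)  # overlapping windows; each chunk shares 1 token with the next
```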
### ✅ Multiple Export Formats
- **Markdown**: Human-readable with formatting
- **Plain Text**: Simple text extraction
- **Parquet**: Efficient structured storage for tables/blocks
- **Assets**: Extracted images (PNG) and tables (CSV)
### ✅ Structured Output
```
/<document_id>/
manifest.json # Processing metadata
ir.json # Canonical IR
chunks.json # Chunk definitions
/assets/
/images/ # Extracted images
/tables/ # Tables as CSV
/exports/
/markdown/ # Markdown output
/text/ # Plain text output
/parquet/ # Parquet datasets
/logs/ # Processing logs
```
## Installation
**IMPORTANT**: LayoutIR requires PyTorch with CUDA 13.0 support for GPU acceleration. Install PyTorch first:
```bash
# Step 1: Install PyTorch with CUDA 13.0 (REQUIRED)
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu130
# Step 2: Install LayoutIR
pip install layoutir
```
### Alternative Installation Methods
```bash
# Install from source
git clone https://github.com/RahulPatnaik/layoutir.git
cd layoutir
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu130
pip install -e .
```
**Note**: The package intentionally does not include PyTorch in its base dependencies to ensure you get the correct CUDA version. Any existing PyTorch installation will be overwritten by the CUDA 13.0 version.
## Usage
### Basic Usage
```bash
# Using the CLI
layoutir --input file.pdf --output ./out
# Or using Python directly
python -m layoutir.cli --input file.pdf --output ./out
```
### Advanced Options
```bash
# Semantic chunking (default)
layoutir --input file.pdf --output ./out --chunk-strategy semantic
# Token-based chunking with custom size
layoutir --input file.pdf --output ./out \
--chunk-strategy token \
--chunk-size 1024 \
--chunk-overlap 128
# Enable GPU acceleration
layoutir --input file.pdf --output ./out --use-gpu
# Debug mode with structured logging
layoutir --input file.pdf --output ./out \
--log-level DEBUG \
--structured-logs
```
### Python API
```python
from pathlib import Path
from layoutir import Pipeline
from layoutir.adapters import DoclingAdapter
from layoutir.chunking import SemanticSectionChunker
# Create pipeline
adapter = DoclingAdapter(use_gpu=True)
chunker = SemanticSectionChunker(max_heading_level=2)
pipeline = Pipeline(adapter=adapter, chunk_strategy=chunker)
# Process document
document = pipeline.process(
input_path=Path("document.pdf"),
output_dir=Path("./output")
)
# Access results
print(f"Extracted {len(document.blocks)} blocks")
print(f"Document ID: {document.document_id}")
```
## Project Structure
```
src/layoutir/
├── schema.py # Canonical IR schema (Pydantic)
├── pipeline.py # Main orchestrator
│
├── adapters/ # Input adapters
│ ├── base.py # Abstract interface
│ └── docling_adapter.py # PDF via Docling
│
├── extraction/ # Raw element extraction
│ └── docling_extractor.py
│
├── normalization/ # IR normalization
│ └── normalizer.py
│
├── chunking/ # Chunking strategies
│ └── strategies.py
│
├── exporters/ # Export backends
│ ├── markdown_exporter.py
│ ├── text_exporter.py
│ ├── parquet_exporter.py
│ └── asset_writer.py
│
└── utils/
├── hashing.py # Deterministic ID generation
└── logging_config.py # Structured logging
ingest.py # CLI entrypoint
benchmark.py # Performance benchmark
test_pipeline.py # Integration test
```
## Design Constraints
### ✅ What We DO
- Strict layer separation
- Deterministic processing
- Schema validation
- Pluggable strategies
- Observability/timing
- Efficient storage (Parquet)
### ❌ What We DON'T DO
- Mix business logic into adapters
- Hardcode paths or configurations
- Use non-deterministic IDs (UUIDs)
- Combine IR and export logic
- Skip schema validation
- Load entire files into memory unnecessarily
## Extensibility
### Adding New Input Formats
1. Implement `InputAdapter` interface:
```python
class DocxAdapter(InputAdapter):
def parse(self, file_path: Path) -> Any: ...
def supports_format(self, file_path: Path) -> bool: ...
def get_parser_version(self) -> str: ...
```
2. Implement corresponding extractor
3. Update pipeline to use new adapter
### Adding New Chunk Strategies
```python
class CustomChunker(ChunkStrategy):
def chunk(self, document: Document) -> List[Chunk]:
# Operate on IR blocks
...
```
### Adding New Export Formats
```python
class JsonExporter(Exporter):
def export(self, document: Document, output_dir: Path, chunks: List[Chunk]):
# Export from canonical IR
...
```
## Performance
Designed to handle 200+ page PDFs efficiently:
- Streaming processing where possible
- Lazy loading of heavy dependencies
- GPU acceleration support
- Parallel export operations
- Efficient Parquet storage for tables
## Observability
- Structured JSON logging
- Stage-level timing metrics
- Extraction statistics
- Deterministic output for debugging
## Schema Versioning
Current schema version: `1.0.0`
Future schema changes will be tracked via semantic versioning:
- Major: Breaking changes to IR structure
- Minor: Backwards-compatible additions
- Patch: Bug fixes
## Future Enhancements
- [ ] DOCX input adapter
- [ ] HTML input adapter
- [ ] Advanced layout-aware chunking
- [ ] Parallel page processing
- [ ] Incremental updates (only reprocess changed pages)
- [ ] Vector embeddings export
- [ ] OCR fallback for scanned PDFs
## License
See project root for license information.
## Contributing
This is a research/prototype phase project. See main project README for contribution guidelines.
# layoutir
| text/markdown | null | Rahul Patnaik <rpatnaik2005@gmail.com> | null | Rahul Patnaik <rpatnaik2005@gmail.com> | Apache-2.0 | pdf, document, parsing, ir, layout, extraction, chunking | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"pydantic>=2.0.0",
"docling>=1.0.0",
"pandas>=2.0.0",
"pyarrow>=10.0.0",
"torch>=2.5.0; extra == \"cuda\"",
"torchvision>=0.20.0; extra == \"cuda\"",
"pytest>=7.0; extra == \"dev\"",
"black>=23.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/RahulPatnaik/layoutir",
"Documentation, https://github.com/RahulPatnaik/layoutir/blob/main/README.md",
"Repository, https://github.com/RahulPatnaik/layoutir"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:42:57.395430 | layoutir-1.0.4.tar.gz | 44,001 | 5f/37/1a96ebc308245ad98e287be373b3d61ca176395ceacfd318e4fe4e846c80/layoutir-1.0.4.tar.gz | source | sdist | null | false | f946dff7f2a5d24b1935606d79e84a65 | 17e8453c6d7585ba767399fd324587250f20512738929d597585f17a45d57bf9 | 5f371a96ebc308245ad98e287be373b3d61ca176395ceacfd318e4fe4e846c80 | null | [
"LICENSE"
] | 264 |
2.4 | pytola | 0.1.0 | Pytola: Essential Utilities for Python Devs. | =======
Pytola
=======
.. image:: https://img.shields.io/pypi/v/pytola.svg
:target: https://pypi.python.org/pypi/pytola
.. image:: https://img.shields.io/travis/gooker_young/pytola.svg
:target: https://travis-ci.com/gooker_young/pytola
.. image:: https://readthedocs.org/projects/pytola/badge/?version=latest
:target: https://pytola.readthedocs.io/en/latest/?version=latest
:alt: Documentation Status
Pytola: Essential Utilities for Python Devs - Now with Rust-powered performance!
* Free software: MIT license
* Documentation: https://pytola.readthedocs.io/zh-cn/stable/
Features
--------
* **High-performance math functions**: Core algorithms implemented in Rust run significantly faster than pure Python
* **Seamless integration**: The Python interface automatically falls back to a pure-Python implementation, with no extra configuration
* **Easy to use**: A simple, intuitive API design
* **Cross-platform**: Supports Windows, macOS, and Linux
Installation
------------
Install from PyPI::
pip install pytola
Or install from source::
git clone https://gitee.com/gooker_young/pytola.git
cd pytola
pip install .
Development install::
git clone https://gitee.com/gooker_young/pytola.git
cd pytola
pip install -e .
Usage
-----
Basic usage::
from pytola import pytola
# Compute a Fibonacci number
result = pytola.fibonacci(10) # returns 55
print(f"Fibonacci(10) = {result}")
# Compute a factorial
result = pytola.factorial(5) # returns 120
print(f"Factorial(5) = {result}")
# Primality test
is_prime = pytola.is_prime(17) # returns True
print(f"Is 17 prime? {is_prime}")
# Check whether the Rust extension is available
has_rust = pytola.has_rust_extension()
print(f"Rust extension available: {has_rust}")
API Reference
-------------
fibonacci(n: int) -> int
Compute the n-th Fibonacci number
factorial(n: int) -> int
Compute the factorial of n
is_prime(n: int) -> bool
Check whether n is prime
has_rust_extension() -> bool
Check whether the Rust extension is enabled
Alias functions:
- fib(n) → fibonacci(n)
- fact(n) → factorial(n)
- prime_check(n) → is_prime(n)
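The automatic fallback mentioned in the features is a common pattern: try to import the compiled extension, and fall back to pure Python when it is missing. A sketch of that pattern (not pytola's exact code; the ``_rust_ext`` module name is hypothetical), using a pure-Python ``fibonacci`` as the fallback:

.. code-block:: python

    try:
        from pytola import _rust_ext as _impl  # hypothetical compiled module
        HAS_RUST = True
    except ImportError:
        HAS_RUST = False

        class _impl:  # pure-Python fallback
            @staticmethod
            def fibonacci(n):
                a, b = 0, 1
                for _ in range(n):
                    a, b = b, a + b
                return a

    print(_impl.fibonacci(10))  # 55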
Performance
-----------
Pytola implements its core algorithms in Rust, giving a significant performance improvement over a pure-Python implementation for large inputs:
+----------------+--------------+---------------+
| Function       | Python time  | Rust time     |
+================+==============+===============+
| fibonacci(35)  | ~2.5s        | ~0.001s       |
+----------------+--------------+---------------+
| factorial(100) | ~0.1s        | ~0.0001s      |
+----------------+--------------+---------------+
Build Requirements
------------------
**Production:**
- Python >= 3.8
- pip
**Development:**
- Python >= 3.8
- Rust toolchain (rustup recommended)
- maturin >= 1.0
- cargo
Development Workflow
--------------------
Using the Makefile targets:
.. code-block:: bash
# Clean build artifacts
make clean
# Build a release version
make build
# Run tests
make test
# Install in development mode
make develop
# Check code quality
make check
Or use maturin directly:
.. code-block:: bash
# Install in development mode
maturin develop
# Build a wheel
maturin build --release
# Publish to PyPI
maturin publish
Credits
-------
This package was created with Cookiecutter_ and the `audreyr/cookiecutter-pypackage`_ project template.
.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _`audreyr/cookiecutter-pypackage`: https://github.com/audreyr/cookiecutter-pypackage
| text/x-rst; charset=UTF-8 | null | gooker_young <gooker_young@qq.com> | null | gooker_young <gooker_young@qq.com> | MIT license | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"coverage>=7.6.1; extra == \"dev\"",
"hatch>=1.14.1; extra == \"dev\"",
"maturin>=1.12.2; extra == \"dev\"",
"nuitka>=2.8.9; extra == \"dev\"",
"pip>=25.0.1; extra == \"dev\"",
"pip>=25.0.1; extra == \"dev\"",
"pre-commit>=3.5.0; extra == \"dev\"",
"pytest-benchmark>=4.0.0; extra == \"dev\"",
"pytes... | [] | [] | [] | [
"Documentation, https://pytola.readthedocs.io/zh-cn/stable/",
"Issues, https://gitee.com/gooker_young/pytola/issues",
"Repository, https://gitee.com/gooker_young/pytola"
] | maturin/1.12.2 | 2026-02-19T13:42:33.320673 | pytola-0.1.0.tar.gz | 9,518 | 3d/db/e2a657775a48b5a6dae6299a1f5328cc828c4bfff01fac98bab3f666c9a7/pytola-0.1.0.tar.gz | source | sdist | null | false | 1bafc7fa8354e4a15507d042dd5fcc04 | 7b3dff2194e3fd562ba6398c9f31c6d213d32e51a975df90f729f62da7279a2b | 3ddbe2a657775a48b5a6dae6299a1f5328cc828c4bfff01fac98bab3f666c9a7 | null | [] | 247 |
2.4 | numgrids | 0.4.0 | Working with numerical grids made easy. | <h1 align="center">numgrids</h1>
<p align="center"> Working with numerical grids made easy.</p>
**Main Features**
- Quickly define numerical grids for any rectangular or curvilinear coordinate system
- Differentiation and integration
- Interpolation
- Easy manipulation of meshed functions
- Using high precision spectral methods (FFT + Chebyshev) wherever possible
- Fully compatible with *numpy*
## Installation
```shell
pip install numgrids
```
## Quick Start
To get started, have a look at the <a href="https://github.com/maroba/numgrids">quick start</a>.
| text/markdown | null | Matthias Baer <matthias.r.baer@googlemail.com> | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.22",
"scipy>=1.10.1",
"matplotlib>=3.5",
"findiff>=0.10"
] | [] | [] | [] | [
"Homepage, https://github.com/maroba/numgrids"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-19T13:41:17.127810 | numgrids-0.4.0.tar.gz | 34,543 | b9/71/49357485b9ecbc31582e49007f98d905bdd9f07d298d4690a1d9c6c93da5/numgrids-0.4.0.tar.gz | source | sdist | null | false | 09dacc6ecec224947a08e82a8ea62e4f | 39f49c3f3763f71bd521387f764e215d67e11cf44be13be96cdf9dfc3d28ea9b | b97149357485b9ecbc31582e49007f98d905bdd9f07d298d4690a1d9c6c93da5 | MIT | [
"LICENSE"
] | 231 |
2.4 | attipy | 0.0.9 | A Python library for attitude and linear motion estimation using inertial sensor data | # AttiPy
AttiPy is a lightweight Python library for representing and estimating the attitude
(orientation) and linear motion of a body using IMU measurements and optional external
aiding. It provides a multiplicative extended Kalman filter (MEKF) for position,
velocity and attitude (PVA) estimation, and an attitude abstraction with clearly defined
reference frames and rotation conventions.
## Installation
```bash
pip install attipy
```
## Quick start
Convert to/from a variety of attitude representations using the ``Attitude`` class:
```python
import attipy as ap
import numpy as np
# From Euler angles to unit quaternion
att = ap.Attitude.from_euler([0.0, 0.0, 0.0])
q = att.as_quaternion()
```
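For intuition, the roll-pitch-yaw (ZYX) Euler-to-quaternion conversion behind such a constructor can be written with the standard library alone. This is a generic formula sketch, not attipy's internal code; attipy's exact axis and rotation-order conventions should be checked against its documentation:

```python
import math

def euler_to_quat(roll, pitch, yaw):
    """ZYX Euler angles (rad) -> unit quaternion (w, x, y, z)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (
        cr * cp * cy + sr * sp * sy,  # w
        sr * cp * cy - cr * sp * sy,  # x
        cr * sp * cy + sr * cp * sy,  # y
        cr * cp * sy - sr * sp * cy,  # z
    )

print(euler_to_quat(0.0, 0.0, 0.0))  # identity rotation -> (1.0, 0.0, 0.0, 0.0)
```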
Estimate roll and pitch from IMU measurements (accelerometer and gyroscope) using
the ``MEKF`` class:
```python
import attipy as ap
import numpy as np
# Position, velocity, attitude and IMU reference signals
fs = 10.0 # Hz
t, pos, vel, euler, f, w = ap.pva_sim(fs)
# Add IMU measurement noise
acc_noise_density = 0.001 # (m/s^2) / sqrt(Hz)
gyro_noise_density = 0.0001 # (rad/s) / sqrt(Hz)
bg = (0.001, 0.002, 0.003) # rad/s
rng = np.random.default_rng(42)
f_meas = f + acc_noise_density * np.sqrt(fs) * rng.standard_normal(f.shape)
w_meas = w + bg + gyro_noise_density * np.sqrt(fs) * rng.standard_normal(w.shape)
# Estimate attitude using MEKF
att = ap.Attitude.from_euler(euler[0])
mekf = ap.MEKF(fs, att)
euler_est = []
for f_i, w_i in zip(f_meas, w_meas):
mekf.update(f_i, w_i)
euler_est.append(mekf.attitude.as_euler())
euler_est = np.asarray(euler_est)
```
To limit integration drift, the MEKF corrects its state estimates using long-term
stable aiding measurements. When no aiding measurements are available (as in the
example above), stationarity is assumed to ensure convergence. By default, zero-velocity
aiding with a 10 m/s standard deviation is applied; this constrains roll and pitch only,
as these are the only degrees of freedom observable from specific force measurements
and the known direction of gravity. Under sustained linear acceleration, velocity
and/or position aiding is recommended to maintain accurate attitude estimates.
The following example demonstrates how to estimate position, velocity and attitude
(including yaw) from IMU and aiding measurements.
```python
import attipy as ap
import numpy as np
# Position, velocity, attitude and IMU reference signals
fs = 10.0 # Hz
t, pos, vel, euler, f, w = ap.pva_sim(fs)
yaw = euler[:, 2]
# Add IMU measurement noise
acc_noise_density = 0.001 # (m/s^2) / sqrt(Hz)
gyro_noise_density = 0.0001 # (rad/s) / sqrt(Hz)
bg = (0.001, 0.002, 0.003) # rad/s
rng = np.random.default_rng(42)
f_meas = f + acc_noise_density * np.sqrt(fs) * rng.standard_normal(f.shape)
w_meas = w + bg + gyro_noise_density * np.sqrt(fs) * rng.standard_normal(w.shape)
# Add velocity and heading measurement noise
pos_var = 0.1 # m^2
vel_var = 0.01 # (m/s)^2
yaw_var = 0.0001 # rad^2
rng = np.random.default_rng(42)
pos_meas = pos + np.sqrt(pos_var) * rng.standard_normal(pos.shape)
vel_meas = vel + np.sqrt(vel_var) * rng.standard_normal(vel.shape)
yaw_meas = yaw + np.sqrt(yaw_var) * rng.standard_normal(yaw.shape)
# Estimate position, velocity and attitude using MEKF
att = ap.Attitude.from_euler(euler[0])
mekf = ap.MEKF(fs, att)
pos_est, vel_est, euler_est = [], [], []
for f_i, w_i, p_i, v_i, y_i in zip(f_meas, w_meas, pos_meas, vel_meas, yaw_meas):
mekf.update(
f_i,
w_i,
pos=p_i,
pos_var=pos_var*np.ones(3),
vel=v_i,
vel_var=vel_var*np.ones(3),
yaw=y_i,
yaw_var=yaw_var
)
pos_est.append(mekf.position)
vel_est.append(mekf.velocity)
euler_est.append(mekf.attitude.as_euler())
pos_est = np.asarray(pos_est)
vel_est = np.asarray(vel_est)
euler_est = np.asarray(euler_est)
```
## Limitations and assumptions
- Intended for small-area, low-velocity applications; Earth rotation is neglected,
and gravitational acceleration is assumed constant.
| text/markdown | null | "Vegard R. Solum" <vegard.rorvik.solum@gmail.com> | null | null | null | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"numba",
"numpy"
] | [] | [] | [] | [
"Homepage, https://github.com/vegardrsolum/attipy",
"Issues, https://github.com/vegardrsolum/attipy/issues"
] | twine/6.2.0 CPython/3.12.10 | 2026-02-19T13:41:14.479948 | attipy-0.0.9.tar.gz | 26,395 | 97/c7/8edf981a6cf9a2207c16b9b91f70e700f212c907273b6bbbddc799f0b262/attipy-0.0.9.tar.gz | source | sdist | null | false | fe3fa61a3b1647d1208c234b3af3dce2 | 0ad3897dc8d6f9f59029a2fa061a6cfec26cfe811998750fe5f8070edaa599ba | 97c78edf981a6cf9a2207c16b9b91f70e700f212c907273b6bbbddc799f0b262 | null | [
"LICENSE"
] | 247 |
2.4 | cybertron-spark | 0.1.11 | Automação do Zendesk com Selenium, Zenpy e integração Cybertron | Kit de Ferramentas para Automação Zendesk
Uma biblioteca Python projetada para acelerar o desenvolvimento de robôs de automação de processos (RPA) que interagem com a plataforma Zendesk. Ela combina a automação de interface de usuário (UI) com o Selenium e a interação via API com o Zenpy, além de fornecer componentes integrados para monitoramento (Heartbeat) e logging estruturado (BotsLogger).
Principais Funcionalidades 🚀
Gerenciamento de Driver: Crie e gerencie drivers do Selenium (Chrome, Firefox) de forma simplificada, com suporte para execução local e remota (Selenoid).
Automação UI Zendesk: Um conjunto de métodos de alto nível para interagir com a interface do Zendesk, como fazer login, preencher campos, submeter tickets, aplicar macros e muito mais.
Integração API Zendesk: Um wrapper conveniente para a biblioteca Zenpy que facilita a autenticação e a extração de dados da API do Zendesk, como buscar tickets em uma visualização.
Monitoramento e Alertas: A classe Heartbeat permite enviar atualizações de status e alertas para um painel de monitoramento centralizado via API.
Logging Estruturado: A classe BotsLogger gera logs em formato JSON, classifica erros automaticamente (críticos vs. avisos) e decide a melhor ação (parar o bot ou tentar novamente), integrando-se ao sistema de alertas.
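Structured JSON logging of that kind can be sketched with the standard library alone. This illustrates the general idea only; the severity-classification rule below is hypothetical and this is not BotsLogger's actual implementation:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "bot_id": getattr(record, "bot_id", None),
            "message": record.getMessage(),
        })

def classify(exc: Exception) -> str:
    """Hypothetical rule: timeouts are retryable; everything else stops the bot."""
    return "retry" if isinstance(exc, TimeoutError) else "stop"

logger = logging.getLogger("bot")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.error("login failed", extra={"bot_id": "ZENDESK_BOT_01"})
print(classify(TimeoutError()))  # -> retry
```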
Installation
Install the library using pip:
pip install dadosic-zencraft
How to Use
The library is split into modular components that can be combined to build a robust bot.
1. Setting Up the Selenium Driver (Driver_Selenium)
This class abstracts the WebDriver configuration.
from dadosic_zencraft import Driver_Selenium
# Configure a remote (Selenoid) Chrome driver in headless mode
config_driver = Driver_Selenium(
    ipselenoid='http://your-selenoid-ip:4444/wd/hub',
    browser='chrome',
    headless=True
)
# Create the driver instance
driver = config_driver.criar_driver()
print("Driver created successfully!")
# ... your automation code here ...
driver.quit()
### 2. Monitoring the Bot (Heartbeat)
Send your bot's status to a tracking dashboard.
```python
from cybertron_spark import Heartbeat, execucao, aguardando

# Instantiate Heartbeat with your monitoring API credentials
heartbeat = Heartbeat(
    bot_id='MEU_BOT_ZENDESK_01',
    endpoint='https://api.meupainel.com/status',
    token='SEU_TOKEN_SECRETO'
)

# Send a status update
heartbeat.alertas(status=execucao)
print("'RUNNING' status sent.")

# Send the ID of a processed ticket for counting
heartbeat.alertas(ticket_id='12345')
print("Ticket 12345 recorded.")

# When done, report that the bot is idle
heartbeat.alertas(status=aguardando)
print("'WAITING FOR CASES' status sent.")
```
### 3. Using the Zendesk API (Zendesk_Zenpy)
Fetch the tickets of a specific view before starting the UI automation.
```python
from cybertron_spark import Zendesk_Zenpy

# Authenticate against the Zendesk API
zen_api = Zendesk_Zenpy(
    zlogin='seu_email@empresa.com/token',
    zpass='SEU_TOKEN_DA_API_ZENDESK',
    instancia='sua-instancia-zendesk'
)

# Fetch all tickets from the view with ID 9000
ID_DA_FILA = 9000
tickets_para_processar = zen_api.pegar_tickets(fila=ID_DA_FILA)

if tickets_para_processar:
    print(f"Found {len(tickets_para_processar)} tickets: {tickets_para_processar}")
else:
    print("No tickets found in the queue.")
```
### 4. Automating the Interface (Zendesk_Selenium)
Once you have a ticket, use Selenium to interact with it.
```python
from cybertron_spark import Zendesk_Selenium, Driver_Selenium

# (Assuming you already have a 'driver' instance)
# driver = Driver_Selenium(...).criar_driver()

# Instantiate the Zendesk UI controller
zen_ui = Zendesk_Selenium(
    driver=driver,
    usuario='seu_usuario_zendesk',
    senha='sua_senha_zendesk',
    instancia='sua-instancia-zendesk'
)

# Log in
zen_ui.login()
print("Logged in successfully.")

# Navigate to a ticket and perform actions
ticket_id = '12345'
driver.get(f"https://sua-instancia-zendesk.zendesk.com/agent/tickets/{ticket_id}")
zen_ui.esperar_carregamento()

# Apply a macro
zen_ui.aplicar_macro("Nome da Macro::Opção")

# Add an internal note
zen_ui.enviar_mensagem("Este ticket foi processado pelo robô.")

# Submit the ticket as solved
zen_ui.enviar_ticket('resolvido')
print(f"Ticket {ticket_id} solved.")

driver.quit()
```
### 5. Smart Logging (BotsLogger)
Capture exceptions, classify them, and send alerts automatically.
```python
from cybertron_spark import BotsLogger, Heartbeat, erro

# The logger needs a Heartbeat instance to send alerts
heartbeat = Heartbeat(...)
logger = BotsLogger(heartbeat_instancia=heartbeat)

ticket_id = '12345'

try:
    # Simulate an error
    raise ValueError("Ocorreu um problema inesperado.")
except Exception as e:
    print("Capturing exception...")
    # The logger analyzes the error, sends an alert via heartbeat, and returns an action
    acao_sugerida = logger.error(
        error=e,
        message=f"Failed to process ticket {ticket_id}",
        ticket_id=ticket_id,
        error_type='WARNING'
    )

    # Use the suggested action to control your bot's flow
    if acao_sugerida == 'stop':
        print("Suggested action: stop the bot.")
        heartbeat.alertas(status=erro)
        # sys.exit()
    elif acao_sugerida == 'retry':
        print("Suggested action: retry later.")
```
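The stop/retry decision can be mimicked with a few lines of plain Python. This is only an illustration of the idea; BotsLogger's actual classification rules may differ:

```python
# Illustrative error classifier in the spirit of BotsLogger (a sketch only;
# the library's real rules may differ).
CRITICAL_ERRORS = (MemoryError, PermissionError)

def suggest_action(exc, error_type='WARNING'):
    # Explicitly critical exception types, or a CRITICAL classification,
    # should stop the bot
    if isinstance(exc, CRITICAL_ERRORS) or error_type == 'CRITICAL':
        return 'stop'
    # Everything else is worth retrying later
    return 'retry'

assert suggest_action(ValueError("boom")) == 'retry'
assert suggest_action(MemoryError()) == 'stop'
assert suggest_action(ValueError("boom"), error_type='CRITICAL') == 'stop'
```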
## Full Example: Structure of a Bot
Putting all the components together into a robust automation flow.
```python
import time

from cybertron_spark import (
    Heartbeat, BotsLogger, Driver_Selenium, Zendesk_Zenpy, Zendesk_Selenium,
    execucao, aguardando, erro
)

# --- 1. INITIAL CONFIGURATION ---
HEARTBEAT_CONFIG = {
    'bot_id': 'ZENDESK_PROCESSOR_01',
    'endpoint': 'https://api.meupainel.com/status',
    'token': 'SEU_TOKEN_SECRETO'
}
ZENDESK_API_CREDS = {
    'zlogin': 'seu_email@empresa.com/token',
    'zpass': 'SEU_TOKEN_DA_API_ZENDESK',
    'instancia': 'sua-instancia'
}
ZENDESK_UI_CREDS = {
    'usuario': 'seu_usuario_zendesk',
    'senha': 'sua_senha_zendesk',
    'instancia': 'sua-instancia'
}
SELENOID_IP = 'http://seu-ip-selenoid:4444/wd/hub'
ID_FILA_ZENDESK = 9000

# --- 2. COMPONENT INITIALIZATION ---
heartbeat = Heartbeat(**HEARTBEAT_CONFIG)
logger = BotsLogger(heartbeat_instancia=heartbeat)
zen_api = Zendesk_Zenpy(**ZENDESK_API_CREDS)

# --- 3. MAIN BOT LOOP ---
def main():
    driver = None
    try:
        heartbeat.alertas(status=aguardando)
        tickets = zen_api.pegar_tickets(fila=ID_FILA_ZENDESK)
        if not tickets:
            print("No tickets in the queue. Waiting...")
            return

        heartbeat.alertas(status=execucao)

        # Create the driver only when there are tickets to process
        driver_manager = Driver_Selenium(ipselenoid=SELENOID_IP)
        driver = driver_manager.criar_driver()
        zen_ui = Zendesk_Selenium(driver=driver, **ZENDESK_UI_CREDS)
        zen_ui.login()

        for ticket_id in tickets:
            try:
                print(f"Processing ticket: {ticket_id}")
                driver.get(f"https://{ZENDESK_UI_CREDS['instancia']}.zendesk.com/agent/tickets/{ticket_id}")
                zen_ui.esperar_carregamento()

                # --- Your bot's business logic ---
                zen_ui.enviar_mensagem(f"Processamento automático iniciado pelo bot {HEARTBEAT_CONFIG['bot_id']}.")
                time.sleep(2)  # Simulates work
                zen_ui.enviar_ticket('pendente')
                zen_ui.esperar_carregamento()
                # ---------------------------------

                heartbeat.alertas(ticket_id=ticket_id)  # Record success
            except Exception as e:
                logger.error(
                    error=e,
                    message=f"Error while processing ticket {ticket_id}",
                    ticket_id=ticket_id
                )
                # Skip to the next ticket on error
    except Exception as e:
        acao = logger.error(
            error=e,
            message="Critical error in the bot's main loop",
            ticket_id="N/A",
            error_type='CRITICAL'
        )
        if acao == 'stop':
            heartbeat.alertas(status=erro)
            # Add logic to stop the bot safely
    finally:
        if driver:
            driver.quit()
        print("Cycle finished.")

if __name__ == '__main__':
    while True:
        main()
        time.sleep(60)  # Wait 1 minute before checking the queue again
```
## License
This project is licensed under the MIT license. See the LICENSE file for details.
| text/markdown | Gustavo Sartorio | strov3rl@gmail.com | null | null | MIT | zendesk selenium transformers cybertron | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"selenium",
"zenpy",
"requests",
"urllib3",
"python-dotenv",
"pandas",
"pandas-gbq",
"google-auth"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.10.12 | 2026-02-19T13:39:15.281482 | cybertron_spark-0.1.11.tar.gz | 16,170 | a2/40/9a95044070762c2c6ee20d14c50079a42d86eae2054431c2f70de6db074b/cybertron_spark-0.1.11.tar.gz | source | sdist | null | false | 2f6c6a4cbbe4406e21542ce3fd7b8e52 | 2ba411faf71dba3560327e46aca17b9a4487a8cbe71104979f67787b56b39362 | a2409a95044070762c2c6ee20d14c50079a42d86eae2054431c2f70de6db074b | null | [
"LICENCE"
] | 199 |
2.4 | hiten-apicore | 0.1.1 | Lightweight API infrastructure toolkit for standardized responses, errors, and retry handling. | # Apicore
Lightweight API infrastructure toolkit for Django REST Framework providing standardized responses, errors, permissions, and CRUD operations.
## Features
- BaseModel with timestamps and soft delete
- Standardized API responses
- Custom exception classes
- Permission classes (IsOwner, IsActiveUser, ReadOnly)
- BaseViewSet with CRUD operations
- Pagination with metadata
- Request tracing decorator
- Common constants
## Installation
```bash
pip install hiten-apicore
```
## Usage
### BaseModel
```python
from django.db import models

from apicore import BaseModel
class Product(BaseModel):
name = models.CharField(max_length=100)
price = models.DecimalField(max_digits=10, decimal_places=2)
```
### BaseViewSet
```python
from apicore import BaseViewSet, StandardPagination
from rest_framework.permissions import IsAuthenticated
class ProductViewSet(BaseViewSet):
queryset = Product.objects.all()
serializer_class = ProductSerializer
permission_classes = [IsAuthenticated]
pagination_class = StandardPagination
```
### Permissions
```python
from apicore import IsOwner, IsActiveUser, ReadOnly
class MyViewSet(BaseViewSet):
permission_classes = [IsActiveUser, IsOwner]
```
### Error Handling
```python
from apicore import NotFoundError, ValidationError
def my_view(request):
if not obj:
raise NotFoundError("Product not found")
if not valid:
raise ValidationError("Invalid data", errors={"field": "error"})
```
### Request Tracing
```python
from apicore import trace_request
class MyViewSet(BaseViewSet):
@trace_request
def list(self, request, *args, **kwargs):
return super().list(request, *args, **kwargs)
```
## Response Format
Success:
```json
{
"status": "success",
"data": {...},
"message": "Operation successful",
"meta": {...}
}
```
Error:
```json
{
"status": "error",
"message": "Error message",
"error_code": "ERROR_CODE",
"errors": {...}
}
```
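As a sketch of how these envelopes can be assembled (illustrative helpers only, not part of apicore's public API):

```python
# Illustrative builders for the response envelopes shown above
# (a sketch, not apicore's actual functions).
def success_payload(data, message="Operation successful", meta=None):
    return {"status": "success", "data": data, "message": message, "meta": meta or {}}

def error_payload(message, error_code, errors=None):
    return {"status": "error", "message": message, "error_code": error_code, "errors": errors or {}}

resp = success_payload({"id": 1})
assert resp["status"] == "success"
assert error_payload("Product not found", "NOT_FOUND")["status"] == "error"
```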
## License
MIT
| text/markdown | null | Hiten Joshi <hiten.mmt@example.com> | null | null | MIT | api, apicore, infrastructure, backend, retry, response, logging | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"djangorestframework>=3.14.0",
"django>=4.0"
] | [] | [] | [] | [
"Homepage, https://github.com/yourusername/apicore",
"Repository, https://github.com/yourusername/apicore",
"Issues, https://github.com/yourusername/apicore/issues"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-19T13:39:06.073824 | hiten_apicore-0.1.1.tar.gz | 9,743 | f2/77/44379fd632892f36cf613c82b28876efd48a411779e38f63e770752283a0/hiten_apicore-0.1.1.tar.gz | source | sdist | null | false | fa8f031de56669b54980111cfbb48bcb | 66855f2cfdc8a4cfefc4f21f7b366d5bfcf3b92443d6ad2d56543e968ee08e5c | f27744379fd632892f36cf613c82b28876efd48a411779e38f63e770752283a0 | null | [
"LICENSE"
] | 240 |
2.4 | llwp | 2.0.36 | LLWP is a fast, efficient and easy solution for exploring and assigning spectra - relying on Loomis-Wood plots. | # LLWP - Luis' Loomis-Wood Program
LLWP allows you to efficiently and confidently assign (typically rotational or rovibrational) spectra by relying on Loomis-Wood plots.
A quickstart guide is given down below. For more information see LLWP's [website](https://llwp.astro.uni-koeln.de).
If you want to acknowledge LLWP, please cite the paper [LLWP - A new Loomis-Wood software at the example of Acetone-13C1](https://doi.org/10.1016/j.jms.2022.111674).
Feel free to contact me in case of any problems or for feature requests.
## Quickstart Guide
The preferred way to install LLWP is via Python's package manager pip.
Run the following command in a terminal to install LLWP:
```bash
pip install llwp
```
After installing LLWP via pip you can run it from any terminal by simply running
```bash
llwp
```
To see and assign your first series
1. open your spectrum and prediction files via drag and drop or *Files > Add Files*
2. specify the correct reference series in the *Reference Series* window
3. choose the fit function under *Fit > Choose Fit Function*
4. select the area around the experimental peak with the mouse to fit the data
### ASAP Mode
To start the [ASAP](https://doi.org/10.1016/j.jms.2015.02.014) mode of LLWP run
```bash
asap
```
To see and assign your first cross-correlation peaks
1. open your spectrum, \*.egy, and \*.cat file via drag and drop or *Files > Add Files*
2. specify the correct energy levels in the *ASAP Settings* window
3. specify the correct unit conversion factor for the \*.cat file in the *Units Cat File* field (e.g. 3.335641e-05 for a \*.cat file in MHz and a \*.egy file in wavenumbers)
4. press *Calculate Cross Correlation*
5. choose the fit function under *Fit > Choose Fit Function*
6. select the area around the experimental peak with the mouse to fit the data
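The quoted conversion factor is simply 1 MHz expressed in wavenumbers, i.e. frequency divided by the speed of light. A quick check in Python:

```python
# Speed of light in cm/s; wavenumber (cm^-1) = frequency (Hz) / c
C_CM_PER_S = 2.99792458e10

# Factor converting a frequency in MHz to a wavenumber in cm^-1
factor = 1e6 / C_CM_PER_S
print(f"{factor:.6e}")  # 3.335641e-05, the value quoted above
```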
| text/markdown | null | Luis Bonah <bonah@ph1.uni-koeln.de> | null | null | null | LLWP, Loomis-Wood Plots, Spectroscopy | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"numpy",
"pandas",
"matplotlib",
"wrapt",
"pyckett>=0.1.28",
"scipy",
"PyQt6"
] | [] | [] | [] | [
"Homepage, https://llwp.astro.uni-koeln.de/"
] | twine/6.1.0 CPython/3.9.1 | 2026-02-19T13:38:29.240758 | llwp-2.0.36.tar.gz | 81,308 | 53/a4/67e2ba6dce21b19fa086b5aa456e04cf883b30d6a39697a5a27f1cb9d514/llwp-2.0.36.tar.gz | source | sdist | null | false | aa0e82bdfb8f10f9f49980ce0403178f | 610cabd9c2f1024d31eb5264eb1fbbe498085038704e0c9170b14cf300cc3e21 | 53a467e2ba6dce21b19fa086b5aa456e04cf883b30d6a39697a5a27f1cb9d514 | null | [
"LICENSE"
] | 232 |
2.4 | mthds | 0.0.2 | The Python interface for methods — base structures for structured outputs and the base runner for executing methods via API. | # mthds
The Python interface for methods — base structures for structured outputs and the base runner for executing methods via API.
Learn more at [mthds.ai](https://mthds.ai) and browse the Hub at [mthds.sh](https://mthds.sh).
## Runners
This package provides the base structures that define methods and their structured outputs, as well as the base runner that executes methods through API calls. Other runners have been implemented on top of it:
- [Pipelex](https://github.com/Pipelex/pipelex) — a full-featured runner
## Related packages
- [`mthds`](https://www.npmjs.com/package/mthds) (npm) — **CLI to install methods** + light client
## Installation
```bash
pip install mthds
```
| text/markdown | null | "Evotis S.A.S." <oss@pipelex.com> | null | Pipelex staff <oss@pipelex.com> | null | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming La... | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"backports-strenum>=1.3.0; python_version < \"3.11\"",
"httpx<1.0.0,>=0.23.0",
"pydantic<3.0.0,>=2.10.6",
"mypy==1.19.1; extra == \"dev\"",
"pylint==4.0.4; extra == \"dev\"",
"pyright==1.1.408; extra == \"dev\"",
"ruff==0.14.13; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://mthds.ai",
"Repository, https://github.com/mthds-ai/mthds",
"Documentation, https://docs.mthds.ai/",
"Changelog, https://github.com/mthds-ai/mthds/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:38:28.094284 | mthds-0.0.2.tar.gz | 60,229 | b4/1c/207fed4b05c8dbcfe6f6c0802b0c59cdc683466f02f95625475248e1ff20/mthds-0.0.2.tar.gz | source | sdist | null | false | 469322a0b4c2d3cff04799d0d326adf0 | 273aaec6e8332f462f772c1bd58cf5baaea4a69224de2fa024b08ebe7ed005f1 | b41c207fed4b05c8dbcfe6f6c0802b0c59cdc683466f02f95625475248e1ff20 | MIT | [
"LICENSE"
] | 450 |
2.4 | django-kinde-auth | 0.1.0 | Reusable Django app for passwordless authentication via Kinde | # django-kinde-auth
Reusable Django app for passwordless authentication via [Kinde](https://kinde.com). Drop-in plugin for any Django project.
## Install
```bash
pip install django-kinde-auth
```
Or with uv:
```bash
uv add django-kinde-auth
```
## Setup
1. **Add the app** to `INSTALLED_APPS`:
```python
INSTALLED_APPS = [
# ...
"django_kinde_auth",
]
```
2. **Configure Kinde** (see [Kinde configuration](#kinde-configuration) below).
3. **Settings** (in Django settings or environment):
- `KINDE_CLIENT_ID` – from Kinde Back-end app
- `KINDE_CLIENT_SECRET` – from Kinde Back-end app
- `KINDE_ISSUER_URL` – e.g. `https://<your_subdomain>.kinde.com`
- `KINDE_CALLBACK_URL` – full URL of your callback (e.g. `https://yourapp.com/auth/callback/`)
- `KINDE_LOGOUT_REDIRECT` – (optional) URL after logout, default `/`
- `KINDE_LOGIN_REDIRECT` – (optional) URL after successful login, default `/`
- `KINDE_REQUIRE_FOR_ADMIN` – (optional) when `True`, unauthenticated `/admin/` is redirected to Kinde; when `False`, use plain Django login. Default `False`. Set in settings or env (`KINDE_REQUIRE_FOR_ADMIN=true`).
4. **URLs** – include the app URLs (e.g. under `/auth/`):
```python
path("auth/", include("django_kinde_auth.urls", namespace="kinde_auth")),
```
5. **Templates** – add the context processor so `kinde_authenticated` and `kinde_user` are available globally:
```python
TEMPLATES[0]["OPTIONS"]["context_processors"].append(
"django_kinde_auth.context_processors.kinde_auth"
)
```
6. **Optional: require Kinde for admin** – add the middleware and set `KINDE_REQUIRE_FOR_ADMIN = True`:
```python
MIDDLEWARE = [
# ...
"django_kinde_auth.middleware.RequireKindeLoginForAdminMiddleware",
]
```
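The redirect rule the middleware applies can be summarized in plain Python (an illustration of the behaviour described above, not the package's actual code):

```python
# Hypothetical helper illustrating the rule: when KINDE_REQUIRE_FOR_ADMIN is
# enabled, unauthenticated requests under /admin/ go to the Kinde login flow.
def should_redirect_to_kinde(path, is_authenticated, require_for_admin=True):
    return (
        require_for_admin
        and path.startswith("/admin/")
        and not is_authenticated
    )

assert should_redirect_to_kinde("/admin/", is_authenticated=False)
assert not should_redirect_to_kinde("/admin/", is_authenticated=True)
assert not should_redirect_to_kinde("/auth/login/", is_authenticated=False)
```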
## Usage in templates
- `{% if kinde_authenticated %}` … show logged-in UI; `kinde_user.full_name`, `kinde_user.email`, `kinde_user.initials`, etc.
- Sign in: `{% url 'kinde_auth:login' %}`
- Sign up: `{% url 'kinde_auth:register' %}`
- Sign out: `{% url 'kinde_auth:logout' %}`
## Usage in views
```python
from django_kinde_auth.views import get_user_context
def my_view(request):
ctx = get_user_context(request)
if not ctx["kinde_authenticated"]:
return redirect("kinde_auth:login")
# use ctx["kinde_user"]
```
## Kinde configuration
In the [Kinde dashboard](https://app.kinde.com) (Settings → Applications → your Back-end app):
1. **Callback URL** – Add the exact callback URL your app uses, e.g. `http://127.0.0.1:8000/auth/callback/` (local) or `https://yourdomain.com/auth/callback/` (production).
2. **Logout redirect URL** (if required) – e.g. `https://yourdomain.com/` or `http://127.0.0.1:8000/`.
3. **App keys** – Copy Client ID and Client Secret into your Django settings or env.
4. **Issuer** – Set `KINDE_ISSUER_URL` to `https://<your_subdomain>.kinde.com`.
5. **Passwordless** – In Kinde, configure the connection types you want (e.g. magic link, social).
## User creation and lifecycle
**Kinde is the source of truth.** Users are created or invited in Kinde (or self-register at `/auth/register/`). On first login we create the Django `User` automatically (username `kinde_<id>`, staff by default). To give admin access, set `is_staff`/`is_superuser` in Django after their first login, or set `KINDE_SYNC_SUPERUSER = True` in settings.
## Publishing (GitHub / PyPI)
- **GitHub:** Push the `django-kinde-auth/` directory to its own repo (or keep it in a monorepo).
- **PyPI:** From the `django-kinde-auth/` directory:
```bash
pip install build twine
python -m build
twine upload dist/*
```
Or with uv: `uv run build` then `uv run twine upload dist/*`. Bump `version` in `pyproject.toml` for each release.
## License
MIT
| text/markdown | null | null | null | null | MIT | django, kinde, auth, oauth, passwordless | [
"Framework :: Django",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Framework :: Django :: 6.0",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyth... | [] | null | null | >=3.10 | [] | [] | [] | [
"Django>=4.2",
"kinde-python-sdk<2,>=1.2.1",
"build; extra == \"dev\"",
"twine; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.2 | 2026-02-19T13:37:09.155524 | django_kinde_auth-0.1.0.tar.gz | 9,601 | dc/ea/2effcac26880ff41c981670675eac89e9987819e0d8db1dffba2b26bec18/django_kinde_auth-0.1.0.tar.gz | source | sdist | null | false | af0ec7f08287d0339d6cc7efd79046fd | ae86eba288dcc63ea49a136d472596b19d3e66004d1f9f7db4c738a234b80012 | dcea2effcac26880ff41c981670675eac89e9987819e0d8db1dffba2b26bec18 | null | [
"LICENSE"
] | 259 |
2.4 | parze | 0.2.5 | Python SDK for the Parze API | # Parze Python SDK
Official Python client for the Parze document parsing API.
## Installation
```bash
pip install parze
```
## Quick Start
```python
from parze import ParzeClient
# Initialize client with your API key
client = ParzeClient(api_key="pk_live_your_key_here")
# Parse a document
result = client.parse("invoice.pdf")
print(result["text"])
# Extract structured data (one step)
schema = {
"invoice_number": {"type": "string", "description": "Invoice number"},
"total_amount": {"type": "string", "description": "Total amount"},
"date": {"type": "string", "description": "Invoice date"}
}
extraction = client.extract(file="invoice.pdf", extraction_schema=schema)
print(extraction["extraction"])
# To avoid double billing, parse first and then extract with job_id
parse_result = client.parse("invoice.pdf")
extraction = client.extract(parse_result["text"], schema, parse_result["job_id"])
# Validate document quality (pre-validation)
validation = client.validate(
"invoice.pdf",
validation_type="pre",
validation_rules={
"quality_checks": {
"min_resolution": 150,
"check_readability": True,
"check_completeness": True
}
}
)
print(validation)
# Get AI-suggested schema
suggested = client.suggest_schema(parse_result["text"])
print(suggested)
```
## API Reference
### `parse(file, output_format="structured", preserve_tables=True, extraction_mode="auto")`
Parse a document into structured text.
**Parameters:**
- `file` (str or file object): Path to file or file object
- `output_format` (str): "structured", "markdown", or "json"
- `preserve_tables` (bool): Preserve table structure
- `extraction_mode` (str): "auto", "ocr_only", "llm_only", or "identity_doc"
**Returns:** Dict with parsed text and metadata
### `extract(text=None, extraction_schema=None, job_id=None, file=None, extraction_mode="auto", preserve_tables=True)`
Extract structured data from a file or parsed text using a schema.
**Parameters:**
- `text` (str, optional): Document text (from parse)
- `extraction_schema` (dict, required): Schema defining fields to extract
- `job_id` (str, required if text is provided): Job ID from parse response
- `file` (str or file object, optional): Path to file or file object (if provided, parse runs internally)
- `extraction_mode` (str, optional): "auto", "ocr_only", "llm_only", or "identity_doc" (file-based only)
- `preserve_tables` (bool, optional): Preserve table structure during parsing (file-based only)
**Returns:** Dict with extracted data and confidence scores
### `suggest_schema(text)`
Get AI-suggested extraction schema based on document text.
**Parameters:**
- `text` (str): Document text
**Returns:** Dict with suggested schema
### `text_to_schema(description)`
Convert natural language description to extraction schema.
**Parameters:**
- `description` (str): Natural language description of fields
**Returns:** Dict with generated schema
### `validate(files, validation_type="pre", validation_rules=None, extraction_schema=None, job_id=None)`
Validate document quality (pre) or extracted data (post).
**Parameters:**
- `files` (str, file object, or list): Path(s) to file(s) or file objects
- `validation_type` (str): "pre" or "post"
- `validation_rules` (dict, optional): Validation rules payload
- `extraction_schema` (dict, optional): Required for post-validation
- `job_id` (str, optional): Job ID from parse
**Returns:** Dict with validation results
## Get API Key
Get your API key from [platform.parze.ai](https://platform.parze.ai)
| text/markdown | gideononyewuenyi | null | null | null | MIT License
Copyright (c) 2024 Parze
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| parze, sdk, document-processing, api | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.7",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules"... | [] | null | null | >=3.7 | [] | [] | [] | [
"requests>=2.31"
] | [] | [] | [] | [
"Homepage, https://platform.parze.ai"
] | twine/6.2.0 CPython/3.9.6 | 2026-02-19T13:37:08.522525 | parze-0.2.5.tar.gz | 4,750 | d9/1b/dff463e97558126a9e3af1f0e40b9f980196a300a40100f57098c0f3b184/parze-0.2.5.tar.gz | source | sdist | null | false | 4c485ebbe4212ada6eb149c4cf964431 | 15c7f18ce2248a25b85ca5475065aa8a1ed559e3da2ae55d6d7c3e3f1be709eb | d91bdff463e97558126a9e3af1f0e40b9f980196a300a40100f57098c0f3b184 | null | [
"LICENSE"
] | 233 |
2.4 | markdown-query | 0.5.16 | Python bindings for mq, a jq-like command-line tool for Markdown processing | <h1 align="center">mq-python</h1>
[](https://pypi.org/project/markdown-query/)
[](https://github.com/harehare/mq/actions/workflows/ci.yml)

[](https://codecov.io/gh/harehare/mq)
[](https://codspeed.io/harehare/mq)
Python bindings for the mq Markdown processor.
## Installation
```bash
pip install markdown-query
```
## Usage
### Basic Usage
Use the `run` function to process Markdown with mq queries:
```python
import mq
# Extract all level 1 headings
result = mq.run(".h1", "# Hello World\n\n## Heading2\n\nText")
print(result.values) # ['# Hello World']
# Extract all level 2 headings
result = mq.run(".h2", "# Main Title\n\n## Section A\n\n## Section B")
print(result.values) # ['## Section A', '## Section B']
# Get all results as a single string
print(result.text) # '## Section A\n## Section B'
```
### Filtering and Transforming
Use mq query syntax to filter and transform Markdown:
```python
import mq
markdown = """
# Product
## Features
Great features here.
## Installation
Install instructions.
"""
# Filter headings containing specific text
result = mq.run('.h2 | select(contains("Feature"))', markdown)
print(result.values) # ['## Features']
# Extract list items
result = mq.run(".[]", "# List\n\n- Item 1\n- Item 2\n- Item 3")
print(result.values) # ['- Item 1', '- Item 2', '- Item 3']
# Extract code blocks
result = mq.run(".code", "# Code\n\n```python\nprint('Hello')\n```")
print(result.values) # ["```python\nprint('Hello')\n```"]
```
### Input Formats
mq supports multiple input formats:
```python
import mq
# Markdown (default)
options = mq.Options()
options.input_format = mq.InputFormat.MARKDOWN
result = mq.run(".h1", "# Heading", options)
# MDX (Markdown with JSX)
options = mq.Options()
options.input_format = mq.InputFormat.MDX
result = mq.run("select(is_mdx())", "# MDX\n\n<Component />", options)
print(result.values) # ['<Component />']
# HTML
options = mq.Options()
options.input_format = mq.InputFormat.HTML
result = mq.run('select(contains("Hello"))', "<h1>Hello</h1><p>World</p>", options)
print(result.values) # ['# Hello']
# Plain text
options = mq.Options()
options.input_format = mq.InputFormat.TEXT
result = mq.run('select(contains("2"))', "Line 1\nLine 2\nLine 3", options)
print(result.values) # ['Line 2']
```
Available input formats:
- `InputFormat.MARKDOWN` - Standard Markdown (default)
- `InputFormat.MDX` - Markdown with JSX
- `InputFormat.HTML` - HTML content
- `InputFormat.TEXT` - Plain text
- `InputFormat.RAW` - Raw string input
- `InputFormat.NULL` - Null input
### Rendering Options
Customize the output rendering:
```python
import mq
options = mq.Options()
options.input_format = mq.InputFormat.MARKDOWN
options.list_style = mq.ListStyle.PLUS # Use '+' for list items
options.link_title_style = mq.TitleSurroundStyle.SINGLE # Use single quotes for link titles
options.link_url_style = mq.UrlSurroundStyle.ANGLE # Use angle brackets for URLs
result = mq.run(".", markdown, options)
```
Available options:
- `ListStyle`: `DASH` (default), `PLUS`, `STAR`
- `TitleSurroundStyle`: `DOUBLE` (default), `SINGLE`, `PAREN`
- `UrlSurroundStyle`: `NONE` (default), `ANGLE`
### HTML to Markdown Conversion
Convert HTML to Markdown:
```python
import mq
html = "<h1>Hello World</h1><p>This is a <strong>test</strong>.</p>"
markdown = mq.html_to_markdown(html)
print(markdown) # '# Hello World\n\nThis is a **test**.'
# With conversion options
options = mq.ConversionOptions()
options.extract_scripts_as_code_blocks = True # Convert <script> tags to code blocks
options.generate_front_matter = True # Generate front matter from metadata
options.use_title_as_h1 = True # Use <title> as h1 heading
markdown = mq.html_to_markdown(html, options)
```
### Working with Results
The `run` function returns an `MQResult` object:
```python
import mq
result = mq.run(".h", "# H1\n\n## H2\n\n### H3")
# Get the number of results
print(len(result)) # 3
# Access individual results by index
print(result[0].text) # '# H1'
# Iterate over results
for value in result.values:
print(value)
# Get all results as a single string
print(result.text) # '# H1\n## H2\n### H3'
# Check if a value is in the result
print("# H1" in result.values) # True
```
Each `MQValue` has the following properties:
- `text` - The string representation of the value
- `values` - For arrays, returns the list of values
- `markdown_type` - The type of Markdown element (e.g., `Heading`, `Code`, `List`)
- `is_array()` - Check if the value is an array
- `is_markdown()` - Check if the value is a Markdown element
### Error Handling
Invalid queries raise a `RuntimeError`:
```python
import mq
try:
result = mq.run(".invalid!!!", "# Heading")
except RuntimeError as e:
print(f"Query error: {e}")
```
## Development
### Building from Source
```bash
git clone https://github.com/harehare/mq
cd mq/crates/mq-python
pip install maturin
maturin develop
```
### Running Tests
```bash
pytest tests/
```
## Support
- 🐛 [Report bugs](https://github.com/harehare/mq/issues)
- 💡 [Request features](https://github.com/harehare/mq/issues)
- 📖 [Read the documentation](https://mqlang.org/book/)
- 📦 [PyPI package](https://pypi.org/project/markdown-query/)
## License
Licensed under the MIT License.
| text/markdown; charset=UTF-8; variant=GFM | Takahiro Sato <harehare1110@gmail.com> | harehare <harehare1110@gmail.com> | null | null | MIT | markdown, jq, command-line, tool | [
"Programming Language :: Rust",
"Programming Language :: Python"
] | [] | https://mqlang.org/ | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://mqlang.org/book/",
"Homepage, https://mqlang.org/",
"Issues, https://github.com/harehare/mq/issues",
"Repository, https://github.com/harehare/mq.git"
] | twine/6.1.0 CPython/3.13.3 | 2026-02-19T13:35:39.213220 | markdown_query-0.5.16-pp311-pypy311_pp73-win_amd64.whl | 1,253,203 | de/ec/d55eb3a8146fc8a7554530542cf33df58cdd2eaeaf853c54ba0de1da4550/markdown_query-0.5.16-pp311-pypy311_pp73-win_amd64.whl | pp311 | bdist_wheel | null | false | 4707346333abc220c7c0bf83b9c3ea3f | fd5fc45e4bd354055cdcb99050e921cdc54d6bef30c6a58c31c949eed0d09a46 | deecd55eb3a8146fc8a7554530542cf33df58cdd2eaeaf853c54ba0de1da4550 | null | [] | 530 |
2.4 | pytest-flakefighters | 0.5.1 | Pytest plugin implementing flaky test failure detection and classification. | # Pytest FlakeFighters
[](https://www.repostatus.org/#active)
[](https://pypi.org/project/pytest-flakefighters)
[](https://pypi.org/project/pytest-flakefighters)

[](https://codecov.io/gh/test-flare/pytest-flakefighters)
[](https://pytest-flakefighters.readthedocs.io/en/latest/?badge=latest)

### Pytest plugin implementing flaky test failure detection and classification.
Read more about flaky tests [here](https://docs.pytest.org/en/stable/explanation/flaky.html).
## Features
- Implements the [DeFlaker algorithm](http://www.deflaker.org/get-rid-of-your-flakes/) for pytest
- Implements two traceback-matching classifiers from [Alshammari et al. (2024)](https://doi.org/10.1109/ICST60714.2024.00031).
- Implements a novel coverage-independence classifier that classifies tests as flaky if they fail independently of passing test cases that exercise overlapping code.
- Optionally rerun or suppress flaky failures
- Output results to JSON, HTML, or JUnitXML
- Save test outcome history to a remote or local database
## Comparison with Other Plugins
Flakefighters is a pytest plugin developed as part of the [TestFLARE](https://test-flare.github.io/) project.
The plugin provides a "Swiss army knife" of techniques (called flakefighters) to detect flaky tests.
Where existing flaky test plugins such as [pytest-rerunfailures](https://github.com/pytest-dev/pytest-rerunfailures) and [pytest-flaky](https://github.com/box/flaky) are primarily focused on rerunning (potentially) flaky tests until they pass, our main aim is to identify flaky tests by classifying test failures as genuine or flaky.
The [pytest-flakefinder](https://github.com/dropbox/pytest-flakefinder) plugin does this by simply rerunning tests multiple times and observing the result.
By contrast, Flakefighters incorporates several cutting-edge flaky test detection techniques from research to automatically classify test failures as either *genuine* (indicating a fault in the code or a mis-specified test case) or *flaky* (indicating a test with a nondeterministic outcome).
Flaky tests are then reported separately in the test report, and can be optionally rerun or suppressed so they don't block CI/CD pipelines.
| Feature | [pytest-flakefighters](https://github.com/test-flare/pytest-flakefighters) | [pytest-rerunfailures](https://github.com/pytest-dev/pytest-rerunfailures) | [pytest-flaky](https://github.com/box/flaky) | [pytest-flakefinder](https://github.com/dropbox/pytest-flakefinder) | [pytest-replay](https://github.com/ESSS/pytest-replay) |
| :--- | :--- | :--- | :--- | :--- | :--- |
| **Purpose** | Classify test failures as genuine or flaky | Rerun failing tests in case they are flaky | Decorator-based reruns | Copy tests to observe nondeterministic outcomes | Reproduce flaky failures from CI when running with [xdist](https://github.com/pytest-dev/pytest-xdist) |
| **Detection Method** | DeFlaker algorithm + coverage analysis | None | None | Reruns | None |
| **Reporting** | Terminal, HTML, JSON, JUnitXML | Terminal | Terminal | Terminal | Terminal |
| **History Tracking** | Database of test outcomes over commits | None | None | None | None |
| **Rerun Option** | Optional | Required | Required | Required | Required |
| **Suppression Option** | Optional | None | None | None | None |
| **Debugging support** | Insight into *why* tests are flaky | None | None | None | Reliable reproduction of flaky failures |
### When to Use pytest-flakefighters
Use pytest-flakefighters when you want to:
* **Understand WHY** tests are flaky, not just hide the symptoms
* **Classify** flaky tests by root cause (coverage-independent, traceback-matched, etc.)
* **Track** test flakiness over time and across commits
* **Make informed decisions** about whether failures are legitimate
### When to use alternatives
* [pytest-rerunfailures](https://github.com/pytest-dev/pytest-rerunfailures): Quick fix for CI builds
* [pytest-flaky](https://github.com/box/flaky): A few tests are known to be flaky
* [pytest-flakefinder](https://github.com/dropbox/pytest-flakefinder): Brute force search for flaky tests
* [pytest-replay](https://github.com/ESSS/pytest-replay): Debugging specific flaky failures
### Can They Work Together?
Yes! pytest-flakefighters can be combined with other flaky test plugins:
* Use **pytest-flakefighters** to identify and classify flaky tests
* Use [pytest-rerunfailures](https://github.com/pytest-dev/pytest-rerunfailures) or [pytest-flaky](https://github.com/box/flaky) as a temporary measure while fixing them
* Use [pytest-replay](https://github.com/ESSS/pytest-replay) to debug specific instances identified by flakefighters
* Use [pytest-xdist](https://github.com/pytest-dev/pytest-xdist) to run your test cases in parallel, which can help surface order-dependent flakiness
---
*For more information on flaky test management best practices, see the [pytest documentation](https://docs.pytest.org/en/stable/explanation/flaky.html).*
## Installation
### With pip
You can install the extension by running `pip install pytest-flakefighters` from within your project's virtual environment.
### With uv
If you use [uv](https://github.com/astral-sh/uv) for Python package management, you can install pytest-flakefighters with `uv add pytest-flakefighters`.
This will add the plugin to your main dependencies.
```
dependencies = [
"pytest-flakefighters>=x.y.z",
]
```
However, pytest is typically a [development dependency](https://docs.astral.sh/uv/concepts/projects/dependencies/#development-dependencies), and so should be added with `uv add --dev pytest-flakefighters`.
```
[dependency-groups]
dev = [
"pytest-flakefighters>=x.y.z",
]
```
### From source (for development)
You can install pytest-flakefighters by cloning this repo and running `pip install .` from the root directory.
If you intend to develop the plugin, run `pip install -e .[dev]` instead.
If you use [uv](https://github.com/astral-sh/uv), you can install pytest-flakefighters with:
```bash
# Install with uv
uv pip install .
# For development
uv pip install -e .[dev]
```
## Usage
pytest-flakefighters is intended to run on git repositories whose test suites are runnable with `pytest`.
Once the plugin is installed, you can run it from the root directory of your repo simply by invoking `pytest` as usual.
The plugin accepts the following arguments.
```
--target-commit=TARGET_COMMIT
The target (newer) commit hash. Defaults to HEAD (the most recent commit).
--source-commit=SOURCE_COMMIT
The source (older) commit hash. Defaults to HEAD^ (the previous commit to target).
--repo=REPO_ROOT The root directory of the git repository.
--suppress-flaky-failures-exit-code
Return OK exit code if the only failures are flaky failures.
--no-save Do not save this run to the database of previous flakefighters runs.
-M LOAD_MAX_RUNS, --load-max-runs=LOAD_MAX_RUNS
The maximum number of previous runs to consider.
-D DATABASE_URL, --database-url=DATABASE_URL
The database URL. Defaults to 'flakefighter.db' in the current working directory.
--store-max-runs=STORE_MAX_RUNS
The maximum number of previous flakefighters runs to store. Default is to store all.
--time-immemorial=TIME_IMMEMORIAL
How long to store flakefighters runs for, specified as `days:hours:minutes`. E.g. to store
tests for one week, use 7:0:0.
```
### Enabling/Disabling the Plugin
By default, pytest-flakefighters runs whenever it is installed. To disable it for a specific test run, use:
```bash
pytest --no-flakefighters
```
This is useful when you have the plugin installed but want to run quick tests without flaky test detection.
You can also configure this in your `pyproject.toml`:
```toml
[tool.pytest.ini_options]
addopts = "--no-flakefighters"
```
## Contributing
Contributions are very welcome.
Tests can be run with [pytest](https://pytest.readthedocs.io/en/latest/); please ensure coverage at least stays the same before you submit a pull request.
## Flake Fighters
Our plugin is made up of a collection of heuristics that come together to help inform whether a test failure is genuine or flaky.
These come in two "flavours": those which run live after each test, and those which run at the end of the entire test suite.
Both extend the base class `FlakeFighter` and implement the `flaky_failure` method, which returns `True` if the test is deemed to be flaky.
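As an illustration, a traceback-matching flakefighter might look like the sketch below. Note this is a hypothetical example, not the plugin's actual API: the base-class interface and the `longreprtext` attribute are assumptions, and a minimal stand-in base class is defined here so the sketch is self-contained.

```python
# Hypothetical sketch of a custom flakefighter. A stand-in base class is
# defined here; the real one ships with pytest-flakefighters.

class FlakeFighter:
    """Stand-in for the plugin's base class (assumed interface)."""
    def flaky_failure(self, test_result) -> bool:
        raise NotImplementedError


class TimeoutMatcher(FlakeFighter):
    """Classify a failure as flaky when its traceback mentions a timeout
    or dropped connection (a crude traceback-matching heuristic)."""

    KEYWORDS = ("TimeoutError", "ConnectionResetError")

    def flaky_failure(self, test_result) -> bool:
        # Assumes the result object carries the failure traceback as text.
        traceback_text = getattr(test_result, "longreprtext", "") or ""
        return any(keyword in traceback_text for keyword in self.KEYWORDS)
```

A "live" fighter of this shape would be consulted after each test, while suite-end fighters see the full set of results.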
## Issues
If you encounter any problems, please [file an issue](https://github.com/test-flare/pytest-flakefighters/issues) along with a detailed description.
------------------------------------------------------------------------
This [pytest](https://github.com/pytest-dev/pytest) plugin was generated with [Cookiecutter](https://github.com/audreyr/cookiecutter) along with [@hackebrot](https://github.com/hackebrot)'s [cookiecutter-pytest-plugin](https://github.com/pytest-dev/cookiecutter-pytest-plugin) template.
| text/markdown | TestFLARE Team | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"GitPython>=3.1.45",
"coverage>=7.10.6",
"dotenv>=0.9.9",
"nltk>=3.9",
"pandas>=2.3",
"pytest>=6.2.0",
"pyyaml>=6",
"scikit-learn>=1.7",
"sqlalchemy>=2.0.43",
"unidiff>=0.7.5",
"astroid==3.3.8; extra == \"dev\"",
"black; extra == \"dev\"",
"myst_parser; extra == \"dev\"",
"nbsphinx; extra ... | [] | [] | [] | [
"Documentation, https://pytest-flakefighters.readthedocs.io",
"Homepage, https://test-flare.github.io/",
"Issues, https://github.com/test-flare/pytest-flakefighters/issues",
"Repository, https://github.com/test-flare/pytest-flakefighters"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:34:31.808945 | pytest_flakefighters-0.5.1.tar.gz | 209,692 | e1/de/b3d9aa2c2c4e6db4a9f63cfd5dae2a4802d4f84c51bb58c53b070adeddda/pytest_flakefighters-0.5.1.tar.gz | source | sdist | null | false | 470cd7b2e694707f14376dbab81db1d3 | da503d2f14ed7aca3a3a4f4f80d603ca4afcad277d522b5d0efc80d1bcba66bc | e1deb3d9aa2c2c4e6db4a9f63cfd5dae2a4802d4f84c51bb58c53b070adeddda | null | [
"LICENSE"
] | 231 |
2.4 | iwa | 0.2.9 | A secure, modular, and plugin-based framework for crypto agents and ops | # Iwa
[](https://badge.fury.io/py/iwa)
[](https://hub.docker.com/r/dvilela/iwa)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
*Iwa (岩), meaning "rock" in Japanese, symbolizes the unshakeable stability and immutable foundation required for secure financial infrastructure.*
<br/>
<p align="center">
<img width="40%" src="https://raw.githubusercontent.com/dvilelaf/iwa/main/images/iwa.png">
</p>
<br/>
Iwa is a Python framework designed for managing crypto wallets and interacting with smart contracts and crypto protocols in a secure, modular, and extensible way. It's ideal for building autonomous agents and applications that require blockchain interactions.
## Features
- **Secure Key Storage**: Private keys are encrypted with AES-256-GCM and stored safely. They are never exposed to the application layer; signing happens internally via the `KeyStorage` class.
- **Modularity (Plugins)**: Protocols and features are implemented as plugins, loaded dynamically. Currently supports Gnosis (Safe, CowSwap) and Olas (Registry, Services, Staking).
- **Multi-Chain Support**: Native support for Gnosis Chain, Ethereum, and Base, with easy extensibility for others.
- **Robust Transaction Management**:
- **RPC Rotation**: Automatically switches RPC providers if one fails or is rate-limited.
- **Rate Limiting**: Token bucket algorithm with automatic backoff.
- **Retry Logic**: Automatic retries with exponential backoff for transient failures.
- **CLI & TUI Integration**: Interact with your wallet via a unified CLI or a beautiful Terminal User Interface built with Textual.
- **Web API**: RESTful API built with FastAPI for web-based integrations.
- **Modern Tooling**: Managed with `uv`, `Justfile` for automation, and ready for Docker deployment.
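The "keys never reach the application layer" design above can be illustrated with a stdlib-only sketch. This is not iwa's actual `KeyStorage` implementation: HMAC stands in for the real AES-256-GCM encryption and transaction signing, and all names are hypothetical.

```python
import hashlib
import hmac


class InternalSigner:
    """Design sketch: the secret stays private to this object; callers
    receive signatures, never the key itself. Hypothetical stand-in for
    iwa's KeyStorage (which uses AES-256-GCM, not HMAC)."""

    def __init__(self, secret: bytes) -> None:
        self._secret = secret  # never returned to callers

    def sign(self, payload: bytes) -> str:
        # Signing happens internally; only the signature leaves the object.
        return hmac.new(self._secret, payload, hashlib.sha256).hexdigest()


signer = InternalSigner(b"not-a-real-private-key")
signature = signer.sign(b'{"to": "0xabc", "value": 1}')
```

The point of the pattern is the narrow surface: application code can request signatures but has no accessor that yields the raw key material.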
## Architecture
```
iwa/
├── core/ # Core wallet functionality
│ ├── keys.py # KeyStorage - Encrypted key management
│ ├── wallet.py # Wallet - High-level interface
│ ├── chain/ # Blockchain interface with rate limiting
│ ├── services/ # Service layer (accounts, balances, transactions)
│ └── contracts/ # Contract abstractions (ERC20, Safe)
├── plugins/ # Protocol integrations
│ ├── gnosis/ # Safe multisig and CowSwap DEX
│ └── olas/ # Olas Registry, Services, Staking
├── tui/ # Terminal User Interface (Textual)
└── web/ # Web API (FastAPI)
```
### Key Components
| Component | Description |
|-----------|-------------|
| `KeyStorage` | Encrypts/decrypts private keys, provides internal signing |
| `Wallet` | Main high-level interface for user interactions |
| `ChainInterface` | Manages Web3 connections with rate limiting and RPC rotation |
| `TransactionService` | Handles transaction signing and sending with retry logic |
| `PluginService` | Dynamically loads and manages protocol plugins |
## Setup & Usage
### Prerequisites
- Python 3.12+
- [uv](https://github.com/astral-sh/uv) package manager
### Installation
```bash
# Install from PyPI
pip install iwa
# Or using uv (recommended for tools)
uv tool install iwa
# Or from source
git clone https://github.com/dvilelaf/iwa.git
cd iwa
just install
```
### Configuration
Create a `secrets.env` file with your configuration:
```bash
WALLET_PASSWORD=your_secure_password
GNOSIS_RPC=https://rpc.gnosis.io,https://gnosis.drpc.org
ETHEREUM_RPC=https://mainnet.infura.io/v3/YOUR_KEY
BASE_RPC=https://mainnet.base.org
# Testing mode (default: true uses Tenderly test RPCs)
TESTING=false
# Optional
GNOSISSCAN_API_KEY=your_api_key
COINGECKO_API_KEY=your_api_key
```
### Running
```bash
# Launch TUI
just tui
# Launch Web UI
just web
# Use CLI
iwa wallet list --chain gnosis
```
### Running Tests
```bash
just test
```
### Security Checks
```bash
just security # Runs gitleaks, bandit, and pip-audit
just wallet-check # Verifies password, keys, and mnemonic integrity
```
### Docker
```bash
# Pull from Docker Hub
docker pull dvilelaf/iwa:latest
# Build locally
just docker-build
just docker-run
```
## Plugins
Plugins are located in `src/iwa/plugins`. Currently supported:
### Gnosis Plugin
- **Safe**: Create and manage Safe multisig wallets
- **CowSwap**: Token swaps via CoW Protocol with MEV protection, Max balance support, and auto-refreshing UI
### Olas Plugin
- **Registry**: Interact with Olas service registry
- **Services**: Create, deploy, and manage Olas services
- **Staking**: Stake/unstake services and claim rewards
## Transaction Flow
1. **Preparation**: A high-level method prepares a raw transaction dictionary
2. **Delegation**: The transaction is passed to `TransactionService`
3. **Signing**: `KeyStorage` decrypts the key in memory, signs, and wipes the key
4. **Sending**: The signed transaction is sent via `ChainInterface`
5. **Recovery**: Automatic RPC rotation and gas bumping on failures
6. **Receipt**: Transaction receipt is returned upon success
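The recovery behaviour in steps 4 and 5 (RPC rotation plus retries with exponential backoff) can be sketched roughly as follows. The function names and shapes are hypothetical illustrations, not iwa's actual API.

```python
import time
from itertools import cycle


def send_with_rotation(raw_tx, rpc_senders, retries=3, base_delay=0.01):
    """Try each RPC endpoint in turn, backing off exponentially between
    attempts. rpc_senders are callables that take a raw transaction and
    return a receipt, raising on failure (stand-ins for Web3 providers)."""
    endpoints = cycle(rpc_senders)
    last_error = None
    for attempt in range(retries):
        sender = next(endpoints)
        try:
            return sender(raw_tx)
        except Exception as exc:  # rotate to the next RPC on any failure
            last_error = exc
            time.sleep(base_delay * 2 ** attempt)
    raise RuntimeError("all RPC attempts failed") from last_error


# Example: the first endpoint is rate-limited, the second succeeds.
def bad_rpc(tx):
    raise ConnectionError("rate limited")

def good_rpc(tx):
    return {"status": 1, "tx": tx}

receipt = send_with_rotation({"to": "0xabc"}, [bad_rpc, good_rpc])
```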
## Documentation
Full documentation is available in the `docs/` directory:
```bash
# Serve docs locally
just docs-serve
# Build static docs
just docs-build
```
## Development
```bash
# Format code
just format
# Lint code
just check
# Type check
just types
```
## Contributing
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License
This project is licensed under the MIT License - see the LICENSE file for details.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | <4.0,>=3.12 | [] | [] | [] | [
"bip-utils>=2.9.3",
"cryptography>=46.0.2",
"eth-account>=0.13.7",
"loguru>=0.7.3",
"pydantic>=2.12.0",
"pydantic-settings>=2.11.0",
"rich>=14.2.0",
"tomli>=2.3.0",
"tomli-w>=1.2.0",
"typer>=0.19.2",
"web3>=7.13.0",
"pyyaml<7.0.0,>=6.0.3",
"ruamel.yaml>=0.18.0",
"safe-eth-py>=7.14.0",
"t... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:34:04.824504 | iwa-0.2.9.tar.gz | 470,091 | 79/bb/c49f29f1fdea98c79c4677976feb6d7bbf77a7d93ab2ad6bbe482648dfe9/iwa-0.2.9.tar.gz | source | sdist | null | false | dfcbc1608973be83027c1d80abf54f34 | 2a20d2263fc0cbf9bd90440417dfa7600e625833400fdd32c484523f01fe30ce | 79bbc49f29f1fdea98c79c4677976feb6d7bbf77a7d93ab2ad6bbe482648dfe9 | null | [
"LICENSE"
] | 306 |
2.4 | ws-bom-robot-app | 0.0.113 | A FastAPI application serving ws bom/robot/llm platform ai. | # 🤖 ws-bom-robot-app
A `FastAPI` application serving ws bom/robot/llm platform ai
## 🌵 Minimal app structure
```env
app/
|-- .env
|-- main.py
```
Fill `main.py` with the following code:
```python
from ws_bom_robot_app import main
app = main.app
```
Create a `.env` file in the root directory with the following configuration:
```properties
# robot configuration
robot_env=development
robot_user=your_username
USER_AGENT=ws-bom-robot-app
# cms (bowl) configuration
robot_cms_host='http://localhost:4000'
robot_cms_auth='users API-Key your-api-key-here'
# llm providers: fill one or more of these with your API keys
DEEPSEEK_API_KEY="your-deepseek-api-key"
OPENAI_API_KEY="your-openai-api-key"
GOOGLE_API_KEY="your-google-api-key"
ANTHROPIC_API_KEY="your-anthropic-api-key"
GROQ_API_KEY="your-groq-api-key"
# ibm
WATSONX_URL="https://eu-gb.ml.cloud.ibm.com"
WATSONX_APIKEY="your-watsonx-api-key"
WATSONX_PROJECTID="your-watsonx-project-id"
# gvertex: ensure to mount the file in docker
GOOGLE_APPLICATION_CREDENTIALS="./.data/secrets/google-credentials.json"
```
## 🚀 Run the app
- development
```bash
fastapi dev --port 6001
#uvicorn main:app --app-dir ./ws_bom_robot_app --reload --reload-dir ws_bom_robot_app --host 0.0.0.0 --port 6001
#uvicorn main:app --app-dir ./ws_bom_robot_app --host 0.0.0.0 --port 6001
```
- production
```bash
uvicorn main:app --host 0.0.0.0 --port 6001
```
- production with [multiple workers](https://fastapi.tiangolo.com/deployment/server-workers/#multiple-workers)
```bash
fastapi run --port 6001 --workers 4
#uvicorn main:app --host 0.0.0.0 --port 6001 --workers 4
#gunicorn -w 4 -k uvicorn.workers.UvicornWorker main:app --bind 0.0.0.0:6001
```
## 📖 API documentation
- [swagger](http://localhost:6001/docs)
- [redoc](http://localhost:6001/redoc)
---
## 🐳 Docker
dockerize base image
```pwsh
<# cpu #>
docker build -f Dockerfile-robot-base-cpu -t ws-bom-robot-base:cpu .
docker tag ws-bom-robot-base:cpu ghcr.io/websolutespa/ws-bom-robot-base:cpu
docker push ghcr.io/websolutespa/ws-bom-robot-base:cpu
<# gpu #>
docker build -f Dockerfile-robot-base-gpu -t ws-bom-robot-base:gpu .
docker tag ws-bom-robot-base:gpu ghcr.io/websolutespa/ws-bom-robot-base:gpu
docker push ghcr.io/websolutespa/ws-bom-robot-base:gpu
```
dockerize app (from src)
- cpu
```pwsh
docker build -f Dockerfile -t ws-bom-robot-app:cpu --build-arg DEVICE=cpu .
docker run --rm -d --name ws-bom-robot-app --env-file .env -p 6001:6001 ws-bom-robot-app:cpu
```
- gpu
```pwsh
docker build -f Dockerfile -t ws-bom-robot-app:gpu --build-arg DEVICE=gpu .
docker run --rm -d --name ws-bom-robot-app --gpus all --env-file .env -p 6001:6001 ws-bom-robot-app:gpu
```
dockerize app (from latest)
- cpu
```pwsh
docker build -f Dockerfile-pkg -t ws-bom-robot-app-pkg:cpu --build-arg DEVICE=cpu .
docker run --rm -d --name ws-bom-robot-app-pkg --env-file .env -p 6001:6001 ws-bom-robot-app-pkg:cpu
```
- gpu
```pwsh
docker build -f Dockerfile-pkg -t ws-bom-robot-app-pkg:gpu --build-arg DEVICE=gpu .
docker run --rm -d --name ws-bom-robot-app-pkg --gpus all --env-file .env -p 6001:6001 ws-bom-robot-app-pkg:gpu
<# test gpu: nvidia-smi #>
```
docker run mounted to src (dev mode)
```pwsh
docker run --rm -d --env-file .env -v "$(pwd)/.data:/app/.data" -p 6001:6001 ws-bom-robot-app fastapi dev ./ws_bom_robot_app/main.py --host 0.0.0.0 --port 6001
docker run --rm -d --env-file .env -v "$(pwd)/.data:/app/.data" -p 6001:6001 ws-bom-robot-app uvicorn ws_bom_robot_app.main:app --reload --host 0.0.0.0 --port 6001
```
---
## 🔖 Windows requirements (for RAG functionality only)
> ⚠️ While it's strongly recommended to use a docker container for development, you can run the app on Windows with the following requirements
### libmagic (mandatory)
```bash
py -m pip install --upgrade python-magic-bin
```
### tesseract-ocr (mandatory)
[Install tesseract](https://github.com/UB-Mannheim/tesseract/wiki)
[Last win-64 release](https://github.com/tesseract-ocr/tesseract/releases/download/5.5.0/tesseract-ocr-w64-setup-5.5.0.20241111.exe)
Add tesseract executable (C:\Program Files\Tesseract-OCR) to system PATH
```pwsh
$pathToAdd = "C:\Program Files\Tesseract-OCR"; `
$currentPath = [System.Environment]::GetEnvironmentVariable("Path", [System.EnvironmentVariableTarget]::Machine); `
if ($currentPath -split ';' -notcontains $pathToAdd) { `
[System.Environment]::SetEnvironmentVariable("Path", "$currentPath;$pathToAdd", [System.EnvironmentVariableTarget]::Machine) `
}
```
### docling
Set the following environment variables
```pwsh
KMP_DUPLICATE_LIB_OK=TRUE
```
### libreoffice (optional: for robot_env set to development/production)
[Install libreoffice](https://www.libreoffice.org/download/download-libreoffice/)
[Last win-64 release](https://download.documentfoundation.org/libreoffice/stable/24.8.2/win/x86_64/LibreOffice_24.8.2_Win_x86-64.msi)
Add libreoffice executable (C:\Program Files\LibreOffice\program) to system PATH
```pwsh
$pathToAdd = "C:\Program Files\LibreOffice\program"; `
$currentPath = [System.Environment]::GetEnvironmentVariable("Path", [System.EnvironmentVariableTarget]::Machine); `
if ($currentPath -split ';' -notcontains $pathToAdd) { `
[System.Environment]::SetEnvironmentVariable("Path", "$currentPath;$pathToAdd", [System.EnvironmentVariableTarget]::Machine) `
}
```
### poppler (optional: for robot_env set to development/production)
[Download win poppler release](https://github.com/oschwartz10612/poppler-windows/releases)
Extract the zip and copy the nested folder "poppler-x.x.x" to a program folder (e.g. C:\Program Files\poppler-24.08.0)
Add poppler executable (C:\Program Files\poppler-24.08.0\Library\bin) to system PATH
```pwsh
$pathToAdd = "C:\Program Files\poppler-24.08.0\Library\bin"; `
$currentPath = [System.Environment]::GetEnvironmentVariable("Path", [System.EnvironmentVariableTarget]::Machine); `
if ($currentPath -split ';' -notcontains $pathToAdd) { `
[System.Environment]::SetEnvironmentVariable("Path", "$currentPath;$pathToAdd", [System.EnvironmentVariableTarget]::Machine) `
}
```
---
## 👷 Contributors
Build/distribute pkg from `websolutespa` bom [[Github](https://github.com/websolutespa/bom)]
> change to the `robot` project folder
```bash
cd ./src/robot
```
### 🔖 requirements
- install uv venv package management
```bash
py -m pip install --upgrade uv
# create venv
uv venv
# activate venv
#win: .venv/Scripts/activate
#linux: source .venv/bin/activate
```
- project requirements update
```bash
uv pip install --upgrade -r requirements.txt
```
- build tools
```bash
uv pip install --upgrade setuptools build twine streamlit
```
### 🪛 build
- clean dist and build package
```pwsh
if (Test-Path ./dist) {rm ./dist -r -force}; `
py -m build && twine check dist/*
```
- linux/mac
```bash
[ -d ./dist ] && rm -rf ./dist
python -m build && twine check dist/*
```
### 📦 test / 🧪 debugger
Install the package in editable project location
```pwsh
uv pip install -U -e .
uv pip show ws-bom-robot-app
```
code quality tools
```pwsh
# .\src\robot
uv pip install -U scanreq prospector[with_everything]
## unused requirements
scanreq -r requirements.txt -p ./ws_bom_robot_app
## style/linting
prospector ./ws_bom_robot_app -t pylint -t pydocstyle
## code quality/complexity
prospector ./ws_bom_robot_app -t vulture -t mccabe -t mypy
## security
prospector ./ws_bom_robot_app -t dodgy -t bandit
## package
prospector ./ws_bom_robot_app -t pyroma
```
#### 🧪 run tests
```pwsh
uv pip install -U pytest pytest-asyncio pytest-mock pytest-cov pyclean
# clean cache if needed
# pyclean --verbose .
pytest --cov=ws_bom_robot_app --log-cli-level=info
# directory
# pytest --cov=ws_bom_robot_app.llm.vector_store --log-cli-level=info ./tests/app/llm/vector_store
```
#### 🐞 start debugger
```pwsh
streamlit run debugger.py --server.port 8051
```
### ✈️ publish
- [testpypi](https://test.pypi.org/project/ws-bom-robot-app/)
```pwsh
twine upload --verbose -r testpypi dist/*
#pip install -i https://test.pypi.org/simple/ -U ws-bom-robot-app
```
- [pypi](https://pypi.org/project/ws-bom-robot-app/)
```pwsh
twine upload --verbose dist/*
```
| text/markdown | Websolute Spa | dev@websolute.it | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | https://github.com/websolutespa/bom | null | >=3.12 | [] | [] | [] | [
"standardwebhooks==1.0.0",
"apscheduler==3.11.1",
"aiofiles==25.1.0",
"pydantic==2.12.4",
"pydantic-settings==2.12.0",
"fastapi[standard]==0.121.1",
"chevron==0.14.0",
"msoffcrypto-tool==5.4.2",
"nest_asyncio==1.6.0",
"langchain==0.3.27",
"langchain-community==0.3.30",
"langchain-core==0.3.76"... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.7 | 2026-02-19T13:34:03.520176 | ws_bom_robot_app-0.0.113.tar.gz | 92,144 | ad/87/42b47e74a580a06965036f00b995f8490c6fbe60ffd7fcbe3bb878f203d0/ws_bom_robot_app-0.0.113.tar.gz | source | sdist | null | false | d69b438d4c4fd8a74df01c0ff8192401 | 11bcf4351cc74da84cfd6b1b2ce6856eb88a41ef660930fef47ebc139a5421a8 | ad8742b47e74a580a06965036f00b995f8490c6fbe60ffd7fcbe3bb878f203d0 | null | [] | 247 |
2.4 | deeploy | 1.58.0 | The official Deeploy client for Python | ## Deeploy Python Client API Reference
Python client for interacting with Deeploy: Deploying ML with confidence.
**Python Version Support:** This package supports Python 3.10, 3.11, and 3.12.
Read the [documentation](https://docs.deeploy.ml) or visit the [Deeploy website](https://deeploy.ml) to learn more about Deeploy.
| text/markdown | Tim Kleinloog | opensource@deeploy.ml | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent"
] | [] | https://gitlab.com/deeploy-ml/deeploy-python-client | null | <3.13,>=3.10 | [] | [] | [] | [
"pydantic<3,>2",
"requests>=2.31.0",
"joblib==1.4.2",
"dill==0.3.7",
"click",
"Jinja2",
"numpy",
"pandas",
"tqdm",
"numpy>=1.17.2; extra == \"fair\"",
"pandas>=0.25.1; extra == \"fair\"",
"scikit-learn>=0.22.1; extra == \"fair\"",
"fairlearn>=0.5.0; extra == \"fair\"",
"fairsd~=0.1.0; extr... | [] | [] | [] | [
"Documentation, https://deeploy-ml.gitlab.io/deeploy-python-client/",
"Deeploy website, https://deeploy.ml"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T13:33:43.378978 | deeploy-1.58.0.tar.gz | 902,333 | bd/ac/7310846876ecb1042f31cb11d9fc01e81966f555a80b137f511e07873bf3/deeploy-1.58.0.tar.gz | source | sdist | null | false | f2c7efc9c51e2bcb311267cde43917ca | 6f16f8952f386f8207f8dcc0cd3fd21d8849524810482269b72997b0fae88ff4 | bdac7310846876ecb1042f31cb11d9fc01e81966f555a80b137f511e07873bf3 | null | [
"LICENSE"
] | 250 |
2.3 | django-oauth2-codeflow | 1.1.1 | Authenticate with any OpenId Connect/Oauth2 provider through authorization code flow. PKCE is also supported. | Summary
=======
[![pypi downloads][dl-image]][pypi-url]
[![pypi status][status-image]][pypi-url]
[![python versions][py-image]][pypi-url]
[![django versions][django-image]][pypi-url]
[![pipeline status][pipeline-image]][pipeline-url]
[![coverage status][coverage-image]][coverage-url]
[![license][license-image]](./LICENSE)
[pypi-url]: https://pypi.org/project/django-oauth2-authcodeflow/
[dl-image]: https://img.shields.io/pypi/dm/django-oauth2-authcodeflow
[status-image]: https://img.shields.io/pypi/status/django-oauth2-authcodeflow
[py-image]: https://img.shields.io/pypi/pyversions/django-oauth2-authcodeflow.svg
[django-image]: https://img.shields.io/pypi/djversions/django-oauth2-authcodeflow.svg
[pipeline-image]: https://gitlab.com/systra/qeto/lib/django-oauth2-authcodeflow/badges/master/pipeline.svg?ignore_skipped=true
[pipeline-url]: https://gitlab.com/systra/qeto/lib/django-oauth2-authcodeflow/-/commits/master
[coverage-image]: https://gitlab.com/systra/qeto/lib/django-oauth2-authcodeflow/badges/master/coverage.svg
[coverage-url]: https://gitlab.com/systra/qeto/lib/django-oauth2-authcodeflow/-/commits/master
[license-image]: https://img.shields.io/pypi/l/django-oauth2-authcodeflow.svg
Authenticate with any OpenId Connect/Oauth2 provider through authorization code flow with [Django](https://www.djangoproject.com/).
Supported protocols:
- [Oauth 2.0](https://www.rfc-editor.org/rfc/rfc6749)
- [PKCE](https://www.rfc-editor.org/rfc/rfc7636)
- [OpenIDConnect 1.0](https://openid.net/specs/openid-connect-rpinitiated-1_0.html)
Wording
-------
- OP = OpenId Connect Provider, the auth server
- RP = Relying Party, the client, your application
Setup
-----
- add `oauth2_authcodeflow` to the `INSTALLED_APPS` (after `django.contrib.auth` and `django.contrib.sessions` apps)
- add `path('oidc/', include('oauth2_authcodeflow.urls')),` in your global `urls.py` file.
You can change the path prefix to what you want
- add `oauth2_authcodeflow.auth.AuthenticationBackend` to the `AUTHENTICATION_BACKENDS` config.
You can keep `django.contrib.auth.backends.ModelBackend` as a second-fallback auth mechanism.
- get your callback urls by doing:
```sh
./manage.py oidc_urls [--secure] <HOST_NAME>
```
- Configure your application on the OpenId Connect Provider.
This should give you a `client_id` and a `secret_id`.
You will need to fill the `redirect_url` and `logout_url` there.
- Ensure the `sid`, email, first name and last name (if applicable) parameters are included in the id token claims on the OP.
- Ensure that `django.contrib.sessions.middleware.SessionMiddleware` is in `MIDDLEWARE`
Minimal configuration
---------------------
- `SESSION_COOKIE_SECURE` to `True` if your Django is served through *HTTPS*
- `OIDC_OP_DISCOVERY_DOCUMENT_URL` to the well-known openid configuration url of the OP
- `OIDC_RP_CLIENT_ID` client id provided by the OP
- `OIDC_RP_CLIENT_SECRET` client secret provided by the OP
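Put together, a minimal `settings.py` fragment might look like the sketch below; the discovery URL and client credentials are placeholders to adapt to your OP.

```python
# settings.py (sketch; values are placeholders)
INSTALLED_APPS = [
    'django.contrib.auth',
    'django.contrib.sessions',
    # ...
    'oauth2_authcodeflow',  # after auth and sessions apps
]
AUTHENTICATION_BACKENDS = [
    'oauth2_authcodeflow.auth.AuthenticationBackend',
    'django.contrib.auth.backends.ModelBackend',  # optional fallback
]
SESSION_COOKIE_SECURE = True  # your Django is served through HTTPS
OIDC_OP_DISCOVERY_DOCUMENT_URL = 'https://op.example.com/.well-known/openid-configuration'
OIDC_RP_CLIENT_ID = 'your-client-id'
OIDC_RP_CLIENT_SECRET = 'your-client-secret'
```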
Login
-----
Get your browser/frontend to go to the `oidc_authentication` page name (`/oidc/authenticate` by default) with the following parameters:
- `next`: the url to redirect on success
- `fail`: the url to redirect on failure, `error` query string may contain an error description
Logout
------
Get your browser/frontend to go to the `oidc_logout` page name (`/oidc/logout` by default) with the following parameters:
- `next`: the url to redirect on success
- `fail`: the url to redirect on failure, `error` query string may contain an error description
Logout from the OP as well
--------------------------
This will logout the user from the application but also from the OP (if user say yes) and the OP should also logout the user from all other apps connected to this OP.
The spec is not always well followed by OPs, so your mileage may vary.
Get your browser/frontend to go to the `oidc_total_logout` page name (`/oidc/total_logout` by default) with the following parameters:
- `next`: the url to redirect on success
- `fail`: the url to redirect on failure, `error` query string may contain an error description
Protect your urls
-----------------
At least three options are possible.
1. Use the default Django way to [limit access to logged-in users](https://docs.djangoproject.com/en/4.1/topics/auth/default/#limiting-access-to-logged-in-users) by defining `LOGIN_URL` in your settings and using `login_required` decorators in your views.
```python
# settings.py
from django.urls import reverse_lazy
from django.utils.text import format_lazy
LOGIN_URL = format_lazy('{url}?fail=/', url=reverse_lazy(OIDC_URL_AUTHENTICATION_NAME))
# urls.py
from django.contrib.auth.decorators import login_required
path('restricted_url/', login_required(your_view)),
```
2. A slightly different version, by directly and only using the `login_required` from `oauth2_authcodeflow.utils`.
3. Use the `LoginRequiredMiddleware` with `OIDC_MIDDLEWARE_NO_AUTH_URL_PATTERNS` configuration.
Optional middlewares
--------------------
You can add some middlewares to add some features:
- `oauth2_authcodeflow.middleware.LoginRequiredMiddleware` to automatically force a login request for urls not in `OIDC_MIDDLEWARE_NO_AUTH_URL_PATTERNS` if not authenticated.
- `oauth2_authcodeflow.middleware.RefreshAccessTokenMiddleware` to automatically refresh the access token when it’s expired.
- `oauth2_authcodeflow.middleware.RefreshSessionMiddleware` to automatically ask for a new id token when it’s considered expired.
- `oauth2_authcodeflow.middleware.BearerAuthMiddleware` to authenticate the user using `Authorization` HTTP header (API, scripts, CLI usage).
`LoginRequiredMiddleware` will redirect back to the original page once the user has logged in.
`RefreshAccessTokenMiddleware` and `RefreshSessionMiddleware` will try the refresh and return a redirect to the same page (or the one configured as next in the login phase) if the refresh cannot happen.
Use them to silently refresh your access/id tokens.
`BearerAuthMiddleware` will use `oauth2_authcodeflow.auth.BearerAuthenticationBackend` to authenticate the user based on the `Authorization` HTTP header instead of using sessions.
Use this to authenticate without cookies/sessions: log in with `from_cli=1` in your `login` url, open the displayed url in a browser, then copy the resulting HTTP header into further requests.
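A client calling such an API then sends the header on every request; for example with the standard library (a sketch — the `Bearer` prefix follows the default `OIDC_AUTHORIZATION_HEADER_PREFIX`):

```python
import urllib.request

def bearer_request(url, id_token, prefix='Bearer'):
    # prefix matches OIDC_AUTHORIZATION_HEADER_PREFIX (default 'Bearer')
    req = urllib.request.Request(url)
    req.add_header('Authorization', f'{prefix} {id_token}')
    return req

req = bearer_request('https://example.com/api/me', '<id_token copied from the browser>')
# then: urllib.request.urlopen(req)
```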
Signals
-------
One can use Django `user_logged_in` and `user_logged_out` [signals](https://docs.djangoproject.com/en/5.0/ref/contrib/auth/#module-django.contrib.auth.signals) to know and act when a user is logged in or disconnected.
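For example, a receiver for login events might look like this (a sketch; the `connect` call is left commented out because it requires Django to be importable at that point):

```python
import logging

logger = logging.getLogger(__name__)

def on_user_logged_in(sender, request, user, **kwargs):
    # Audit logging, provisioning, cache warm-up… anything to run on each login
    logger.info("User %s logged in via OIDC", user)

# from django.contrib.auth.signals import user_logged_in
# user_logged_in.connect(on_user_logged_in)
```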
Full configuration
------------------
Secure session cookie settings:
- `SESSION_COOKIE_AGE` to a reasonable time (default 2 weeks)
- `SESSION_COOKIE_HTTPONLY` **must** be `True` (default `True`)
- `SESSION_COOKIE_PATH` be sure to use `/` to prevent some weird behavior (default `/`)
- `SESSION_COOKIE_SAMESITE` **should** be `Lax` (default `Lax`)
- `SESSION_COOKIE_SECURE` **should** be `True` in *https* context (default `False`)
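Put together, a hardened configuration could look like this (values match the recommendations above; enable `SESSION_COOKIE_SECURE` only behind https):

```python
# settings.py
SESSION_COOKIE_AGE = 14 * 86400     # 2 weeks, in seconds
SESSION_COOKIE_HTTPONLY = True
SESSION_COOKIE_PATH = '/'
SESSION_COOKIE_SAMESITE = 'Lax'
SESSION_COOKIE_SECURE = True        # https deployments only
```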
Specific OIDC settings:
| Settings | Description | Default |
| -------- | ----------- | ------- |
| `OIDC_OP_DISCOVERY_DOCUMENT_URL` | URL of your OpenID Connect Provider discovery document (*recommended*).<br>If you provide this, the following configs will be ignored:<br>- `OIDC_OP_AUTHORIZATION_URL`<br>- `OIDC_OP_TOKEN_URL`<br>- `OIDC_OP_USERINFO_URL`<br>- `OIDC_OP_JWKS_URL` | `None` |
| `OIDC_OP_AUTHORIZATION_URL` | URL of your OpenID Connect Provider authorization endpoint (**not recommended**, `OIDC_OP_DISCOVERY_DOCUMENT_URL` is preferred). | `None` |
| `OIDC_OP_TOKEN_URL` | URL of your OpenID Connect Provider token endpoint (**not recommended**, `OIDC_OP_DISCOVERY_DOCUMENT_URL` is preferred). | `None` |
| `OIDC_OP_USERINFO_URL` | URL of your OpenID Connect Provider userinfo endpoint (**not recommended**, `OIDC_OP_DISCOVERY_DOCUMENT_URL` is preferred). | `None` |
| `OIDC_OP_JWKS_URL` | URL of your OpenID Connect Provider endpoint for public signing keys (in `PEM` or `DER` format).<br>This is used to verify the `id_token`.<br>Providing this url directly is **not recommended**; rather use the `OIDC_OP_DISCOVERY_DOCUMENT_URL` config. | `None` |
| `OIDC_OP_END_SESSION_URL` | URL of your OpenID Connect Provider end session endpoint (**not recommended**, `OIDC_OP_DISCOVERY_DOCUMENT_URL` is preferred). | `None` |
| `OIDC_OP_FETCH_USER_INFO` | Fetch user info on login or not. | `True` |
| `OIDC_OP_TOTAL_LOGOUT` | If `True`, a total logout will also ask the OP for a logout.<br>Be careful: some OPs do not follow the RFC and will not let the user avoid logging out of all connected apps.<br>Azure is one such example. | `True` |
| `OIDC_OP_EXPECTED_EMAIL_CLAIM` | Expected email claim key. | `'email'` |
| `OIDC_OP_EXPECTED_CLAIMS` | `OIDC_OP_EXPECTED_EMAIL_CLAIM` value is automatically included in this list. | `[]` |
| `OIDC_RP_CLIENT_ID` | OpenID Connect client ID provided for your Relying Party/client by your OpenID Connect Provider. | |
| `OIDC_RP_CLIENT_SECRET` | OpenID Connect client secret provided for your Relying Party/client by your OpenID Connect Provider.<br>May be empty when using PKCE. | |
| `OIDC_RP_USE_PKCE` | `PKCE` improves security; disable it only if your provider cannot handle it. | `True` |
| `OIDC_RP_FORCE_SECRET_WITH_PKCE` | Force sending the client secret even when using `PKCE`.<br>Only use this option if your provider doesn't support PKCE without a secret. | `False` |
| `OIDC_RP_FORCE_CONSENT_PROMPT` | Force asking for consent on login, even if `offline_access` is not in the scopes. | `False` |
| `OIDC_RP_AZURE_SPA` | Azure requires the `Origin` header when using PKCE as a SPA. | `False` |
| `OIDC_RP_SCOPES` | The OpenID Connect scopes to request during login.<br>The scopes can be useful later to get access to other resources.<br>`openid` must be in the list.<br>You can also include the `email` scope to ensure that the email field will be in the claims (*recommended*).<br>You can also include the `profile` scope to get more info (like names, …) in the `id_token` (*recommended*).<br>You can also get a `refresh_token` by specifying the `offline_access` scope. | `['openid', 'email', 'profile', 'offline_access']` |
| `OIDC_RP_USERINFO_CLAIMS` | OpenID Connect authorization [request parameter `userinfo` member](https://openid.net/specs/openid-connect-core-1_0.html#ClaimsParameter) to optionally add to the id token request (dict type). | `None` |
| `OIDC_RP_TOKEN_CLAIMS` | OpenID Connect authorization [request parameter `id_token` member](https://openid.net/specs/openid-connect-core-1_0.html#ClaimsParameter) to optionally add to the id token request (dict type). | `None` |
| `OIDC_RP_SIGN_ALGOS_ALLOWED` | Sets the algorithms the IdP may use to sign ID tokens.<br>Typical values are `HS256` (no key required) and `RS256` (public key required).<br>The public keys may be defined in `OIDC_RP_IDP_SIGN_KEY` or deduced using the `OIDC_OP_JWKS_URL` config. | `['HS256', 'HS384', 'HS512', 'RS256', 'RS384', 'RS512']` |
| `OIDC_RP_IDP_SIGN_KEY` | Public RSA key used to verify signatures. Overrides keys from the JWKS endpoint.<br>Should be in `PEM` or `DER` format. | `None` |
| `OIDC_CREATE_USER` | Enables or disables automatic user creation during authentication | `True` |
| `OIDC_RANDOM_SIZE` | Sets the length of the random string used in the OAuth2 protocol. | `32` |
| `OIDC_PROXY` | Defines a proxy for all requests to the OpenID Connect provider (fetch JWS, retrieve JWT tokens, Userinfo Endpoint).<br>The default is set to `None` which means the library will not use a proxy and connect directly.<br>For configuring a proxy check the Python requests documentation: <https://requests.readthedocs.io/en/master/user/advanced/#proxies> | `None` |
| `OIDC_TIMEOUT` | Defines a timeout for all requests to the OpenID Connect provider (fetch JWS, retrieve JWT tokens, Userinfo Endpoint).<br>The default is set to `None` which means the library will wait indefinitely.<br>The time can be defined in seconds (integer).<br>For more information about possible configuration values, see Python requests: <https://requests.readthedocs.io/en/master/user/quickstart/#timeouts> | `None` |
| `OIDC_REDIRECT_OK_FIELD_NAME` | Sets the GET parameter used to define the redirect URL after successful authentication. | `'next'` |
| `OIDC_REDIRECT_ERROR_FIELD_NAME` | Sets the GET parameter used to define the redirect URL after failed authentication. | `'fail'` |
| `OIDC_DJANGO_USERNAME_FUNC` | Function or dotted path to a function that computes the django username based on claims.<br>The username should be unique for this app.<br>The default is a URL-safe base64 encoding of the SHA-1 hash of the email. | `get_default_django_username` |
| `OIDC_EMAIL_CLAIM` | Claim name for the email.<br>A `None` value means use the `OIDC_OP_EXPECTED_EMAIL_CLAIM` value.<br>You can also provide a lambda that takes all the claims as argument and returns an email. | `None` |
| `OIDC_FIRSTNAME_CLAIM` | Claim name for the first name.<br>You can also provide a lambda that takes all the claims as argument and returns a first name. | `'given_name'` |
| `OIDC_LASTNAME_CLAIM` | Claim name for the last name.<br>You can also provide a lambda that takes all the claims as argument and returns a last name. | `'family_name'` |
| `OIDC_EXTEND_USER` | Callable that takes the `user`, the `claims` and optionally the `request` and `access_token` as arguments and can extend user properties.<br>You can also specify a dotted path to a callable. | `None` |
| `OIDC_UNUSABLE_PASSWORD` | Scramble the password on each SSO connection/renewal.<br>If `False`, the password is only scrambled when the account is created. | `True` |
| `OIDC_BLACKLIST_TOKEN_TIMEOUT_SECONDS` | How long a token stays blacklisted (7 days by default). | `7 * 86400` |
| `OIDC_AUTHORIZATION_HEADER_PREFIX` | Only used when using authorization in header:<br>`Authorization: Bearer id_token`<br>This is only possible if `oauth2_authcodeflow.middleware.BearerAuthMiddleware` has been added to `MIDDLEWARE` setting list. | `'Bearer'` |
| `OIDC_MIDDLEWARE_NO_AUTH_URL_PATTERNS` | The `RefreshAccessTokenMiddleware` and `RefreshSessionMiddleware` will use this list to bypass auth checks.<br>Any url matching this list will not be authenticated using the Auth Code Flow.<br>You should include at least any failure/error or admin urls in it. | `[]` |
| `OIDC_MIDDLEWARE_LOGIN_REQUIRED_REDIRECT` | Redirect to login page if not authenticated when using `LoginRequiredMiddleware`. | `True` |
| `OIDC_MIDDLEWARE_API_URL_PATTERNS` | The `RefreshAccessTokenMiddleware` and `RefreshSessionMiddleware` will use this list to answer JSON response in case of refresh failure.<br>Expected list of regexp URL patterns. | `['^/api/']` |
| `OIDC_MIDDLEWARE_SESSION_TIMEOUT_SECONDS` | Session timeout used by the refresh middlewares (7 days by default). | `7 * 86400` |
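As an illustration of the `OIDC_DJANGO_USERNAME_FUNC` default described above — a URL-safe base64 encoding of the SHA-1 hash of the email — the computation is roughly the following (a sketch only; the library's exact handling of padding and claims may differ):

```python
from base64 import urlsafe_b64encode
from hashlib import sha1

def username_from_email(email: str) -> str:
    # 20-byte SHA-1 digest → 27-character URL-safe base64 string (padding stripped)
    return urlsafe_b64encode(sha1(email.encode()).digest()).decode().rstrip('=')

print(username_from_email('alice@example.com'))
```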
| text/markdown | Melih Sünbül | m.sunbul@excellence-cloud.com | Melih Sünbül | m.sunbul@excellence-cloud.com | MIT | oauth2, oidc, openid | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Framework :: Django",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Framework :: Django :: 5.1",
"Framework :: Django :: 5.2",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",... | [] | https://github.com/ExcellenceCloudGmbH/django-oauth2-codeflow | null | <4.0,>=3.8 | [] | [] | [] | [
"django>=4.2",
"python-jose[cryptography]>=3.3",
"requests>=2.28"
] | [] | [] | [] | [
"Repository, https://github.com/ExcellenceCloudGmbH/django-oauth2-codeflow",
"Documentation, https://github.com/ExcellenceCloudGmbH/django-oauth2-codeflow/blob/master/README.md",
"Bug Tracker, https://github.com/ExcellenceCloudGmbH/django-oauth2-codeflow/issues",
"Changelog, https://github.com/ExcellenceCloud... | poetry/2.1.4 CPython/3.13.7 Darwin/25.3.0 | 2026-02-19T13:33:22.247693 | django_oauth2_codeflow-1.1.1.tar.gz | 30,027 | 6b/a3/f695a54c44f30457d5fd6adb4b740e919dc372b2d1d906a53f1fbe85e4b2/django_oauth2_codeflow-1.1.1.tar.gz | source | sdist | null | false | c75dfec4c01775075f059645c3b10631 | 0ee5d246ec6264887395b4c66140bd342b87f31a5a7e61ec2dd9bf0cfe2dff53 | 6ba3f695a54c44f30457d5fd6adb4b740e919dc372b2d1d906a53f1fbe85e4b2 | null | [] | 257 |
2.4 | prediction-market-agent-tooling | 0.69.30 | Tools to benchmark, deploy and monitor prediction market agents. | # Prediction Market Agent Tooling
Tooling for benchmarking, deploying and monitoring agents for prediction market applications.
## Setup
Install the project dependencies with `poetry`, using Python >=3.10:
```bash
python3.10 -m pip install poetry
python3.10 -m poetry install
python3.10 -m poetry shell
```
Deploying and monitoring agents using GCP requires that you set up the gcloud CLI (see [here](https://cloud.google.com/sdk/docs/install) for installation instructions) and authorize with `gcloud auth login`.
Create a `.env` file in the root of the repo with the following variables:
```bash
MANIFOLD_API_KEY=...
BET_FROM_PRIVATE_KEY=...
OPENAI_API_KEY=...
```
## Benchmarking
Create a benchmarkable agent by subclassing the `AbstractBenchmarkedAgent` base class, and plug in your agent's research and prediction functions into the `predict` method.
Use the `Benchmarker` class to compare your agent's predictions vs. the 'wisdom of the crowd' on a set of markets from your chosen prediction market platform.
For example:
```python
import prediction_market_agent_tooling.benchmark.benchmark as bm
from prediction_market_agent_tooling.benchmark.agents import RandomAgent
from prediction_market_agent_tooling.markets.market_type import MarketType
from prediction_market_agent_tooling.markets.markets import get_binary_markets
benchmarker = bm.Benchmarker(
    markets=get_binary_markets(limit=10, market_type=MarketType.MANIFOLD),
    agents=[RandomAgent(agent_name="a_random_agent")],
)
benchmarker.run_agents()
md = benchmarker.generate_markdown_report()
```
This produces a markdown report that you can use for comparing agents side-by-side, like:

## Deploying
> **Deprecated**: We suggest using your own infrastructure to deploy, but you may still find this useful.
Create a deployable agent by subclassing the `DeployableTraderAgent` base class, and implementing the `answer_binary_market` method.
For example, deploy an agent that randomly picks an outcome:
```python
import random
from prediction_market_agent_tooling.deploy.agent import DeployableTraderAgent
from prediction_market_agent_tooling.markets.agent_market import AgentMarket
class DeployableCoinFlipAgent(DeployableTraderAgent):
    def answer_binary_market(self, market: AgentMarket) -> bool | None:
        return random.choice([True, False])

DeployableCoinFlipAgent().deploy_gcp(...)
```
### Safe
Agents can control funds via a wallet private key only, or optionally via a [Safe](https://safe.global/) as well. To deploy a Safe manually for a given agent, run the script below:
```commandline
poetry run python scripts/create_safe_for_agent.py --from-private-key <YOUR_AGENT_PRIVATE_KEY> --salt-nonce 42
```
This will output the newly created Safe in the terminal, and it can then be copied over to the deployment part (e.g. Terraform).
Note that `salt_nonce` can be passed so that the Safe is created deterministically for each agent: if the same `salt_nonce` is used again, the script will not create a new Safe but will instead output the previously existing one.
You can then specify this agent's Safe address with the `SAFE_ADDRESS` environment variable.
## Monitoring
Monitor the performance of the agents deployed to GCP, as well as meta-metrics of the prediction market platforms they are deployed to.
This runs as a streamlit app on a localhost server, executed with:
```bash
PYTHONPATH=. streamlit run examples/monitor/monitor.py
```
Which launches in the browser:

## The Market Platforms
The following prediction market platforms are supported:
| Platform | Benchmarking | Deployment | Monitoring |
|---------------------------------------|--------------|------------|------------|
| [Manifold](https://manifold.markets/) | ✅ | ✅ | ✅ |
| [AIOmen](https://aiomen.eth.limo/) | ✅ | ✅ | ✅ |
| [Polymarket](https://polymarket.com/) | ✅ | ❌ | ❌ |
## Prediction Markets Python API
We have built clean abstractions for taking actions on the different prediction market platforms (retrieving markets, buying and selling tokens, etc.). This is currently undocumented, but for now, inspecting the [`AgentMarket`](https://github.com/gnosis/prediction-market-agent-tooling/blob/1e497fff9f2b53e4e3e1beb5dda08b4d49da881b/prediction_market_agent_tooling/markets/agent_market.py) class and its methods is your best bet.
For example:
```python
from prediction_market_agent_tooling.config import APIKeys
from prediction_market_agent_tooling.markets.agent_market import SortBy
from prediction_market_agent_tooling.markets.omen.omen import OmenAgentMarket
# Place a bet on the market closing soonest
market = OmenAgentMarket.get_markets(limit=1, sort_by=SortBy.CLOSING_SOONEST)[0]
market.place_bet(outcome=True, amount=market.get_bet_amount(0.1))
# View your positions
my_positions = OmenAgentMarket.get_positions(user_id=APIKeys().bet_from_address)
print(my_positions)
# Sell position (accounting for fees)
market.sell_tokens(outcome=True, amount=market.get_bet_amount(0.095))
```
This API can be built on top of to create your application. See [here](https://github.com/gnosis/prediction-market-agent/tree/main/prediction_market_agent/agents/microchain_agent) for an example.
## Contributing
See the [Issues](https://github.com/gnosis/prediction-market-agent-tooling/issues) for ideas of things that need fixing or implementing. The team is also receptive to new issues and PRs.
We use `mypy` for static type checking; `isort`, `black` and `autoflake` for linting; and `pre-commit` to minimise unwanted pushes to the public repositories. These all run as steps in CI, but `pre-commit` also needs to be installed locally using the provided `install_hooks.sh` script.
| text/markdown | Gnosis | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"autoflake<3.0.0,>=2.2.1",
"base58<2.0,>=1.0.2",
"cowdao-cowpy==1.0.1",
"cron-validator<2.0.0,>=1.0.8",
"eth-account<0.14.0,>=0.13.0",
"eth-keys<0.7.0,>=0.6.1",
"eth-typing<6.0.0,>=5.0.0",
"functions-framework<4.0.0,>=3.5.0",
"google-api-python-client==2.95.0; extra == \"google\"",
"google-cloud-f... | [] | [] | [] | [] | poetry/2.3.2 CPython/3.12.3 Linux/6.14.0-1017-azure | 2026-02-19T13:33:18.348072 | prediction_market_agent_tooling-0.69.30-py3-none-any.whl | 252,793 | 6c/d4/444bf310e3de60c71e7c8efa689ad85fafb59af462420da8f89738d444aa/prediction_market_agent_tooling-0.69.30-py3-none-any.whl | py3 | bdist_wheel | null | false | a5865a833dc1442e47664a78cbc982f8 | 641a0beeb55e421b6b1f73b4f346ebe660a8e412c78ceb3494a3f8e5d2f2f22c | 6cd4444bf310e3de60c71e7c8efa689ad85fafb59af462420da8f89738d444aa | null | [
"LICENSE"
] | 331 |
2.4 | pb-spec | 0.4.1 | Plan-Build Spec (pb-spec): A CLI tool for managing AI coding assistant skills | # pb-spec — Plan-Build Spec
[](https://deepwiki.com/longcipher/pb-spec)
[](https://context7.com/longcipher/pb-spec)
[](https://www.python.org/downloads/)
[](LICENSE)
[](https://pypi.org/project/pb-spec/)

**pb-spec** is a CLI tool that installs AI coding assistant skills into your project. It provides a structured workflow — **init → plan → build** — that turns natural-language requirements into implemented, tested code through AI agent prompts.
## 🧠 Design Philosophy
pb-spec follows a **harness-first** philosophy: reliability comes from process design, explicit checks, and recoverability, not from assuming one-shot model correctness.
### Best-Practice Alignment
| Source | Core Idea | How pb-spec Applies It |
|---|---|---|
| [RPI Strategy](https://patrickarobinson.com/blog/introducing-rpi-strategy/) | Separate research, planning, and implementation | `/pb-init` + `/pb-plan` precede `/pb-build` |
| [Plan-and-Solve Prompting](https://arxiv.org/abs/2305.04091) | Plan first to reduce missing-step errors | `design.md` + `tasks.md` are mandatory artifacts |
| [ReAct](https://arxiv.org/abs/2210.03629) | Interleave reasoning and actions with environment feedback | `/pb-build` executes task-by-task with test/tool feedback loops |
| [Reflexion](https://arxiv.org/abs/2303.11366) | Learn from failure signals via iterative retries | Retry/skip/abort and DCR flow in `pb-build` |
| [Effective Harnesses for Long-Running Agents](https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents) | Grounding, context hygiene, recovery, observability | State checks, minimal context handoff, task-local rollback guidance |
| [Building Effective Agents](https://www.anthropic.com/engineering/building-effective-agents) | Prefer simple composable workflows over framework complexity | Small adapter-based CLI + explicit workflow prompts |
### Practical Principles in pb-spec
- **Context Before Code:** `/pb-init` and `/pb-plan` establish project and requirement context before implementation starts.
- **Verification by Design:** Planning requires explicit verification commands so completion is measurable.
- **Strict TDD Execution:** `/pb-build` enforces Red → Green → Refactor with per-task status tracking.
- **Safe Failure Recovery:** Failed attempts use scoped recovery guidance to avoid polluting unrelated workspace state.
- **Composable Architecture:** Platform differences stay in adapters; workflow semantics stay in shared templates.
## Features
- **4 agent skills**: `pb-init`, `pb-plan`, `pb-refine`, `pb-build` — covering project analysis, design planning, iterative refinement, and TDD implementation
- **5 platforms**: Claude Code, VS Code Copilot, OpenCode, Gemini CLI, Codex
- **Zero config**: run `pb-spec init` and start using AI prompts immediately
- **Idempotent**: safe to re-run; use `--force` to overwrite existing files
- **Built with**: Python 3.12+, [click](https://click.palletsprojects.com/), [uv](https://docs.astral.sh/uv/)
## Installation
```bash
# Recommended
uv tool install pb-spec
# Alternative
pipx install pb-spec
```
## Quick Start
```bash
# 1. Install skills/prompts for your AI tool
cd my-project
pb-spec init --ai claude # or: copilot, opencode, gemini, codex, all
# 2. Open the project in your AI coding assistant and use the installed commands/prompts:
# /pb-init → Generate AGENTS.md project context
# /pb-plan Add WebSocket auth → Generate specs/YYYY-MM-DD-01-add-websocket-auth/
# /pb-refine add-websocket-auth → (Optional) Refine design based on feedback
# /pb-build add-websocket-auth → Implement tasks via TDD subagents
#
# Note for Codex: prompts are loaded from .codex/prompts and typically run via /prompts:<name>.
```
## Supported AI Tools
| AI Tool | Target Directory | File Format |
|---|---|---|
| Claude Code | `.claude/skills/pb-<name>/SKILL.md` | YAML frontmatter + Markdown |
| VS Code Copilot | `.github/prompts/pb-<name>.prompt.md` | Markdown (no frontmatter) |
| OpenCode | `.opencode/skills/pb-<name>/SKILL.md` | YAML frontmatter + Markdown |
| Gemini CLI | `.gemini/commands/pb-<name>.toml` | TOML (`description` + `prompt`) |
| Codex | `.codex/prompts/pb-<name>.md` | YAML frontmatter + Markdown |
## CLI Reference
```text
pb-spec init --ai <platform> [--force]
```
Install skill files into the current project.
- `--ai` — Target platform: `claude`, `copilot`, `opencode`, `gemini`, `codex`, or `all`
- `--force` — Overwrite existing files
```text
pb-spec version
```
Print the installed pb-spec version.
```text
pb-spec update
```
Update pb-spec to the latest version (requires `uv`).
## Workflow
Four agent skills chain together:
```text
/pb-init → /pb-plan → [/pb-refine] → /pb-build
```
### 1. `/pb-init` — Project Initialization
Analyzes your project and generates an `AGENTS.md` file at the project root. This file captures the tech stack, directory structure, conventions, and testing patterns. **Preserves user-added context** so manual notes aren't lost on re-runs.
### 2. `/pb-plan <requirement>` — Design & Task Planning
Takes a natural-language requirement and produces a complete feature spec:
```text
specs/<YYYY-MM-DD-NO-feature-name>/
├── design.md # Architecture, API contracts, data models
└── tasks.md # Ordered implementation tasks (logical units of work)
```
The spec directory follows the naming format `YYYY-MM-DD-NO-feature-name` (e.g., `2026-02-15-01-add-websocket-auth`). The feature-name part must be unique across all specs.
### 3. `/pb-refine <feature-name>` — Design Iteration (Optional)
Reads user feedback or Design Change Requests (from failed builds) and intelligently updates `design.md` and `tasks.md`. It maintains a revision history and cascades design changes to the task list without overwriting completed work.
### 4. `/pb-build <feature-name>` — Subagent-Driven Implementation
Reads `specs/<YYYY-MM-DD-NO-feature-name>/tasks.md` and implements each task sequentially. Every task is executed by a fresh subagent following strict TDD (Red → Green → Refactor). Supports **Design Change Requests** if the planned design proves infeasible during implementation. Only the `<feature-name>` part is needed when invoking — the agent resolves the full directory automatically.
## Skills Overview
| Skill | Trigger | Output | Description |
|---|---|---|---|
| `pb-init` | `/pb-init` | `AGENTS.md` | Detect stack, scan structure, generate project context |
| `pb-plan` | `/pb-plan <requirement>` | `specs/<YYYY-MM-DD-NO-feature-name>/design.md` + `tasks.md` | Design proposal + ordered task breakdown |
| `pb-refine` | `/pb-refine <feature>` | Revised spec files | Apply feedback or Design Change Requests |
| `pb-build` | `/pb-build <feature-name>` | Code + tests | TDD implementation via subagents |
## Design Philosophy: Agent Harness
pb-spec's prompt design is inspired by Anthropic's research on [Effective Harnesses for Long-Running Agents](https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents). The core idea: place AI agents inside a strict, observable, recoverable execution environment — a "harness" — rather than relying on the agent's autonomous judgment alone.
### Key Harness Principles
| Principle | How pb-spec Implements It |
|---|---|
| **State Grounding** | Subagents must verify workspace state (`ls`, `find`, `read_file`) before writing any code — preventing path hallucination |
| **Error Quoting** | Subagents must quote specific error messages before attempting fixes — preventing blind debugging |
| **Context Hygiene** | Orchestrator passes only minimal, relevant context to each subagent — preventing context window pollution |
| **Recovery Loop** | Failed tasks trigger `git checkout .` (workspace revert) before retry — ensuring each attempt starts from a known-good state |
| **Verification Harness** | Design docs define explicit verification commands at planning time — subagents execute, not invent, verification |
| **Agent Rules** | `AGENTS.md` embeds project-specific "laws of physics" that all subagents inherit as system-level constraints |
### Where Each Principle Lives
- **Worker (Implementer):** `implementer_prompt.md` enforces grounding-first workflow and error quoting
- **Architect (Planner):** `design_template.md` includes Critical Path Verification table
- **Orchestrator (Builder):** `pb-build` SKILL enforces context hygiene and workspace revert on failure
- **Foundation (Init):** `AGENTS.md` template includes Agent Harness Rules as global conventions
## Development
```bash
# Clone
git clone https://github.com/longcipher/pb-spec.git
cd pb-spec
# Install dependencies
uv sync --group dev
# Run tests
uv run pytest -v
# Install locally for testing
uv pip install -e .
```
## License
[Apache-2.0](LICENSE) © 2025 Bob Liu
| text/markdown | Bob Liu | Bob Liu <akagi201@gmail.com> | null | null | null | cli, ai, coding-assistant, skills, plan-build, tdd | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Code Generators",
"To... | [] | null | null | >=3.12 | [] | [] | [] | [
"click>=8.3.1"
] | [] | [] | [] | [
"Homepage, https://github.com/longcipher/pb-spec",
"Repository, https://github.com/longcipher/pb-spec",
"Issues, https://github.com/longcipher/pb-spec/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T13:33:17.112028 | pb_spec-0.4.1.tar.gz | 40,878 | 56/8d/3a948beba4db382142d0ade82ce522e2269a88ff86a9b0878e441d8edaef/pb_spec-0.4.1.tar.gz | source | sdist | null | false | 35fcb1ff7acff8a2669662484dc5bf7d | 90daf85ed196fdf2d6426a33711e47df08522bbab0ff6af74a18889e61c71bcd | 568d3a948beba4db382142d0ade82ce522e2269a88ff86a9b0878e441d8edaef | Apache-2.0 | [] | 215 |
2.4 | algotik-tse | 1.0.1 | A comprehensive Python library for fetching Tehran Stock Exchange (TSETMC) and currency/coin market data. | # AlgoTik TSE
[](https://pypi.org/project/algotik-tse/)
[](https://pypi.org/project/algotik-tse/)
[](https://pepy.tech/project/algotik-tse)
[](https://pypi.org/project/algotik-tse/)
[](https://results.pre-commit.ci/latest/github/mohsenalipour/algotik_tse/master)
**A comprehensive Python library for fetching market data from the Tehran Stock Exchange (TSETMC) and currency/coin prices (TGJU).** Supports stocks, options, ETFs, bonds, and treasury bills.
All outputs are returned as **Pandas DataFrames** with Jalali (Shamsi) date support.
<div dir="rtl" align="right">
### 🇮🇷 فارسی
این کتابخانه جهت دریافت اطلاعات بازار بورس تهران و قیمت ارز و سکه توسعه یافته است. خروجی تمامی توابع با فرمت **دیتافریم پانداز** و با پشتیبانی از **تاریخ شمسی** ارائه میشود.
#### ویژگیها:
- دسترسی به دادهها با استفاده از **نماد فارسی** سهم
- **تعدیل قیمت** خودکار (افزایش سرمایه + سود نقدی)
- تشخیص هوشمند **جابجایی نماد** بین بازارها
- دسترسی به **همه شاخصهای بازار** (صنایع و کل)
- قابلیت دانلود **دستهجمعی** سابقه قیمت
- دریافت اطلاعات **حقیقی/حقوقی**
- دریافت لیست **سهامداران عمده**
- دریافت سابقه **افزایش سرمایه**
- دریافت قیمت **ارز و سکه** (دلار، یورو، سکه امامی و ...)
- دریافت **اطلاعات لحظهای کل بازار** در یک درخواست (Market Watch)
- دریافت **دادههای اینترادی** (کندل و تیک دقیقهای، بازهها: ۱ دقیقه تا ۱۲ ساعت)
- لیست **اختیارمعاملهها** با تجزیه خودکار (نوع، دارایی پایه، قیمت اعمال، سررسید)
- دریافت **زنجیره اختیارمعامله** با Open Interest
- لیست **صندوقهای ETF** با محاسبه تخفیف/حباب NAV
- لیست **اوراق مرابحه و خزانه** با استخراج تاریخ سررسید
- **نامگذاری استاندارد** (`get_*`) در کنار نامهای اصلی
- پشتیبانی از تاریخ **شمسی، میلادی و نام روز هفته**
- تنظیمات قابل پیکربندی: SSL، Timeout، Rate Limiting، Retry
- مدیریت خودکار خطا و Rate Limiting برای جلوگیری از بلاک شدن
##### 🌐 وبسایت: [algotik.com](https://algotik.com) | 📱 تلگرام: [t.me/algotik](https://t.me/algotik)
</div>
---
## Table of Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [API Reference](#api-reference)
- [get_history()](#get_history) — Historical price data
- [get_client_type()](#get_client_type) — Retail / Institutional data
- [get_capital_increase()](#get_capital_increase) — Capital increase history
- [get_detail()](#get_detail) — Full stock detail
- [get_info()](#get_info) — Instrument information
- [get_stats()](#get_stats) — Instrument statistics
- [get_symbols()](#get_symbols) — List all market symbols
- [get_shareholders()](#get_shareholders) — Major shareholders
- [get_currency()](#get_currency) — Currency & coin prices
- [get_intraday()](#get_intraday) — Intraday tick & candle data
- [get_market_snapshot()](#get_market_snapshot) — Live market snapshot (all instruments)
- [get_market_client_type()](#get_market_client_type) — Bulk individual/institutional data
- [list_options()](#list_options) — List all active options
- [get_options_chain()](#get_options_chain) — Options chain with Open Interest
- [list_etfs()](#list_etfs) — List ETFs with NAV discount
- [list_bonds()](#list_bonds) — List bonds & treasury bills with maturity
- [list_funds()](#list_funds) — List all investment funds with NAV, returns & portfolio
- [Legacy Aliases](#legacy-aliases)
- [Configuration](#configuration)
- [Examples](#examples)
- [Market Screening](#market-screening) — Top volume, gainers & losers
- [ETF Discount/Premium](#etf-discountpremium-analysis) — NAV arbitrage
- [Currency & Gold](#currency--gold-prices) — Dollar, Euro, Gold Coin
- [Options Overview](#options-overview) — Active options & top traded
- [Fund Comparison](#fund-comparison) — Equity vs Fixed Income funds
- [Bond Maturity](#bond-maturity-analysis) — Sukuk & treasury maturity
- [Institutional Money Flow](#institutional-money-flow) — Net buying/selling
- [All Asset Types](#all-asset-types-overview) — Market instrument breakdown
- [Intraday Candles](#intraday-candle-analysis) — 5min & 1h candles
- [Stock Detail & Shareholders](#stock-detail--shareholders) — Company info
- [Data Sources](#data-sources)
- [License](#license)
---
## Installation
```bash
pip install algotik-tse
```
**Upgrade to latest version:**
```bash
pip install algotik-tse --upgrade
```
**Requirements:** Python 3.8+ | pandas | requests | persiantools | lxml | numpy | openpyxl
---
## Quick Start
#### 📖 Quick start — at a glance

| Call | Description |
|---|---|
| `att.get_history('شتران')` | Adjusted price history for a symbol |
| `att.get_client_type('شتران')` | Retail/institutional (حقیقی/حقوقی) trading data |
| `att.get_symbols()` | List all market symbols (stocks, bonds, options, funds, …) |
| `att.get_currency('dollar')` | US Dollar price |
| `att.get_intraday('شتران')` | Today's 1-minute candles |
| `att.get_market_snapshot()` | Live data for the entire market |
| `att.list_options()` | List all active options |
| `att.get_options_chain('اهرم')` | Options chain with Open Interest |
| `att.list_etfs()` | List ETFs with NAV discount/premium |
| `att.list_bonds()` | List debt securities (murabaha, ijara, treasury) with maturity |
| `att.list_funds()` | List investment funds with NAV, returns & portfolio composition |
```python
import algotik_tse as att
# Get adjusted stock price history
df = att.get_history('شتران', start='1404-06-01', end='1404-08-01')
print(df.head())
```
```
Open High Low Close Volume
J-Date
1404-06-01 2008 2028 1969 2020 58693215
1404-06-03 1995 2011 1932 1932 56282643
1404-06-04 1888 1944 1888 1912 128242492
1404-06-05 1889 1965 1885 1897 80085551
1404-06-08 1875 1898 1875 1897 161293403
```
```python
# Get retail/institutional data
df_ri = att.get_client_type('شتران', limit=100)
# List all stocks in the market
all_stocks = att.get_symbols()
# Get US Dollar price history
usd = att.get_currency('dollar', limit=365)
# Intraday 1-minute candles (today's data)
intraday = att.get_intraday('شتران', interval='1min')
# Historical intraday (multi-day)
hist = att.get_intraday('شتران', interval='5min',
start='1404-11-01', end='1404-11-06')
# Live market data for ALL instruments in one call
data = att.get_market_snapshot()
print(data['stocks'].shape) # DataFrame of all instruments
print(data['market_time']) # '04/11/29 15:04:05'
print(data['index_value']) # 3806743.94
# Options chain for a specific underlying
chain = att.get_options_chain('اهرم')
print(chain['calls'].head()) # Calls DataFrame
print(chain['underlying_price']) # Current underlying price
# List all ETFs with NAV discount
etfs = att.list_etfs()
print(etfs[['Symbol', 'Close', 'NAV', 'NAV_Discount']].head())
# List all bonds with maturity info
bonds = att.list_bonds()
print(bonds[['Symbol', 'Ticker', 'BondType', 'MaturityJalali', 'DaysToMaturity']].head())
# Investment funds — NAV, returns, portfolio composition
funds = att.list_funds()
equity_funds = att.list_funds(fund_type='equity')
```
---
## API Reference
### `get_history()`
Get historical price data for one or more symbols. Prices are **auto-adjusted** for splits & dividends by default.
#### 📖 Notes — `get_history()`

`get_history()` fetches stock price history from TSETMC. Prices are **adjusted** by default (for capital increases and cash dividends). Each parameter is documented inline in the annotated call below.

**Output variants:**

- **`output_type='standard'`:** 5 columns — `Open`, `High`, `Low`, `Close`, `Volume` — all `int64`
- **`output_type='full'`:** 10 columns — adds `Final` (weighted-average closing price), `No.` (number of trades), `Value` (trade value in Rials), `Weekday_fa` (Persian weekday name), and `Ticker`
- **`auto_adjust=False`:** adds an `Adj Close` column; the OHLC prices are raw (unadjusted)
- **`raw=True`:** TSETMC-style column names such as `<TICKER>`, `<HIGH>`, `<CLOSE>`
- **`date_format='gregorian'`:** the index becomes a `datetime64` `Date` instead of a Jalali string
- **`return_type`:** adds a `returns` column — simple, log, or both
- **Multi-symbol:** columns become a `MultiIndex`: `(Column, Symbol)`

**Notes:**

- To fetch the **total market index** or **industry indices**, pass the index name as the symbol (e.g. `'شاخص کل'`, `'شاخص صنعت فلزات اساسی'`)
- With `save_to_file=True` the result is also saved to a CSV file
- In multi-symbol mode, only trading days common to all symbols are returned
```python
att.get_history(
symbol='شتران', # str or list — symbol name(s) in Persian
start=None, # str — start date in Jalali 'YYYY-MM-DD' (e.g. '1402-01-01')
end=None, # str — end date in Jalali 'YYYY-MM-DD'
limit=0, # int — number of last trading days (0 = all history)
raw=False, # bool — use TSETMC column names
auto_adjust=True, # bool — adjust for splits & dividends
output_type='standard', # str — 'standard' (OHLCV) or 'full' (all columns)
date_format='jalali', # str — 'jalali', 'gregorian', or 'both'
progress=True, # bool — show download progress bar
save_to_file=False, # bool — save result to CSV file
dropna=True, # bool — drop extra columns in multi-stock mode
adjust_volume=False, # bool — adjust volume for capital increases
return_type=None, # str/list — 'simple', 'log', 'both', or ['simple','Close',5]
ascending=True, # bool — sort by date ascending (True) or descending (False)
save_path=None, # str — file path to save CSV (e.g. 'output.csv')
)
```
#### Standard output (default)
```python
df = att.get_history('شتران', limit=10)
```
```
Open High Low Close Volume
J-Date
1404-10-14 3960 3996 3810 3996 1272346113
1404-10-15 4098 4098 4098 4098 450168956
1404-10-16 4220 4220 4220 4220 326395132
1404-10-17 4346 4346 4346 4346 892210289
1404-10-20 4216 4476 4216 4218 1862610980
```
- **Index:** `J-Date` (Jalali string, e.g. `1404-10-14`)
- **Columns:** `Open`, `High`, `Low`, `Close`, `Volume` — all `int64`
#### Full output
```python
df = att.get_history('شتران', limit=5, output_type='full')
```
```
Open High Low Close Final Volume No. Value Weekday_fa Ticker
J-Date
1404-11-25 4400 4475 4218 4218 4308 238550890 4846 1027754207785 شنبه شتران
1404-11-26 4179 4179 4179 4179 4179 39453982 748 164878190778 یکشنبه شتران
1404-11-27 4054 4109 4054 4064 4056 430020598 7394 1744313323314 دوشنبه شتران
1404-11-28 4010 4100 3958 4066 4037 164209800 4199 662877646216 سه شنبه شتران
```
| Column | Description |
|---|---|
| `Open, High, Low, Close` | Adjusted OHLC prices (int) |
| `Final` | Weighted average closing price — قیمت پایانی |
| `Volume` | Trade volume |
| `No.` | Number of trades |
| `Value` | Total trade value (Rials) |
| `Weekday_fa` | Day of week in Persian (شنبه, یکشنبه, …) |
| `Ticker` | Symbol name |
#### Gregorian dates
```python
df = att.get_history('شتران', limit=5, date_format='gregorian')
```
```
Open High Low Close Volume
Date
2026-02-14 4400 4475 4218 4218 238550890
2026-02-15 4179 4179 4179 4179 39453982
2026-02-16 4054 4109 4054 4064 430020598
2026-02-17 4010 4100 3958 4066 164209800
```
- **Index:** `Date` (`datetime64`)
- Use `date_format='both'` to get both Jalali & Gregorian columns.
- Full mode with Gregorian shows `Weekday` (Monday, Tuesday, …) instead of `Weekday_fa`.
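If you need to map the Jalali index to Gregorian dates yourself (for example, to join with data from another source), the conversion can be sketched with plain `datetime` arithmetic. This is a simplified illustration, not the library's internal conversion: it assumes 1 Farvardin 1404 fell on 2025-03-21 and is valid only within that Jalali year.

```python
from datetime import date, timedelta

# Jalali month lengths: months 1-6 have 31 days, months 7-11 have 30,
# and month 12 has 29 (30 in leap years).
JALALI_MONTH_DAYS = [31] * 6 + [30] * 5 + [29]

# Anchor: 1 Farvardin 1404 == 2025-03-21 (assumption; valid for year 1404 only).
NOWRUZ_1404 = date(2025, 3, 21)

def jalali_1404_to_gregorian(month: int, day: int) -> date:
    """Convert a Jalali date in year 1404 to its Gregorian equivalent."""
    offset = sum(JALALI_MONTH_DAYS[: month - 1]) + (day - 1)
    return NOWRUZ_1404 + timedelta(days=offset)

print(jalali_1404_to_gregorian(11, 25))  # → 2026-02-14, matching the sample above
```

For anything beyond a single year, prefer a proper Jalali calendar library such as `persiantools` (already a dependency of this package).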
#### Auto-adjust off
```python
df = att.get_history('شتران', limit=5, auto_adjust=False)
```
```
Open High Low Close Adj Close Volume
J-Date
1404-11-25 4400.0 4475.0 4218.0 4218.0 4218 238550890
1404-11-26 4179.0 4179.0 4179.0 4179.0 4179 39453982
1404-11-27 4054.0 4109.0 4054.0 4064.0 4064 430020598
1404-11-28 4010.0 4100.0 3958.0 4066.0 4066 164209800
```
- Adds `Adj Close` column. OHLC are raw (unadjusted) and `float64`.
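With `auto_adjust=False` you can recover the adjustment factor yourself by dividing `Adj Close` by the raw `Close`, then apply it to any raw price column. A minimal sketch with made-up numbers (the sample window above happened to contain no adjustment, so every factor there would be 1.0):

```python
# Hypothetical raw vs. adjusted closes around a 1:1 capital increase
# (illustrative numbers, not real data for any symbol).
raw_close = [9000.0, 9100.0, 4100.0]   # price roughly halves after the event
adj_close = [4500.0, 4550.0, 4100.0]   # earlier rows are scaled down

factors = [adj / raw for adj, raw in zip(adj_close, raw_close)]
print(factors)  # → [0.5, 0.5, 1.0]

# Apply the factor to any raw price column to get its adjusted series.
raw_high = [9200.0, 9250.0, 4150.0]
adj_high = [h * f for h, f in zip(raw_high, factors)]
print(adj_high)  # → [4600.0, 4625.0, 4150.0]
```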
#### TSE format
```python
df = att.get_history('شتران', limit=3, raw=True)
```
```
<TICKER> <FIRST> <HIGH> <LOW> <CLOSE> <VALUE> <VOL> <OPENINT> <PER> <OPEN> <LAST>
<DTYYYYMMDD>
2026-02-15 Palayesh.Tehran 4179.0 4179.0 4179.0 4179.0 164878190778 39453982 748 D 4308.0 4179.0
2026-02-16 Palayesh.Tehran 4054.0 4109.0 4054.0 4056.0 1744313323314 430020598 7394 D 4179.0 4064.0
2026-02-17 Palayesh.Tehran 4010.0 4100.0 3958.0 4037.0 662877646216 164209800 4199 D 4056.0 4066.0
```
- TSETMC-compatible column names for import into trading software.
#### Return calculation
```python
# Simple 1-day returns
df = att.get_history('شتران', limit=10, return_type='simple')
# Adds 'returns' column: (Close[t] - Close[t-1]) / Close[t-1]
# Log returns
df = att.get_history('شتران', limit=10, return_type='log')
# Adds 'returns' column: ln(Close[t] / Close[t-1])
# Both simple & log returns
df = att.get_history('شتران', limit=10, return_type='both')
# Adds 'simple_returns' and 'log_returns' columns
# Custom: simple 5-day returns on Close
df = att.get_history('شتران', limit=15, return_type=['simple', 'Close', 5])
```
```
Open High Low Close Volume returns
J-Date
1404-11-06 4490 4490 4490 4490 16195770 NaN
1404-11-07 4356 4356 4356 4356 276970553 -0.029844
1404-11-08 4226 4356 4226 4259 1731947316 -0.022268
1404-11-11 4240 4330 4110 4110 379107775 -0.034985
1404-11-12 4110 4278 4087 4278 700763528 0.040876
```
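The formulas in the comments above can be checked by hand. Using the first two closes from the sample output (4490 and 4356):

```python
import math

prev_close, close = 4490, 4356

simple = (close - prev_close) / prev_close   # (Close[t] - Close[t-1]) / Close[t-1]
log_ret = math.log(close / prev_close)       # ln(Close[t] / Close[t-1])

print(round(simple, 6))   # → -0.029844, matching the 1404-11-07 row above
print(round(log_ret, 6))  # slightly more negative than the simple return
```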
#### Multi-stock
```python
df = att.get_history(['شتران', 'فملی'], limit=5)
```
```
Open High Low Close Volume Open High Low Close Volume
شتران شتران شتران شتران شتران فملی فملی فملی فملی فملی
J-Date
1404-11-25 4400 4475 4218 4218 238550890 14890 15080 14310 14310 306133075
1404-11-26 4179 4179 4179 4179 39453982 14020 14100 14020 14020 185179129
1404-11-27 4054 4109 4054 4064 430020598 13600 13970 13600 13900 214659584
1404-11-28 4010 4100 3958 4066 164209800 14030 14120 13790 14030 139758819
```
- Returns a `MultiIndex` column structure: `(Column, Symbol)`.
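To pull one symbol (or one field) out of the `(Column, Symbol)` structure, standard pandas MultiIndex selection applies. A small sketch with toy data in the same shape:

```python
import pandas as pd

# Toy frame in the same (Column, Symbol) layout as the multi-stock output.
columns = pd.MultiIndex.from_product([["Close", "Volume"], ["شتران", "فملی"]])
df = pd.DataFrame(
    [[4218, 14310, 238550890, 306133075],
     [4179, 14020, 39453982, 185179129]],
    index=["1404-11-25", "1404-11-26"],
    columns=columns,
)

closes = df["Close"]                           # Close columns for all symbols
one_symbol = df.xs("شتران", axis=1, level=1)   # OHLCV-style frame for one symbol
print(closes["فملی"].tolist())                 # → [14310, 14020]
print(list(one_symbol.columns))                # → ['Close', 'Volume']
```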
#### Index support
```python
# شاخص کل (Total Market Index)
idx = att.get_history('شاخص کل', limit=10)
# Industry indices
idx = att.get_history('شاخص صنعت فلزات اساسی', limit=10)
```
```
Open High Low Close Volume
J-Date
1404-11-25 4081300.0 4090060.0 3986100.0 3986106.0 2.184455e+10
1404-11-26 3898000.0 3898000.0 3881860.0 3881867.0 2.381066e+10
1404-11-27 3800290.0 3822580.0 3799820.0 3822568.0 2.270925e+10
```
---
### `get_client_type()`
Get historical **Retail / Institutional** (حقیقی / حقوقی) trading data.
#### 📖 Notes — `get_client_type()`

`get_client_type()` returns **retail (حقیقی) and institutional (حقوقی)** trader data. The standard output has 12 columns — trade counts (`N_*`), volumes (`Vol_*`), and Rial values (`Val_*`) for retail and institutional buys and sells. The full output adds 8 more — per-capita buy/sell values, buyer-to-seller power ratios, the Persian weekday name, and the ticker. Both column sets are described in the tables further down.
```python
att.get_client_type(
symbol='شتران', # str or list — symbol name(s) in Persian
start=None, # str — start date in Jalali
end=None, # str — end date in Jalali
limit=0, # int — number of last trading days
raw=False, # bool — use TSETMC column names
output_type='standard', # str — 'standard' or 'full'
date_format='jalali', # str — 'jalali', 'gregorian', or 'both'
progress=True, # bool — show progress bar
save_to_file=False, # bool — save to CSV
dropna=True, # bool — drop extra cols in multi-stock
ascending=True, # bool — sort ascending (True) or descending (False)
save_path=None, # str — file path to save CSV
)
```
#### Standard output (12 columns)
```python
df = att.get_client_type('شتران', limit=5)
```
```
N_buy_retail N_buy_institutional N_sell_retail N_sell_institutional Vol_buy_retail Vol_buy_institutional Vol_sell_retail Vol_sell_institutional Val_buy_retail Val_buy_institutional Val_sell_retail Val_sell_institutional
J-Date
1404-11-25 1499 12 883 10 95906966 142643924 216933695 21617195 414366661677 613387546108 935110119463 92644088322
1404-11-26 531 3 47 4 14403982 25050000 28630968 10823014 60194240778 104683950000 119648815272 45229375506
1404-11-27 2465 10 1969 27 319021634 110998964 277392757 152627841 1294256635847 450056687467 1125177802472 619135520842
1404-11-28 1260 11 1171 9 112375538 51834262 156350833 7858967 453981687814 208895958402 631059614604 31818031612
```
| Column | Description |
|---|---|
| `N_buy_retail` | Number of individual (حقیقی) buy trades |
| `N_buy_institutional` | Number of institutional (حقوقی) buy trades |
| `N_sell_retail` | Number of individual sell trades |
| `N_sell_institutional` | Number of institutional sell trades |
| `Vol_buy_retail` | Individual buy volume |
| `Vol_buy_institutional` | Institutional buy volume |
| `Vol_sell_retail` | Individual sell volume |
| `Vol_sell_institutional` | Institutional sell volume |
| `Val_buy_retail` | Individual buy value (Rials) |
| `Val_buy_institutional` | Institutional buy value (Rials) |
| `Val_sell_retail` | Individual sell value (Rials) |
| `Val_sell_institutional` | Institutional sell value (Rials) |
#### Full output (20 columns)
```python
df = att.get_client_type('شتران', limit=5, output_type='full')
```
Adds 8 extra columns to the standard 12:
| Extra Column | Description |
|---|---|
| `Per_capita_buy_retail` | Average buy value per individual trade |
| `Per_capita_sell_retail` | Average sell value per individual trade |
| `Per_capita_buy_institutional` | Average buy value per institutional trade |
| `Per_capita_sell_institutional` | Average sell value per institutional trade |
| `Power_retail` | Individual buyer/seller power ratio |
| `Power_institutional` | Institutional buyer/seller power ratio |
| `Weekday_fa` | Day name in Persian |
| `Ticker` | Symbol name |
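The per-capita and power columns of the full output can be derived from the standard 12 columns. The sketch below uses the 1404-11-26 row from the sample above and assumes the conventional definitions (per-capita = value / number of trades, power = buy per-capita / sell per-capita); the library's exact formulas may differ.

```python
# Values from the 1404-11-26 row of the standard output above.
n_buy_retail, n_sell_retail = 531, 47
val_buy_retail, val_sell_retail = 60_194_240_778, 119_648_815_272

per_capita_buy = val_buy_retail / n_buy_retail     # avg retail buy value per trade
per_capita_sell = val_sell_retail / n_sell_retail  # avg retail sell value per trade

# Power < 1 means the average seller moved more value than the average buyer.
power_retail = per_capita_buy / per_capita_sell
print(round(per_capita_buy))   # Rials per retail buy trade
print(round(power_retail, 3))
```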
#### Date range & Gregorian
```python
# Jalali date range
df = att.get_client_type('شتران', start='1404-06-01', end='1404-08-01')
# Gregorian index
df = att.get_client_type('شتران', limit=10, date_format='gregorian')
# Index: 'Date' (datetime64)
```
---
### `get_capital_increase()`
Get the full history of capital increases for a stock.
#### 📖 Notes — `get_capital_increase()`

`get_capital_increase()` returns the **complete capital-increase history** of a symbol: `old_shares_amount` (shares before the increase) and `new_shares_amount` (shares after), indexed by the Gregorian event date (`datetime64`), with the most recent event first (as in the sample below).
```python
df = att.get_capital_increase('شتران')
```
```
old_shares_amount new_shares_amount
date
2025-03-02 3.900000e+11 5.395000e+11
2024-02-17 2.750000e+11 3.900000e+11
2022-11-02 1.700000e+11 2.750000e+11
2021-10-17 7.500000e+10 1.700000e+11
2020-10-04 4.400000e+10 7.500000e+10
2019-08-07 2.400000e+10 4.400000e+10
2018-07-24 1.600000e+10 2.400000e+10
2017-02-04 1.200000e+10 1.600000e+10
```
- **Index:** `date` (`datetime64` — Gregorian)
- **Columns:** `old_shares_amount`, `new_shares_amount`
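Each row's ratio `new_shares_amount / old_shares_amount` gives the dilution factor of that event, and the product of the ratios gives the cumulative factor across events — useful for cross-checking price adjustment. Sketched with the values shown above:

```python
from math import prod

# (old_shares, new_shares) pairs from the sample output, oldest first.
events = [
    (1.2e10, 1.6e10),
    (1.6e10, 2.4e10),
    (2.4e10, 4.4e10),
    (4.4e10, 7.5e10),
    (7.5e10, 1.7e11),
    (1.7e11, 2.75e11),
    (2.75e11, 3.9e11),
    (3.9e11, 5.395e11),
]

ratios = [new / old for old, new in events]
cumulative = prod(ratios)           # telescopes to 5.395e11 / 1.2e10
print(round(ratios[-1], 4))         # → 1.3833, the 2025 increase
print(round(cumulative, 1))         # → 45.0, total share-count multiple since 2017
```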
---
### `get_detail()`
Get comprehensive detail for a stock (ISIN, company name, market, sector, etc.).
#### 📖 Notes — `get_detail()`

`get_detail()` returns **comprehensive symbol details** — the 12-digit ISIN, company name, Latin name, market, board code, and other identifiers — as a 15-row key/value DataFrame whose index holds the Persian field names (e.g. `کد 12 رقمی نماد`, `نماد فارسی`, `بازار`) and whose single `value` column holds each field's value.
```python
df = att.get_detail('شتران')
```
```
value
key
کد 12 رقمی نماد IRO1PTEH0001
کد 5 رقمی نماد PTEH1
نام لاتین شرکت Palayesh Tehran
کد 4 رقمی شرکت PTEH
نام شرکت پالايش نفت تهران
نماد فارسی شتران
نماد 30 رقمی فارسی پالايش نفت تهران
کد 12 رقمی شرکت IRO1PTEH0007
بازار بازار اول (تابلوي اصلي) بورس
کد تابلو 1
```
- **Shape:** (15, 1) — 15 key-value rows
- **Index:** `key` (str) — Persian field names
- **Column:** `value`
---
### `get_info()`
Get instrument information (EPS, sector PE, PSR, sector name, threshold data, etc.).
#### 📖 Notes — `get_info()`

`get_info()` returns **financial-instrument information** — estimated EPS, sector P/E, PSR, sector name, and price-threshold data — as a 46-row key/value DataFrame indexed by field identifier (e.g. `eps_estimatedEPS`, `eps_sectorPE`, `sector_lSecVal`), with a single `value` column.
```python
df = att.get_info('شتران')
```
```
value
key
eps_estimatedEPS 1018
eps_sectorPE 4.58
eps_psr 5933.701
sector_cSecVal 23
sector_lSecVal فراورده هاي نفتي، كك و سوخت هسته اي
```
- **Shape:** (46, 1) — 46 key-value rows
- **Index:** `key` (str) — field identifiers (e.g. `eps_estimatedEPS`, `sector_lSecVal`)
- **Column:** `value`
---
### `get_stats()`
Get trading statistics for a stock (averages, rankings over 3 and 12 months).
#### 📖 Notes — `get_stats()`

`get_stats()` returns a symbol's **trading statistics** — averages and market-wide rankings of trade value, volume, and daily trade count over the past 3 and 12 months — as an 88-row key/value DataFrame indexed by the Persian statistic name (e.g. `میانگین ارزش معاملات در 3 ماه گذشته`).
```python
df = att.get_stats('شتران')
```
```
value
key
میانگین ارزش معاملات در 3 ماه گذشته 2.443327e+12
میانگین ارزش معاملات در 12 ماه گذشته 1.461746e+12
رتبه ارزش معاملات در 3 ماه گذشته 4.500000e+01
رتبه ارزش معاملات در 12 ماه گذشته 5.200000e+01
میانگین حجم معاملات در 3 ماه گذشته 6.053144e+08
میانگین حجم معاملات در 12 ماه گذشته 4.740798e+08
رتبه حجم معاملات در 3 ماه گذشته 1.200000e+01
رتبه حجم معاملات در 12 ماه گذشته 1.100000e+01
میانگین دفعات معاملات روزانه در 3 ماه گذشته 8.543000e+03
میانگین دفعات معاملات روزانه در 12 ماه گذشته 6.474000e+03
```
- **Shape:** (88, 1) — 88 key-value rows
- **Index:** `key` (str) — Persian statistic names
- **Column:** `value`
---
### `get_shareholders()`
Get major shareholders of a stock (current or historical).
#### 📖 Notes — `get_shareholders()`

`get_shareholders()` returns the **major shareholders** of a symbol, either current or as of a historical date. Pass `date` as a Jalali date in `YYYYMMDD` format to query a specific day (`None` = latest), and set `include_id=True` to add a numeric `share_holder_id` column. In the output, `change_state` flags whether a holding changed (1 = unchanged, 3 = changed) and the `date` column is the Gregorian record date in `YYYYMMDD` form; the remaining columns are described in the table below.
```python
att.get_shareholders(
symbol='شتران', # str — symbol name in Persian
date=None, # str — Jalali date 'YYYYMMDD' for historical data (None = latest)
include_id=False, # bool — include shareholder IDs
)
```
#### Current shareholders
```python
df = att.get_shareholders('شتران')
```
```
share_holder_name number_of_shares percentage_of_shares change_state change_amount date
0 بانك صادرات ايران 3.234498e+10 5.995 1 0.0 20260218
1 شركت سرمايه گذاري ايرانيان -سهامي خاص - 2.569312e+10 4.762 1 0.0 20260218
2 شركت سرمايه گذاري .ا.تهران -سهامي عام --م ك م ف ع - 2.169540e+10 4.021 1 0.0 20260218
3 شركت .س .سهام عدالت .ا.خراسان رضوي -س ع --م ك م ف ع - 2.092901e+10 3.879 1 0.0 20260218
4 PRXسبد-شرك76894--موس33322- 1.797408e+10 3.331 1 0.0 20260218
```
| Column | Description |
|---|---|
| `share_holder_name` | Shareholder name |
| `number_of_shares` | Number of shares held |
| `percentage_of_shares` | Ownership percentage |
| `change_state` | Change indicator (1=unchanged, 3=changed) |
| `change_amount` | Amount of change |
| `date` | Date of record (YYYYMMDD) |
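Since the result is tabular, screening for holdings that changed (via the `change_state` flag) is straightforward. A sketch with toy rows in the same shape — plain Python here to keep it dependency-free, and the holder names are invented:

```python
# Toy rows in the shape of the get_shareholders() output (illustrative only).
rows = [
    {"share_holder_name": "Holder A", "percentage_of_shares": 5.9,
     "change_state": 1, "change_amount": 0.0},
    {"share_holder_name": "Holder B", "percentage_of_shares": 4.7,
     "change_state": 3, "change_amount": 1.2e8},
    {"share_holder_name": "Holder C", "percentage_of_shares": 4.0,
     "change_state": 1, "change_amount": 0.0},
]

# Holders whose position changed since the previous record.
changed = [r["share_holder_name"] for r in rows if r["change_state"] == 3]
# Combined stake of the listed major holders.
top_block = sum(r["percentage_of_shares"] for r in rows)

print(changed)              # → ['Holder B']
print(round(top_block, 1))  # → 14.6
```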
#### Historical shareholders
```python
df = att.get_shareholders('داتام', date='14021006')
```
```
share_holder_name number_of_shares percentage_of_shares change_state change_amount date
0 شركت توسعه تجارت داتام -سهامي خاص - 6.732833e+09 67.32 3 6.232833e+09 20231230
1 BFMصندوق سرمايه گذاري .ا.ب .افتخارحافظ 1.500000e+09 15.00 0 6.232833e+09 20231230
```
#### With shareholder IDs
```python
df = att.get_shareholders('شتران', include_id=True)
# Adds 'share_holder_id' column (7 columns total)
```
---
### `get_symbols()`
Get a list of all symbols in Tehran Stock Exchange markets — including stocks, ETFs, bonds, options, and more.
#### 📖 Notes — `get_symbols()`

`get_symbols()` returns the **full list of capital-market symbols**. Beyond stocks, subscription rights, and funds, it can now also list **debt securities, options, housing facility certificates, commodity certificates, and energy certificates**.

**Market filters (stocks):**

| Parameter | Default | Description |
|---|---|---|
| `bourse` / `main_market` | `True` | Include **Bourse** symbols |
| `farabourse` / `otc` | `True` | Include **Fara Bourse** symbols (incl. the Noavarin board) |
| `payeh` / `base_market` | `True` | Include **Payeh (base) market** symbols |
| `payeh_color` / `base_market_tier` | `None` | Filter the Payeh tier: `'زرد'` (yellow), `'نارنجی'` (orange), `'قرمز'` (red) |

**Asset-type flags:**

| Parameter | Default | Description |
|---|---|---|
| `haghe_taqadom` / `rights` | `False` | Include **subscription rights** |
| `sandogh` / `funds` | `False` | Include **ETFs and investment funds** |
| `bonds` | `False` | Include **debt securities**: treasury bills (اخزا), government bonds (اراد), sukuk, municipal notes |
| `options` | `False` | Include **options**: calls and puts on stocks and funds |
| `mortgage` | `False` | Include **housing facility certificates** |
| `commodity` | `False` | Include **commodity certificates**: deposit certificates, saffron |
| `energy` | `False` | Include **energy certificates**: electricity capacity |
| `output` | `'dataframe'` | Output format: `'dataframe'` or `'list'` |

**Output columns (with `output='dataframe'`):**

| Column | Description |
|---|---|
| `symbol` | Symbol (DataFrame index) |
| `name` | Full Persian name |
| `instrument_isin` | Instrument ISIN code |
| `english_name` | English name |
| `company_code` | 4-letter company code |
| `company_isin` | Company ISIN |
| `market` | Market type |
| `industry_group` | Industry group |
| `asset_type` | **Asset type**: `stock`, `right`, `fund`, `bond`, `option`, `mortgage`, `commodity`, `energy` |
| `instrument_id` | Numeric instrument ID |

**Filter examples:**

- `att.get_symbols()` → stocks only (default)
- `att.get_symbols(bonds=True)` → stocks + debt securities
- `att.get_symbols(bourse=False, farabourse=False, payeh=False, options=True)` → options only
- `att.get_symbols(sandogh=True, bonds=True, options=True)` → stocks + funds + bonds + options
- `att.get_symbols(output='list')` → plain list of symbols
```python
att.get_symbols(
bourse=True, # bool — include Bourse stocks (alias: main_market)
farabourse=True, # bool — include Fara Bourse stocks (alias: otc)
payeh=True, # bool — include Payeh market stocks (alias: base_market)
haghe_taqadom=False, # bool — include subscription rights (alias: rights)
sandogh=False, # bool — include ETFs/funds (alias: funds)
bonds=False, # bool — include bonds, sukuk, treasury bills
options=False, # bool — include stock & fund options (calls + puts)
mortgage=False, # bool — include housing facility certificates
commodity=False, # bool — include commodity certificates
energy=False, # bool — include energy certificates
payeh_color=None, # str or list — filter Payeh by tier (alias: base_market_tier)
output='dataframe', # str — 'dataframe' or 'list'
progress=True, # bool — show progress messages
)
```
#### Default: all regular stocks
```python
df = att.get_symbols()
```
```
name instrument_isin english_name company_code company_isin market industry_group asset_type instrument_id
symbol
آباد توریستی ورفاهی آبادگران ایران IRO1ABAD0001 Abadgaran ABAD IRO1ABAD0002 بازار دوم بورس هتل و رست | text/markdown | Mohsen Alipour | alipour@algotik.ir | null | null | GNU General Public License v3 | tse, tsetmc, tehran stock exchange, bourse, algotik, stock, market data, iran | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"Topic :: Office/Business :: Financial :: Investment",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Natural Language :: English",
"Programmin... | [] | https://github.com/mohsenalipour/algotik_tse | null | >=3.8 | [] | [] | [] | [
"requests>=2.25.0",
"pandas>=1.3.0",
"numpy>=1.20.0",
"persiantools>=2.0.0",
"urllib3>=1.26.0",
"lxml>=4.6.0",
"openpyxl>=3.0.0"
] | [] | [] | [] | [
"Website, https://algotik.com",
"Bug Tracker, https://github.com/mohsenalipour/algotik_tse/issues",
"Documentation, https://github.com/mohsenalipour/algotik_tse#readme",
"Telegram, https://t.me/algotik"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-19T13:32:15.966872 | algotik_tse-1.0.1.tar.gz | 147,715 | f8/3a/41c5debc1746d0141787e648cda42b2d975af8443d694e4840747e16dfe3/algotik_tse-1.0.1.tar.gz | source | sdist | null | false | b4dd6779c5021f0e154afd152a0fcd4e | ebd3aa1bc44ef40c852827ed9b8901b56ebcf115178db968b4de5e73fa55223c | f83a41c5debc1746d0141787e648cda42b2d975af8443d694e4840747e16dfe3 | null | [
"LICENSE",
"AUTHORS.rst"
] | 229 |
2.4 | mastapy-cli-regression | 0.1.0b1.post1 | Command-line tool for running a single mastapy script in multiple APIs and comparing results. | <h1 align="center">
<img src="https://documentation.smartmt.com/MastaAPI/15.1.1/images/smt_logo.png" width="150" alt="SMT">
</h1><br>
[](https://github.com/astral-sh/uv) [](https://github.com/astral-sh/ruff) [](https://opensource.org/licenses/MIT) 
`mastapy-cli-regression` is a command-line plugin for [mastapy](https://pypi.org/project/mastapy/).
- **Website**: https://www.smartmt.com/
- **Support**: https://support.smartmt.com/

To install this plugin, run the following:
```bash
pip install mastapy[cli-regression]
```
This plugin is designed to be used on the command-line via the [mastapy](https://pypi.org/project/mastapy/) package using the following syntax:
```bash
python -m mastapy regression ...
```
### Features
- Compare results between different versions of the MASTA API and automatically identify regressions.
- Completely automated and parallelized virtual environment creation, installation and script execution for each specified version of the MASTA API.
- Fully customisable comparisons; choose your own tolerances, group values together and decide how you want them to be compared.
- Supports loading legacy `mastapy` packages using modern Python versions.
- Export results to an Excel workbook for further analysis.
### Release Information
This is the initial release of the package.
### Pre-Requisites
Note that these pre-requisites are only required for launching the plugin from the command-line.
- `mastapy>=15.1.3`
- An internet connection
### Usage
Before starting, prepare a Python script for the plugin to execute. This must be a self-contained script (i.e. does not rely on `@mastapy.masta_property`) that can be executed from the command-line.
There are two steps to using this plugin. First, modify your script to export a comparison structure, `mastapy_cli_regression.Comparer`, which the plugin will read to run your regression tests. The following example demonstrates a modified script.
```python
from mastapy import Examples
# Import the Comparer from the regression package.
from mastapy_cli_regression import Comparer
def main() -> None:
design = Examples.Components.SIMPLE_HOUSING_FULL_MESH.load()
# Create a new Comparer object. We will add values to this for regression testing.
comparer = Comparer()
for load_case in design.static_loads:
# For each load case, create a new group. Any values added to the comparer while
# indented inside the `with` statement will be added to the group. This will
# help organise our results.
with comparer.group("Load Case: " + load_case.name):
system_deflection = load_case.system_deflection
system_deflection.perform_analysis()
gear_sets = design.all_parts_of_type_cylindrical_gear_set()
mesh_groups = (gear_set.cylindrical_meshes for gear_set in gear_sets)
meshes = (mesh for group in mesh_groups for mesh in group)
for mesh in meshes:
sd = system_deflection.results_for_cylindrical_gear_mesh(
mesh
).cast_to.cylindrical_gear_mesh_system_deflection_with_ltca_results
altca_results = sd.advanced_ltca_results
# Once we have results, we can add them to our comparison
# structure. We must also decide how we want to configure
# our tolerances for the regression tests. We have opted for a
# relative tolerance, but other options are available.
comparer.add(
"Total Misalignment",
sd.misalignment_data.total_equivalent_misalignment_for_rating,
relative_tolerance=0.0000001,
)
flank_rating = altca_results.rating.cylindrical_mesh_single_flank_rating
flank_rating = flank_rating.gear_single_flank_ratings
# Add values for the gear bending stress.
comparer.add(
"Root Stress 0",
flank_rating[0].tooth_root_stress,
relative_tolerance=0.0000001,
)
comparer.add(
"Root Stress 1",
flank_rating[1].tooth_root_stress,
relative_tolerance=0.0000001,
)
# Add values for the gear contact stress.
comparer.add(
"Contact Stress 0",
flank_rating[0].calculated_contact_stress,
relative_tolerance=0.0000001,
)
comparer.add(
"Contact Stress 1",
flank_rating[1].calculated_contact_stress,
relative_tolerance=0.0000001,
)
# Once we are done, we must call `comparer.collect`. This will output our
# comparison structure for the regression plugin to read!
comparer.collect()
if __name__ == "__main__":
main()
```
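The `relative_tolerance` arguments above control when values from two runs are considered equal. The exact comparison is internal to the plugin, but a relative-tolerance check of this general form (shown as an assumption, not the plugin's implementation) illustrates what a value of `1e-7` means in practice:

```python
def within_relative_tolerance(baseline: float, candidate: float, rtol: float) -> bool:
    """True if candidate deviates from baseline by at most rtol (relative)."""
    if baseline == 0.0:
        return candidate == 0.0
    return abs(candidate - baseline) <= rtol * abs(baseline)

# A root stress drifting by 1 part in 1e8 passes a 1e-7 tolerance...
print(within_relative_tolerance(152.30, 152.30 * (1 + 1e-8), 1e-7))  # → True
# ...but a 0.1% regression does not.
print(within_relative_tolerance(152.30, 152.30 * 1.001, 1e-7))       # → False
```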
Once the script has been modified to output a comparison structure, we are ready to launch it using the plugin. This is done from the command line.
For instance, to compare results between MASTA 14.0 and MASTA 15.0 and export results to Excel, assuming both versions of MASTA are installed in the default location, we could run:
```bash
python -m mastapy regression "path/to/script.py" 14.0 15.0 --export-excel
```
This will execute your script in both versions of the API and automatically compile test results, then export them to Excel. Alternatively, you can provide paths to your MASTA installations:
```bash
python -m mastapy regression "path/to/script.py" "path/to/MASTA 14.0" "path/to/MASTA 15.0" --export-excel
```
To view the full list of features, run:
```bash
python -m mastapy regression --help
```
### Troubleshooting
#### Version differences
If you are attempting to compare different major versions of the API (e.g. comparing 14.0 and 15.0), changes in the API might cause your script to work in one API but not the other. To work around this, you can import `__api_version__` from `mastapy` and branch on it:
```python
from mastapy import __api_version__
if __api_version__ == "14.0.0":
# Run code for MASTA 14.0
...
else:
# Run code for MASTA 15.0
...
```
You may also need to use different versions of Python, depending on what each version of `mastapy` supports. This can be configured in your launch command using `@VERSION` syntax (or `@PATH` if you have a path to the Python executable):
```bash
python -m mastapy regression "path/to/script.py" 14.0@3.10 15.0
```
This will launch MASTA 14.0 using Python 3.10. You must have the corresponding version of Python installed.
#### Package Installation
This plugin automatically creates virtual environments, then downloads and installs packages into them. Internally, `pip` is used for all package management. If you rely on your system's SSL certificate store, you can optionally pass the `--truststore` flag to install the `truststore` package into your virtual environments:
```bash
python -m mastapy regression --truststore ...
```
Everything else must be configured using `pip.ini` or environment variables.
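For example, to point `pip` at a private package index you could add the following to `pip.ini` (the URL below is a placeholder), or equivalently set the `PIP_INDEX_URL` environment variable:

```ini
[global]
index-url = https://pypi.example.internal/simple
```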
| text/markdown | null | George Baron <george.baron@smartmt.com> | null | null | null | masta, mastapy, plugin, regression, smt | [
"Development Status :: 4 - Beta",
"Environment :: Plugins",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: O... | [] | null | null | >=3.10 | [] | [] | [] | [
"beartype>=0.22.6",
"numpy>=1.22.0; python_version >= \"3.9\" and python_version < \"3.12\"",
"numpy>=1.26.0; python_version >= \"3.12\"",
"packaging>=25.0",
"polars>=1.36.1",
"rich-argparse>=1.7.2",
"rich>=14.2.0",
"seaborn>=0.13.2",
"xlsxwriter>=3.2.9"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":null,"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T13:31:37.546318 | mastapy_cli_regression-0.1.0b1.post1-py3-none-any.whl | 268,971 | 40/b8/648c753d53b26b6ac95314166c9fa31f1b8d42fa42f3d234e22920d1e04c/mastapy_cli_regression-0.1.0b1.post1-py3-none-any.whl | py3 | bdist_wheel | null | false | 3a1e8e96cc8893dc40f94b598e379019 | 01c5a647c8fcce358ea021a56b499fcec25de315f52bf77ccc53c9e42c38a0d2 | 40b8648c753d53b26b6ac95314166c9fa31f1b8d42fa42f3d234e22920d1e04c | MIT | [] | 222 |
2.4 | py-clob-client | 0.34.6 | Python client for the Polymarket CLOB | # Polymarket Python CLOB Client
<a href='https://pypi.org/project/py-clob-client'>
<img src='https://img.shields.io/pypi/v/py-clob-client.svg' alt='PyPI'/>
</a>
Python client for the Polymarket Central Limit Order Book (CLOB).
## Documentation
## Installation
```bash
# install from PyPI (requires Python 3.9+)
pip install py-clob-client
```
## Usage
The examples below are short and copy‑pasteable.
- What you need:
- **Python 3.9+**
- **Private key** that owns funds on Polymarket
- Optional: a **proxy/funder address** if you use an email or smart‑contract wallet
- Tip: store secrets in environment variables (e.g., with `.env`)
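For instance, the private key can be read from the environment rather than hard-coded. A minimal sketch (the variable name here is illustrative; `python-dotenv`'s `load_dotenv()` can populate `os.environ` from a local `.env` file first):

```python
import os

# Optionally: from dotenv import load_dotenv; load_dotenv()
# to populate os.environ from a local .env file.
private_key = os.environ.get("POLYMARKET_PRIVATE_KEY", "")
if not private_key:
    print("POLYMARKET_PRIVATE_KEY is not set")
```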
### Quickstart (read‑only)
```python
from py_clob_client.client import ClobClient
client = ClobClient("https://clob.polymarket.com") # Level 0 (no auth)
ok = client.get_ok()
time = client.get_server_time()
print(ok, time)
```
### Start trading (EOA)
**Note**: If using MetaMask or hardware wallet, you must first set token allowances. See [Token Allowances section](#important-token-allowances-for-metamaskeoa-users) below.
```python
from py_clob_client.client import ClobClient
HOST = "https://clob.polymarket.com"
CHAIN_ID = 137
PRIVATE_KEY = "<your-private-key>"
FUNDER = "<your-funder-address>"
client = ClobClient(
    HOST,               # The CLOB API endpoint
    key=PRIVATE_KEY,    # Your wallet's private key
    chain_id=CHAIN_ID,  # Polygon chain ID (137)
    signature_type=0,   # 0 for standard EOA signatures (the default)
    funder=FUNDER       # Address that holds your funds
)
client.set_api_creds(client.create_or_derive_api_creds())
```
### Start trading (proxy wallet)
For email/Magic or browser wallet proxies, you need to specify two additional parameters:
#### Funder Address
The **funder address** is the actual address that holds your funds on Polymarket. When using proxy wallets (email wallets like Magic or browser extension wallets), the signing key differs from the address holding the funds. The funder address ensures orders are properly attributed to your funded account.
#### Signature Types
The **signature_type** parameter tells the system how to verify your signatures:
- `signature_type=0` (default): Standard EOA (Externally Owned Account) signatures - includes MetaMask, hardware wallets, and any wallet where you control the private key directly
- `signature_type=1`: Email/Magic wallet signatures (delegated signing)
- `signature_type=2`: Browser wallet proxy signatures (when using a proxy contract, not direct wallet connections)
```python
from py_clob_client.client import ClobClient
HOST = "https://clob.polymarket.com"
CHAIN_ID = 137
PRIVATE_KEY = "<your-private-key>"
PROXY_FUNDER = "<your-proxy-or-smart-wallet-address>" # Address that holds your funds
client = ClobClient(
    HOST,                 # The CLOB API endpoint
    key=PRIVATE_KEY,      # Your wallet's private key
    chain_id=CHAIN_ID,    # Polygon chain ID (137)
    signature_type=1,     # 1 for email/Magic wallet signatures
    funder=PROXY_FUNDER   # Address that holds your funds
)
client.set_api_creds(client.create_or_derive_api_creds())
```
### Find markets, prices, and orderbooks
```python
from py_clob_client.client import ClobClient
from py_clob_client.clob_types import BookParams
client = ClobClient("https://clob.polymarket.com") # read-only
token_id = "<token-id>" # Get a token ID: https://docs.polymarket.com/developers/gamma-markets-api/get-markets
mid = client.get_midpoint(token_id)
price = client.get_price(token_id, side="BUY")
book = client.get_order_book(token_id)
books = client.get_order_books([BookParams(token_id=token_id)])
print(mid, price, book.market, len(books))
```
### Place a market order (buy by $ amount)
**Note**: EOA/MetaMask users must set token allowances before trading. See [Token Allowances section](#important-token-allowances-for-metamaskeoa-users) below.
```python
from py_clob_client.client import ClobClient
from py_clob_client.clob_types import MarketOrderArgs, OrderType
from py_clob_client.order_builder.constants import BUY
HOST = "https://clob.polymarket.com"
CHAIN_ID = 137
PRIVATE_KEY = "<your-private-key>"
FUNDER = "<your-funder-address>"
client = ClobClient(
    HOST,               # The CLOB API endpoint
    key=PRIVATE_KEY,    # Your wallet's private key
    chain_id=CHAIN_ID,  # Polygon chain ID (137)
    signature_type=1,   # 1 for email/Magic wallet signatures
    funder=FUNDER       # Address that holds your funds
)
client.set_api_creds(client.create_or_derive_api_creds())
mo = MarketOrderArgs(token_id="<token-id>", amount=25.0, side=BUY, order_type=OrderType.FOK) # Get a token ID: https://docs.polymarket.com/developers/gamma-markets-api/get-markets
signed = client.create_market_order(mo)
resp = client.post_order(signed, OrderType.FOK)
print(resp)
```
### Place a limit order (shares at a price)
**Note**: EOA/MetaMask users must set token allowances before trading. See [Token Allowances section](#important-token-allowances-for-metamaskeoa-users) below.
```python
from py_clob_client.client import ClobClient
from py_clob_client.clob_types import OrderArgs, OrderType
from py_clob_client.order_builder.constants import BUY
HOST = "https://clob.polymarket.com"
CHAIN_ID = 137
PRIVATE_KEY = "<your-private-key>"
FUNDER = "<your-funder-address>"
client = ClobClient(
    HOST,               # The CLOB API endpoint
    key=PRIVATE_KEY,    # Your wallet's private key
    chain_id=CHAIN_ID,  # Polygon chain ID (137)
    signature_type=1,   # 1 for email/Magic wallet signatures
    funder=FUNDER       # Address that holds your funds
)
client.set_api_creds(client.create_or_derive_api_creds())
order = OrderArgs(token_id="<token-id>", price=0.01, size=5.0, side=BUY) # Get a token ID: https://docs.polymarket.com/developers/gamma-markets-api/get-markets
signed = client.create_order(order)
resp = client.post_order(signed, OrderType.GTC)
print(resp)
```
### Manage orders
**Note**: EOA/MetaMask users must set token allowances before trading. See [Token Allowances section](#important-token-allowances-for-metamaskeoa-users) below.
```python
from py_clob_client.client import ClobClient
from py_clob_client.clob_types import OpenOrderParams
HOST = "https://clob.polymarket.com"
CHAIN_ID = 137
PRIVATE_KEY = "<your-private-key>"
FUNDER = "<your-funder-address>"
client = ClobClient(
    HOST,               # The CLOB API endpoint
    key=PRIVATE_KEY,    # Your wallet's private key
    chain_id=CHAIN_ID,  # Polygon chain ID (137)
    signature_type=1,   # 1 for email/Magic wallet signatures
    funder=FUNDER       # Address that holds your funds
)
client.set_api_creds(client.create_or_derive_api_creds())
open_orders = client.get_orders(OpenOrderParams())
order_id = open_orders[0]["id"] if open_orders else None
if order_id:
    client.cancel(order_id)
client.cancel_all()
```
### Markets (read‑only)
```python
from py_clob_client.client import ClobClient
client = ClobClient("https://clob.polymarket.com")
markets = client.get_simplified_markets()
print(markets["data"][:1])
```
### User trades (requires auth)
**Note**: EOA/MetaMask users must set token allowances before trading. See [Token Allowances section](#important-token-allowances-for-metamaskeoa-users) below.
```python
from py_clob_client.client import ClobClient
HOST = "https://clob.polymarket.com"
CHAIN_ID = 137
PRIVATE_KEY = "<your-private-key>"
FUNDER = "<your-funder-address>"
client = ClobClient(
    HOST,               # The CLOB API endpoint
    key=PRIVATE_KEY,    # Your wallet's private key
    chain_id=CHAIN_ID,  # Polygon chain ID (137)
    signature_type=1,   # 1 for email/Magic wallet signatures
    funder=FUNDER       # Address that holds your funds
)
client.set_api_creds(client.create_or_derive_api_creds())
last = client.get_last_trade_price("<token-id>")
trades = client.get_trades()
print(last, len(trades))
```
## Important: Token Allowances for MetaMask/EOA Users
### Do I need to set allowances?
- **Using email/Magic wallet?** No action needed - allowances are set automatically.
- **Using MetaMask or hardware wallet?** You need to set allowances before trading.
### What are allowances?
Think of allowances as permissions. Before Polymarket can move your funds to execute trades, you need to give the exchange contracts permission to access your USDC and conditional tokens.
### Quick Setup
You need to approve two types of tokens:
1. **USDC** (for deposits and trading)
2. **Conditional Tokens** (the outcome tokens you trade)
Each needs approval for the exchange contracts to work properly.
### Setting Allowances
Here's a simple breakdown of what needs to be approved:
**For USDC (your trading currency):**
- Token: `0x2791Bca1f2de4661ED88A30C99A7a9449Aa84174`
- Approve for these contracts:
- `0x4bFb41d5B3570DeFd03C39a9A4D8dE6Bd8B8982E` (Main exchange)
- `0xC5d563A36AE78145C45a50134d48A1215220f80a` (Neg risk markets)
- `0xd91E80cF2E7be2e162c6513ceD06f1dD0dA35296` (Neg risk adapter)
**For Conditional Tokens (your outcome tokens):**
- Token: `0x4D97DCd97eC945f40cF65F87097ACe5EA0476045`
- Approve for the same three contracts above
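Putting the two lists together: six approvals in total, one per (token, spender) pair. A quick sketch in Python, with the addresses copied from the lists above:

```python
USDC = "0x2791Bca1f2de4661ED88A30C99A7a9449Aa84174"
CONDITIONAL_TOKENS = "0x4D97DCd97eC945f40cF65F87097ACe5EA0476045"
SPENDERS = [
    "0x4bFb41d5B3570DeFd03C39a9A4D8dE6Bd8B8982E",  # Main exchange
    "0xC5d563A36AE78145C45a50134d48A1215220f80a",  # Neg risk markets
    "0xd91E80cF2E7be2e162c6513ceD06f1dD0dA35296",  # Neg risk adapter
]

# One on-chain approval per (token, spender) pair.
approvals = [(t, s) for t in (USDC, CONDITIONAL_TOKENS) for s in SPENDERS]
assert len(approvals) == 6
```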
### Example Code
See [this Python example](https://gist.github.com/poly-rodr/44313920481de58d5a3f6d1f8226bd5e) for setting allowances programmatically.
**Pro tip**: You only need to set these once per wallet. After that, you can trade freely.
## Notes
- To discover token IDs, use the Markets API Explorer: [Get Markets](https://docs.polymarket.com/developers/gamma-markets-api/get-markets).
- Prices are in dollars from 0.00 to 1.00. Shares are whole or fractional units of the outcome token.
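As a quick sanity check on the units:

```python
price = 0.25    # dollars per share (prices range from 0.00 to 1.00)
shares = 100.0  # number of outcome-token shares
cost = price * shares  # buying 100 shares at $0.25 costs $25.00
print(cost)  # 25.0
```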
See [/examples](/examples) for more.
| text/markdown | Polymarket Engineering | engineering@polymarket.com | Polymarket Engineering | engineering@polymarket.com | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/Polymarket/py-clob-client | null | >=3.9.10 | [] | [] | [] | [
"eth-account>=0.13.0",
"eth-utils>=4.1.1",
"poly_eip712_structs>=0.0.1",
"py-order-utils>=0.3.2",
"python-dotenv",
"py-builder-signing-sdk>=0.0.2",
"httpx[http2]>=0.27.0"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/Polymarket/py-clob-client/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:30:43.733650 | py_clob_client-0.34.6.tar.gz | 38,567 | e4/4d/00896d81210ffae5b2e9b33b9d1e6b247d1b017c1ac98038d2a638a3ecc2/py_clob_client-0.34.6.tar.gz | source | sdist | null | false | 153795a81f017e48f245fdb8f856163b | 09c6b96e7296f6cc22466018af9ab74bcaab661a697432025e7257252c660af4 | e44d00896d81210ffae5b2e9b33b9d1e6b247d1b017c1ac98038d2a638a3ecc2 | null | [
"LICENSE"
] | 47,477 |
2.4 | astro-hipster | 0.1.1 | Generate HiPS representation | [](https://github.com/HITS-AIN/hipster/actions/workflows/python-package.yml?branch=main)
[](https://spherinator.readthedocs.io/en/latest/?badge=latest)

# HiPSter
[Spherinator](https://github.com/HITS-AIN/Spherinator) and
[HiPSter](https://github.com/HITS-AIN/HiPSter) are tools that provide explorative access
and visualization for multimodal data from extremely large astrophysical datasets, ranging from
exascale cosmological simulations to multi-billion object observational galaxy surveys.
HiPSter uses a trained model from Spherinator with a spherical latent space to create HiPS tilings
and a catalog that can be visualized interactively on the surface of a sphere using
[Aladin Lite](https://github.com/cds-astro/aladin-lite).
<p align="center">
<img src="images/P404_f2.png" width="400" height="400">
</p>
## Installation
```bash
pip install astro-hipster
```
## Usage
The `HiPSter` package provides a CLI to create HiPS tilings and a catalog from the spherical latent
space representation.
```bash
hipster --config <path_to_config_file>
```
For more details run `hipster --help` or check the [documentation](https://spherinator.readthedocs.io/en/latest/hipster.html#command-line-interface).
## Documentation
The `HiPSter` documentation is part of the Spherinator documentation and can be found at:
[Read The Docs](https://spherinator.readthedocs.io/en/latest/hipster.html)
## Acknowledgments
Funded by the European Union. This work has received funding from the European High-Performance Computing Joint Undertaking (JU) and Belgium, Czech Republic, France, Germany, Greece, Italy, Norway, and Spain under grant agreement No 101093441.
Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European High Performance Computing Joint Undertaking (JU) and Belgium, Czech Republic, France, Germany, Greece, Italy, Norway, and Spain. Neither the European Union nor the granting authority can be held responsible for them.
## License
This project is licensed under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
## Citation
If you use HiPSter in your research, we provide a [citation](./CITATION.cff) to use:
```bibtex
@article{Polsterer_Spherinator_and_HiPSter_2024,
author = {Polsterer, Kai Lars and Doser, Bernd and Fehlner, Andreas and Trujillo-Gomez, Sebastian},
title = {{Spherinator and HiPSter: Representation Learning for Unbiased Knowledge Discovery from Simulations}},
url = {https://arxiv.org/abs/2406.03810},
doi = {10.48550/arXiv.2406.03810},
year = {2024}
}
```
| text/markdown | null | Kai Polsterer <kai.polsterer@h-its.org>, Bernd Doser <bernd.doser@h-its.org>, Andreas Fehlner <andreas.fehlner@h-its.org>, "Sebastian T. Gomez" <sebastian.trujillogomez@h-its.org> | null | null | null | null | [] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"astropy>=6.1",
"gaiaxpy>=2.1",
"healpy>=1.18",
"jsonargparse[omegaconf]>=4.37",
"matplotlib>=3.10",
"onnxruntime-gpu<1.24,>=1.21",
"pandas>2.2",
"pyarrow>=20.0",
"scipy>1.15",
"tqdm>=4.67",
"jinja2>3.1",
"streamlit>=1.44",
"ipykernel>=6.29; extra == \"dev\"",
"pytest>=8.3; extra == \"dev\... | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T13:30:41.247667 | astro_hipster-0.1.1-py3-none-any.whl | 21,245 | 26/87/f0a414b4726d76995771b730bb2130f1d6dda7717b1b446d20276403fc4a/astro_hipster-0.1.1-py3-none-any.whl | py3 | bdist_wheel | null | false | b9ae500d8cffadd532159cd26a5dbc8c | a99bad4963475d2ad196b7eb23b355b7f34d6931605e0ac329ea2d5fb7ce7997 | 2687f0a414b4726d76995771b730bb2130f1d6dda7717b1b446d20276403fc4a | Apache-2.0 | [
"LICENSE"
] | 230 |
2.4 | grover | 0.0.3 | The agentic filesystem. Safe file operations, knowledge graphs, and semantic search — unified for AI agents. | [](https://pypi.org/project/grover/)
[](https://pypi.org/project/grover/)
[](https://github.com/ClayGendron/grover/blob/main/LICENSE)
# Grover
**The agentic filesystem.** Safe file operations, knowledge graphs, and semantic search — unified for AI agents.
> **Alpha** — Grover is under active development. The core API is functional and tested, but expect breaking changes before 1.0.
Grover gives AI agents a single toolkit for working with codebases and documents:
- **Versioned filesystem** — mount local directories or databases, write safely with automatic versioning, and recover mistakes with soft-delete trash and rollback.
- **Knowledge graph** — dependency, impact, and containment queries powered by [rustworkx](https://github.com/Qiskit/rustworkx). Code is automatically analyzed (Python via AST; JS/TS/Go via tree-sitter) and wired into the graph.
- **Semantic search** — pluggable vector stores (local [usearch](https://github.com/unum-cloud/usearch), [Pinecone](https://www.pinecone.io/), [Databricks](https://docs.databricks.com/en/generative-ai/vector-search.html)) with pluggable embedding providers (sentence-transformers, OpenAI, LangChain). Search by meaning, not just keywords.
All three layers stay in sync — write a file and the graph rebuilds and embeddings re-index automatically.
The name comes from **grove** (a connected cluster of trees) + **rover** (an agent that explores). Grover treats your codebase as a grove of interconnected files and lets agents navigate it safely.
## Installation
```bash
pip install grover
```
Optional extras:
```bash
pip install grover[search] # sentence-transformers + usearch (local search)
pip install grover[openai] # OpenAI embeddings
pip install grover[pinecone] # Pinecone vector store
pip install grover[databricks] # Databricks Vector Search
pip install grover[treesitter] # JS/TS/Go code analyzers
pip install grover[postgres] # PostgreSQL backend
pip install grover[mssql] # MSSQL backend
pip install grover[deepagents] # deepagents/LangGraph integration
pip install grover[langchain] # LangChain retriever + document loader
pip install grover[langgraph] # LangGraph persistent store
pip install grover[all] # everything
```
Requires Python 3.12+.
## Quick start
```python
from grover import Grover
from grover.fs import LocalFileSystem
# Create a Grover instance (state is stored in .grover/)
g = Grover()
# Mount a local project directory
backend = LocalFileSystem(workspace_dir="/path/to/project")
g.mount("/project", backend)
# Write files — every write is automatically versioned
g.write("/project/hello.py", "def greet(name):\n return f'Hello, {name}!'\n")
g.write("/project/main.py", "from hello import greet\nprint(greet('world'))\n")
# Read, edit, delete
content = g.read("/project/hello.py")
g.edit("/project/hello.py", "Hello", "Hi")
g.delete("/project/main.py") # soft-delete — recoverable from trash
# Index the project (analyze code, build graph + search index)
stats = g.index()
# {"files_scanned": 42, "chunks_created": 187, "edges_added": 95}
# Knowledge graph queries
g.dependencies("/project/main.py") # what does main.py depend on?
g.dependents("/project/hello.py") # what depends on hello.py?
g.impacts("/project/hello.py") # transitive impact analysis
g.contains("/project/hello.py") # functions and classes inside
# Graph algorithms (centrality, traversal, subgraph extraction)
scores = g.pagerank() # PageRank centrality
anc = g.ancestors("/project/main.py") # transitive predecessors
sub = g.meeting_subgraph(["/project/a.py", "/project/b.py"]) # connecting subgraph
nodes = g.find_nodes(lang="python") # filter by attributes
# Semantic search (requires the search extra)
results = g.search("greeting function", k=5)
for r in results:
    print(r.ref.path, r.score)
# Persist and clean up
g.save()
g.close()
```
A full async API is also available:
```python
from grover import GroverAsync
g = GroverAsync()
await g.mount("/project", backend)
await g.write("/project/hello.py", "...")
await g.save()
await g.close()
```
## Architecture
Grover is composed of three layers that share a common identity model — every node in the graph and every entry in the search index is a file path.
```mermaid
graph TD
A["Grover (sync) / GroverAsync"]
A --> B["VFS — Virtual Filesystem"]
A --> C["Graph — Knowledge Graph"]
A --> D["SearchEngine"]
A --> E["EventBus"]
B --> F["LocalFileSystem<br/><i>disk + SQLite</i>"]
B --> G["DatabaseFileSystem<br/><i>PostgreSQL · MSSQL · SQLite</i>"]
C --> H["rustworkx DiGraph"]
C --> I["Analyzers<br/><i>Python · JS/TS · Go</i>"]
D --> J["VectorStore<br/><i>Local · Pinecone · Databricks</i>"]
D --> K["EmbeddingProvider<br/><i>sentence-transformers · OpenAI · LangChain</i>"]
E -.->|FILE_WRITTEN| C
E -.->|FILE_WRITTEN| D
E -.->|FILE_DELETED| C
E -.->|FILE_DELETED| D
```
**VFS** routes operations to the right backend based on mount paths. Multiple backends can be mounted simultaneously.
**Graph** maintains an in-memory directed graph of file dependencies. Code analyzers automatically extract imports, function definitions, and class hierarchies. You can also add manual edges.
**SearchEngine** orchestrates embedding and vector storage. It wires together an `EmbeddingProvider` (text → vectors) and a `VectorStore` (store/search vectors). The default setup uses `all-MiniLM-L6-v2` embeddings + local usearch HNSW. For production, swap in Pinecone or Databricks with OpenAI embeddings.
**EventBus** keeps everything consistent — when a file is written or deleted, the graph and search engine update automatically.
## Backends
Grover supports two storage backends through a common protocol:
**LocalFileSystem** — for desktop development and code editing. Files live on disk where your IDE, git, and other tools can see them. Metadata and version history are stored in a local SQLite database. This is the default for local projects.
**DatabaseFileSystem** — for web applications and shared knowledge bases. All content lives in the database (PostgreSQL, MSSQL, or SQLite). There are no physical files. This is ideal for multi-tenant platforms, enterprise document stores, or any environment where state should be centralized.
Both backends support versioning and trash. You can mount them side by side:
```python
from grover.fs import LocalFileSystem, DatabaseFileSystem
g = Grover()
# Local code on disk
g.mount("/code", LocalFileSystem(workspace_dir="./my-project"))
# Shared docs in PostgreSQL
g.mount("/docs", DatabaseFileSystem(dialect="postgresql"))
```
### User-scoped mounts
For multi-tenant deployments, mount a `UserScopedFileSystem` to enable per-user namespacing:
```python
from grover.fs.user_scoped_fs import UserScopedFileSystem
from grover.fs.sharing import SharingService
from grover.models.shares import FileShare
g = GroverAsync()
backend = UserScopedFileSystem(sharing=SharingService(FileShare))
await g.mount("/ws", backend, engine=engine)
# Each user has their own namespace
await g.write("/ws/notes.md", "hello", user_id="alice")
await g.write("/ws/notes.md", "world", user_id="bob")
r1 = await g.read("/ws/notes.md", user_id="alice") # "hello"
r2 = await g.read("/ws/notes.md", user_id="bob") # "world"
# Share files between users
await g.share("/ws/notes.md", "bob", user_id="alice")
r3 = await g.read("/ws/@shared/alice/notes.md", user_id="bob") # "hello"
```
### deepagents integration
Use Grover as a storage backend for [deepagents](https://github.com/langchain-ai/deepagents) (LangGraph agent framework):
```python
from grover.integrations.deepagents import GroverBackend, GroverMiddleware
# GroverBackend implements deepagents BackendProtocol
backend = GroverBackend.from_local("/path/to/workspace")
# GroverMiddleware adds version, search, graph, and trash tools
middleware = [GroverMiddleware(backend.grover)]
```
Requires the `deepagents` extra: `pip install grover[deepagents]`
### LangChain / LangGraph integration
Use Grover as a LangChain retriever, document loader, or LangGraph persistent store:
```python
from grover.integrations.langchain import GroverRetriever, GroverLoader, GroverStore
# Retriever — semantic search as a LangChain retriever
retriever = GroverRetriever(grover=g, k=5)
docs = retriever.invoke("authentication logic")
# Loader — stream files as LangChain Documents
loader = GroverLoader(grover=g, path="/project", glob_pattern="*.py")
docs = loader.load()
# Store — LangGraph persistent memory backed by Grover
store = GroverStore(grover=g, prefix="/data/store")
store.put(("users", "alice"), "prefs", {"theme": "dark"})
item = store.get(("users", "alice"), "prefs")
```
Requires `pip install grover[langchain]` for retriever/loader, `pip install grover[langgraph]` for store.
## What's in `.grover/`
When you use Grover, a `.grover/` directory is created to store internal state:
| Path | Contents |
|------|----------|
| `grover.db` | SQLite database with file metadata, version history, and graph edges |
| `chunks/` | Extracted code chunks (functions, classes) as individual files |
| `search.usearch` | The HNSW vector index for semantic search |
| `search_meta.json` | Metadata mapping for the search index |
This directory is excluded from indexing automatically. You'll typically want to add `.grover/` to your `.gitignore`.
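For example, from the project root:

```shell
echo ".grover/" >> .gitignore
```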
## API overview
The full API reference is in [`docs/api.md`](docs/api.md). Here's a summary:
| Category | Methods |
|----------|---------|
| **Filesystem** | `read`, `write`, `edit`, `delete`, `list_dir`, `exists`, `move`, `copy` |
| **Versioning** | `list_versions`, `get_version_content`, `restore_version` |
| **Trash** | `list_trash`, `restore_from_trash`, `empty_trash` |
| **Sharing** | `share`, `unshare`, `list_shares`, `list_shared_with_me` |
| **Graph** | `dependencies`, `dependents`, `impacts`, `path_between`, `contains`, `pagerank`, `ancestors`, `descendants`, `meeting_subgraph`, `neighborhood`, `find_nodes` |
| **Search** | `search` |
| **Lifecycle** | `mount`, `unmount`, `index`, `save`, `close` |
Key types:
```python
from grover import Ref, SearchResult, file_ref
# Ref — immutable reference to a file or chunk
Ref(path="/project/hello.py", version=2, line_start=1, line_end=5)
# SearchResult — a search hit with similarity score
result.ref # Ref
result.score # float (cosine similarity, 0–1)
result.content # str
```
## Error handling
All filesystem operations return **result objects** instead of raising exceptions. Every result has a `success: bool` field and a `message: str` field. Always check `success` before using other fields:
```python
result = g.write("/project/hello.py", "content")
if result.success:
print(f"Created version {result.version}")
else:
print(f"Write failed: {result.message}")
```
This design is intentional — agents running in loops should never crash on a failed file operation. The full set of result types (`ReadResult`, `WriteResult`, `EditResult`, etc.) is documented in [`docs/api.md`](docs/api.md#result-types).
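The pattern is easy to mirror in your own agent loop. A stand-in sketch of the result-object shape described above (not the real grover types):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WriteResult:
    success: bool
    message: str
    version: Optional[int] = None

def agent_step(result: WriteResult) -> Optional[int]:
    # Branch on `success` instead of wrapping the call in try/except.
    return result.version if result.success else None

print(agent_step(WriteResult(True, "ok", version=3)))   # 3
print(agent_step(WriteResult(False, "path not found"))) # None
```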
## Roadmap
Grover is in its first release cycle. Here's what's coming:
- **MCP server** — expose Grover as a Model Context Protocol server for Claude Code, Cursor, and other MCP-compatible agents
- **CLI** — `grover init`, `grover status`, `grover search`, `grover rollback`
- **More framework integrations** — Aider plugin, fsspec adapter
- **More language analyzers** — Rust, Java, C#
- **More embedding providers** — Cohere, Voyage (OpenAI and LangChain adapters are already available)
See the [implementation plan](grover_implementation_plan.md) for the full roadmap.
## Contributing
Contributions are welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup, workflow, and guidelines.
## License
[Apache-2.0](LICENSE)
| text/markdown | Clay Gendron | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Typing :: Ty... | [] | null | null | >=3.12 | [] | [] | [] | [
"aiosqlite>=0.20",
"pydantic>=2.0",
"rustworkx>=0.17",
"sqlalchemy[asyncio]>=2.0",
"sqlmodel>=0.0.31",
"unidiff>=0.7",
"asyncpg>=0.29; extra == \"all\"",
"databricks-vectorsearch>=0.40; extra == \"all\"",
"deepagents>=0.4; extra == \"all\"",
"langchain-core>=0.3; extra == \"all\"",
"langgraph>=0... | [] | [] | [] | [
"Homepage, https://github.com/ClayGendron/grover",
"Repository, https://github.com/ClayGendron/grover",
"Issues, https://github.com/ClayGendron/grover/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T13:30:39.604270 | grover-0.0.3.tar.gz | 523,709 | 3c/9c/d1f04f0695de2c797b4c264315fdeacefec1d64638b969280081ee2293ed/grover-0.0.3.tar.gz | source | sdist | null | false | b5369684695f495fb6ac3c13dab5f1dc | 13a2b70284687bd2ccd91e0364ddbd554450ed54fd8f385683db33130c95ef9d | 3c9cd1f04f0695de2c797b4c264315fdeacefec1d64638b969280081ee2293ed | Apache-2.0 | [
"LICENSE"
] | 216 |
2.4 | codebuddy-agent-sdk | 0.3.57 | CodeBuddy Code SDK for Python | # CodeBuddy Agent SDK for Python
SDK for building AI agents with CodeBuddy Code's capabilities. Programmatically interact with AI to build autonomous agents that can understand codebases, edit files, and execute workflows.
## Installation
```bash
# Using uv (recommended)
uv add codebuddy-agent-sdk
# Using pip
pip install codebuddy-agent-sdk
```
## Quick Start
```python
import asyncio
from codebuddy_agent_sdk import query
async def main():
    async for message in query(
        prompt="What files are in this directory?",
        permission_mode="bypassPermissions",
    ):
        if message.type == "assistant":
            for block in message.content:
                if hasattr(block, "text"):
                    print(block.text)
asyncio.run(main())
```
## API Reference
### `query(prompt, **options)`
Create a query to interact with the agent.
```python
async for message in query(
    prompt="Your prompt here",
    model="sonnet",                       # Model to use
    permission_mode="bypassPermissions",  # Permission mode
    max_turns=10,                         # Maximum conversation turns
    cwd="/path/to/project",               # Working directory
):
    # Handle message
    pass
```
### Message Types
- `system` - Session initialization info
- `assistant` - Agent responses (text, tool calls)
- `result` - Query completion status
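A typical consumer branches on the `type` field; sketched here with plain stand-in objects rather than real SDK messages:

```python
from dataclasses import dataclass

@dataclass
class FakeMessage:
    type: str
    text: str = ""

def handle(message: FakeMessage) -> str:
    # Dispatch mirroring the three message types above.
    if message.type == "system":
        return "session initialized"
    if message.type == "assistant":
        return message.text
    if message.type == "result":
        return "query finished"
    return "ignored"

print(handle(FakeMessage("assistant", "hello")))  # hello
```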
## Related Links
- [CodeBuddy Code CLI](https://www.npmjs.com/package/@tencent-ai/codebuddy-code)
- [Documentation](https://cnb.cool/codebuddy/codebuddy-code/-/blob/main/docs)
- [Issues](https://cnb.cool/codebuddy/codebuddy-code/-/issues)
## Feedback
- Submit issues at [Issues](https://cnb.cool/codebuddy/codebuddy-code/-/issues)
- Contact: codebuddy@tencent.com
| text/markdown | null | ninoyi <ninoyi@tencent.com> | null | null | null | agent, ai, codebuddy, sdk | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"typing-extensions>=4.0.0",
"mypy>=1.10.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.4.0; extra == \"dev\""
] | [] | [] | [] | [] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"12","id":"bookworm","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T13:30:01.659910 | codebuddy_agent_sdk-0.3.57-py3-none-win_amd64.whl | 47,781,686 | e1/9b/ebd39aebdcc2e9283edabb4638589c38df7b9df115f053caedc2e1412043/codebuddy_agent_sdk-0.3.57-py3-none-win_amd64.whl | py3 | bdist_wheel | null | false | 515df34a6da5a4fba0bb6aff2d0bc8c7 | a8f13e9cfa5f521b5bfe403d1eb317f185c822298e1f20916c17f421d1806d9f | e19bebd39aebdcc2e9283edabb4638589c38df7b9df115f053caedc2e1412043 | null | [] | 581 |
2.4 | esgpull-plus | 1.0.0 | Extension of the original ESGF data discovery and download adding config file-based downloading and advanced regridding/subsetting functionality | # esgpull-plus
[](https://rye.astral.sh)
API and processing extension to [esgf-download](https://github.com/ESGF/esgf-download): YAML-based download config, fast downloads, [CDO](https://pypi.org/project/cdo/) regridding, and surface/seafloor subsetting.
---
## Contents
1. [Installation and set-up](#installation-and-set-up)
2. [File structure](#file-structure)
3. [Dependencies](#dependencies)
4. [Keeping up with upstream](#keeping-up-with-upstream)
5. [Git configuration](#git-configuration)
6. [Searching for data](#searching-for-data)
7. [CDO regridding pipeline](#cdo-regridding-pipeline)
8. [Works in progress](#works-in-progress)
9. [License](#license)
---
## Installation and set-up
**1. Install the package** (in a conda env if you need CDO regridding):
```bash
pip install esgpull-plus
```
**2. Optional – CDO regridding** (conda recommended):
```bash
conda install -c conda-forge python-cdo
```
**3. Base esgpull:**
```bash
esgpull self install
```
See [esgf-download installation](https://esgf.github.io/esgf-download/installation/).
---
## File structure
```
esgf-download/
├── esgpull/ # Original esgpull
│ └── esgpullplus/ # Extensions (regrid, API, etc.)
├── update-from-upstream.sh
```
---
## Dependencies
- **Base:** from `pyproject.toml` (httpx, click, rich, sqlalchemy, pydantic, etc.).
- **esgpullplus:** pandas, numpy, requests, watchdog, xarray; geospatial via xesmf and `python-cdo` (conda).
---
## Keeping up with upstream
**Recommended:**
```bash
./update-from-upstream.sh
```
**Manual:**
```bash
git fetch upstream && git merge upstream/main
# Then reinstall (conda-aware): conda install -c conda-forge pandas xarray numpy; pip install xesmf cdo-python watchdog orjson
```
---
## Git configuration
```bash
git remote -v
# origin https://github.com/orlando-code/esgpull-plus/ (fetch/push)
# upstream https://github.com/ESGF/esgf-download.git (fetch/push)
```
If upstream is missing: `git remote add upstream https://github.com/ESGF/esgf-download.git`
---
## Searching for data
### Main search
Populate the `search.yaml` file (in the repo root) with your ESGF [facets](https://esgf.github.io/esg-search/ESGF_Search_RESTful_API.html) and meta options:
```yaml
search_criteria:
project: CMIP6
table_id: Omon
experiment_id: historical,ssp585
variable: uo,vo
filter:
top_n: 3 # top N datasets to keep
limit: 10 # max results per sub-search
meta_criteria:
data_dir: /path/to/data
max_workers: 4
```
Run the search + download pipeline (uses `search.yaml` automatically):
```bash
python -m esgpull.esgpullplus.api
python -m esgpull.esgpullplus.api --symmetrical # only download sources with both historical + SSP experiments
```
- **Symmetry:** in `--symmetrical` mode the tool first analyses all experiments and then only downloads datasets from sources that have both historical and SSP-style experiments (e.g. `ssp*`), so historical/SSP are matched.
- **Sorting by resolution:** search results are converted to a DataFrame and sorted by parsed nominal horizontal resolution, then by `dataset_id`, so you always get a consistent “highest resolution first” ordering.
- **Stable IDs:** multi-value facets like `variable: uo,vo` are normalised (split, trimmed, sorted) so the order you write them in `search.yaml` does not affect the generated search IDs or caching.
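A minimal sketch of that normalisation (the helper name is illustrative; the actual implementation lives inside the package):

```python
def normalise_facet(value: str) -> str:
    """Split a comma-separated facet value, trim whitespace, and sort,
    so ordering in search.yaml does not change the derived search ID."""
    parts = sorted(p.strip() for p in value.split(",") if p.strip())
    return ",".join(parts)
```

Both `normalise_facet("vo, uo")` and `normalise_facet("uo,vo")` yield `"uo,vo"`, so the generated search IDs and caches match.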
**Inputs (YAML keys):**
| Key | Description |
|-----|-------------|
| `search_criteria.*` | ESGF facets (project, table_id, experiment_id, variable/variable_id, frequency, etc.). |
| `search_criteria.filter.top_n` | Number of top grouped datasets to keep. |
| `search_criteria.filter.limit` | Maximum number of results per sub-search (useful for debugging). |
| `meta_criteria.data_dir` | Base directory for downloaded data and cached search results. |
| `meta_criteria.max_workers` | Worker count used for any post-download regridding. |
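The `--symmetrical` filter described earlier amounts to grouping datasets by source and keeping only sources that cover both experiment families — a sketch, assuming each dataset record exposes `source_id` and `experiment_id`:

```python
def symmetric_sources(datasets):
    """Return the set of source_ids that have both a historical
    and an SSP-style (ssp*) experiment."""
    by_source = {}
    for d in datasets:
        by_source.setdefault(d["source_id"], set()).add(d["experiment_id"])
    return {
        source for source, exps in by_source.items()
        if "historical" in exps and any(e.startswith("ssp") for e in exps)
    }
```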
### Search analysis script
`run_search_analysis` runs an ESGF search from `search.yaml`, analyzes source availability (which sources have both historical and SSP experiments, resolutions, ensemble counts), and optionally writes an `analysis_df.csv` plus PNG plots. It ignores `filter.top_n` and `filter.limit` so the analysis uses all matching results.
**Run:**
```bash
python run_search_analysis.py [OPTIONS]
```
| Option | Default | Description |
|--------|--------|-------------|
| `--config` / `--config-path` | `search.yaml` | Path to search config YAML. |
| `--output-dir` | `plots/` (repo) | Directory for `analysis_df.csv` and plot PNGs. |
| `--save-plots` | True | Save plot images (source availability heatmap, ensemble counts, resolution distribution, summary table). |
| `--show-plots` | True | Display plots interactively (enabled by default). |
| `--require-both` | True | Only include sources that have both historical and SSP experiments. |
**Outputs:** `analysis_df.csv` plus, when `--save-plots` is on, `source_availability_heatmap.png`, `ensemble_counts.png`, `resolution_distribution.png`, `source_summary_table.png` in the output directory. Requires `matplotlib` and `seaborn` for plotting.
---
## CDO regridding pipeline
Single pipeline in `esgpull.esgpullplus.cdo_regrid`: regridding with regrid-weight reuse plus chunked, parallel processing. Supports **surface** (top level) and **seafloor** extraction: each writes a file next to the original (`*_top_level.nc`, `*_seafloor.nc`) and that file is regridded like any other; without an extraction flag, the full field is regridded.
### Command line
```bash
# Directory: surface only
python -m esgpull.esgpullplus.cdo_regrid /path/to/dir -o /path/to/out -r 1.0 1.0 --extract-surface
# Directory: seafloor only
python -m esgpull.esgpullplus.cdo_regrid /path/to/dir -o /path/to/out --extract-seafloor --max-workers 2
# Both surface and seafloor per file
python -m esgpull.esgpullplus.cdo_regrid /path/to/dir --extreme-levels
# Single file
python -m esgpull.esgpullplus.cdo_regrid /path/to/file.nc -o /path/to/out.nc --extract-seafloor
```
**Options:**
| Option | Default | Description |
|--------|---------|-------------|
| `input` (positional) | required | Input file or directory. |
| `-o`, `--output` | same as input dir | Output file or directory; if omitted, writes next to input. |
| `-r`, `--resolution lon lat` | `1.0 1.0` | Target output resolution (lon_res, lat_res). |
| `-p`, `--pattern` | `"*.nc"` | File pattern when `input` is a directory. |
| `--include-subdirectories` | `True` | Include subdirectories when walking a directory. |
| `--extract-surface` | `False` | Extract and regrid only the top level (surface). |
| `--extract-seafloor` | `False` | Extract and regrid only seafloor values. |
| `--extreme-levels` | `False` | Regrid both surface and seafloor for each file. |
| `--no-regrid-cache` | `False` | Disable reuse of CDO weight files. |
| `--no-seafloor-cache` | `False` | Disable reuse of seafloor depth index cache. |
| `-w`, `--max-workers` | `4` | Maximum parallel workers. |
| `--chunk-size-gb` | `2.0` | Maximum time-chunk size in GB. |
| `--max-memory-gb` | `8.0` | Soft cap for memory-aware chunking. |
| `--no-parallel` | `False` | Process files sequentially. |
| `--no-chunking` | `False` | Disable time chunking (process files in one go). |
| `-v`, `--verbose` | `True` | Verbose progress UI. |
| `--verbose-max` | `False` | Extra diagnostics (grid type, size, large file messages). |
| `--quiet` | `False` | Disable verbose output. |
| `--use-ui` | `True` | Use the rich progress UI. |
| `--unlink-unprocessed` | `False` | Remove any files that could not be processed. |
| `--overwrite` | `False` | Overwrite existing output files. |
N.B. if `--output` is not specified, new files will be written to the same directory as the inputs.
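The time chunking controlled by `--chunk-size-gb` can be approximated as splitting the time axis so each chunk stays under the cap — a sketch, not the pipeline's exact heuristic:

```python
import math

def n_time_chunks(file_size_gb: float, n_timesteps: int,
                  chunk_size_gb: float = 2.0) -> int:
    """Estimate how many time chunks keep each chunk under chunk_size_gb."""
    gb_per_step = file_size_gb / n_timesteps
    steps_per_chunk = max(1, int(chunk_size_gb / gb_per_step))
    return math.ceil(n_timesteps / steps_per_chunk)
```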
### File watcher regridding
Continuously watch a directory for new NetCDF files and regrid them as they arrive, using the same CDO pipeline. This is helpful when downloading files and wanting them to be processed directly:
```bash
python -m esgpull.esgpullplus.file_watcher /path/to/watch \
-r 1.0 1.0 \
--extract-surface \
--use-regrid-cache \
--process-existing # also process files that are already present
```
**Options:**
| Option | Default | Description |
|--------|---------|-------------|
| `watch_dir` (positional) | required | Directory to watch for new NetCDF files. |
| `-r`, `--target-resolution lon lat` | `1.0 1.0` | Target output resolution (lon_res, lat_res). |
| `--target-grid` | `"lonlat"` | CDO target grid type. |
| `--weight-cache-dir` | `None` | Directory to store/reuse CDO weight files. |
| `--max-workers` | `4` | Maximum parallel workers. |
| `--batch-size` | `10` | Maximum files to accumulate before triggering a batch regrid. |
| `--batch-timeout` | `30.0` | Maximum seconds to wait before processing a partial batch. |
| `--extract-surface` | `False` | Extract and regrid only the top level (surface). |
| `--extract-seafloor` | `False` | Extract and regrid only seafloor values. |
| `--use-regrid-cache` | `False` | Enable reuse of CDO weight files. |
| `--use-seafloor-cache` | `False` | Enable reuse of seafloor depth index cache. |
| `--file-settle-seconds` | `10.0` | Wait time to ensure files are no longer being written before processing. |
| `--validate-can-open` | `True` | Validate that files can be opened before scheduling regridding. |
| `--overwrite` | `False` | Overwrite existing regridded outputs. |
| `--delete-original` | `False` | Delete original files after successful regridding. |
| `--process-existing` | `True` | Process files already present in `watch_dir` on startup. |
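The watcher's batching (`--batch-size`, `--batch-timeout`) can be sketched as follows — flush when the batch is full or the oldest queued file has waited too long. This is illustrative logic, not the watcher's actual implementation:

```python
import time
from collections import deque

def drain_batches(paths, batch_size=10, batch_timeout=30.0, now=time.monotonic):
    """Yield batches of file paths: flush when batch_size is reached or
    batch_timeout seconds have elapsed since the first queued file."""
    pending, first_seen = deque(), None
    for path in paths:
        if first_seen is None:
            first_seen = now()
        pending.append(path)
        if len(pending) >= batch_size or now() - first_seen >= batch_timeout:
            yield list(pending)
            pending.clear()
            first_seen = None
    if pending:  # flush any partial batch at the end
        yield list(pending)
```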
### Python API
```python
from pathlib import Path
from esgpull.esgpullplus.cdo_regrid import regrid_directory, regrid_single_file, CDORegridPipeline
# Directory
results = regrid_directory(
Path("data/input"),
output_dir=Path("data/output"),
target_resolution=(1.0, 1.0),
extract_surface=True,
extract_seafloor=False,
max_workers=4,
)
# results["successful"], results["failed"], results["skipped"]
# Single file
ok = regrid_single_file(
Path("data/file.nc"),
output_dir=Path("data/output"),
target_resolution=(1.0, 1.0),
extract_seafloor=True,
)
```
### Features
- **Surface/seafloor:** Writes `*_top_level.nc` or `*_seafloor.nc` beside the original, then regrids that file (same CDO path).
- **Weight reuse:** Weights cached per directory (e.g. `cdo_weights/`); shared when grids match.
- **Chunking:** Large files split by time; optional `--chunk-size-gb`, `--max-memory-gb`.
- **Parallel:** Per-file locking; `--max-workers`; `--no-parallel` to disable.
- **Grids:** Structured, curvilinear, unstructured (e.g. `ncells`); multi-level and time series.
---
## Works in Progress
1. There's a fair bit of functionality here! Time to get a proper documentation site in order...
2. Merge as much of this functionality as is welcome/useful into the original `esgpull`
repository
I am more than happy to take suggestions/contributions from anyone. Just get in touch via email: rt582@cam.ac.uk
---
## License
Same license terms as the esgpull project.
| text/markdown | null | Orlando Timmerman <rt582@cam.ac.uk> | null | null | BSD-3-Clause | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Langua... | [] | null | null | >=3.10 | [] | [] | [] | [
"aiofiles>=22.1.0",
"aiostream>=0.4.5",
"alembic>=1.8.1",
"attrs>=22.1.0",
"cattrs>=22.2.0",
"click-params>=0.4.0",
"click>=8.1.3",
"httpx>=0.23.0",
"nest-asyncio>=1.5.6",
"numpy>=1.24.0",
"orjson>=3.8.0",
"packaging>=25.0",
"pandas>=2.0.0",
"platformdirs>=2.6.2",
"pydantic-settings>=2.1... | [] | [] | [] | [
"Repository, https://github.com/orlando-code/esgpull-plus",
"Documentation, https://github.com/orlando-code/esgpull-plus",
"Issues, https://github.com/orlando-code/esgpull-plus/issues",
"Changelog, https://github.com/orlando-code/esgpull-plus/blob/main/CHANGELOG.md",
"Original_respository, https://github.co... | twine/6.2.0 CPython/3.14.0 | 2026-02-19T13:29:33.965789 | esgpull_plus-1.0.0.tar.gz | 362,303 | 08/63/59ab33a010e0c7da8c3c44103e477e803be7d19eb1a48cad743e8e9581e8/esgpull_plus-1.0.0.tar.gz | source | sdist | null | false | ce4fb0eebb548d7849afce8d4c6aaf7c | b42e52227dddfb6e2fa5ddb839d6d6aebb4cc40c6ae11bf39e7eb9a8a8130ded | 086359ab33a010e0c7da8c3c44103e477e803be7d19eb1a48cad743e8e9581e8 | null | [
"LICENSE"
] | 228 |
2.1 | mrsitoolbox | 1.0.12 | Analysis toolbox for MRSI data | # 🧠 MRSI Toolbox Kit
This repository provides tools and preprocessing utilities to construct a within-subject **Metabolic Similarity Matrix (MetSiM)** from MRSI scans, as detailed in [Nature Communications 2025](https://www.nature.com/articles/s41467-025-66124-w), and to prepare files for a voxel-based analysis as detailed in [biorxiv](https://www.biorxiv.org/content/10.1101/2025.06.22.660965v1).
## 📚 Table of Contents
- [🧩 Construct a within-subject MetSiM](#-construct-a-within-subject-metsim)
- [📊 MetSiM Analysis](#-metsim-analysis)
- [🔧 Pre-Processing Pipeline for Voxel-Based Analysis](#-pre-processing-pipeline-for-voxel-based-analysis)
---
## 📜 License
The repository is distributed under the CHUV license [LICENSE](./LICENSE).
---
## 🧑💻 Contributors
| Name | GitHub Profile | Email |
|--------------------|--------------------------------------------------|----------------------------|
| Federico Lucchetti | [@fedlucchetti](https://github.com/fedlucchetti) | federico.lucchetti@unil.ch |
| Edgar Céléreau | [@mrspsy](https://github.com/mrspsy) | edgar.celereau@unil.ch |
---
## 📂 Dataset
A demo dataset is available at `data/BIDS/Dummy-Project` and constructed MetSiMs from the Geneva-Study in `data/BIDS/Geneva-Study/derivatives/connectivity`.
To access the full dataset, contact the authors with a detailed research proposal explaining your intended use.
---
## ⚙️ Installation
### Requirements
- **Python 3.x**
- **Conda / Miniconda** (optional, but recommended)
- **[CHIMERA](https://github.com/connectomicslab/chimera)** for anatomical parcellation
### Setup Instructions
1. **Clone the Repository**
```bash
git clone git@github.com:MRSI-Psychosis-UP/MRSI-Metabolic-Connectome.git
cd MRSI-Metabolic-Connectome
```
2. **Install the Environment**
```bash
bash build_env.sh
```
3. **Activate the Environment**
```bash
conda activate mrsitooldemo_env
```
4. **Set Environment Paths**
```bash
python set_env_paths.py
```
Use the provided demo BIDS dataset (`data/BIDS`) if applicable.
### Install From PyPI
```bash
pip install mrsitoolbox
```
### Python Imports
Use package-prefixed imports:
```python
from mrsitoolbox.tools.datautils import DataUtils
from mrsitoolbox.tools.mridata import MRIData
from mrsitoolbox.registration.registration import Registration
from mrsitoolbox.connectomics.network import NetBasedAnalysis
from mrsitoolbox.graphplot.simmatrix import SimMatrixPlot
```
---
## 🗂️ Inputs
### List of Participants/Subject File
- BIDS directory `PROJECT_NAME/` should contain a tab-separated `participants_allsessions.tsv` file following the BIDS standard `subject-id \t session-id`.
### MRSI Files
- MRSI files should be placed in:
```
PROJECT_NAME/derivatives/mrsi-<space>/sub-<subject_id>/ses-<session>/
```
- File naming convention:
```
sub-<subject_id>_ses-<session>_space-<space>_met-<metabolite>_desc-<description>_mrsi.nii.gz
```
| **BIDS Prefix** | **Description** | **Choices** |
|------------------|---------------------------|-------------------------------------------------------------------------------------------------------|
| `subject_id` | Subject/Participant ID | |
| `session` | Session ID | `[V1, V2, V3, ...]` |
| `space` | MRI Acquisition space | `orig`, `t1w`, `mni` |
| `metabolite` | MRSI resolved Metabolite | **B<sub>0</sub> = 3T**: `Ins`, `CrPCr`, `GPCPCh`, `GluGln`, `NAANAAG`, `water` |
| | | **B<sub>0</sub> = 7T**: `NAA`, `NAAG`, `Ins`, `GPCPCh`, `Glu`, `Gln`, `CrPCr`, `GABA`, `GSH` |
| `description` | MRSI Map Description | `signal`, `crlb`, `fwhm`, `snr`, `filtharmonic`, `brainmask` |
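The naming convention above can be assembled programmatically — a convenience sketch; the toolbox does not necessarily expose such a helper:

```python
def mrsi_filename(subject_id, session, space, metabolite, description):
    """Build an MRSI filename following the BIDS-style convention above."""
    return (
        f"sub-{subject_id}_ses-{session}_space-{space}"
        f"_met-{metabolite}_desc-{description}_mrsi.nii.gz"
    )
```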
### Anatomical Files
- **Chimera Anatomical Parcellation Files:**
- Example using subject T1w filenames stored in `t1s.txt`, with the chimera atlas LFMIHISIFF and the Lausanne cortical parcellation at scale 3:
```bash
chimera -b data/BIDS/Dummy-Project/ \
-d data/BIDS/Dummy-Project/derivatives/ \
--freesurferdir data/BIDS/Dummy-Project/derivatives/freesurfer/ \
-p LFMIHISIFF -g 2 -s 3 -ids t1s.txt --nthreads 28
```
- Parcellations saved in `PROJECT_NAME/derivatives/chimera-atlases`.
- **Partial Volume Correction (PVC) files:**
Replace `<N>` with the appropriate tissue type index (e.g., `1` for GM, `2` for WM, `3` for CSF):
```bash
PROJECT_NAME/derivatives/<PVCORR_DIR>/sub-<subject_id>/ses-<session>/sub-<subject_id>_ses-<session>_desc-p<N>_T1w.nii.gz
```
- **Minimum required:**
- `p1`: gray matter
- `p2`: white matter
- `p3`: CSF
- **Additional tissue files may be included (optional).**
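Those PVC paths can be generated from the template above. The `pvcorr_dir` value is site-specific (`<PVCORR_DIR>` in the template), so it stays a parameter in this sketch:

```python
def pvc_path(project, pvcorr_dir, subject_id, session, tissue_index):
    """Build the PVC tissue-map path; tissue_index 1=GM, 2=WM, 3=CSF."""
    fname = f"sub-{subject_id}_ses-{session}_desc-p{tissue_index}_T1w.nii.gz"
    return f"{project}/derivatives/{pvcorr_dir}/sub-{subject_id}/ses-{session}/{fname}"
```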
---
## 🧩 Construct a within-subject MetSiM
> **Batch mode semantics**
>
> * `--batch file` **requires** `--participants` to point to a `.tsv` file listing the subject–session pairs to process.
> * `--batch off` **requires** both `--subject_id` and `--session` and processes **one** acquisition (a single subject–session pair).
> * `--batch all` processes all discoverable subject–session pairs in the group.
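Those rules reduce to a simple argument check — a sketch of the semantics, not the scripts' actual parser:

```python
def validate_batch_args(batch, participants=None, subject_id=None, session=None):
    """Enforce the batch-mode rules described above."""
    if batch not in ("all", "file", "off"):
        raise ValueError(f"unknown batch mode: {batch}")
    if batch == "file" and participants is None:
        raise ValueError("--batch file requires --participants")
    if batch == "off" and not (subject_id and session):
        raise ValueError("--batch off requires --subject_id and --session")
```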
1. **Create MRSI-to-T1w Transforms**
```bash
python experiments/Preprocessing/registration_mrsi_to_t1.py --group Dummy-Project --ref_met CrPCr --subject_id S001 --session V1 --nthreads 16
```
2. **Map Chimera Parcel Image to MRSI Space**
```bash
python experiments/MetSiM_pipeline/map_parcel_image_to_mrsi.py --group Dummy-Project --subject_id S001 --session V1 --parc LFMIHIFIS --scale 3
```
3. **Construct within-subject MetSiM**
```bash
python experiments/MetSiM_pipeline/construct_MetSiM_subject.py --group Dummy-Project --subject_id S001 --session V1 --parc LFMIHIFIS --scale 3 --npert 50 --show_plot 1 --nthreads 16 --analyze 1
```
4. **Construct within-subject MetSiM (batch)**
```bash
python experiments/MetSiM_pipeline/construct_MetSiM_subject.py --group Dummy-Project --parc LFMIHIFIS --scale 3 --npert 50 --show_plot 0 --nthreads 16 --analyze 1 --batch file --participants $PATH2_PARTICIPANT-SESSION_FILE --t1mask acq-memprage_desc-brain_T1w
```
5. **Construct MetSiM Population Average**
```bash
python experiments/MetSiM_pipeline/construct_MetSiM_pop.py --group Geneva-Study --parc LFMIHIFIS --scale 3 --npert 50 --participants $PATH2_PARTICIPANT-SESSION_FILE
```
- **Outputs**: Transforms, coregistered parcellations, and MetSiMs are saved in the `derivatives/` folder.
---
### Input Options Description
| **Arg Name** | **Description** | **Type** | **Default** |
| ----------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------- | --------------- |
| `--group` | BIDS project folder name | str | `Dummy-Project` |
| `--subject_id` | Subject ID (e.g., `S001`). **Required when** `--batch off` (processes a single acquisition). | str | `S001` |
| `--session` | Session label (e.g., `V1`). **Required when** `--batch off` (processes a single acquisition). | str | `V1` |
| `--parc` | Chimera parcellation string | str | `LFMIHIFIS` |
| `--npert` | Number of metabolic profile perturbations | int | `50` |
| `--leave_one_out` | Leave-one-metabolite-out option | int (0 or 1) | `0` |
| `--show_plot` | Show plots | int (0 or 1) | `0` |
| `--overwrite` | Overwrite existing results | int (0 or 1) | `0` |
| `--ref_met` | Reference metabolite for coregistration | str | `CrPCr` |
| `--nthreads` | Number of parallel CPU threads | int | `4` |
| `--t1` | Path or pattern to T1-weighted image | str | `None` |
| `--t1mask` | Path or pattern to T1-weighted brain mask | str | `None` |
| `--b0` | MRI B<sub>0</sub> field in Tesla | float (choices: 3, 7) | `3` |
| `--batch` | Batch mode. `file` **requires** `--participants`; `off` **requires** `--subject_id` **and** `--session` (processes a single acquisition); `all` uses all pairs. | str (choices: `all`,`file`,`off`) | `off` |
| `--participants` | Path to a `.tsv` listing subject–session pairs to process (**required when** `--batch file`; ignored for `--batch off` and `--batch all`). | path | `None` |
- **Note**: `--participants` refers to a BIDS-style `participants_allsessions.tsv`. Defaults to `$BIDSDATAPATH/group` if not specified.
---

---
## 📊 MetSiM Analysis
1. **Construct Metabolic Similarity Map (Single Subject)**
```bash
python experiments/MetSiM_analysis/construct_MSI-map_subj.py --group Geneva-Study --parc LFMIHIFIS --scale 3 --npert 50 --dimalg pca_tsne
```
2. **Construct Metabolic Similarity Map (Population)**
```bash
python experiments/MetSiM_analysis/construct_MSI-map_pop.py --group Geneva-Study --parc LFMIHIFIS --scale 3 --npert 50 --dimalg pca_tsne --msiscale -255.0
```
3. **Inverse Map MSI to MRSI Signal (Population)**
```bash
python experiments/MetSiM_analysis/construct_MSI-map_pop.py --group Geneva-Study --parc LFMIHIFIS --scale 3 --npert 50 --dimalg pca_tsne
```
4. **Derive all GM-constrained network paths**
```bash
python experiments/MetSiM_analysis/find_all_network_paths.py --group Geneva-Study --parc LFMIHIFIS --scale 3 \
    --lobe LOBE --hemi HEMI --lpath 13
```
5. **Construct Metabolic Principal Curve (Population)**
```bash
python experiments/MetSiM_analysis/construct_metabolic_principal_path.py --group Geneva-Study --parc LFMIHIFIS --scale 3 --diag group --lpath 13 --lobe ctx --nperm 100
```
- **Note**:
- First run `find_all_network_paths.py` for both hemispheres (`lh` and `rh`) to construct all possible network paths; then run `construct_metabolic_principal_path.py` to select the path that maximizes metabolic entropy and minimizes local metabolic heterogeneity, compare it with a random geometric network, and generate the results figures.
- `--dimalg` specifies the **manifold-discovery algorithm** used to construct the metabolic fibre.
- `--hemi` chooses the **hemisphere** in which the fibre is built (`lh` or `rh`).
- `--lpath` sets the **maximum path length**.
- `--nperm` defines the **size of the null distribution**, generated from random-geometric networks.
- `--start` and `--stop` indicate the **start and stop nodes**.
- If not provided, they default to the **occipital region** (start) and **frontal/anterior cingulate regions** (stop), which maximise the inter-node MS-mode difference.
- The script then runs `find_all_network_paths.py` from scratch with these adjusted end-node labels.
- `--lobe` restricts the path search to either the **neocortex** (`ctx`) or the **subcortex** (`subc`).
---

## 🔧 Pre-Processing Pipeline for Voxel-Based Analysis
`experiments/Preprocessing/preprocess.py` now runs the full voxel-wise chain end-to-end: preflight checks, optional orientation correction, spike filtering, optional partial volume correction, and MRSI exports to MNI or T1w space. Missing transforms are generated where supported; overwrite flags force regeneration.
**What it does**
- Checks required inputs (T1w, MRSI signals/CRLB/SNR/FWHM, CAT12 p1–p3 where available) and batches subject–session pairs from `participants_allsessions.tsv` or a custom TSV/CSV.
- Prints a preflight availability table per subject/session: existing files are marked with a green check, missing ones with a red X, and items marked **PROC** (orange) are auto-generated during preprocessing (e.g., transforms or masks).
- Optional oblique FOV correction (`--corr_orient`).
- Filters MRSI spikes (`--filtoption`, `--spikepc`) and builds brain masks if present.
- Runs/refreshes T1w→MNI registration when needed (`--overwrite_mni_reg`) and exports MRSI to T1w or MNI space at native-MRSI or T1w resolution via `--transform`.
- Partial volume correction runs when `--overwrite_pve` is set and p1/p2/p3 maps are present; otherwise PVC is skipped (or explicitly bypassed with `--no_pvc`).
> **Batch mode semantics**
>
> * `--batch file` **requires** `--participants` to point to a `.tsv`/`.csv` listing the subject–session pairs to process.
> * `--batch off` **requires** both `--sub` and `--ses` and processes **one** acquisition.
> * `--batch all` processes all discoverable subject–session pairs in the group.
**Quick start**
- Single subject (filtering → PVC → export to MNI @ native MRSI res)
```bash
python experiments/Preprocessing/preprocess.py \
--group Dummy-Project --sub S001 --ses V1 \
--t1 acq-memprage_desc-brain_T1w --b0 3 --nthreads 16 \
--transform mni-origres --overwrite_pve
```
- Batch (participants TSV/CSV)
```bash
python experiments/Preprocessing/preprocess.py \
--group Dummy-Project --b0 3 --nthreads 16 \
--batch file --participants $PATH2_PARTICIPANT-SESSION_FILE \
--t1 acq-memprage_desc-brain_T1w \
--transform mni-t1wres --overwrite_pve
```
- Optional add-ons: `--corr_orient` to fix oblique FOV; `--no_pvc` to skip partial volume correction; `--overwrite_filt`/`--overwrite_transform`/`--overwrite_mni_reg` to recompute stages; `--proc_mnilong` (+ `--overwrite_mnilong`) for MNI152-long outputs; `--checksum` to print a pre-run output validity summary; `--v 1` for verbose logs.
**Key `preprocess.py` options**
| Argument | Default | Purpose |
| -------- | ------- | ------- |
| `--group` | `Mindfulness-Project` | BIDS project under `$BIDSDATAPATH`. |
| `--sub` | `S002` | Subject ID used when `--batch off`. |
| `--ses` | `V3` | Session ID used when `--batch off`. |
| `--batch` | `off` (`all`/`file`/`off`) | Process all pairs, a TSV/CSV list (`--participants`), or a single pair. |
| `--participants` | `None` | TSV/CSV path used when `--batch file` (columns like `participant_id`/`sub` and `ses`/`session_id`). |
| `--t1` | `desc-brain_T1w` | T1w path or pattern (resolved per subject/session). |
| `--b0` | `3` (`3`/`7`) | Sets metabolite list (3T vs 7T). |
| `--nthreads` | `4` | CPU threads for filtering, PVC, and transforms. |
| `--filtoption` | `filtbiharmonic` | Spike filtering strategy. |
| `--spikepc` | `99` | Percentile for spike removal. |
| `--transform` | `mni-origres` (`mni-t1wres`/`t1w-origres`/`t1w-t1wres`) | Export MRSI to MNI or T1w space at native MRSI vs T1w resolution. |
| `--overwrite_transform` | `off` | Recompute transforms/exports for the selected space+resolution (flag). |
| `--no_pvc` | `off` | Skip PVC exports (transform filtered signals only; flag). |
| `--overwrite_pve` | `off` | Run/refresh partial volume correction before transforms (flag). |
| `--overwrite_filt` | `off` | Recompute spike filtering outputs (flag). |
| `--overwrite_t1_reg` | `off` | Force regeneration of MRSI→T1w transforms (flag). |
| `--overwrite_mni_reg` | `off` | Force regeneration of T1w→MNI transforms (flag). |
| `--proc_mnilong` | `off` | Generate MNI152-longitudinal outputs (flag). |
| `--overwrite_mnilong` | `off` | Rerun MNI152-longitudinal exports (flag). |
| `--corr_orient` | `off` | Correct oblique FOV orientation for MRSI and masks (flag). |
| `--checksum` | `off` | Display the output validity summary before processing (flag). |
| `--v` | `0` | Verbose flag (`1` to print more detail). |
Input notes: `--participants` can be TSV/CSV (subject column such as `participant_id`/`sub`/`id` and session column such as `ses`/`session_id`); default is `$BIDSDATAPATH/<group>/participants_allsessions.tsv` (V2BIS rows are skipped). `--t1` is required (defaults to the `desc-brain_T1w` pattern). Boolean options are flags (no `0/1` values); include them to enable. PVC needs CAT12 p1/p2/p3 maps; if absent or `--no_pvc` is set, PVC is skipped and logged.
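Resolving the subject and session columns as described can be sketched like this (column-name sets taken from the note above):

```python
def resolve_columns(header):
    """Pick the subject and session columns from a participants TSV/CSV header."""
    sub_names = {"participant_id", "sub", "id"}
    ses_names = {"ses", "session_id"}
    sub = next(c for c in header if c.lower() in sub_names)
    ses = next(c for c in header if c.lower() in ses_names)
    return sub, ses
```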
**Population quality mask**
```bash
python experiments/Preprocessing/compute_pop_qmask.py \
--group Dummy-Project --participants $PATH2_PARTICIPANT-SESSION_FILE \
--snr 4 --crlb 20 --fwhm 0.1 --alpha 0.68 --b0 3
```
**Registration helpers (optional)**
All registration is triggered automatically by `preprocess.py`. Run these directly only for debugging or bespoke registration:
- MRSI→T1w (batch capable)
```bash
python experiments/Preprocessing/registration_mrsi_to_t1.py \
--group Dummy-Project --ref_met CrPCr --nthreads 16 \
--batch file --participants $PATH2_PARTICIPANT-SESSION_FILE \
--t1 acq-memprage_desc-brain_T1w
```
- T1w→MNI (batch capable)
```bash
python experiments/Preprocessing/registration_t1_to_MNI.py \
--group Dummy-Project --nthreads 16 \
--batch file --participants $PATH2_PARTICIPANT-SESSION_FILE
```

| text/markdown | Federico Lucchetti | federico.lucchetti@unil.ch | null | null | null | null | [
"Development Status :: 4 - Beta",
"License :: Other/Proprietary License"
] | [] | https://github.com/MRSI-Psychosis-UP/Metabolic-Connectome.git | null | >=3.8 | [] | [] | [] | [
"matplotlib",
"nibabel",
"PyQt5",
"pyqtgraph",
"tqdm",
"scipy",
"statsmodels",
"PyOpenGL",
"natsort",
"vispy",
"nilearn",
"networkx",
"mne",
"opencv-python",
"tensorflow",
"powerlaw",
"PyQt6",
"scikit-tda",
"python-louvain"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.6 | 2026-02-19T13:29:32.327974 | mrsitoolbox-1.0.12.tar.gz | 168,017 | 12/2d/b31d11531b53ef45428a0428d12d70118548c35674ac4622f9a2e6e0f99e/mrsitoolbox-1.0.12.tar.gz | source | sdist | null | false | 99b8339f9da77766803d87b7295c2b20 | 81dbed30d0909856ddc4de9fc77726631c7fd4fd039d44f21bd7d820659f2ef5 | 122db31d11531b53ef45428a0428d12d70118548c35674ac4622f9a2e6e0f99e | null | [] | 215 |
2.4 | poly-web3 | 1.0.3 | Polymarket Proxy wallet redeem SDK - Execute redeem operations on Polymarket using proxy wallets | # poly-web3



Python SDK for redeeming and splitting/merging Polymarket positions via Proxy/Safe wallets (gas-free).
[English](README.md) | [中文](README.zh.md)
```bash
Python >= 3.11
pip install poly-web3
```
```python
from poly_web3 import PolyWeb3Service
service = PolyWeb3Service(
clob_client=client,
relayer_client=relayer_client,
)
# Redeem all redeemable positions for the current account.
service.redeem_all(batch_size=10)
# Split/Merge for binary markets (amount in human USDC units).
service.split("0x...", 10)
service.merge("0x...", 10)
```
[See the full example](#quick-start)
## Redeem Behavior Notes
- Redeemable positions are fetched via the official Positions API, which typically has 1–3 minutes of latency.
- `redeem_all` returns an empty list if there are no redeemable positions. If the returned list contains `None`, the redeem failed and should be retried.
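Given those semantics, a caller can retry failed redeems like this. This is a sketch: `service` is a `PolyWeb3Service` instance, and the retry count and delay are arbitrary choices:

```python
import time

def redeem_with_retry(service, batch_size=10, retries=3, delay=60.0):
    """Call redeem_all, retrying while any entry is None (a failed redeem)."""
    results = []
    for attempt in range(retries):
        results = service.redeem_all(batch_size=batch_size)
        # [] means nothing redeemable; all non-None entries mean success.
        if not results or all(r is not None for r in results):
            return results
        time.sleep(delay)
    return results
```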
## Split/Merge Notes
- `split`/`merge` are designed for binary markets (Yes/No) and use the default partition internally.
- `amount` is in human units (USDC), and is converted to base units internally.
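The human-to-base-unit conversion follows USDC's 6 decimals; the SDK performs it internally, and the helper below just illustrates the arithmetic:

```python
USDC_DECIMALS = 6

def to_base_units(amount: float) -> int:
    """Convert human USDC units to integer base units (6 decimals)."""
    return int(round(amount * 10**USDC_DECIMALS))
```

For example, `service.split("0x...", 10)` operates on 10 USDC, i.e. 10,000,000 base units.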
## FAQ
1. **UI shows redeemable, but `redeem_all` returns `[]`**: The official Positions API can be delayed by 1–3 minutes. Wait a bit and retry.
2. **RPC error during redeem**: Switch RPC endpoints by setting `rpc_url` when instantiating `PolyWeb3Service`.
3. **Redeem stuck in `execute`**: The official relayer may be congested. Stop redeeming for 1 hour to avoid nonce looping from repeated submissions.
4. **Relayer client returns 403**: You need to apply for Builder API access and use a valid key. Reference: Polymarket Builders — Introduction: https://docs.polymarket.com/developers/builders/builder-intro
5. **Relayer daily limit**: The official relayer typically limits to 100 requests per day. Prefer batch redeem (`batch_size`) to reduce the number of requests and avoid hitting the limit.
## About the Project
This project is a Python rewrite of Polymarket's official TypeScript implementation of `builder-relayer-client`, designed to provide Python developers with a convenient tool for executing Proxy and Safe wallet redeem operations on Polymarket.
**Important Notes:**
- This project implements official CTF redeem plus binary split/merge operations
- Other features (such as trading, order placement, etc.) are not within the scope of this project
**Some redeem and write operations in this project depend on access granted through Polymarket's Builder program. To perform real redeem operations, you must apply for and obtain Builder credentials via Polymarket's official Builder application process; the redeem flows in this repository only work against the live service once those credentials are in place. For local development or automated tests, use mocks or testnet setups instead of real keys to avoid exposing production credentials.**
Reference:
- Polymarket Builders — Introduction: https://docs.polymarket.com/developers/builders/builder-intro
**Current Status:**
- ✅ **Proxy Wallet** - Fully supported for redeem/split/merge
- ✅ **Safe Wallet** - Fully supported for redeem/split/merge
- 🚧 **EOA Wallet** - Under development
We welcome community contributions! If you'd like to help implement EOA wallet redeem functionality, or have other improvement suggestions, please feel free to submit a Pull Request.
## Installation
```bash
pip install poly-web3
```
Or using uv:
```bash
uv add poly-web3
```
## Requirements
- Python >= 3.11
## Dependencies
- `py-clob-client >= 0.25.0` - Polymarket CLOB client
- `py-builder-relayer-client >= 0.0.1` - Builder Relayer client
- `web3 >= 7.0.0` - Web3.py library
- `eth-utils == 5.3.1` - Ethereum utilities library
## Quick Start
### Basic Usage - Execute Redeem
```python
import os
import dotenv
from py_builder_relayer_client.client import RelayClient
from py_builder_signing_sdk.config import BuilderConfig
from py_builder_signing_sdk.sdk_types import BuilderApiKeyCreds
from py_clob_client.client import ClobClient
from poly_web3 import RELAYER_URL, PolyWeb3Service
dotenv.load_dotenv()
# Initialize ClobClient
host = "https://clob.polymarket.com"
chain_id = 137 # Polygon mainnet
client = ClobClient(
    host,
    key=os.getenv("POLY_API_KEY"),
    chain_id=chain_id,
    signature_type=1,  # Proxy wallet type (signature_type=2 for Safe)
    funder=os.getenv("POLYMARKET_PROXY_ADDRESS"),
)
client.set_api_creds(client.create_or_derive_api_creds())

# Initialize RelayerClient
relayer_client = RelayClient(
    RELAYER_URL,
    chain_id,
    os.getenv("POLY_API_KEY"),
    BuilderConfig(
        local_builder_creds=BuilderApiKeyCreds(
            key=os.getenv("BUILDER_KEY"),
            secret=os.getenv("BUILDER_SECRET"),
            passphrase=os.getenv("BUILDER_PASSPHRASE"),
        )
    ),
)

# Create service instance
service = PolyWeb3Service(
    clob_client=client,
    relayer_client=relayer_client,
    rpc_url="https://polygon-bor.publicnode.com",  # optional
)

# Redeem all positions that are currently redeemable
redeem_all_result = service.redeem_all(batch_size=10)
print(f"Redeem all result: {redeem_all_result}")

# If redeem_all_result contains None, refer to README FAQ and retry.
if redeem_all_result and any(item is None for item in redeem_all_result):
    print("Redeem failed for some items; please retry.")

# Execute redeem operation (batch)
condition_ids = [
    "0xc3df016175463c44f9c9f98bddaa3bf3daaabb14b069fb7869621cffe73ddd1c",
    "0x31fb435a9506d14f00b9de5e5e4491cf2223b6d40a2525d9afa8b620b61b50e2",
]
redeem_batch_result = service.redeem(condition_ids, batch_size=10)
print(f"Redeem batch result: {redeem_batch_result}")
if redeem_batch_result and any(item is None for item in redeem_batch_result):
    print("Redeem failed for some items; please retry.")
```
### Basic Usage - Split/Merge (Binary Markets)
```python
# amount is in human units (USDC)
split_result = service.split(
    "0x31fb435a9506d14f00b9de5e5e4491cf2223b6d40a2525d9afa8b620b61b50e2",
    1.5,
)
print(f"Split result: {split_result}")

merge_result = service.merge(
    "0x31fb435a9506d14f00b9de5e5e4491cf2223b6d40a2525d9afa8b620b61b50e2",
    1.5,
)
print(f"Merge result: {merge_result}")
```
## API Documentation
### PolyWeb3Service
The main service class that automatically selects the appropriate service implementation based on wallet type.
#### Methods
##### `redeem(condition_ids: list[str], batch_size: int = 20)`
Execute redeem operation.
**Parameters:**
- `condition_ids` (list[str]): List of condition IDs
- `batch_size` (int): Batch size for redeem requests
**Returns:**
- `dict | list[dict]`: Transaction result(s) containing transaction status and related information
**Examples:**
```python
# Batch redeem
result = service.redeem(["0x...", "0x..."], batch_size=10)
```
##### `redeem_all(batch_size: int = 20) -> list[dict]`
Redeem all positions that are currently redeemable for the authenticated account.
**Returns:**
- `list[dict]`: List of redeem results; empty list if no redeemable positions. If the list contains `None`, the redeem failed and should be retried.
**Examples:**
```python
# Redeem all positions that can be redeemed
service.redeem_all(batch_size=10)
```
##### `split(condition_id: str, amount: int | float | str)`
Split a binary (Yes/No) position. `amount` is in human USDC units.
**Parameters:**
- `condition_id` (str): Condition ID
- `amount` (int | float | str): Amount in USDC
**Returns:**
- `dict | None`: Transaction result
**Examples:**
```python
result = service.split("0x...", 1.25)
```
##### `merge(condition_id: str, amount: int | float | str)`
Merge a binary (Yes/No) position. `amount` is in human USDC units.
**Parameters:**
- `condition_id` (str): Condition ID
- `amount` (int | float | str): Amount in USDC
**Returns:**
- `dict | None`: Transaction result
**Examples:**
```python
result = service.merge("0x...", 1.25)
```
#### Optional APIs
##### `is_condition_resolved(condition_id: str) -> bool`
Check if the specified condition is resolved.
**Parameters:**
- `condition_id` (str): Condition ID (32-byte hexadecimal string)
**Returns:**
- `bool`: Returns `True` if the condition is resolved, otherwise `False`
##### `get_winning_indexes(condition_id: str) -> list[int]`
Get the list of winning indexes.
**Parameters:**
- `condition_id` (str): Condition ID
**Returns:**
- `list[int]`: List of winning indexes
##### `get_redeemable_index_and_balance(condition_id: str, owner: str) -> list[tuple]`
Get redeemable indexes and balances for the specified address.
**Parameters:**
- `condition_id` (str): Condition ID
- `owner` (str): Wallet address
**Returns:**
- `list[tuple]`: List of tuples containing (index, balance), balance is in USDC units
## Optional: Query Operations
Before executing redeem, you can optionally check the condition status and query redeemable balances:
```python
# Check if condition is resolved
condition_id = "0xc3df016175463c44f9c9f98bddaa3bf3daaabb14b069fb7869621cffe73ddd1c"
can_redeem = service.is_condition_resolved(condition_id)
# Get redeemable indexes and balances
redeem_balance = service.get_redeemable_index_and_balance(
    condition_id, owner=client.builder.funder
)
print(f"Can redeem: {can_redeem}")
print(f"Redeemable balance: {redeem_balance}")
```
## Project Structure
```
poly_web3/
├── __init__.py # Main entry point, exports PolyWeb3Service
├── const.py # Constant definitions (contract addresses, ABIs, etc.)
├── schema.py # Data models (WalletType, etc.)
├── signature/ # Signature-related modules
│ ├── build.py # Proxy wallet derivation and struct hashing
│ ├── hash_message.py # Message hashing
│ └── secp256k1.py # secp256k1 signing
└── web3_service/ # Web3 service implementations
├── base.py # Base service class
├── proxy_service.py # Proxy wallet service (✅ Implemented)
├── eoa_service.py # EOA wallet service (🚧 Under development)
└── safe_service.py # Safe wallet service (✅ Implemented)
```
## Notes
1. **Environment Variable Security**: Make sure the `.env` file is added to `.gitignore`; never commit sensitive information to the repository
2. **Network Support**: Mainly supports Polygon mainnet (chain_id 137); the Amoy testnet may have limited functionality
3. **Wallet Type**: Proxy (signature_type 1) and Safe (signature_type 2) are supported; EOA wallet operations are under development
4. **Gas Fees**: Transactions are executed through the Relayer, which handles gas fees
## Development
### Install Development Dependencies
```bash
uv pip install -e ".[dev]"
```
### Run Examples
```bash
python examples/example_redeem.py
python examples/example_split_merge.py
```
### Contributing
Simple contribution flow:
1. Open an Issue to describe the change (bug/feature/doc).
2. Fork and create a branch: `feat/xxx` or `fix/xxx`.
3. Make changes and update/add docs if needed.
4. Run: `uv run python -m examples.example_redeem` or `uv run python -m examples.example_split_merge` (if applicable).
5. Open a Pull Request and link the Issue.
## License
MIT
## Author
PinBar
## Related Links
- [Polymarket](https://polymarket.com/)
- [Polygon Network](https://polygon.technology/)
| text/markdown | PinBar | null | null | null | null | polymarket, web3, proxy, wallet, redeem, blockchain, polygon | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Internet :: WWW/HTTP"
] | [] | https://github.com/tosmart01/poly-web3 | null | >=3.11 | [] | [] | [] | [
"py-clob-client>=0.25.0",
"py-builder-relayer-client>=0.0.1",
"web3<8,>=7.0.0",
"eth-utils==5.3.1",
"setuptools>=80.9.0"
] | [] | [] | [] | [
"Homepage, https://github.com/tosmart01/poly-web3",
"Repository, https://github.com/tosmart01/poly-web3",
"Bug Tracker, https://github.com/tosmart01/poly-web3/issues"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-19T13:29:17.802863 | poly_web3-1.0.3.tar.gz | 21,953 | b5/3d/0513e6c109d974b8b3c036da3a46b82f986f6e6543cc8532259f3064c4c8/poly_web3-1.0.3.tar.gz | source | sdist | null | false | 469e6eb3a4b471e28b9af4bef5e4968e | b127d0a08f64ea70a76b936aab3b1c76d67287eae041b5f91875fc2d6716688c | b53d0513e6c109d974b8b3c036da3a46b82f986f6e6543cc8532259f3064c4c8 | null | [] | 674 |
2.1 | airbyte-source-sftp-bulk | 1.9.0.dev202602191328 | Source implementation for SFTP Bulk. | # Sftp-Bulk source connector
This is the repository for the Sftp-Bulk source connector, written in Python.
For information about how to use this connector within Airbyte, see [the documentation](https://docs.airbyte.com/integrations/sources/sftp-bulk).
## Local development
### Prerequisites
- Python (~=3.9)
- Poetry (~=1.7) - installation instructions [here](https://python-poetry.org/docs/#installation)
### Installing the connector
From this connector directory, run:
```bash
poetry install --with dev
```
### Create credentials
**If you are a community contributor**, follow the instructions in the [documentation](https://docs.airbyte.com/integrations/sources/sftp-bulk)
to generate the necessary credentials. Then create a file `secrets/config.json` conforming to the `source_sftp_bulk/spec.yaml` file.
Note that any directory named `secrets` is gitignored across the entire Airbyte repo, so there is no danger of accidentally checking in sensitive information.
See `sample_files/sample_config.json` for a sample config file.
### Locally running the connector
```
poetry run source-sftp-bulk spec
poetry run source-sftp-bulk check --config secrets/config.json
poetry run source-sftp-bulk discover --config secrets/config.json
poetry run source-sftp-bulk read --config secrets/config.json --catalog sample_files/configured_catalog.json
```
### Running unit tests
To run unit tests locally, from the connector directory run:
```
poetry run pytest unit_tests
```
### Building the docker image
1. Install [`airbyte-ci`](https://github.com/airbytehq/airbyte/blob/master/airbyte-ci/connectors/pipelines/README.md)
2. Run the following command to build the docker image:
```bash
airbyte-ci connectors --name=source-sftp-bulk build
```
An image will be available on your host with the tag `airbyte/source-sftp-bulk:dev`.
### Running as a docker container
Then run any of the connector commands as follows:
```
docker run --rm airbyte/source-sftp-bulk:dev spec
docker run --rm -v $(pwd)/secrets:/secrets airbyte/source-sftp-bulk:dev check --config /secrets/config.json
docker run --rm -v $(pwd)/secrets:/secrets airbyte/source-sftp-bulk:dev discover --config /secrets/config.json
docker run --rm -v $(pwd)/secrets:/secrets -v $(pwd)/integration_tests:/integration_tests airbyte/source-sftp-bulk:dev read --config /secrets/config.json --catalog /integration_tests/configured_catalog.json
```
### Running our CI test suite
You can run our full test suite locally using [`airbyte-ci`](https://github.com/airbytehq/airbyte/blob/master/airbyte-ci/connectors/pipelines/README.md):
```bash
airbyte-ci connectors --name=source-sftp-bulk test
```
### Customizing acceptance Tests
Customize `acceptance-test-config.yml` file to configure acceptance tests. See [Connector Acceptance Tests](https://docs.airbyte.com/connector-development/testing-connectors/connector-acceptance-tests-reference) for more information.
If your connector requires creating or destroying resources for use during acceptance tests, create fixtures for them and place them inside `integration_tests/acceptance.py`.
### Dependency Management
All of your dependencies should be managed via Poetry.
To add a new dependency, run:
```bash
poetry add <package-name>
```
Please commit the changes to `pyproject.toml` and `poetry.lock` files.
## Publishing a new version of the connector
You've checked out the repo, implemented a million dollar feature, and you're ready to share your changes with the world. Now what?
1. Make sure your changes are passing our test suite: `airbyte-ci connectors --name=source-sftp-bulk test`
2. Bump the connector version (please follow [semantic versioning for connectors](https://docs.airbyte.com/contributing-to-airbyte/resources/pull-requests-handbook/#semantic-versioning-for-connectors)):
- bump the `dockerImageTag` value in `metadata.yaml`
- bump the `version` value in `pyproject.toml`
3. Make sure the `metadata.yaml` content is up to date.
4. Make sure the connector documentation and its changelog is up to date (`docs/integrations/sources/sftp-bulk.md`).
5. Create a Pull Request: use [our PR naming conventions](https://docs.airbyte.com/contributing-to-airbyte/resources/pull-requests-handbook/#pull-request-title-convention).
6. Pat yourself on the back for being an awesome contributor.
7. Someone from Airbyte will take a look at your PR and iterate with you to merge it into master.
8. Once your PR is merged, the new version of the connector will be automatically published to Docker Hub and our connector registry.
| text/markdown | Airbyte | contact@airbyte.io | null | null | ELv2 | null | [
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11"
] | [] | https://airbyte.com | null | <3.12,>=3.10 | [] | [] | [] | [
"airbyte-cdk[file-based]<8.0.0,>=7.0.4",
"paramiko==3.4.0",
"psutil==6.1.0"
] | [] | [] | [] | [
"Repository, https://github.com/airbytehq/airbyte",
"Documentation, https://docs.airbyte.com/integrations/sources/sftp-bulk"
] | poetry/1.8.5 CPython/3.11.14 Linux/6.14.0-1017-azure | 2026-02-19T13:28:43.294956 | airbyte_source_sftp_bulk-1.9.0.dev202602191328.tar.gz | 10,026 | 01/9b/602826d36765b71ab84e4525e41902316da1e41dec2d35d3889152af3339/airbyte_source_sftp_bulk-1.9.0.dev202602191328.tar.gz | source | sdist | null | false | 26c5ce90d3c31748fec7c922b668de9b | 00e51dfe6bc7315e9755f3858104a5d6f3b8ee14f6c2511a80dd1638c5294147 | 019b602826d36765b71ab84e4525e41902316da1e41dec2d35d3889152af3339 | null | [] | 190 |
2.4 | clang-format-docs | 0.5.0 | Run `clang-format` on C++ code blocks in documentation files | clang-format-docs
=================
Run `clang-format` on C++ code blocks in documentation files.
This project is derivative work of [`blacken-docs`](https://github.com/adamchainz/blacken-docs). License from `blacken-docs` is included in [LICENSE_blacken_docs](LICENSE_blacken_docs)
## install
```bash
pip install clang-format-docs
```
## Usage
`clang-format-docs` will take markdown files and search for C++ code blocks, e.g.
````markdown
```c++
void hello(){
std::cout << "Hello world\n";
}
```
````
and format them using `clang-format`, i.e.
```bash
clang-format-docs file.md
```
will rewrite the file with clang-format applied. Also note that you can pass in a different format style using
```
clang-format-docs --style=LLVM file.md
```
or using a clang-format config file
```
clang-format-docs --style=file:my_clang_format.txt file.md
```
## Usage with pre-commit
See [pre-commit](https://pre-commit.com) for instructions
Sample `.pre-commit-config.yaml`:
```yaml
- repo: https://github.com/finsberg/clang-format-docs
  rev: v0.5.0
  hooks:
    - id: clang-format-docs
      additional_dependencies: [clang-format==14.0.6]
```
| text/markdown | null | Henrik Finsberg <henriknf@simula.no> | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: Implementation :: CPython"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"clang-format",
"build; extra == \"dev\"",
"ipython; extra == \"dev\"",
"pdbpp; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"twine; extra == \"dev\"",
"wheel; extra == \"dev\"",
"pre-commit; extra == \"test\"",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/finsberg/clang-format-docs"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:28:22.045781 | clang_format_docs-0.5.0.tar.gz | 6,037 | 41/f4/b6a5baad090c7f3c23680e08df7686b05246d22ab7365fd3b31b519e8ae2/clang_format_docs-0.5.0.tar.gz | source | sdist | null | false | 463e3e71fb2cc3e2fffebf3dec50e202 | 1a470ab90962d403bfcfe6e19655e8728784a65778f95fdac418c58b5435ec4e | 41f4b6a5baad090c7f3c23680e08df7686b05246d22ab7365fd3b31b519e8ae2 | null | [
"LICENSE",
"LICENSE_blacken_docs"
] | 245 |
2.4 | medimgkit | 0.12.0 | A comprehensive toolkit for medical image processing, including DICOM, NIfTI, and multi-format I/O utilities | # MedImgKit
A comprehensive toolkit for medical image processing, providing utilities for DICOM, NIfTI, and other medical image formats with seamless multi-format I/O operations.
## Features
- **DICOM Support**: Read, anonymize, and manipulate DICOM files
- **NIfTI Support**: Work with neuroimaging data in NIfTI format
- **Multi-format I/O**: Unified interface for reading various image formats
- **Anonymization**: DICOM anonymization following DICOM standards
- **Coordinate Conversion**: Convert between pixel and patient coordinates
- **Multi-frame Assembly**: Combine multiple DICOM files into multi-frame volumes
## Installation
### From PyPI
```bash
pip install medimgkit
```
### From Source
```bash
pip install git+https://github.com/SonanceAI/medimgkit
```
## Quick Start
### DICOM Operations
```python
import medimgkit as mik
import pydicom
# Read and normalize DICOM image
ds = pydicom.dcmread('path/to/dicom.dcm')
image_array = mik.load_image_normalized(ds)
# Anonymize DICOM
anonymized_ds = mik.anonymize_dicom(ds)
# Convert pixel coordinates to patient coordinates
patient_coords = mik.pixel_to_patient(ds, pixel_x=100, pixel_y=150)
```
### NIfTI Operations
```python
import nibabel as nib
import medimgkit as mik
# Load NIfTI file
nifti_data = nib.load('path/to/image.nii.gz')
# Get a specific slice
slice_image = mik.get_slice(nifti_data, slice_index=50, slice_axis=2)
# Convert world coordinates to slice index
slice_idx, axis = mik.line_to_slice_index(nifti_data, point1, point2)
```
### Multi-format Reading
```python
import medimgkit as mik
# Read any supported format
image_array = mik.read_array_normalized('path/to/image.dcm')
image_array = mik.read_array_normalized('path/to/image.nii.gz')
image_array = mik.read_array_normalized('path/to/image.png')
```
## API Reference
### DICOM Utils (`medimgkit.dicom_utils`)
#### Core Functions
- `load_image_normalized(dicom, index=None)`: Load and normalize DICOM pixel data
- `anonymize_dicom(ds, retain_codes=[], copy=False, token_mapper=None)`: Anonymize DICOM following standards
- `assemble_dicoms(files_path, return_as_IO=False)`: Combine multiple DICOMs into multi-frame
- `is_dicom(f)`: Check if file is a DICOM
#### Coordinate Conversion
- `pixel_to_patient(ds, pixel_x, pixel_y, slice_index=None)`: Convert pixel to patient coordinates
- `get_image_position(ds, slice_index=None)`: Get image position in patient coordinates
- `get_pixel_spacing(ds, slice_index)`: Get pixel spacing information
#### Anatomical Analysis
- `determine_anatomical_plane_from_dicom(ds, slice_axis, alignment_threshold=0.95)`: Determine anatomical plane
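For intuition, the pixel-to-patient mapping that `pixel_to_patient` performs follows the standard DICOM affine: Image Position (Patient) plus the row/column direction cosines scaled by pixel spacing. The sketch below is a plain-Python illustration of that equation, not medimgkit's implementation:

```python
def pixel_to_patient_sketch(ipp, row_cos, col_cos, pixel_spacing, col, row):
    """Map a (col, row) pixel index to patient coordinates.

    ipp: ImagePositionPatient (x, y, z) of the first pixel.
    row_cos / col_cos: direction cosines of a row / column (ImageOrientationPatient).
    pixel_spacing: (row_spacing, col_spacing) as stored in PixelSpacing.
    """
    row_sp, col_sp = pixel_spacing
    return tuple(
        s + rc * col_sp * col + cc * row_sp * row
        for s, rc, cc in zip(ipp, row_cos, col_cos)
    )

# Axis-aligned slice, 0.5 mm pixels, origin at (10, 20, 30) mm:
print(pixel_to_patient_sketch((10, 20, 30), (1, 0, 0), (0, 1, 0), (0.5, 0.5), 100, 150))
# (60.0, 95.0, 30.0)
```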
### NIfTI Utils (`medimgkit.nifti_utils`)
#### Slice Operations
- `get_slice(data, slice_index, slice_axis)`: Extract 2D slice from 3D volume
- `get_slice_from_line(data, world_point1, world_point2)`: Get slice defined by line
- `slice_location_to_slice_index(data, slice_location, slice_axis)`: Convert location to index
#### Coordinate Conversion
- `line_to_slice_index(data, world_point1=None, world_point2=None, coplanar_vector=None)`: Convert line to slice
- `axis_name_to_axis_index(data, axis_name)`: Convert axis name to index
#### Utilities
- `is_nifti_file(file_path)`: Check if file is NIfTI format
### I/O Utils (`medimgkit.io_utils`)
#### Reading Functions
- `read_array_normalized(file_path, index=None, return_metainfo=False, use_magic=False)`: Universal image reader
- `read_image(file_path)`: Read standard image formats (PNG, JPEG)
- `read_nifti(file_path, mimetype=None)`: Read NIfTI files
- `read_video(file_path, index=None)`: Read video files
## Supported Formats
- **DICOM**: .dcm, .dicom (and files without extension)
- **NIfTI**: .nii, .nii.gz
- **Images**: .png, .jpg, .jpeg
- **Video**: .mp4, .avi, .mov, .mkv
- **NumPy**: .npy
## Development
### Running Tests
```bash
pytest
```
## License
MIT License - see LICENSE file for details.
## Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Submit a pull request
| text/markdown | null | null | null | null | null | medical, imaging, dicom, nifti, healthcare, radiology, medical-imaging, image-processing | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Healthcare Industry",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language... | [] | null | null | >=3.10 | [] | [] | [] | [
"pydicom>=2.3.0",
"nibabel>=3.2.0",
"numpy>=1.20.0",
"Pillow>=8.0.0",
"opencv-python>=4.5.0",
"tqdm>=4.60.0",
"python-magic>=0.4.24",
"puremagic>=1.30",
"plotly>=5.19.0",
"nbformat>=4.3.0",
"pandas>=1.5.3",
"pylibjpeg>=2.0.0",
"pylibjpeg-libjpeg>=2.0.0",
"deprecated>=1.2.0",
"pytest>=6.0... | [] | [] | [] | [
"Homepage, https://github.com/SonanceAI/medimgkit",
"Repository, https://github.com/SonanceAI/medimgkit",
"Bug Tracker, https://github.com/SonanceAI/medimgkit/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:27:40.324096 | medimgkit-0.12.0.tar.gz | 42,394 | c3/6f/c4e3c940b2dbda5df6c0db1762b1fd398cd9f7a53b5a81711b3c902a1fa2/medimgkit-0.12.0.tar.gz | source | sdist | null | false | 5c018351d96b8e8b6f4fd83974954ac3 | d702888619e93b966c49aa4daf56a31cb92305b5c4a799f19a1a24462d0ecf58 | c36fc4e3c940b2dbda5df6c0db1762b1fd398cd9f7a53b5a81711b3c902a1fa2 | MIT | [] | 212 |
2.4 | ecdallm | 0.2.9 | Retrieval-Augmented Generation - (RAG) | # ecdallm
[](https://pypi.org/project/ecdallm/)
[](https://pypi.org/project/ecdallm/)
[](LICENCE)
**ecdallm** is a lightweight Retrieval-Augmented Generation (RAG)
application that lets you chat with your own documents using either a
**local LLM** or an **external OpenAI-compatible provider**.
It combines:
- FastAPI web interface
- Local embedding pipeline (FastEmbed)
- Persistent vector storage (ChromaDB)
- Document ingestion (PDF, TXT, DOCX)
- CLI launcher
- Local LLM support (e.g., LM Studio)
- External LLM support (OpenAI-compatible APIs)
The goal is to provide a simple, reproducible environment for
**document-grounded LLM interaction** with flexible model connectivity.
------------------------------------------------------------------------
## Overview
`ecdallm` allows you to:
1. Upload documents
2. Index them into a vector database
3. Run semantic retrieval
4. Query an LLM with grounded context
All embeddings and vector storage run locally.
The chat model can run:
- locally (LM Studio, Ollama, etc.)
- externally (OpenRouter or OpenAI-compatible APIs)
This makes the system suitable for:
- research environments
- private document analysis
- offline experimentation
- RAG prototyping
- hybrid local/cloud workflows
------------------------------------------------------------------------
## Installation
Install from PyPI:
``` bash
pip install ecdallm
```
------------------------------------------------------------------------
## Running the application
Start the CLI:
``` bash
ecdallm
```
The CLI will:
- find a free port (starting from 8000)
- start the FastAPI server
- open the browser automatically
Example output:
```
ecdallm running at http://127.0.0.1:8000/
INFO: Uvicorn running on http://127.0.0.1:8000
```
------------------------------------------------------------------------
## LLM Configuration
When the application starts, click **Continue** and choose:
- Local LLM
- External LLM
Configuration is stored in the browser session.
Embeddings always run locally using FastEmbed with:
nomic-embed-text-v1.5
------------------------------------------------------------------------
## Using a local LLM
`ecdallm` expects an OpenAI-compatible endpoint.
For example, with **LM Studio**:
1. Start LM Studio server
2. Load a chat model
3. Enable the local API server
Typical endpoint:
http://localhost:1234/v1
Default configuration:
```
Base URL: http://localhost:1234/v1
API Key: lm-studio
```
The backend automatically detects the available chat model via:
GET /models
------------------------------------------------------------------------
## Using an external LLM
`ecdallm` can connect to any **OpenAI-compatible API provider**.
Examples include:
- OpenRouter
- OpenAI-compatible gateways
- Self-hosted inference APIs
Example configuration (OpenRouter):
```
Base URL: https://openrouter.ai/api/v1
Model: openrouter/aurora-alpha
API Key: sk-or-...
```
Steps:
1. Create an account with the provider
2. Generate an API key
3. Choose a chat model
4. Enter the configuration in the web interface
When validating, `ecdallm`:
- checks connectivity
- performs a test chat completion
- stores configuration in session storage
Your API key is sent only to your backend for validation and is **not
used directly in the browser**.
Embeddings remain local.
------------------------------------------------------------------------
## Supported document types
- PDF
- TXT
- DOCX
------------------------------------------------------------------------
## Workflow
### 1. Upload documents
Use the **Upload** page to add files.
### 2. Index documents
Files are automatically indexed into ChromaDB using FastEmbed.
### 3. Chat with documents
Open the **Chat** page and ask questions.
The assistant will:
- retrieve relevant chunks
- build a grounded prompt
- query the configured LLM
- return a concise answer
------------------------------------------------------------------------
## Project structure
```
ecdallm/
├── cli.py
└── app/
    ├── main.py
    ├── rag.py
    ├── paths.py
    ├── search_engine.py
    ├── vector.py
    ├── templates/
    ├── static/
    ├── uploads/
    └── rag_store/
```
------------------------------------------------------------------------
## Notes
Embeddings and retrieval always run locally.
The chat model can be:
- local (LM Studio, Ollama, etc.)
- external (OpenAI-compatible providers)
This keeps the system flexible while maintaining local document
processing.
------------------------------------------------------------------------
## Erasmus Data Collaboratory
Developed by the Erasmus Data Collaboratory (ECDA).
- Zaman Ziabakhshganji --- creator and maintainer
- Farshad Radman --- co-author and contributor
- Jos van Dongen --- co-author and contributor
------------------------------------------------------------------------
## License
MIT License
| text/markdown | EDC - Erasmus Data Collaboratory | admin@ecda.ai | null | null | MIT | rag, llm, nlp, data, vector | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <3.15,>=3.11 | [] | [] | [] | [
"bs4<0.0.3,>=0.0.2",
"chromadb<2.0.0,>=1.5.0",
"docx2txt<0.10,>=0.9",
"faiss-cpu<2.0.0,>=1.13.2",
"fastapi<0.129.0,>=0.128.5",
"fastembed<0.8.0,>=0.7.4",
"jinja2<4.0.0,>=3.1.6",
"langchain<2.0.0,>=1.2.9",
"langchain-chroma<2.0.0,>=1.1.0",
"langchain-community<0.5.0,>=0.4.1",
"langchain-openai<2.... | [] | [] | [] | [
"Homepage, https://ecda.eur.nl/erasmus-data-collaboratory/",
"Repository, https://github.com/Erasmus-Data-Collaboratory/ecdallm"
] | poetry/2.3.2 CPython/3.11.0 Darwin/25.3.0 | 2026-02-19T13:27:36.010219 | ecdallm-0.2.9-py3-none-any.whl | 712,309 | 32/24/ab8f1025437db93c99947cada9633fa05a13fa4771fd9d6502fafc75b712/ecdallm-0.2.9-py3-none-any.whl | py3 | bdist_wheel | null | false | 506b3efb2a6eca7bb6358c5cbcb11e0f | 9d3777973a90c19944ffbde7c522d6fa8e8c06b00c6e965e7e2a601d566ed0bf | 3224ab8f1025437db93c99947cada9633fa05a13fa4771fd9d6502fafc75b712 | null | [
"LICENSE"
] | 219 |
2.4 | threejs-viewer | 0.0.2 | Lightweight Three.js viewer controlled from Python via WebSocket | # threejs-viewer
Lightweight Three.js viewer controlled from Python via WebSocket.

A Python client runs a WebSocket server that a browser-based Three.js viewer connects to. Designed for robotics visualization, scientific computing, and interactive 3D exploration.
## Features
- **Simple API**: Add primitives, load models, update transforms
- **GLB/PBR support**: Load GLB models with PBR materials, studio environment lighting
- **Embedded animations**: Drive GLTF skeletal/morph animations from Python via `clip_times`
- **Animation support**: Pre-compute animations, scrub timeline, adjust playback speed
- **Binary transfer**: Efficient loading of large meshes and polylines
- **Auto-reconnect**: Browser reconnects automatically, animations persist
- **Z-up coordinates**: Robotics convention (matches ROS, URDF)
- **No build step**: Self-contained HTML viewer, just open in browser
## Installation
```bash
pip install threejs-viewer
```
## Quick Start
```python
from threejs_viewer import viewer
# Start server and wait for browser to connect
v = viewer()
# Add objects
v.add_sphere("ball", radius=0.3, color=0xFF0000, position=[0, 0, 0.5])
v.add_box("ground", width=5, height=5, depth=0.1, color=0x444444)
# Keep running
input("Press Enter to exit")
```
Open the viewer in your browser:
```bash
threejs-viewer open
# Or: threejs-viewer path (prints path to viewer.html)
```
## Usage
### Objects
```python
# Primitives
client.add_box("box1", width=1, height=2, depth=0.5, color=0x4A90D9)
client.add_sphere("sphere1", radius=0.5, position=[2, 0, 0])
client.add_cylinder("cyl1", radius_top=0.3, radius_bottom=0.5, height=1)
# 3D models (binary transfer)
client.add_model_binary("robot", "robot.stl", format="stl")
# Polylines with colormaps
client.add_polyline("path", points, colors=z_values, colormap="viridis", line_width=3)
```
### Transforms
```python
# Single object
client.set_position("box1", 1.0, 2.0, 0.5)
client.set_matrix("box1", matrix_4x4.flatten().tolist())
# Batch update (efficient for 60fps)
client.set_transforms({
    "link1": matrix1.flatten().tolist(),
    "link2": matrix2.flatten().tolist(),
})
```
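If you are not using NumPy, a flattened 4×4 transform can also be built by hand. The helper below is an illustrative sketch (row-major layout assumed here; check DESIGN.md for the flattening order the viewer actually expects):

```python
def translation_matrix_flat(x, y, z):
    """Flatten a row-major 4x4 translation matrix into the 16-float list
    accepted by set_matrix / set_transforms (layout assumption, see note above)."""
    matrix = [
        [1.0, 0.0, 0.0, x],
        [0.0, 1.0, 0.0, y],
        [0.0, 0.0, 1.0, z],
        [0.0, 0.0, 0.0, 1.0],
    ]
    return [value for row in matrix for value in row]

# e.g. client.set_matrix("box1", translation_matrix_flat(1.0, 2.0, 0.5))
print(translation_matrix_flat(1.0, 2.0, 0.5))
```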
### Animations
```python
from threejs_viewer import Animation
animation = Animation(loop=True)
for t in times:
    animation.add_frame(
        time=t,
        transforms=compute_transforms(t),
        colors={"robot": 0xFF0000 if collision else 0x00FF00},
        clip_times={"glb_model": t},  # drive embedded GLTF animations
    )
animation.add_marker(3.5, "Collision detected")
client.load_animation(animation)
```
Viewer controls: Space (play/pause), Arrow keys (step frames), 1-5 (speed), L (loop)
### GLB Models with Embedded Animations
```python
# Load a GLB with embedded animations (skeletal, morph targets)
client.add_model_binary("fox", "fox.glb", format="glb")
# Seek embedded animation to a specific time (seconds)
client.set_clip_time("fox", 1.5)
```
## Documentation
- [DESIGN.md](DESIGN.md) - Architecture and protocol details
- [examples/](examples/) - Runnable demo scripts
## CLI
```bash
threejs-viewer path # Print path to viewer.html
threejs-viewer open # Open in default browser
threejs-viewer code # Open in VS Code (use "Show Preview" for docked view)
```
## License
MIT
| text/markdown | Thijs Damsma | null | null | null | null | 3d, robotics, three.js, viewer, visualization, websocket | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Multimedia :: Gr... | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy>=1.24.0",
"websockets>=12.0"
] | [] | [] | [] | [
"Homepage, https://github.com/thijsdamsma/threejs-viewer",
"Repository, https://github.com/thijsdamsma/threejs-viewer"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:27:23.986475 | threejs_viewer-0.0.2.tar.gz | 25,137 | bb/cc/3e23b65b57faa0ef2b2bb5374e10cfa2b2f82722a884514c4b0634759615/threejs_viewer-0.0.2.tar.gz | source | sdist | null | false | 078766b2866a802cbdf80d735862fefb | 7e96ab0fe9be96e6a06c60fb8b912d550929effb0aedd615beaf675dc3b4d34a | bbcc3e23b65b57faa0ef2b2bb5374e10cfa2b2f82722a884514c4b0634759615 | MIT | [] | 199 |
2.4 | syftbox-crypto-python | 0.1.0b1 | Python bindings for the Syft crypto protocol | # syftbox-crypto-python (PyO3 bindings)
Python bindings for the `syftbox-crypto-protocol` crate, built with [PyO3](https://pyo3.rs/) and [maturin](https://www.maturin.rs/).
## Quick start
```bash
uv venv
uv pip install maturin
uv run -- maturin develop --manifest-path bindings/python/Cargo.toml
python - <<'PY'
import syftbox_crypto_python as sbc
material = sbc.generate_identity_material("alice@example.com")
print(material.fingerprint)
print(material.did)
print(material.recovery_key_hex)
PY
```
## Building wheels
```bash
uv venv
uv pip install maturin
uv run -- maturin build --release --manifest-path bindings/python/Cargo.toml
ls dist
```
## Development
- Format Rust code with `cargo fmt`
- Format Python stubs with `uv run ruff format python`
- Lint Python code with `uv run ruff check python`
| text/markdown; charset=UTF-8; variant=GFM | OpenMined | null | null | null | Apache-2.0 | null | [
"Programming Language :: Rust",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/OpenMined/syftbox-crypto",
"Repository, https://github.com/OpenMined/syftbox-crypto"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:26:51.414056 | syftbox_crypto_python-0.1.0b1.tar.gz | 67,361 | 16/4b/de374f524b56457a96ab9890fb6f81be989cc54e6cd9f17bc51b446e8299/syftbox_crypto_python-0.1.0b1.tar.gz | source | sdist | null | false | b2c5cfdda6285c8bac53f70452def62b | 1ad074d14e1b8c1406d53a2ae4a285355bd9c8f532df0da01c360556c3647b4e | 164bde374f524b56457a96ab9890fb6f81be989cc54e6cd9f17bc51b446e8299 | null | [] | 439 |
2.4 | surfacedocs | 0.4.1 | Python SDK for SurfaceDocs - Save LLM-generated documents | # SurfaceDocs Python SDK
Save LLM-generated documents to [SurfaceDocs](https://surfacedocs.dev).
## Installation
```bash
pip install surfacedocs
```
## Quick Start
```python
from surfacedocs import SurfaceDocs, DOCUMENT_SCHEMA, SYSTEM_PROMPT
from openai import OpenAI
# Initialize clients
openai = OpenAI()
docs = SurfaceDocs(api_key="sd_live_...")
# Generate a document with your LLM
response = openai.chat.completions.create(
model="gpt-4o",
messages=[
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": "Document our REST API authentication flow"},
],
response_format={
"type": "json_schema",
"json_schema": {
"name": "surfacedocs_document",
"schema": DOCUMENT_SCHEMA,
},
},
)
# Save to SurfaceDocs
result = docs.save(response.choices[0].message.content)
print(result.url) # https://app.surfacedocs.dev/d/abc123
```
## What's Included
The SDK provides three exports:
| Export | Type | Purpose |
|--------|------|---------|
| `DOCUMENT_SCHEMA` | dict | JSON schema for LLM structured output |
| `SYSTEM_PROMPT` | str | Instructions for LLM to generate documents |
| `SurfaceDocs` | class | HTTP client to save documents |
## API Reference
### SurfaceDocs
```python
from surfacedocs import SurfaceDocs
# Initialize with API key
client = SurfaceDocs(api_key="sd_live_...")
# Or use environment variable
# export SURFACEDOCS_API_KEY=sd_live_...
client = SurfaceDocs()
```
#### save(content, folder_id=None)
Save a document from LLM output.
```python
# From JSON string
result = client.save(response.choices[0].message.content)
# From dict
result = client.save({
"title": "My Document",
"blocks": [{"type": "paragraph", "content": "Hello world"}]
})
# To specific folder
result = client.save(content, folder_id="folder_abc123")
```
#### save_raw(title, blocks, folder_id=None, metadata=None)
Save a document with explicit parameters.
```python
result = client.save_raw(
title="API Documentation",
blocks=[
{"type": "heading", "content": "Authentication", "metadata": {"level": 1}},
{"type": "paragraph", "content": "Use Bearer tokens for auth."},
{"type": "code", "content": "curl -H 'Authorization: Bearer ...'", "metadata": {"language": "bash"}},
],
metadata={"source": "doc-generator", "version": "1.0"},
)
```
#### get_document(document_id)
Retrieve a document by ID.
```python
doc = client.get_document("doc_abc123")
print(doc.title) # "API Documentation"
print(doc.blocks[0].type) # "heading"
print(doc.blocks[0].content) # "Authentication"
```
#### delete_document(document_id)
Delete a document by ID.
```python
client.delete_document("doc_abc123")
```
#### create_folder(name, parent_id=None)
Create a new folder.
```python
folder = client.create_folder("API Docs")
print(folder.id) # "fld_abc123"
print(folder.name) # "API Docs"
# Create a subfolder
subfolder = client.create_folder("v2", parent_id=folder.id)
```
#### list_folders(parent_id=None)
List folders, optionally filtered by parent.
```python
# List all root folders
folders = client.list_folders()
# List subfolders of a specific folder
subfolders = client.list_folders(parent_id="fld_abc123")
```
#### SaveResult
Both `save()` and `save_raw()` return a `SaveResult`:
```python
result.id # "doc_abc123"
result.url # "https://app.surfacedocs.dev/d/doc_abc123"
result.folder_id # "folder_xyz"
```
#### Document
Returned by `get_document()`:
```python
doc.id # "doc_abc123"
doc.url # "https://app.surfacedocs.dev/d/doc_abc123"
doc.folder_id # "folder_xyz"
doc.title # "My Document"
doc.content_type # "markdown"
doc.visibility # "private"
doc.blocks # list[Block]
doc.metadata # dict or None
doc.created_at # "2024-01-01T00:00:00Z"
doc.updated_at # "2024-01-02T00:00:00Z"
```
#### Block
Each document contains a list of `Block` objects:
```python
block.id # "blk_abc123"
block.order # 0
block.type # "heading", "paragraph", "code", etc.
block.content # "Hello world"
block.metadata # {"level": 1} or None
```
#### Folder
Returned by `create_folder()` and `list_folders()`:
```python
folder.id # "fld_abc123"
folder.name # "API Docs"
folder.parent_id # "fld_parent" or None
folder.path # "/API Docs"
folder.depth # 0
folder.created_at # "2024-01-01T00:00:00Z"
```
### DOCUMENT_SCHEMA
JSON schema dict for LLM structured output. Pass directly to your LLM provider.
### SYSTEM_PROMPT
System prompt string to instruct LLMs on document format.
```python
from surfacedocs import SYSTEM_PROMPT
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": "Document the login flow"},
]
```
## Block Types
Documents are composed of blocks:
| Type | Description | Metadata |
|------|-------------|----------|
| `heading` | Section header | `level` (1-6) |
| `paragraph` | Body text | - |
| `code` | Code block | `language` (optional) |
| `list` | Bullet/numbered list | `listType` ("bullet" or "ordered") |
| `quote` | Block quote | - |
| `table` | Markdown table | - |
| `image` | Image | `url` (required), `alt` (optional) |
| `divider` | Horizontal rule | - |
Text content supports inline markdown: `**bold**`, `*italic*`, `` `code` ``, `[link](url)`
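The block types above combine freely in a `blocks` list. A sketch mixing several of them — the titles, URL, and the empty `content` on the `image` block (whose source lives in `metadata.url` per the table) are illustrative assumptions:

```python
# A document body mixing the block types from the table above
blocks = [
    {"type": "heading", "content": "Release Notes", "metadata": {"level": 1}},
    {"type": "paragraph", "content": "Highlights from the **v2.0** release."},
    {"type": "list", "content": "- Faster saves\n- New folder API",
     "metadata": {"listType": "bullet"}},
    {"type": "code", "content": "pip install surfacedocs",
     "metadata": {"language": "bash"}},
    {"type": "divider", "content": ""},
    {"type": "image", "content": "",
     "metadata": {"url": "https://example.com/chart.png", "alt": "Download chart"}},
]
```

A list like this can be passed straight to `client.save_raw(title="Release Notes", blocks=blocks)`.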
## Error Handling
```python
from surfacedocs import (
SurfaceDocs,
SurfaceDocsError,
AuthenticationError,
DocumentNotFoundError,
FolderNotFoundError,
ValidationError,
)
try:
result = client.save(content)
except AuthenticationError:
print("Invalid API key")
except ValidationError as e:
print(f"Invalid document: {e}")
except SurfaceDocsError as e:
print(f"API error: {e}")
try:
doc = client.get_document("doc_abc123")
except DocumentNotFoundError:
print("Document does not exist")
try:
folder = client.create_folder("Docs", parent_id="fld_nonexistent")
except FolderNotFoundError:
print("Parent folder does not exist")
```
## Environment Variables
```bash
# API key (alternative to passing in code)
export SURFACEDOCS_API_KEY=sd_live_...
```
## Examples
### OpenAI
```python
from surfacedocs import SurfaceDocs, DOCUMENT_SCHEMA, SYSTEM_PROMPT
from openai import OpenAI
openai = OpenAI()
docs = SurfaceDocs()
response = openai.chat.completions.create(
model="gpt-4o",
messages=[
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": "Write documentation for user authentication"},
],
response_format={
"type": "json_schema",
"json_schema": {"name": "document", "schema": DOCUMENT_SCHEMA},
},
)
result = docs.save(response.choices[0].message.content)
print(f"Saved: {result.url}")
```
### Anthropic
Using Claude's structured outputs with tool use:
```python
from surfacedocs import SurfaceDocs, DOCUMENT_SCHEMA, SYSTEM_PROMPT
import anthropic
client = anthropic.Anthropic()
docs = SurfaceDocs()
response = client.messages.create(
model="claude-sonnet-4-20250514",
max_tokens=4096,
system=SYSTEM_PROMPT,
messages=[
{"role": "user", "content": "Write documentation for user authentication"},
],
tools=[{
"name": "create_document",
"description": "Create a structured document",
"input_schema": DOCUMENT_SCHEMA,
}],
tool_choice={"type": "tool", "name": "create_document"},
)
tool_use = next(b for b in response.content if b.type == "tool_use")
result = docs.save(tool_use.input)
print(f"Saved: {result.url}")
```
### Google Gemini
Using Gemini's structured output with JSON schema:
```python
from surfacedocs import SurfaceDocs, DOCUMENT_SCHEMA, SYSTEM_PROMPT
import google.generativeai as genai
genai.configure(api_key="...")
docs = SurfaceDocs()
model = genai.GenerativeModel(
model_name="gemini-2.0-flash",
system_instruction=SYSTEM_PROMPT,
generation_config=genai.GenerationConfig(
response_mime_type="application/json",
response_schema=DOCUMENT_SCHEMA,
),
)
response = model.generate_content("Write documentation for user authentication")
result = docs.save(response.text)
print(f"Saved: {result.url}")
```
### Manual Document
```python
from surfacedocs import SurfaceDocs
docs = SurfaceDocs()
result = docs.save_raw(
title="Meeting Notes",
blocks=[
{"type": "heading", "content": "Action Items", "metadata": {"level": 1}},
{"type": "list", "content": "- Review PR #123\n- Update docs", "metadata": {"listType": "bullet"}},
{"type": "divider", "content": ""},
{"type": "paragraph", "content": "Next meeting: Monday 10am"},
],
metadata={"source": "meeting-bot"},
)
```
### Managing Documents
```python
from surfacedocs import SurfaceDocs, DocumentNotFoundError
docs = SurfaceDocs()
# Save a document
result = docs.save_raw(
title="API Guide",
blocks=[{"type": "paragraph", "content": "Welcome to the API."}],
)
# Retrieve it
doc = docs.get_document(result.id)
print(doc.title) # "API Guide"
# Delete it
docs.delete_document(result.id)
```
### Managing Folders
```python
from surfacedocs import SurfaceDocs
docs = SurfaceDocs()
# Create a folder hierarchy
parent = docs.create_folder("Engineering")
child = docs.create_folder("Backend", parent_id=parent.id)
# List root folders
for folder in docs.list_folders():
print(folder.name)
# List subfolders
for folder in docs.list_folders(parent_id=parent.id):
print(f" {folder.name}")
# Save a document to a folder
result = docs.save_raw(
title="Architecture Overview",
blocks=[{"type": "paragraph", "content": "Our system uses..."}],
folder_id=child.id,
)
```
## License
MIT
| text/markdown | null | SurfaceDocs <hello@surfacedocs.dev>, Sam Gallagher <gallaghersam95@gmail.com> | null | null | null | ai, documentation, llm, sdk | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: P... | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.26.0",
"build; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest>=7.4.0; extra == \"dev\"",
"respx>=0.20.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://surfacedocs.dev",
"Documentation, https://app.surfacedocs.dev/public/d/doc_VYMSDUFWvYBO"
] | uv/0.9.0 | 2026-02-19T13:26:28.576768 | surfacedocs-0.4.1.tar.gz | 52,482 | b4/fa/c43461b0a93d30a40a286c4a002f9bb561a6ef34fbc0d900bfdab300ea27/surfacedocs-0.4.1.tar.gz | source | sdist | null | false | 09cc473c16ffa982f8e4f71cfe15340c | 97d4c5c45d06f6226e9bc4e0e609d91d1290f5decddac19d87a3158365c6c235 | b4fac43461b0a93d30a40a286c4a002f9bb561a6ef34fbc0d900bfdab300ea27 | MIT | [
"LICENSE"
] | 213 |
2.1 | airbyte-source-gcs | 0.10.6.dev202602191326 | Source implementation for Gcs. | # Gcs source connector
This is the repository for the Gcs source connector, written in Python.
For information about how to use this connector within Airbyte, see [the documentation](https://docs.airbyte.com/integrations/sources/gcs).
## Local development
### Prerequisites
- Python (~=3.9)
- Poetry (~=1.7) - installation instructions [here](https://python-poetry.org/docs/#installation)
### Installing the connector
From this connector directory, run:
```bash
poetry install --with dev
```
### Create credentials
**If you are a community contributor**, follow the instructions in the [documentation](https://docs.airbyte.com/integrations/sources/gcs)
to generate the necessary credentials. Then create a file `secrets/config.json` conforming to the `source_gcs/spec.yaml` file.
Note that any directory named `secrets` is gitignored across the entire Airbyte repo, so there is no danger of accidentally checking in sensitive information.
See `sample_files/sample_config.json` for a sample config file.
### Locally running the connector
```
poetry run source-gcs spec
poetry run source-gcs check --config secrets/config.json
poetry run source-gcs discover --config secrets/config.json
poetry run source-gcs read --config secrets/config.json --catalog sample_files/configured_catalog.json
```
### Running unit tests
To run unit tests locally, from the connector directory run:
```
poetry run pytest unit_tests
```
### Building the docker image
1. Install [`airbyte-ci`](https://github.com/airbytehq/airbyte/blob/master/airbyte-ci/connectors/pipelines/README.md)
2. Run the following command to build the docker image:
```bash
airbyte-ci connectors --name=source-gcs build
```
An image will be available on your host with the tag `airbyte/source-gcs:dev`.
### Running as a docker container
Then run any of the connector commands as follows:
```
docker run --rm airbyte/source-gcs:dev spec
docker run --rm -v $(pwd)/secrets:/secrets airbyte/source-gcs:dev check --config /secrets/config.json
docker run --rm -v $(pwd)/secrets:/secrets airbyte/source-gcs:dev discover --config /secrets/config.json
docker run --rm -v $(pwd)/secrets:/secrets -v $(pwd)/integration_tests:/integration_tests airbyte/source-gcs:dev read --config /secrets/config.json --catalog /integration_tests/configured_catalog.json
```
### Running our CI test suite
You can run our full test suite locally using [`airbyte-ci`](https://github.com/airbytehq/airbyte/blob/master/airbyte-ci/connectors/pipelines/README.md):
```bash
airbyte-ci connectors --name=source-gcs test
```
### Customizing acceptance Tests
Customize `acceptance-test-config.yml` file to configure acceptance tests. See [Connector Acceptance Tests](https://docs.airbyte.com/connector-development/testing-connectors/connector-acceptance-tests-reference) for more information.
If your connector requires creating or destroying resources for use during acceptance tests, create fixtures for them and place them inside `integration_tests/acceptance.py`.
### Dependency Management
All of your dependencies should be managed via Poetry.
To add a new dependency, run:
```bash
poetry add <package-name>
```
Please commit the changes to `pyproject.toml` and `poetry.lock` files.
## Publishing a new version of the connector
You've checked out the repo, implemented a million dollar feature, and you're ready to share your changes with the world. Now what?
1. Make sure your changes are passing our test suite: `airbyte-ci connectors --name=source-gcs test`
2. Bump the connector version (please follow [semantic versioning for connectors](https://docs.airbyte.com/contributing-to-airbyte/resources/pull-requests-handbook/#semantic-versioning-for-connectors)):
- bump the `dockerImageTag` value in `metadata.yaml`
- bump the `version` value in `pyproject.toml`
3. Make sure the `metadata.yaml` content is up to date.
4. Make sure the connector documentation and its changelog is up to date (`docs/integrations/sources/gcs.md`).
5. Create a Pull Request: use [our PR naming conventions](https://docs.airbyte.com/contributing-to-airbyte/resources/pull-requests-handbook/#pull-request-title-convention).
6. Pat yourself on the back for being an awesome contributor.
7. Someone from Airbyte will take a look at your PR and iterate with you to merge it into master.
8. Once your PR is merged, the new version of the connector will be automatically published to Docker Hub and our connector registry.
| text/markdown | Airbyte | contact@airbyte.io | null | null | ELv2 | null | [
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://airbyte.com | null | <3.14,>=3.10 | [] | [] | [] | [
"pytz==2024.2",
"google-cloud-storage==2.12.0",
"smart-open[gcs]==5.1.0",
"airbyte-cdk[file-based]<8.0.0,>=7.0.0"
] | [] | [] | [] | [
"Repository, https://github.com/airbytehq/airbyte",
"Documentation, https://docs.airbyte.com/integrations/sources/gcs"
] | poetry/1.8.5 CPython/3.11.14 Linux/6.14.0-1017-azure | 2026-02-19T13:26:26.628383 | airbyte_source_gcs-0.10.6.dev202602191326.tar.gz | 12,252 | ec/91/2086f24427905fab2049c8df6b86d1420a1e5d4e9518a5564e6abe579557/airbyte_source_gcs-0.10.6.dev202602191326.tar.gz | source | sdist | null | false | 9e461b12abbac4fea33212fbf08abd50 | 3425e444db8136ad5de75ae355f3d300fc0576ad7c7df96f460e474baaeefc37 | ec912086f24427905fab2049c8df6b86d1420a1e5d4e9518a5564e6abe579557 | null | [] | 186 |
2.1 | odoo-addon-auth-oauth-login-field | 18.0.1.0.0.2 | Handle the login field in OAuth signup | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
======================
Auth Oauth Login Field
======================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:0cd90cc579e7c20557c8e1f820f160d40c991eb2d9976c9a964510bc2571485c
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fserver--auth-lightgray.png?logo=github
:target: https://github.com/OCA/server-auth/tree/18.0/auth_oauth_login_field
:alt: OCA/server-auth
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/server-auth-18-0/server-auth-18-0-auth_oauth_login_field
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/server-auth&target_branch=18.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
Handle the ``login`` field from the JWT token in OAuth signup. This is
useful when you need to create users where the ``login`` field is
different from the ``email`` field.
**Table of contents**
.. contents::
:local:
Usage
=====
If a ``login`` field is present in the token, it will be used as
``login`` field on user signup. When using the ``auth_oidc`` module, the
Token Map can be populated like this, for instance:
``preferred_username:login``.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/server-auth/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing detailed and welcome
`feedback <https://github.com/OCA/server-auth/issues/new?body=module:%20auth_oauth_login_field%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* ACSONE SA/NV
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
.. |maintainer-sbidoul| image:: https://github.com/sbidoul.png?size=40px
:target: https://github.com/sbidoul
:alt: sbidoul
Current `maintainer <https://odoo-community.org/page/maintainer-role>`__:
|maintainer-sbidoul|
This module is part of the `OCA/server-auth <https://github.com/OCA/server-auth/tree/18.0/auth_oauth_login_field>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | ACSONE SA/NV,Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/server-auth | null | >=3.10 | [] | [] | [] | [
"odoo==18.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T13:24:06.068416 | odoo_addon_auth_oauth_login_field-18.0.1.0.0.2-py3-none-any.whl | 22,542 | 01/24/9a2a8a563544212227e6b95ccf194cc5af45975a20adbef4f8e899d8a143/odoo_addon_auth_oauth_login_field-18.0.1.0.0.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 6ba35b2aebeef6a4eb53416d3de99149 | 36ae64e5b7687ba6b59f16393a60da03bc4f89efe8e35ff248abefa56dad2c3f | 01249a2a8a563544212227e6b95ccf194cc5af45975a20adbef4f8e899d8a143 | null | [] | 111 |
2.4 | hjs-client | 0.1.0 | Python client for HJS API - A Protocol for Structural Traceability | <p align="center">
<a href="README.zh-CN.md">中文</a> | <strong>English</strong>
</p>
# HJS Python Client
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/)
Python client for [HJS API](https://hjs-api.onrender.com) — a responsibility tracing service.
## 📦 Installation
### From PyPI (when published)
```bash
pip install hjs-client
```
### From GitHub (current)
```bash
pip install git+https://github.com/schchit/hjs-api.git#subdirectory=client-py
```
### From local source
```bash
cd /workspaces/hjs-api/client-py
pip install -e .
```
## 🚀 Quick Start
### Basic Example
```python
from hjs_client import HJSClient
# Create client
client = HJSClient()
# Record a judgment
result = client.record_judgment(
entity="alice@bank.com",
action="loan_approved",
scope={"amount": 100000}
)
print("✅ Recorded:", result)
# Retrieve it
judgment = client.get_judgment(result['id'])
print("✅ Retrieved:", judgment)
```
### Using Context Manager
```python
from hjs_client import HJSClient
with HJSClient() as client:
result = client.record_judgment("alice@bank.com", "test_action")
print("✅ Recorded:", result)
```
### Error Handling
```python
from hjs_client import HJSClient
import requests
client = HJSClient()
try:
result = client.record_judgment("alice@bank.com", "test_action")
print("✅ Success:", result)
except ValueError as e:
print("❌ Validation error:", e)
except requests.RequestException as e:
print("❌ API error:", e)
```
## 📚 API Reference
### `HJSClient(base_url, timeout)`
Create a new client instance.
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `base_url` | str | `"https://hjs-api.onrender.com"` | API base URL |
| `timeout` | int | `30` | Request timeout in seconds |
### `record_judgment(entity, action, scope)`
Record a judgment.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `entity` | str | ✅ | Who made the judgment |
| `action` | str | ✅ | What action was judged |
| `scope` | dict | ❌ | Optional additional context |
**Returns**: `{ id, status, timestamp }`
**Raises**:
- `ValueError`: If required parameters are missing
- `requests.RequestException`: If API request fails
### `get_judgment(id)`
Retrieve a judgment by ID.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `id` | str | ✅ | Judgment ID from `record_judgment` |
**Returns**: Complete judgment record
**Raises**:
- `ValueError`: If ID is missing or not found
- `requests.RequestException`: If API request fails
## 🧪 Testing
```bash
cd /workspaces/hjs-api/client-py
python -c "
from hjs_client import HJSClient
client = HJSClient()
result = client.record_judgment('test@example.com', 'test_action')
print('✅ Recorded:', result)
judgment = client.get_judgment(result['id'])
print('✅ Retrieved:', judgment)
"
```
Expected output:
```
✅ Recorded: {'id': 'jgd_...', 'status': 'recorded', 'timestamp': '...'}
✅ Retrieved: {'id': 'jgd_...', 'entity': 'test@example.com', 'action': 'test_action', ...}
```
## 📄 License
MIT © HJS Contributors
## 🤝 Contributing
Contributions are welcome! Please:
- Open an [Issue](https://github.com/schchit/hjs-api/issues) for bugs or suggestions
- Submit Pull Requests for improvements
---
| text/markdown | HJS Contributors | signal@humanjudgment.org | null | null | null | hjs, traceability, structural, judgment, api | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Pyt... | [] | https://github.com/schchit/hjs-api | null | >=3.7 | [] | [] | [] | [
"requests>=2.25.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.1 | 2026-02-19T13:23:44.977604 | hjs_client-0.1.0.tar.gz | 2,781 | 06/8f/a4844503dc9b14f9f3c1ee4c0b64fac4b4d89bdf666b958fc15aad6c41ae/hjs_client-0.1.0.tar.gz | source | sdist | null | false | ef6376fc992abbc6d1156c062ce760e1 | 430a05d169fa0f1d01335050996efd02aa69fe8a1a16c04f2ba9d60403de0798 | 068fa4844503dc9b14f9f3c1ee4c0b64fac4b4d89bdf666b958fc15aad6c41ae | null | [] | 230 |
2.4 | pystow | 0.7.28 | Easily pick a place to store data for your Python code | <h1 align="center">
PyStow
</h1>
<p align="center">
<a href="https://github.com/cthoyt/pystow/actions/workflows/tests.yml">
<img alt="Tests" src="https://github.com/cthoyt/pystow/actions/workflows/tests.yml/badge.svg" /></a>
<a href="https://pypi.org/project/pystow">
<img alt="PyPI" src="https://img.shields.io/pypi/v/pystow" /></a>
<a href="https://pypi.org/project/pystow">
<img alt="PyPI - Python Version" src="https://img.shields.io/pypi/pyversions/pystow" /></a>
<a href="https://github.com/cthoyt/pystow/blob/main/LICENSE">
<img alt="PyPI - License" src="https://img.shields.io/pypi/l/pystow" /></a>
<a href='https://pystow.readthedocs.io/en/latest/?badge=latest'>
<img src='https://readthedocs.org/projects/pystow/badge/?version=latest' alt='Documentation Status' /></a>
<a href="https://codecov.io/gh/cthoyt/pystow/branch/main">
<img src="https://codecov.io/gh/cthoyt/pystow/branch/main/graph/badge.svg" alt="Codecov status" /></a>
<a href="https://github.com/cthoyt/cookiecutter-python-package">
<img alt="Cookiecutter template from @cthoyt" src="https://img.shields.io/badge/Cookiecutter-snekpack-blue" /></a>
<a href="https://github.com/astral-sh/ruff">
<img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json" alt="Ruff" style="max-width:100%;"></a>
<a href="https://github.com/cthoyt/pystow/blob/main/.github/CODE_OF_CONDUCT.md">
<img src="https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg" alt="Contributor Covenant"/></a>
<a href="https://zenodo.org/badge/latestdoi/318194121">
<img src="https://zenodo.org/badge/318194121.svg" alt="DOI"></a>
</p>
👜 Easily pick a place to store data for your Python code
## 💪 Getting Started
Get a directory for your application.
```python
import pystow
# Get a directory (as a pathlib.Path) for ~/.data/pykeen
pykeen_directory = pystow.join('pykeen')
# Get a subdirectory (as a pathlib.Path) for ~/.data/pykeen/experiments
pykeen_experiments_directory = pystow.join('pykeen', 'experiments')
# You can go as deep as you want
pykeen_deep_directory = pystow.join('pykeen', 'experiments', 'a', 'b', 'c')
```
If you reuse the same directory structure a lot, you can save them in a module:
```python
import pystow
pykeen_module = pystow.module("pykeen")
# Access the module's directory with .base
assert pystow.join("pykeen") == pystow.module("pykeen").base
# Get a subdirectory (as a pathlib.Path) for ~/.data/pykeen/experiments
pykeen_experiments_directory = pykeen_module.join('experiments')
# You can go as deep as you want past the original "pykeen" module
pykeen_deep_directory = pykeen_module.join('experiments', 'a', 'b', 'c')
```
Get a file path for your application by adding the `name` keyword argument. This
is made explicit so PyStow knows which parent directories to automatically
create. This works with `pystow` or any module you create with `pystow.module`.
```python
import pystow
# Get a file path (as a pathlib.Path) for ~/.data/indra/database/database.tsv
indra_database_path = pystow.join('indra', 'database', name='database.tsv')
```
Ensure a file from the internet is available in your application's directory:
```python
import pystow
url = 'https://raw.githubusercontent.com/pykeen/pykeen/master/src/pykeen/datasets/nations/test.txt'
path = pystow.ensure('pykeen', 'datasets', 'nations', url=url)
```
Ensure a tabular data file from the internet and load it for usage (requires
`pip install pandas`):
```python
import pystow
import pandas as pd
url = 'https://raw.githubusercontent.com/pykeen/pykeen/master/src/pykeen/datasets/nations/test.txt'
df: pd.DataFrame = pystow.ensure_csv('pykeen', 'datasets', 'nations', url=url)
```
Ensure a comma-separated tabular data file from the internet and load it for
usage (requires `pip install pandas`):
```python
import pystow
import pandas as pd
url = 'https://raw.githubusercontent.com/cthoyt/pystow/main/tests/resources/test_1.csv'
df: pd.DataFrame = pystow.ensure_csv('pykeen', 'datasets', 'nations', url=url, read_csv_kwargs=dict(sep=","))
```
Ensure an RDF file from the internet and load it for usage (requires
`pip install rdflib`):
```python
import pystow
import rdflib
url = 'https://ftp.expasy.org/databases/rhea/rdf/rhea.rdf.gz'
rdf_graph: rdflib.Graph = pystow.ensure_rdf('rhea', url=url)
```
Also see `pystow.ensure_excel()`, `pystow.ensure_rdf()`,
`pystow.ensure_zip_df()`, and `pystow.ensure_tar_df()`.
If your data comes with a lot of different files in an archive, you can ensure
the archive is downloaded and get specific files from it:
```python
import numpy as np
import pystow
url = "https://cloud.enterprise.informatik.uni-leipzig.de/index.php/s/LHPbMCre7SLqajB/download/MultiKE_D_Y_15K_V1.zip"
# the path inside the archive to the file you want
inner_path = "MultiKE/D_Y_15K_V1/721_5fold/1/20210219183115/ent_embeds.npy"
with pystow.ensure_open_zip("kiez", url=url, inner_path=inner_path) as file:
emb = np.load(file)
```
Also see `pystow.module.ensure_open_lzma()`,
`pystow.module.ensure_open_tarfile()` and `pystow.module.ensure_open_gz()`.
## ⚙️️ Configuration
By default, data is stored in the `$HOME/.data` directory, with one subfolder
per app, so the `<app>` app gets `$HOME/.data/<app>`.
If you want to use a folder name other than `.data` inside the home directory,
you can set the `PYSTOW_NAME` environment variable. For example, if you set
`PYSTOW_NAME=mydata`, then the following code for the `pykeen` app will create
the `$HOME/mydata/pykeen/` directory:
```python
import os
import pystow
# Only for demonstration purposes. You should set environment
# variables either with your .bashrc or in the command line REPL.
os.environ['PYSTOW_NAME'] = 'mydata'
# Get a directory (as a pathlib.Path) for ~/mydata/pykeen
pykeen_directory = pystow.join('pykeen')
```
If you want to specify a completely custom directory that isn't relative to your
home directory, you can set the `PYSTOW_HOME` environment variable. For example,
if you set `PYSTOW_HOME=/usr/local/`, then the following code for the `pykeen`
app will create the `/usr/local/pykeen/` directory:
```python
import os
import pystow
# Only for demonstration purposes. You should set environment
# variables either with your .bashrc or in the command line REPL.
os.environ['PYSTOW_HOME'] = '/usr/local/'
# Get a directory (as a pathlib.Path) for /usr/local/pykeen
pykeen_directory = pystow.join('pykeen')
```
Note: if you set `PYSTOW_HOME`, then `PYSTOW_NAME` is disregarded.
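The precedence between these two variables can be sketched in plain Python (an illustrative model of the behavior described above, not PyStow's actual code):

```python
import os
from pathlib import Path

def resolve_base() -> Path:
    # PYSTOW_HOME wins outright; PYSTOW_NAME only renames the
    # default ".data" folder inside the home directory.
    custom_home = os.environ.get("PYSTOW_HOME")
    if custom_home:
        return Path(custom_home)
    name = os.environ.get("PYSTOW_NAME", ".data")
    return Path.home() / name
```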
### X Desktop Group (XDG) Compatibility
While PyStow's main goal is to make application data less opaque and less
hidden, some users might want to use the
[XDG specifications](http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html)
for storing their app data.
If you set the environment variable `PYSTOW_USE_APPDIRS` to `true` or `True`,
then the [`appdirs`](https://pypi.org/project/appdirs/) or
[`platformdirs`](https://pypi.org/project/platformdirs/) package will be used to
choose the base directory based on the `user data dir` option. This can still be
overridden by `PYSTOW_HOME`.
## 🚀 Installation
The most recent release can be installed from
[PyPI](https://pypi.org/project/pystow/) with uv:
```console
$ uv pip install pystow
```
or with pip:
```console
$ python3 -m pip install pystow
```
The most recent code and data can be installed directly from GitHub with uv:
```console
$ uv pip install git+https://github.com/cthoyt/pystow.git
```
or with pip:
```console
$ python3 -m pip install git+https://github.com/cthoyt/pystow.git
```
## 👐 Contributing
Contributions, whether filing an issue, making a pull request, or forking, are
appreciated. See
[CONTRIBUTING.md](https://github.com/cthoyt/pystow/blob/master/.github/CONTRIBUTING.md)
for more information on getting involved.
## 👋 Attribution
### ⚖️ License
The code in this package is licensed under the MIT License.
### 🍪 Cookiecutter
This package was created with
[@audreyfeldroy](https://github.com/audreyfeldroy)'s
[cookiecutter](https://github.com/cookiecutter/cookiecutter) package using
[@cthoyt](https://github.com/cthoyt)'s
[cookiecutter-snekpack](https://github.com/cthoyt/cookiecutter-snekpack)
template.
## 🛠️ For Developers
<details>
<summary>See developer instructions</summary>
This final section of the README is for anyone who wants to get involved by
making a code contribution.
### Development Installation
To install in development mode, use the following:
```console
$ git clone git+https://github.com/cthoyt/pystow.git
$ cd pystow
$ uv pip install -e .
```
Alternatively, install using pip:
```console
$ python3 -m pip install -e .
```
### Updating Package Boilerplate
This project uses `cruft` to keep boilerplate (i.e., configuration, contribution
guidelines, documentation configuration) up-to-date with the upstream
cookiecutter package. Install cruft with either `uv tool install cruft` or
`python3 -m pip install cruft` then run:
```console
$ cruft update
```
More info on Cruft's update command is available
[here](https://github.com/cruft/cruft?tab=readme-ov-file#updating-a-project).
### 🥼 Testing
After cloning the repository and installing `tox` with
`uv tool install tox --with tox-uv` or `python3 -m pip install tox tox-uv`, the
unit tests in the `tests/` folder can be run reproducibly with:
```console
$ tox -e py
```
Additionally, these tests are automatically re-run with each commit in a
[GitHub Action](https://github.com/cthoyt/pystow/actions?query=workflow%3ATests).
### 📖 Building the Documentation
The documentation can be built locally using the following:
```console
$ git clone git+https://github.com/cthoyt/pystow.git
$ cd pystow
$ tox -e docs
$ open docs/build/html/index.html
```
The documentation automatically installs the package as well as the `docs` extra
specified in the [`pyproject.toml`](pyproject.toml). `sphinx` plugins like
`texext` can be added there. Additionally, they need to be added to the
`extensions` list in [`docs/source/conf.py`](docs/source/conf.py).
The documentation can be deployed to [ReadTheDocs](https://readthedocs.io) using
[this guide](https://docs.readthedocs.io/en/stable/intro/import-guide.html). The
[`.readthedocs.yml`](.readthedocs.yml) YAML file contains all the configuration
you'll need. You can also set up continuous integration on GitHub to check not
only that Sphinx can build the documentation in an isolated environment (i.e.,
with `tox -e docs-test`) but also that
[ReadTheDocs can build it too](https://docs.readthedocs.io/en/stable/pull-requests.html).
#### Configuring ReadTheDocs
1. Log in to ReadTheDocs with your GitHub account to install the integration at
https://readthedocs.org/accounts/login/?next=/dashboard/
2. Import your project by navigating to https://readthedocs.org/dashboard/import
then clicking the plus icon next to your repository
3. You can rename the repository on the next screen using a more stylized name
(i.e., with spaces and capital letters)
4. Click next, and you're good to go!
### 📦 Making a Release
#### Configuring Zenodo
[Zenodo](https://zenodo.org) is a long-term archival system that assigns a DOI
to each release of your package.
1. Log in to Zenodo via GitHub with this link:
https://zenodo.org/oauth/login/github/?next=%2F. This brings you to a page
that lists all of your organizations and asks you to approve installing the
Zenodo app on GitHub. Click "grant" next to any organizations you want to
enable the integration for, then click the big green "approve" button. This
step only needs to be done once.
2. Navigate to https://zenodo.org/account/settings/github/, which lists all of
your GitHub repositories (both in your username and any organizations you
enabled). Click the on/off toggle for any relevant repositories. When you
make a new repository, you'll have to come back to this page to enable it.
After these steps, you're ready to go! After you make a release on GitHub (steps
for this are below), you can navigate to
https://zenodo.org/account/settings/github/repository/cthoyt/pystow to see the
DOI for the release and link to the Zenodo record for it.
#### Registering with the Python Package Index (PyPI)
You only have to do the following steps once.
1. Register for an account on the
[Python Package Index (PyPI)](https://pypi.org/account/register)
2. Navigate to https://pypi.org/manage/account and make sure you have verified
your email address. A verification email might not have been sent by default,
so you might have to click the "options" dropdown next to your address to get
to the "re-send verification email" button
3. 2-Factor authentication is required for PyPI since the end of 2023 (see this
[blog post from PyPI](https://blog.pypi.org/posts/2023-05-25-securing-pypi-with-2fa/)).
This means you have to first issue account recovery codes, then set up
2-factor authentication
4. Issue an API token from https://pypi.org/manage/account/token
#### Configuring your machine's connection to PyPI
You have to do the following steps once per machine.
```console
$ uv tool install keyring
$ keyring set https://upload.pypi.org/legacy/ __token__
$ keyring set https://test.pypi.org/legacy/ __token__
```
Note that this supersedes previous workflows using `.pypirc`.
#### Uploading to PyPI
After installing the package in development mode and installing `tox` with
`uv tool install tox --with tox-uv` or `python3 -m pip install tox tox-uv`, run
the following from the console:
```console
$ tox -e finish
```
This script does the following:
1. Uses [bump-my-version](https://github.com/callowayproject/bump-my-version) to
switch the version number in the `pyproject.toml`, `CITATION.cff`,
`src/pystow/version.py`, and [`docs/source/conf.py`](docs/source/conf.py) to
not have the `-dev` suffix
2. Packages the code in both a tar archive and a wheel using
[`uv build`](https://docs.astral.sh/uv/guides/publish/#building-your-package)
3. Uploads to PyPI using
[`uv publish`](https://docs.astral.sh/uv/guides/publish/#publishing-your-package).
4. Push to GitHub. You'll need to make a release corresponding to the commit
   where the version was bumped.
5. Bump the version to the next patch. If you made big changes and want to bump
the version by minor, you can use `tox -e bumpversion -- minor` after.
#### Releasing on GitHub
1. Navigate to https://github.com/cthoyt/pystow/releases/new to draft a new
release
2. Click the "Choose a Tag" dropdown and select the tag corresponding to the
release you just made
3. Click the "Generate Release Notes" button to get a quick outline of recent
changes. Modify the title and description as you see fit
4. Click the big green "Publish Release" button
This will trigger Zenodo to assign a DOI to your release as well.
</details>
| text/markdown | Charles Tapley Hoyt | Charles Tapley Hoyt <cthoyt@gmail.com> | Charles Tapley Hoyt | Charles Tapley Hoyt <cthoyt@gmail.com> | null | snekpack, cookiecutter, caching, file management | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Framework :: Pytest",
"Framework :: tox",
"Framework :: Sphinx",
"Natural Language :: English",
"Programming ... | [] | null | null | >=3.10 | [] | [] | [] | [
"click",
"requests",
"tqdm",
"typing-extensions",
"boto3; extra == \"aws\"",
"bs4; extra == \"bs4\"",
"sphinx>=8; extra == \"docs\"",
"sphinx-rtd-theme>=3.0; extra == \"docs\"",
"sphinx-click; extra == \"docs\"",
"sphinx-automodapi; extra == \"docs\"",
"pandas; extra == \"pandas\"",
"pydantic;... | [] | [] | [] | [
"Bug Tracker, https://github.com/cthoyt/pystow/issues",
"Homepage, https://github.com/cthoyt/pystow",
"Repository, https://github.com/cthoyt/pystow.git",
"Documentation, https://pystow.readthedocs.io",
"Funding, https://github.com/sponsors/cthoyt"
] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T13:23:33.182109 | pystow-0.7.28-py3-none-any.whl | 57,950 | 2b/3a/8dfc7e10341d6331dc2ccc09afb8d25501565112798adbd90cf35528c9ab/pystow-0.7.28-py3-none-any.whl | py3 | bdist_wheel | null | false | 9c37550dae5459a8549fac2c7781ad88 | aa73dec0f6dea8bfc844f3abd3d985d6d3594291ebd234d9ec05b16884b13336 | 2b3a8dfc7e10341d6331dc2ccc09afb8d25501565112798adbd90cf35528c9ab | null | [
"LICENSE"
] | 5,069 |
2.4 | crowterminal | 0.1.0 | CrowTerminal Python SDK - External Brain for AI Agents | # CrowTerminal Python SDK
> External Brain for AI Agents - Persistent memory for AI agents working with creators.
While your agent stores 10-50 lines of context, CrowTerminal stores 6 months of versioned history.
## Installation
```bash
pip install crowterminal
```
## Quick Start
```python
from crowterminal import CrowTerminal
# Initialize with your API key
client = CrowTerminal("ct_your_api_key")
# Get memory for a creator
skill = client.memory.get("client_123")
print(f"Niche: {skill.primary_niche}")
print(f"Engagement: {skill.avg_engagement}%")
print(f"Best hooks: {skill.hook_patterns}")
```
## Self-Registration
Don't have an API key? Register programmatically:
```python
from crowterminal import CrowTerminal
# This creates a new API key and returns an initialized client
client = CrowTerminal.register(
agent_name="MyBot",
agent_description="Content optimization agent"
)
# API key is printed - save it!
```
## Core Features
### Memory Operations
```python
# Get current skill
skill = client.memory.get("client_123")
# Get version history
versions = client.memory.get_versions("client_123", limit=10)
# Compare versions
diff = client.memory.get_diff("client_123", from_version=5, to_version=10)
# Track a field over time
pattern = client.memory.get_pattern("client_123", field="avgEngagement")
print(f"Trend: {pattern['trend']}") # increasing, decreasing, stable
```
### Validate Before Changing (Prevent Mistakes)
```python
result = client.memory.validate("client_123", [
{"field": "hookPatterns", "oldValue": ["POV"], "newValue": ["tutorial"]}
])
if result.validation == "blocked":
print("Don't make this change!")
for warning in result.warnings:
print(f" - {warning['message']}")
```
### Engagement Analysis (The Killer Feature)
```python
analysis = client.memory.engagement_analysis("client_123", {
"hookPatterns": ["confession"],
"contentStyle": "casual",
"primaryNiche": "fitness"
})
print(f"Peak engagement: {analysis.peak_engagement}%")
print(f"Your similarity to top performers: {analysis.similarity_to_top}")
for rec in analysis.recommendations:
print(f"Recommendation: {rec}")
```
### Data Ingestion (Push Your Data)
Push platform data we can't access via API:
```python
# Push retention data from TikTok Studio
client.data.ingest(
client_id="client_123",
platform="TIKTOK",
data_type="retention",
video_id="video_456",
data={
"retentionCurve": [100, 95, 88, 75, 60, 45, 30],
"avgWatchTime": 12.5,
"completionRate": 0.30
}
)
# Push demographics
client.data.ingest(
client_id="client_123",
platform="TIKTOK",
data_type="demographics",
data={
"ageGroups": {"18-24": 45, "25-34": 35, "35-44": 15, "45+": 5},
"genderSplit": {"male": 40, "female": 58, "other": 2},
"topCountries": ["BR", "US", "PT"]
}
)
# Bulk ingest (up to 50 items)
client.data.ingest_bulk([
{"clientId": "client_123", "platform": "TIKTOK", "dataType": "retention", "data": {...}},
{"clientId": "client_123", "platform": "TIKTOK", "dataType": "demographics", "data": {...}},
])
```
### Intelligence (Read-Only)
```python
# Get creator profile
profile = client.intelligence.get_profile("client_123")
# Get hook recommendations
hooks = client.intelligence.get_hooks("client_123", count=5)
# Get optimal posting times
timing = client.intelligence.get_timing("client_123")
# Get platform algorithm insights
intel = client.intelligence.get_platform_intel(["TIKTOK", "INSTAGRAM"])
```
## Error Handling
```python
from crowterminal import (
CrowTerminal,
AuthenticationError,
RateLimitError,
ResourceNotFoundError,
)
client = CrowTerminal("ct_your_api_key")
try:
skill = client.memory.get("client_123")
except AuthenticationError:
print("Invalid API key")
except RateLimitError as e:
print(f"Rate limited. Retry after {e.retry_after} seconds")
except ResourceNotFoundError:
print("Client not found")
```
## Webhooks (Async Notifications)
```python
# Register a webhook
webhook = client.webhooks.register(
url="https://your-server.com/webhook",
events=["skill.updated", "data.ingested"]
)
print(f"Webhook ID: {webhook['id']}")
print(f"Secret (save this!): {webhook['secret']}")
# List webhooks
webhooks = client.webhooks.list()
# Delete a webhook
client.webhooks.delete(webhook_id="wh_xxx")
```
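The webhook `secret` is typically used to verify that incoming payloads really came from the service. The docs above don't specify the exact signature scheme, so the following is a generic HMAC-SHA256 check (the encoding and the idea that the signature arrives as a hex digest are assumptions, not documented behavior):

```python
import hashlib
import hmac

def verify_webhook(secret: str, payload: bytes, signature_hex: str) -> bool:
    # Recompute the HMAC of the raw request body and compare it
    # in constant time to avoid timing side channels.
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

Always verify against the raw request body, before any JSON parsing, since re-serialized JSON may not match byte-for-byte.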
## Service Status
```python
# Check service health (no auth required)
status = client.status.get()
print(f"Service status: {status['status']}")
print(f"Database: {status['services']['database']['status']}")
```
## Sandbox Testing
Use the sandbox endpoints for testing without affecting real data:
```python
# Test without auth
import requests
# Get mock client data
response = requests.get("https://api.crowterminal.com/api/agent/sandbox/client")
print(response.json())
# Test validation
response = requests.post(
"https://api.crowterminal.com/api/agent/sandbox/validate",
json={"proposedChanges": [{"field": "hookPatterns", "newValue": ["tutorial"]}]}
)
print(response.json()) # Will show "blocked" response
```
## Valid Data Types
### TikTok
- retention, demographics, traffic_sources, watch_time
- audience_activity, follower_growth, video_performance
- sound_performance, hashtag_performance
### Instagram
- retention, demographics, reach_sources, watch_time
- audience_activity, follower_growth, content_interactions
- story_metrics, reel_metrics
### YouTube
- retention, demographics, traffic_sources, watch_time
- audience_activity, subscriber_growth, click_through_rate
- impression_sources, end_screen_performance
## Links
- [Full Documentation](https://crowterminal.com/llms.txt)
- [MCP Manifest](https://crowterminal.com/.well-known/mcp.json)
- [GitHub](https://github.com/WillNigri/FluxOps)
- [Contact](mailto:agents@crowterminal.com)
## License
MIT
| text/markdown | null | CrowTerminal <agents@crowterminal.com> | null | null | MIT | ai, agents, memory, creators, influencers, crowterminal | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python ... | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.25.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"types-requests>=2.28.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://crowterminal.com",
"Documentation, https://crowterminal.com/llms.txt",
"Repository, https://github.com/WillNigri/FluxOps",
"Issues, https://github.com/WillNigri/FluxOps/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T13:23:33.029447 | crowterminal-0.1.0.tar.gz | 10,351 | 3e/f1/f626c481466cfbf31c0874da58f7a97e20fcddbc8f18f096f6cdf392676d/crowterminal-0.1.0.tar.gz | source | sdist | null | false | 618a097088be138a0fd00b9fbc13f31c | 62733f12a744029f485f7c3f42fc6b75c6f891ce9b5246a74a2f67bdbbb6d47e | 3ef1f626c481466cfbf31c0874da58f7a97e20fcddbc8f18f096f6cdf392676d | null | [] | 230 |
2.4 | changes-roller | 0.2.0 | A command-line tool for creating and managing coordinated patch series across multiple Git repositories | # changes-roller
[](https://badge.fury.io/py/changes-roller)
[](https://www.python.org/downloads/)
[](LICENSE)
[](https://changes-roller.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/astral-sh/ruff)
[](https://github.com/k-pavlo/changes-roller/actions/workflows/ci.yml)
[](https://codecov.io/gh/k-pavlo/changes-roller)
[](https://github.com/k-pavlo/changes-roller/actions/workflows/security.yml)
**Stop manually patching dozens of repositories. Automate it.**
**Changes-Roller** is a command-line tool for creating and managing coordinated
patch series across multiple Git repositories simultaneously.

## Why changes-roller?
When you need to apply the same change across multiple repositories—whether it's a security patch, dependency update, or configuration change—doing it manually is time-consuming and error-prone. You have to clone each repository, apply the change, commit, and submit for review, repeating this process dozens of times.
changes-roller automates this workflow. Write your patch script once, and it executes across all repositories in parallel. Changes are applied consistently with uniform commit messages, and optionally submitted for code review—all from a single command.
**Perfect for:**
- Security updates across multiple microservices
- Dependency upgrades throughout your service ecosystem
- API migrations affecting client libraries
- License header updates for compliance
- Configuration file standardization
- Any scenario requiring identical changes across multiple repositories
## Project Status
This project maintains high quality standards through automated testing and continuous integration:
- **Comprehensive test suite** with high code coverage
- **Multi-platform testing** across Python 3.10-3.13 on Linux, macOS, and Windows
- **Automated quality checks** including strict type checking (MyPy), linting (Ruff), and security scanning (Bandit)
- **Pre-commit hooks** enforce code quality before commits
- **Continuous security monitoring** with pip-audit and dependency review
All pull requests undergo comprehensive automated testing to ensure reliability and maintainability.
## How It Works
Configure once, execute everywhere. You provide the repositories to update and a script containing your changes. changes-roller handles everything else—cloning, patching, testing, committing, and submitting for review. Parallel execution means 50 repositories finish almost as quickly as one. Built-in error handling ensures you get clear feedback about any issues, while successful repositories continue processing.
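Conceptually, the fan-out works like the following sketch (illustrative only; `patch_repo` is a placeholder for the real clone, patch, test, and commit steps performed per repository):

```python
from concurrent.futures import ThreadPoolExecutor

def patch_repo(url: str) -> tuple[str, bool]:
    # Placeholder for clone -> patch -> test -> commit.
    # Each repository is isolated: one failure doesn't stop the rest.
    try:
        # ... real work would happen here ...
        return url, True
    except Exception:
        return url, False

repos = ["https://example.org/repo1.git", "https://example.org/repo2.git"]
with ThreadPoolExecutor(max_workers=8) as pool:
    results = dict(pool.map(patch_repo, repos))
```

Because each worker returns a per-repository success flag instead of raising, failed repositories can be reported at the end while the rest complete normally.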
## Features
- Apply patches to multiple Git repositories in parallel
- Custom patch scripts with full repository access
- Automated Git operations (clone, commit, stage)
- **Git branch switching** - Apply changes to specific branches (e.g., stable branches)
- **Custom command execution** - Run commands before/after applying changes
- **Dry-run mode** - Preview operations without executing them
- Automatic commit sign-off (Signed-off-by line)
- Automatic git-review setup for Gerrit integration
- Commit message templating with variables
- Gerrit code review integration with topic grouping
- Optional test execution before committing (e.g., `tox -e pep8`)
- Clear progress reporting and error handling
## Installation
```bash
# Install in development mode
pip install -e .
# Or install from source
pip install .
```
## Requirements
- Python 3.10 or higher
- Git command-line client
- git-review (optional, for Gerrit integration)
## Quick Start
1. Generate a configuration file:
```bash
roller init --output my-series.ini
```
2. Create a patch script (`my_patch.sh`):
```bash
#!/bin/bash
# Example: Update a dependency version
sed -i 's/old-library==1.0/old-library==2.0/' requirements.txt
```

Make the script executable:

```bash
chmod +x my_patch.sh
```
3. Edit the configuration file to specify your repositories and patch script:
```bash
nano my-series.ini
# Update the 'projects' list and 'commands' path
```
4. Run the patch series:
```bash
roller create --config-file my-series.ini
```
## Configuration
### [SERIE] Section
**Basic Options:**
- `projects` (required): Comma-separated list of Git repository URLs
- `commands` (required): Path to executable patch script
- `commit_msg` (required): Commit message template (supports `{{ project_name }}`)
- `topic` (optional): Code review topic name
- `commit` (optional): Enable automatic commits (default: true)
- `review` (optional): Enable Gerrit review submission (default: false)
**Branch Switching Options:**
- `branch` (optional): Target branch to switch to before applying changes
- `create_branch` (optional): Create branch if it doesn't exist (default: false)
- `stay_on_branch` (optional): Don't return to original branch after completion (default: false)
**Command Execution Options:**
- `pre_commands` (optional): Commands to run before applying changes (one per line)
- `post_commands` (optional): Commands to run after committing (one per line)
- `continue_on_error` (optional): Continue if commands fail (default: false)
- `dry_run` (optional): Preview operations without executing (default: false)
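The `{{ project_name }}` substitution supported by `commit_msg` can be illustrated with a minimal stand-in (the tool's actual template handling may differ, e.g., in how it tolerates whitespace inside the braces):

```python
def render_commit_msg(template: str, project_name: str) -> str:
    # Minimal stand-in for the commit message templating described above.
    return template.replace("{{ project_name }}", project_name)

msg = render_commit_msg("Fix styling in {{ project_name }}", "oslo.config")
```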
### [TESTS] Section
- `run` (optional): Enable test execution (default: false)
- `blocking` (optional): Fail if tests fail (default: false)
- `command` (optional): Test command to run (default: tox)
Example: `command = tox -e pep8` runs PEP8 checks before committing
## Command-Line Options
### roller init
Generate a template configuration file.
```bash
roller init [options]
Options:
-o, --output PATH Output file path (default: series.ini)
-f, --force Overwrite existing file
--help Show help message
```
### roller create
Create a new patch series across multiple repositories.
```bash
roller create --config-file <path> [options]
Options:
--config-file PATH Path to configuration file (required)
--config-dir PATH Additional directory for config files
-e, --exit-on-error Exit immediately on first failure
-v, --verbose Enable verbose output
# Branch switching
--branch NAME Target branch to switch to before applying changes
--create-branch Create branch if it doesn't exist (requires --branch)
--stay-on-branch Don't return to original branch after completion
# Command execution
--pre-command CMD Command to execute before changes (repeatable)
--post-command CMD Command to execute after changes (repeatable)
--continue-on-error Continue if commands fail instead of stopping
--dry-run Preview operations without executing them
--help Show help message
```
## Examples
### Basic Usage
```bash
# Apply patch to multiple repositories
roller create --config-file my-series.ini
```
### Branch Switching
```bash
# Apply changes to a specific branch
roller create --config-file security-fix.ini --branch stable/2024.2
# Multi-branch backport
for branch in stable/2024.1 stable/2024.2 stable/2025.1; do
roller create --config-file fix.ini --branch $branch
done
```
### With Commands
```bash
# Pull latest before patching, push after committing
roller create --config-file series.ini \
--pre-command "git pull origin main" \
--post-command "git push origin main"
# Validate before and after
roller create --config-file series.ini \
--pre-command "pytest tests/" \
--post-command "git push"
```
### Dry Run
```bash
# Preview what would happen without executing
roller create --config-file series.ini --dry-run
```
### With Testing
Configuration file with PEP8 validation:
```ini
[SERIE]
projects = https://github.com/org/repo1,
https://github.com/org/repo2
commands = ./my-patch.sh
commit_msg = Fix styling in {{ project_name }}
[TESTS]
run = true
blocking = true
command = tox -e pep8
```
## Example Projects
See the `examples/` directory for complete working examples:
### [Dependency Update](examples/dependency-update/) - Template
Generic example showing how to update dependencies across multiple repos. Uses placeholder repository URLs - copy and customize for your own projects.
### [Oslo Dependency Update](examples/oslo-dependency-update/) - Real Example
Update pbr dependency across oslo.\* libraries. Uses real OpenStack repositories and demonstrates Gerrit integration.
Each example includes:
- Complete patch script with error handling
- Configured series.ini file
- README with usage instructions and customization guide
For more examples and use cases, see the [documentation examples page](https://changes-roller.readthedocs.io/en/latest/examples.html).
## Development
```bash
# Install development dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Format code
ruff format .
# Linting
ruff check .
# Type checking
mypy roller/
```
## Contributing
We welcome contributions! Please see our contributing guidelines and community standards:
- **[Contributing Guide](CONTRIBUTING.md)** - Development setup, code standards, and PR process
- **[Code of Conduct](CODE_OF_CONDUCT.md)** - Community standards and expectations
- **[Changelog](CHANGELOG.md)** - Release history and version changes
- **[Security Policy](SECURITY.md)** - Reporting issues and safe usage guidelines
## License
See LICENSE file for details.
| text/markdown | null | Pavlo Kostianov <pkostian@redhat.com> | null | null | MIT | automation, batch-operations, cli, code-review, gerrit, git, multi-repo, patch, repository | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: On... | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.1.0",
"bandit>=1.7.10; extra == \"dev\"",
"mypy>=1.13.0; extra == \"dev\"",
"pre-commit>=3.5.0; extra == \"dev\"",
"ruff>=0.8.0; extra == \"dev\"",
"furo>=2023.0.0; extra == \"docs\"",
"linkify-it-py>=2.0.0; extra == \"docs\"",
"myst-parser>=2.0.0; extra == \"docs\"",
"sphinx-autoapi>=3.0.... | [] | [] | [] | [
"Homepage, https://github.com/k-pavlo/changes-roller",
"Repository, https://github.com/k-pavlo/changes-roller",
"Issues, https://github.com/k-pavlo/changes-roller/issues",
"Documentation, https://changes-roller.readthedocs.io"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:23:29.317192 | changes_roller-0.2.0.tar.gz | 1,469,182 | f4/03/3155bf8ac658bcf15d0563e32bbf1d211d7186d41ae3b520698b63ce8a85/changes_roller-0.2.0.tar.gz | source | sdist | null | false | daefa5f5c5c635c4df453b3752beb80f | 7e2fd8bd34705e3261512300d7a2da97368b09aa3d07d422926b608f386fe95e | f4033155bf8ac658bcf15d0563e32bbf1d211d7186d41ae3b520698b63ce8a85 | null | [
"LICENSE"
] | 231 |
2.4 | gpu-memory-profiler | 0.2.1 | A comprehensive GPU memory profiler for PyTorch and TensorFlow with CLI, visualization, and analytics | # GPU Memory Profiler
[](https://github.com/Silas-Asamoah/gpu-memory-profiler/actions)
[](https://pypi.org/project/gpu-memory-profiler/)
[](LICENSE)
[](https://www.python.org/downloads/)
[](https://pytorch.org/)
[](https://tensorflow.org/)
[](CONTRIBUTING.md)
[](docs/tui.md)
[](docs/tui.md#prompt-toolkit-roadmap)
<p align="center">
<img src="https://raw.githubusercontent.com/Silas-Asamoah/gpu-memory-profiler/main/docs/gpu-profiler-overview.gif" alt="GPU Profiler TUI Demo" width="900">
<br/>
<em>Interactive Textual dashboard with live monitoring, visualizations, and CLI automation.</em>
</p>
A production-ready, open source tool for real-time GPU memory profiling, leak detection, and optimization in PyTorch and TensorFlow deep learning workflows.
## Why use GPU Memory Profiler?
- **Prevent Out-of-Memory Crashes**: Catch memory leaks and inefficiencies before they crash your training.
- **Optimize Model Performance**: Get actionable insights and recommendations for memory usage.
- **Works with PyTorch & TensorFlow**: Unified interface for both major frameworks.
- **Beautiful Visualizations**: Timeline plots, heatmaps, and interactive dashboards.
- **CLI & API**: Use from Python or the command line.
## Features
- Real-time GPU memory monitoring
- Memory leak detection & alerts
- Interactive and static visualizations
- Context-aware profiling (decorators, context managers)
- CLI tools for automation
- Data export (CSV, JSON)
- CPU compatibility mode
## Installation
### From PyPI
Package page: <https://pypi.org/project/gpu-memory-profiler/>
```bash
# Basic installation
pip install gpu-memory-profiler
# With visualization support
pip install gpu-memory-profiler[viz]
# With optional dependencies
pip install gpu-memory-profiler[dev] # Development tools
pip install gpu-memory-profiler[test] # Testing dependencies
pip install gpu-memory-profiler[docs] # Documentation tools
```
### From Source
```bash
git clone https://github.com/Silas-Asamoah/gpu-memory-profiler.git
cd gpu-memory-profiler
# Install in development mode
pip install -e .
# Install with visualization support
pip install -e .[viz]
# Install with development dependencies
pip install -e .[dev]
# Install with testing dependencies
pip install -e .[test]
```
### Development Setup
```bash
# Clone and setup development environment
git clone https://github.com/Silas-Asamoah/gpu-memory-profiler.git
cd gpu-memory-profiler
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -e .[dev,test]
pre-commit install
```
**Note**: Black formatting check is temporarily disabled in CI. Code formatting will be addressed in a separate PR.
## Quick Start
### PyTorch Example
```python
import torch.nn.functional as F
from gpumemprof import GPUMemoryProfiler

profiler = GPUMemoryProfiler()

def train_step(model, data, target):
    output = model(data)
    loss = F.cross_entropy(output, target)  # illustrative loss; substitute your own
    loss.backward()
    return loss
profile = profiler.profile_function(train_step, model, data, target)
summary = profiler.get_summary()
print(f"Profiled call: {profile.function_name}")
print(f"Peak memory: {summary['peak_memory_usage'] / (1024**3):.2f} GB")
```
### TensorFlow Example
```python
from tfmemprof import TFMemoryProfiler
profiler = TFMemoryProfiler()
with profiler.profile_context("training"):
model.fit(x_train, y_train, epochs=5)
results = profiler.get_results()
print(f"Peak memory: {results.peak_memory_mb:.2f} MB")
```
## Documentation
Start at the docs home page and follow the same structure locally or when hosted:
- **[Documentation Home (local)](docs/index.md)**
- **[Documentation Home (hosted)](https://gpu-memory-profiler.readthedocs.io/en/latest/)**
Key guides:
- [CLI Usage](docs/cli.md)
- [CPU Compatibility](docs/cpu_compatibility.md)
- [Compatibility Matrix (v0.2)](docs/compatibility_matrix.md)
- [GPU Setup (drivers + frameworks)](docs/gpu_setup.md)
- [Testing Guides](docs/pytorch_testing_guide.md), [TensorFlow](docs/tensorflow_testing_guide.md)
- [Example Test Guides (Markdown)](docs/examples/test_guides/README.md)
- [Terminal UI (Textual)](docs/tui.md)
- [In-depth Article](docs/article.md)
- [Example scripts](examples/basic)
- [Launch scenario scripts](examples/scenarios)
## Launch QA Scenarios (CPU + MPS + Telemetry + OOM)
Run the capability matrix for a launch-oriented smoke pass:
```bash
python -m examples.cli.capability_matrix --mode smoke --target both --oom-mode simulated
```
Run the full matrix (includes extra demos):
```bash
python -m examples.cli.capability_matrix --mode full --target both --oom-mode simulated
```
Key scenario modules:
```bash
python -m examples.scenarios.cpu_telemetry_scenario
python -m examples.scenarios.mps_telemetry_scenario
python -m examples.scenarios.oom_flight_recorder_scenario --mode simulated
python -m examples.scenarios.tf_end_to_end_scenario
```
## Terminal UI
Prefer an interactive dashboard? Install the optional TUI dependencies and
launch the Textual interface:
```bash
pip install "gpu-memory-profiler[tui]"
gpu-profiler
```
The TUI surfaces system info, PyTorch/TensorFlow quick actions, and CLI tips.
Future prompt_toolkit enhancements will add a command palette for advanced
workflows—see [docs/tui.md](docs/tui.md) for details.
<p align="center">
<img src="https://raw.githubusercontent.com/Silas-Asamoah/gpu-memory-profiler/main/docs/gpu-profiler-1.png" alt="GPU Profiler Overview" width="700">
<br/>
<em>Overview, PyTorch, and TensorFlow tabs inside the Textual dashboard.</em>
</p>
<p align="center">
<img src="https://raw.githubusercontent.com/Silas-Asamoah/gpu-memory-profiler/main/docs/gpu-profiler-2.png" alt="GPU Profiler CLI Actions" width="700">
<br/>
<em>CLI & Actions tab with quick commands, loaders, and log output.</em>
</p>
Need charts without leaving the terminal? The new **Visualizations** tab renders
an ASCII timeline from the live tracker and can export the same data to PNG
(Matplotlib) or HTML (Plotly) under `./visualizations` for deeper inspection.
Just start tracking, refresh the tab, and hit the export buttons.
The PyTorch and TensorFlow tabs now surface recent decorator/context profiling
results as live tables—with refresh/clear controls—so you can review peak
memory, deltas, and durations gathered via `gpumemprof.context_profiler` or
`tfmemprof.context_profiler` without leaving the dashboard.
When the monitoring session is running you can also dump every tracked event to
`./exports/tracker_events_<timestamp>.{csv,json}` directly from the Monitoring
tab, making it easy to feed the same data into pandas, spreadsheets, or external
dashboards.
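The exported CSV can then be post-processed however you like. A minimal pandas sketch is below; note that the column names are illustrative assumptions, not the tool's guaranteed export schema, so check the header of your actual `tracker_events_<timestamp>.csv` first:

```python
# Sketch: analyze an exported tracker CSV with pandas.
# NOTE: column names below are assumptions for illustration only.
import io

import pandas as pd

# Stand-in for open("exports/tracker_events_<timestamp>.csv")
sample_csv = io.StringIO(
    "timestamp,allocated_mb,reserved_mb\n"
    "0.0,120.5,256.0\n"
    "1.0,980.2,1024.0\n"
    "2.0,310.7,512.0\n"
)
events = pd.read_csv(sample_csv)
print(f"peak allocated: {events['allocated_mb'].max():.1f} MB")
```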
Need tighter leak warnings? Adjust the warning/critical sliders in the same tab
to update GPU `MemoryTracker` thresholds on the fly, and use the inline alert
history to review exactly when spikes occurred.
Need to run automation without opening another terminal? Use the CLI tab’s
command input (or quick action buttons) to execute `gpumemprof` /
`tfmemprof` commands in-place, trigger `gpumemprof diagnose`, run the OOM
flight-recorder scenario, and launch the capability-matrix smoke checks with a
single click.
## CPU Compatibility
Working on a laptop or CI agent without CUDA? The CLI, Python API, and TUI now
fall back to a psutil-powered `CPUMemoryProfiler`/`CPUMemoryTracker`. Run the
same `gpumemprof monitor` / `gpumemprof track` commands and you’ll see RSS data
instead of GPU VRAM, exportable to CSV/JSON and viewable inside the monitoring
tab. PyTorch sample workloads automatically switch to CPU tensors when CUDA
isn’t present, so every workflow stays accessible regardless of hardware.
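The CPU fallback reports the same RSS signal that psutil exposes directly. The sketch below samples it by hand; it is a minimal illustration of that signal, not the `CPUMemoryTracker` implementation:

```python
# Minimal RSS sampler using psutil -- the signal the CPU fallback reports.
import psutil

proc = psutil.Process()  # the current process

def rss_mb() -> float:
    """Resident set size of this process in MiB."""
    return proc.memory_info().rss / (1024 ** 2)

baseline = rss_mb()
blob = bytearray(50 * 1024 * 1024)  # allocate and zero ~50 MiB
grown = rss_mb()
print(f"RSS grew by ~{grown - baseline:.0f} MiB")
```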
## Contributing
We welcome contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) and [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md).
## License
[MIT License](LICENSE)
---
**Version:** 0.2.0 (launch candidate)
| text/markdown | null | Silas Asamoah <silasbempong@gmail.com>, Prince Agyei Tuffour <prince.agyei.tuffour@gmail.com> | null | Silas Asamoah <silasbempong@gmail.com>, Prince Agyei Tuffour <prince.agyei.tuffour@gmail.com> | MIT License
Copyright (c) 2025 GPU Memory Profiler Team
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| gpu, memory, profiler, pytorch, tensorflow, deep-learning, monitoring | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python... | [] | null | null | >=3.10 | [] | [] | [] | [
"torch>=1.8.0",
"tensorflow>=2.4.0",
"numpy>=1.19.0",
"pandas>=1.2.0",
"psutil>=5.8.0",
"scipy>=1.7.0",
"matplotlib>=3.3.0; extra == \"viz\"",
"seaborn>=0.11.0; extra == \"viz\"",
"plotly>=5.0.0; extra == \"viz\"",
"dash>=2.0.0; extra == \"viz\"",
"dash-bootstrap-components>=1.6.0; extra == \"vi... | [] | [] | [] | [
"Homepage, https://github.com/Silas-Asamoah/gpu-memory-profiler",
"Documentation, https://github.com/Silas-Asamoah/gpu-memory-profiler/tree/main/docs",
"Repository, https://github.com/Silas-Asamoah/gpu-memory-profiler.git",
"Bug Tracker, https://github.com/Silas-Asamoah/gpu-memory-profiler/issues",
"Release... | twine/6.2.0 CPython/3.10.19 | 2026-02-19T13:23:25.169106 | gpu_memory_profiler-0.2.1.tar.gz | 2,218,894 | 41/f9/742c11bd2a1021ee81430beb56e8ae1593176188207c81ba3955251faaf3/gpu_memory_profiler-0.2.1.tar.gz | source | sdist | null | false | cf992ff929f2016fb997c6b1e6dda471 | 24a256284a9d44d43d051ec3a566ff2bb8e8c13dc0af4dea2f19d7476e0660cc | 41f9742c11bd2a1021ee81430beb56e8ae1593176188207c81ba3955251faaf3 | null | [
"LICENSE"
] | 230 |
2.4 | traceloop-sdk | 0.52.4 | Traceloop Software Development Kit (SDK) for Python | # traceloop-sdk
Traceloop’s Python SDK allows you to easily start monitoring and debugging your LLM execution. Tracing is done in a non-intrusive way, built on top of OpenTelemetry. You can choose to export the traces to Traceloop, or to your existing observability stack.
```python
import openai
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow

Traceloop.init(app_name="joke_generation_service")
@workflow(name="joke_creation")
def create_joke():
completion = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[{"role": "user", "content": "Tell me a joke about opentelemetry"}],
)
return completion.choices[0].message.content
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com>, Tomer Friedman <tomer@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"aiohttp<4,>=3.11.11",
"colorama<0.5.0,>=0.4.6",
"cuid<0.5,>=0.4",
"deprecated<2,>=1.2.14",
"jinja2<4,>=3.1.5",
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-exporter-otlp-proto-grpc<2,>=1.38.0",
"opentelemetry-exporter-otlp-proto-http<2,>=1.38.0",
"opentelemetry-instrumentation-agno",
"opentelem... | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry",
"Documentation, https://traceloop.com/docs/openllmetry"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:23:06.061382 | traceloop_sdk-0.52.4.tar.gz | 304,616 | 91/ac/c913ed3ff4511cff0f9f0d3d68068276516d0fe303ea57bc0d86e06332cd/traceloop_sdk-0.52.4.tar.gz | source | sdist | null | false | 227026e7b226014c339791d2138916e1 | f80a0ab24fd6e7d4145b8ac85708e4fc85c26dec39f1f5054a26eac4ca28c35d | 91acc913ed3ff4511cff0f9f0d3d68068276516d0fe303ea57bc0d86e06332cd | Apache-2.0 | [] | 31,705 |
2.4 | sreekarnv-fastauth | 0.3.1 | NextAuth-inspired pluggable authentication for FastAPI | # FastAuth
[](https://pypi.org/project/sreekarnv-fastauth/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/sreekarnv/fastauth/actions/workflows/ci.yml)
[](https://codecov.io/gh/sreekarnv/fastauth)
[](https://www.python.org/downloads/)
**NextAuth-inspired pluggable authentication for FastAPI.**
FastAuth gives you a complete auth system — credentials, OAuth, email verification, password reset, RBAC, and JWT — without locking you into any particular database or ORM.
---
## Features
- **Multiple providers** — email/password, Google OAuth, GitHub OAuth
- **Pluggable adapters** — SQLAlchemy (SQLite, PostgreSQL, MySQL) or bring your own
- **JWT & database sessions** — stateless tokens or server-side sessions
- **Cookie delivery** — HttpOnly, Secure, SameSite out of the box
- **Email flows** — verification and password reset with customizable transports
- **RBAC** — roles and fine-grained permissions on any route
- **Event hooks** — intercept sign-in/sign-up and modify JWT payloads
- **RS256 / JWKS** — rotate keys and expose a JWKS endpoint for microservices
- **CLI** — scaffold a project, check dependencies, generate secrets
---
## Install
```bash
pip install "sreekarnv-fastauth[standard]"
```
| Extra | Includes |
|-------|----------|
| `standard` | FastAPI, JWT (joserfc), SQLAlchemy, Argon2 |
| `oauth` | httpx (Google, GitHub OAuth) |
| `email` | aiosmtplib, Jinja2 |
| `redis` | redis-py async |
| `postgresql` | asyncpg |
| `cli` | typer, rich |
| `all` | everything |
---
## Quick start
```python
from contextlib import asynccontextmanager
from fastapi import Depends, FastAPI
from fastauth import FastAuth, FastAuthConfig
from fastauth.adapters.sqlalchemy import SQLAlchemyAdapter
from fastauth.api.deps import require_auth
from fastauth.providers.credentials import CredentialsProvider
adapter = SQLAlchemyAdapter(engine_url="sqlite+aiosqlite:///./auth.db")
auth = FastAuth(FastAuthConfig(
secret="change-me", # fastauth generate-secret
providers=[CredentialsProvider()],
adapter=adapter.user,
token_adapter=adapter.token,
))
@asynccontextmanager
async def lifespan(app: FastAPI):
await adapter.create_tables()
yield
app = FastAPI(lifespan=lifespan)
auth.mount(app) # registers /auth/signup, /auth/signin, /auth/signout, …
@app.get("/dashboard")
async def dashboard(user=Depends(require_auth)):
return {"hello": user["email"]}
```
```bash
uvicorn main:app --reload
```
---
## Documentation
Full documentation at **[sreekarnv.github.io/fastauth](https://sreekarnv.github.io/fastauth)**
- [Installation](https://sreekarnv.github.io/fastauth/getting-started/installation/)
- [Quick Start](https://sreekarnv.github.io/fastauth/getting-started/quick-start/)
- [Configuration](https://sreekarnv.github.io/fastauth/getting-started/configuration/)
- [How it Works](https://sreekarnv.github.io/fastauth/concepts/how-it-works/)
- [Guides](https://sreekarnv.github.io/fastauth/guides/basic/)
- [API Reference](https://sreekarnv.github.io/fastauth/api/fastauth/)
---
## License
MIT License - see [LICENSE](./LICENSE) for details.
| text/markdown | null | Sreekar Nutulapati <sreekarnv1@gmail.com> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Framework :: FastAPI",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
... | [] | null | null | >=3.11 | [] | [] | [] | [
"cuid2>=2.0.1",
"pydantic[email]>=2.12.5",
"aiosmtplib>=5.1.0; extra == \"all\"",
"aiosqlite>=0.22.1; extra == \"all\"",
"argon2-cffi>=25.1.0; extra == \"all\"",
"cryptography>=46.0.5; extra == \"all\"",
"fastapi>=0.129.0; extra == \"all\"",
"httpx>=0.28.1; extra == \"all\"",
"jinja2>=3.1.6; extra =... | [] | [] | [] | [
"Homepage, https://github.com/sreekarnv/fastauth",
"Repository, https://github.com/sreekarnv/fastauth",
"Documentation, https://sreekarnv.github.io/fastauth/",
"Bug Tracker, https://github.com/sreekarnv/fastauth/issues",
"Changelog, https://github.com/sreekarnv/fastauth/blob/main/CHANGELOG.md"
] | uv/0.9.26 {"installer":{"name":"uv","version":"0.9.26","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":null,"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T13:23:00.667594 | sreekarnv_fastauth-0.3.1.tar.gz | 31,891 | c9/e3/22ecbffd19565a876ca78adf138fd348ca99c1d2fe6bad12656ef5e16cb0/sreekarnv_fastauth-0.3.1.tar.gz | source | sdist | null | false | 78809a8267c576f5d2e4331fd8581b52 | 7f5a2c717ebb4ad84e51cc157f5c0aba9c8a0e8a01847e1250a32aa97de21933 | c9e322ecbffd19565a876ca78adf138fd348ca99c1d2fe6bad12656ef5e16cb0 | null | [] | 207 |
2.4 | fanuc-rmi | 0.1.5 | Simple FANUC RMI client | # FANUC RMI Client
Python client for FANUC RMI with reusable functions (no CLI entrypoint).
**Install (pip)**
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install fanuc-rmi
```
**Robot URDF files**
A large dataset of robot URDF files is available at: `https://github.com/Daniella1/urdf_files_dataset?tab=readme-ov-file`
**Quick Start**
```python
from fanuc_rmi import RobotClient
robot = RobotClient(
host="192.168.1.22",
startup_port=16001,
main_port=16002,
connect_timeout=5.0,
socket_timeout=100.0,
reader_timeout=100.0,
attempts=5,
retry_delay=0.5,
startup_pause=0.25,
)
robot.connect()
robot.initialize(uframe=0, utool=1)
# Do work...
robot.close()
```
**Motion Commands**
```python
# set speed override (controller-specific range)
robot.speed_override(50)
# wait in seconds (uses sequence_id for ordering)
robot.wait_time(2.5, sequence_id=5)
# linear relative motion (mm / deg)
relative_displacement = {"X": 100, "Y": 0, "Z": 0, "W": 0, "P": 0, "R": 0}
robot.linear_relative(relative_displacement, speed=500, sequence_id=1)
# linear absolute motion (mm / deg)
absolute_position = {"X": 491.320, "Y": -507.016, "Z": 223.397, "W": -179.577, "P": 52.380, "R": -93.233}
robot.linear_absolute(absolute_position, speed=300, sequence_id=2)
# joint relative motion (deg)
relative_joints = {"J1": 0, "J2": 0, "J3": 0, "J4": 0, "J5": 0, "J6": 0, "J7": 0, "J8": 0, "J9": 0}
robot.joint_relative(relative_joints, speed_percentage=40, sequence_id=3)
# joint absolute motion (deg)
absolute_joints = {"J1": 63.252, "J2": 31.488, "J3": -35.602, "J4": 18.504, "J5": -101.313, "J6": 108.650, "J7": 0.000, "J8": 0.000, "J9": 0.000}
robot.joint_absolute(absolute_joints, speed_percentage=40, sequence_id=4)
```
**Read Positions (writes file + returns dict)**
```python
cartesian = robot.read_cartesian_coordinates()
# writes to ./robot_position_cartesian.txt
joints = robot.read_joint_coordinates()
# writes to ./robot_position_joint.txt
```
**Coordinate Conversion (IKPy)**
```python
urdf_path = "robot_models/crx10ial/crx10ial.urdf"
cartesian = robot.read_cartesian_coordinates()
joints = robot.convert_coordinates(cartesian, robot_model_urdf_path=urdf_path, from_type="cartesian", to_type="joint")
cartesian_again = robot.convert_coordinates(joints, robot_model_urdf_path=urdf_path, from_type="joint", to_type="cartesian")
```
**Output Files**
- `robot_position_cartesian.txt`: created automatically when reading Cartesian poses
- `robot_position_joint.txt`: created automatically when reading joint poses
**Notes**
- Requires Python 3.11+.
- `convert_coordinates` requires `ikpy` (`pip install ikpy`).
- Coordinate conversion always uses W/P/R. Provide W/P/R in your input dicts (missing values default to `0.0`).
- Joint dicts must use ascending `J` keys (`J1`, `J2`, `J3`, ...). The function assumes that order matches the URDF joint order.
- URDF units are often meters. If your robot reports millimeters, you may need to scale values before conversion.
- If you cloned the repo, `main.py` is a runnable example.
- For back-to-back moves, increment `sequence_id` to preserve ordering.
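The last note above can be automated with a small wrapper so each motion command gets the next `sequence_id` without manual bookkeeping. This is a hypothetical convenience helper, not part of `fanuc-rmi` itself:

```python
# Hypothetical wrapper (not part of fanuc-rmi): auto-increment sequence_id
# so back-to-back moves always preserve ordering.
from itertools import count

class SequencedRobot:
    """Delegates motion calls to a RobotClient, assigning ascending sequence_ids."""

    def __init__(self, robot, start: int = 1):
        self._robot = robot
        self._seq = count(start)

    def linear_relative(self, displacement, speed):
        return self._robot.linear_relative(
            displacement, speed=speed, sequence_id=next(self._seq)
        )

    def linear_absolute(self, position, speed):
        return self._robot.linear_absolute(
            position, speed=speed, sequence_id=next(self._seq)
        )
```

Each call consumes the next id, so a sequence of moves is always submitted in order.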
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"ikpy"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T13:22:55.630422 | fanuc_rmi-0.1.5.tar.gz | 9,350 | 68/a0/42ee89061cf57489520043e8221f52a2cb812ba61b2460574f2ba1c6f86a/fanuc_rmi-0.1.5.tar.gz | source | sdist | null | false | 55313db894e2d6525f60ccf0fd3ea255 | db50d52d9c84735e475f3b4b7e39c3b7b84e343d36010748bae9d15beaf87241 | 68a042ee89061cf57489520043e8221f52a2cb812ba61b2460574f2ba1c6f86a | null | [] | 224 |
2.4 | POPSRegression | 0.3.7 | Bayesian regression for low-noise data using POPS algorithm | # POPSRegression
**[Try the online demo](https://kermodegroup.github.io/demos/regression-demo.html) from Prof. James Kermode (U Warwick)**
Linear regression scheme from the paper
*Parameter uncertainties for imperfect surrogate models in the low-noise regime*
TD Swinburne and D Perez, [Machine Learning: Science and Technology 2025](http://iopscience.iop.org/article/10.1088/2632-2153/ad9fce)
```bibtex
@article{swinburne2025,
author={Swinburne, Thomas and Perez, Danny},
title={Parameter uncertainties for imperfect surrogate models in the low-noise regime},
journal={Machine Learning: Science and Technology},
doi={10.1088/2632-2153/ad9fce},
year={2025}
}
```
## Installation
There will be a PR on `scikit-learn` "soon", but in the meantime
```bash
pip install POPSRegression
```
## What is POPSRegression?
**Bayesian regression for low-noise data (vanishing aleatoric uncertainty).**
Fits the weights of a regression model using `BayesianRidge`, then estimates weight uncertainties (`sigma_` in `BayesianRidge`) accounting for model misspecification, using the POPS (Pointwise Optimal Parameter Sets) algorithm [1]. The `alpha_` attribute, which estimates aleatoric uncertainty, is not used for predictions, as in the low-noise regime it should correctly be assumed negligible.
Bayesian regression is often used in computational science to fit the weights of a surrogate model which approximates some complex calculation.
In many important cases the target calculation is near-deterministic, or low-noise, meaning the true data has vanishing aleatoric uncertainty. However, there can still be large misspecification uncertainty: the model weights are intrinsically uncertain because the model cannot exactly match the training data.
Existing Bayesian regression schemes based on loss minimization can only estimate epistemic and aleatoric uncertainties. In the low-noise limit, weight uncertainties (`sigma_` in `BayesianRidge`) are significantly underestimated, as they only account for epistemic uncertainties, which decay with increasing data. Predictions then attribute any additional error to aleatoric uncertainty (`alpha_` in `BayesianRidge`), which is erroneous in a low-noise setting. This has significant implications for how uncertainty is propagated using weight uncertainties.
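The decay of purely epistemic weight uncertainty with data can be seen from the ordinary least-squares posterior covariance, proportional to (XᵀX)⁻¹. The snippet below is a generic NumPy sketch of that scaling, not the POPS algorithm itself:

```python
# Epistemic (noise-driven) weight covariance shrinks roughly as 1/N with more data.
import numpy as np

rng = np.random.default_rng(0)

def epistemic_trace(n_samples: int, n_features: int = 5, sigma2: float = 1.0) -> float:
    X = rng.normal(size=(n_samples, n_features))
    cov = sigma2 * np.linalg.inv(X.T @ X)  # OLS posterior covariance of the weights
    return float(np.trace(cov))

small_n, large_n = epistemic_trace(50), epistemic_trace(5000)
print(f"trace at N=50: {small_n:.4f}, at N=5000: {large_n:.4f}")
```

In contrast, the misspecification term POPS estimates does not vanish in this limit, which is exactly why `sigma_` alone underestimates weight uncertainty for low-noise data.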
## Example usage
Here, usage follows `sklearn.linear_model`, inheriting `BayesianRidge`
After running `BayesianRidge.fit(..)`, the `alpha_` attribute is not used for predictions.
The `sigma_` matrix still contains epistemic weight uncertainties, whilst `misspecification_sigma_` contains the POPS uncertainties.
```python
from POPSRegression import POPSRegression
X_train,X_test,y_train,y_test = ...
# Sobol resampling of hypercube with 1.0 samples / training point
model = POPSRegression(resampling_method='sobol', resample_density=1.0)
# fit the model, sample POPS hypercube
model.fit(X_train,y_train)
# Return mean and hypercube std
y_pred, y_std = model.predict(X_test,return_std=True)
# can also return max/min
y_pred, y_std, y_max, y_min = model.predict(X_test,return_std=True,return_bounds=True)
# can also return the epistemic uncertainty separately
y_pred, y_std, y_max, y_min, y_epistemic_std = model.predict(X_test,return_std=True,return_bounds=True,return_epistemic_std=True)
```
## Toy example
An extreme low-dimensional case: fitting a quartic polynomial (P=5 parameters) to N data points drawn from a complex oscillatory function.
- **Green**: two sigma of the `sigma_` weight uncertainty from Bayesian regression (i.e. without the `alpha_` term for aleatoric error)
- **Orange**: two sigma of the `sigma_` plus `misspecification_sigma_` posterior from POPS regression
- **Gray**: min-max of the posterior from POPS regression
As can be seen, the final error bars give very good coverage of the test output
<img src="https://github.com/tomswinburne/POPS-Regression/blob/main/example_image.png?raw=true" alt="POPS regression toy example">
| text/markdown | null | Thomas D Swinburne <thomas.swinburne@cnrs.fr>, Danny Perez <danny_perez@lanl.gov> | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"scikit-learn>=1.6.1",
"scipy>=1.6.0",
"numpy>=1.20.0"
] | [] | [] | [] | [
"Homepage, https://github.com/tomswinburne/POPS-Regression",
"Bug Tracker, https://github.com/tomswinburne/POPS-Regression/issues",
"Documentation, https://github.com/tomswinburne/POPS-Regression/blob/main/README.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:22:39.493153 | popsregression-0.3.7.tar.gz | 10,559 | fc/64/5c114a9e4b27651c0c6116ee7dec2f24f58085528f23fd5635f0d10281e7/popsregression-0.3.7.tar.gz | source | sdist | null | false | 8b04d5b2ab1a4d52be19a83e63f5fab4 | ac771fcede04888eac72d1e554bf4ae927a7f43b56a62da3a567f12ce6ae8739 | fc645c114a9e4b27651c0c6116ee7dec2f24f58085528f23fd5635f0d10281e7 | null | [
"LICENSE"
] | 0 |
2.4 | opentelemetry-instrumentation-writer | 0.52.4 | OpenTelemetry Writer instrumentation | # OpenTelemetry Writer Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-writer/">
<img alt="PyPI version" src="https://badge.fury.io/py/opentelemetry-instrumentation-writer.svg">
</a>
This library allows tracing calls to any of Writer's endpoints sent with the official [Writer Python Library](https://github.com/writer/writer-python).
## Installation
```bash
pip install opentelemetry-instrumentation-writer
```
## Example usage
```python
from opentelemetry.instrumentation.writer import WriterInstrumentor
WriterInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working and makes it easy to debug and evaluate the quality of the outputs.
However, you may want to disable this logging for privacy reasons, as they may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
| text/markdown | null | Yan Tolstoy <yan.talstoi@writer.com>, "Writer, Inc." <dev-feedback@writer.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai>=0.4.11",
"opentelemetry-semantic-conventions>=0.59b0",
"writer; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-writer"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:22:16.911661 | opentelemetry_instrumentation_writer-0.52.4.tar.gz | 167,692 | eb/6a/afefd6641ef77687cf0cbdadd0c3f3995b57b83da4e34a018ace8e8ad90b/opentelemetry_instrumentation_writer-0.52.4.tar.gz | source | sdist | null | false | dc51d9103d40b48a4bc3d545d1108d1d | 4af98c6097c85048471233d84f644ca5070d17f882797514ac1c8c57ab7a467d | eb6aafefd6641ef77687cf0cbdadd0c3f3995b57b83da4e34a018ace8e8ad90b | Apache-2.0 | [] | 30,411 |
2.4 | opentelemetry-instrumentation-weaviate | 0.52.4 | OpenTelemetry Weaviate instrumentation | # OpenTelemetry Weaviate Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-weaviate/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-weaviate.svg">
</a>
This library allows tracing client-side calls to Weaviate vector DB sent with the official [Weaviate library](https://github.com/weaviate/weaviate-python-client).
## Installation
```bash
pip install opentelemetry-instrumentation-weaviate
```
## Example usage
```python
from opentelemetry.instrumentation.weaviate import WeaviateInstrumentor
WeaviateInstrumentor().instrument()
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com>, Tomer Friedman <tomer@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"weaviate-client; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-weaviate"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:22:15.368912 | opentelemetry_instrumentation_weaviate-0.52.4.tar.gz | 602,778 | 25/f7/ae5d314513fa9ecc48f7dd856baf697cb3105e0648f73c70ea579b5e499e/opentelemetry_instrumentation_weaviate-0.52.4.tar.gz | source | sdist | null | false | c9d9c8e31e313405aa612f963ae31207 | 8a1982f647127de6345938ee43c5d5029627e0007b40ed735dd2ec534aa19b12 | 25f7ae5d314513fa9ecc48f7dd856baf697cb3105e0648f73c70ea579b5e499e | Apache-2.0 | [] | 53,272 |
2.4 | opentelemetry-instrumentation-watsonx | 0.52.4 | OpenTelemetry IBM Watsonx Instrumentation | # OpenTelemetry IBM Watsonx Instrumentation
This library allows tracing IBM Watsonx prompts and completions sent with the official [IBM Watson Machine Learning library](https://ibm.github.io/watson-machine-learning-sdk/) and [IBM watsonx.ai library](https://ibm.github.io/watsonx-ai-python-sdk/).
## Installation
```bash
pip install opentelemetry-instrumentation-watsonx
```
## Example usage
```python
from opentelemetry.instrumentation.watsonx import WatsonxInstrumentor
WatsonxInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working and makes it easy to debug and evaluate the quality of the outputs.
However, you may want to disable this logging for privacy reasons, as they may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
## SSL Issue
In case of SSL handshake issues (or similar ones) as follows:
```
E0423 17:04:25.197068000 6150713344 ssl_transport_security.cc:1420] Handshake failed with fatal error SSL_ERROR_SSL: error:100000f7:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER.
```
You can instruct the exporter with an environment variable to ignore SSL errors:
```bash
OTEL_EXPORTER_OTLP_INSECURE=true
```
| text/markdown | null | Guangya Liu <gyliu@ibm.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"ibm-watson-machine-learning; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-watsonx"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:22:14.067441 | opentelemetry_instrumentation_watsonx-0.52.4.tar.gz | 85,325 | 5f/a9/1b3d6291e2c8589556e45244aa73d129652ebbf79f747019b381d15a0bad/opentelemetry_instrumentation_watsonx-0.52.4.tar.gz | source | sdist | null | false | bf1757728a038165868b9f6edcf699f9 | 842233cbb51e8af2aba77cebbfac4e505227ad3ff97fce60b097006194f8bb32 | 5fa91b3d6291e2c8589556e45244aa73d129652ebbf79f747019b381d15a0bad | Apache-2.0 | [] | 53,302 |
2.4 | opentelemetry-instrumentation-voyageai | 0.52.4 | OpenTelemetry Voyage AI instrumentation | # OpenTelemetry Voyage AI Instrumentation
This library allows tracing Voyage AI API calls with OpenTelemetry.
## Installation
```bash
pip install opentelemetry-instrumentation-voyageai
```
## Usage
```python
from opentelemetry.instrumentation.voyageai import VoyageAIInstrumentor
VoyageAIInstrumentor().instrument()
# Now use Voyage AI as usual
import voyageai
client = voyageai.Client()
# Embeddings
result = client.embed(texts=["Hello, world!"], model="voyage-3")
# Reranking
result = client.rerank(
query="What is the capital of France?",
documents=["Paris is the capital of France.", "London is in England."],
model="rerank-2.5"
)
```
## Semantic Conventions
This instrumentation follows the OpenTelemetry GenAI semantic conventions:
- `gen_ai.system`: "voyageai"
- `gen_ai.operation.name`: "embeddings" or "rerank"
- `gen_ai.request.model`: The model name
- `gen_ai.usage.input_tokens`: Token count from the response
- `gen_ai.embeddings.dimension.count`: Embedding vector dimension (for embed only)
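As an illustrative (non-official) sketch, these convention names can be pinned as constants when asserting on exported spans in tests — the attribute values below are placeholders, not real API output:

```python
# Hypothetical helper: check that a captured span carries the GenAI
# semantic-convention attributes listed above.
EXPECTED_EMBED_KEYS = (
    "gen_ai.system",
    "gen_ai.operation.name",
    "gen_ai.request.model",
    "gen_ai.usage.input_tokens",
)

def missing_attributes(span_attributes: dict) -> list:
    """Return the convention keys absent from a span's attribute mapping."""
    return [key for key in EXPECTED_EMBED_KEYS if key not in span_attributes]

# A span missing the token-usage attribute would be flagged:
captured = {
    "gen_ai.system": "voyageai",
    "gen_ai.operation.name": "embeddings",
    "gen_ai.request.model": "voyage-3",
}
```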
| text/markdown | null | Gal Kleinman <gal@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"voyageai; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-voyageai"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:22:12.712170 | opentelemetry_instrumentation_voyageai-0.52.4.tar.gz | 168,927 | d3/ea/30ac474dc5ffe773f564282c49a26571f8fcab35a07ab0821076c177166c/opentelemetry_instrumentation_voyageai-0.52.4.tar.gz | source | sdist | null | false | 7c60e2974504a38b4b3bedb20f819d3c | aae19bf2033a0bbfc4c69ca52f44f885165d443e45ca0ccfba0fc11da3da35ea | d3ea30ac474dc5ffe773f564282c49a26571f8fcab35a07ab0821076c177166c | Apache-2.0 | [] | 30,304 |
2.4 | opentelemetry-instrumentation-vertexai | 0.52.4 | OpenTelemetry Vertex AI instrumentation | # OpenTelemetry VertexAI Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-vertexai/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-vertexai.svg">
</a>
This library allows tracing VertexAI prompts and completions sent with the official [VertexAI library](https://github.com/googleapis/python-aiplatform).
## Installation
```bash
pip install opentelemetry-instrumentation-vertexai
```
## Example usage
```python
from opentelemetry.instrumentation.vertexai import VertexAIInstrumentor
VertexAIInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working and makes it easy to debug and evaluate the quality of its outputs.
However, you may want to disable this logging for privacy reasons, as these attributes may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
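The same switch can also be set from Python, provided it happens before any instrumented call is traced — a minimal sketch, assuming your process is allowed to manage its own environment:

```python
import os

# setdefault keeps an externally supplied value (e.g. one exported in the
# shell as shown above) intact, so shell configuration still wins if present.
os.environ.setdefault("TRACELOOP_TRACE_CONTENT", "false")
```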
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com>, Tomer Friedman <tomer@traceloop.com>, Swaroop <maddisaiswaroop@gmail.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"google-cloud-aiplatform; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-vertexai"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:22:11.547882 | opentelemetry_instrumentation_vertexai-0.52.4.tar.gz | 79,155 | 25/f1/4b1b4f255b8abf1f0d7ae7b37033c917e5c4aa601af2d203c6b2afdac46f/opentelemetry_instrumentation_vertexai-0.52.4.tar.gz | source | sdist | null | false | 99b72e1e44c965aa8cc7665d05101c87 | 64ba9391df7fbdac65068d708b606843a30f4cb4774755f8e94fcaea4cf1376c | 25f14b1b4f255b8abf1f0d7ae7b37033c917e5c4aa601af2d203c6b2afdac46f | Apache-2.0 | [] | 56,059 |
2.4 | opentelemetry-instrumentation-transformers | 0.52.4 | OpenTelemetry transformers instrumentation | # OpenTelemetry HuggingFace Transformers Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-transformers/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-transformers.svg">
</a>
This library allows tracing text generation calls sent with the official [HuggingFace Transformers library](https://github.com/huggingface/transformers).
## Installation
```bash
pip install opentelemetry-instrumentation-transformers
```
## Example usage
```python
from opentelemetry.instrumentation.transformers import TransformersInstrumentor
TransformersInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working and makes it easy to debug and evaluate the quality of its outputs.
However, you may want to disable this logging for privacy reasons, as these attributes may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com>, Tomer Friedman <tomer@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"transformers; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-transformers"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:22:10.548910 | opentelemetry_instrumentation_transformers-0.52.4.tar.gz | 69,327 | 4c/71/56077bb7947315dceb23af24963c3c78c61d830f720dc40e305f97e1a3cf/opentelemetry_instrumentation_transformers-0.52.4.tar.gz | source | sdist | null | false | ebd4502f303854ad23681a3d5fdc2785 | f06a4fea4d00c6b5bb7856b314fe98168dbfa393fa5d4b906a4c2c7935a43724 | 4c7156077bb7947315dceb23af24963c3c78c61d830f720dc40e305f97e1a3cf | Apache-2.0 | [] | 53,355 |
2.4 | opentelemetry-instrumentation-together | 0.52.4 | OpenTelemetry Together AI instrumentation | # OpenTelemetry Together AI Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-together/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-together.svg">
</a>
This library allows tracing calls to any of Together AI's endpoints sent with the official [Together AI Library](https://github.com/togethercomputer/together-python).
## Installation
```bash
pip install opentelemetry-instrumentation-together
```
## Example usage
```python
from opentelemetry.instrumentation.together import TogetherAiInstrumentor
TogetherAiInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working and makes it easy to debug and evaluate the quality of its outputs.
However, you may want to disable this logging for privacy reasons, as these attributes may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com>, Benedikt Wolf <bene25@web.de> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"together; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-together"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:22:09.489992 | opentelemetry_instrumentation_together-0.52.4.tar.gz | 133,702 | d6/bb/9348b01aea0010124dc5a00374e68280d19b802d16b20a9f97734a3e31f3/opentelemetry_instrumentation_together-0.52.4.tar.gz | source | sdist | null | false | 08fa2dc71899457a41910f658d21e92f | f57303fbef85cf42097a7c36a1ee6c2eac244aacd0735bf2e37d0e6964ca221e | d6bb9348b01aea0010124dc5a00374e68280d19b802d16b20a9f97734a3e31f3 | Apache-2.0 | [] | 53,371 |
2.1 | odoo-addon-l10n-jp-summary-invoice | 16.0.1.3.2 | Japan Summary Invoice | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
=====================
Japan Summary Invoice
=====================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:35befa1aa5c8eff98734b9e7f4dca010b02e7893de6fa4644c7e74e790ca7dc5
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Alpha-red.png
:target: https://odoo-community.org/page/development-status
:alt: Alpha
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fl10n--japan-lightgray.png?logo=github
:target: https://github.com/OCA/l10n-japan/tree/16.0/l10n_jp_summary_invoice
:alt: OCA/l10n-japan
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/l10n-japan-16-0/l10n-japan-16-0-l10n_jp_summary_invoice
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/l10n-japan&target_branch=16.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module adds a summary invoice report print functionality based on the
account_billing module.
The printed summary invoice is intended to serve as the Qualified Tax Invoice (適格請求書),
meaning that consumption taxes should be recalculated based on the total amount of the
invoices per tax rate included in the summary invoice.
.. IMPORTANT::
This is an alpha version, the data model and design can change at any time without warning.
Only for development or testing purpose, do not use in production.
`More details on development status <https://odoo-community.org/page/development-status>`_
**Table of contents**
.. contents::
:local:
Configuration
=============
Go to *Invoicing/Accounting > Configuration > Settings* and update the following
settings as necessary:
- **Summary Invoice Remark**: The remark that shows in the header part of the summary
invoice, such as '下記の通り御請求申し上げます。'.
- **Show Sales Order Number**: If selected, the sales order number will be shown for
each line in the summary invoice.
- **Show Invoice Narration**: If selected, the narration will appear for each invoice in
the summary invoice report.
- **Show Invoice Total Amount**: If selected, the total amount per invoice will appear
in the summary invoice report.
Usage
=====
#. Create a billing for customer invoices using the functionality of the account_billing
module, and make adjustments as necessary.
- **Remit-to Bank**: If not selected, the bank account related to the company with
the smallest sequence will show in the printed document.
- **Due Date**: The earliest due date among the selected invoices will be proposed.
Adjust this as necessary as it will show in the printed document.
#. Validate the billing. An invoice for tax adjustment will be created automatically in
case the recalculated tax amount is different from the summary of the tax amounts in
the selected invoices.
#. Print the summary invoice report (合計請求書) from *Print > JP Summary Invoice* of the
billing.
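To illustrate why a tax-adjustment invoice may be created on validation: summing per-invoice rounded taxes can differ from recalculating tax once on the per-rate total, as the qualified invoice rules require. A hypothetical numeric sketch (the rounding mode here is only illustrative; actual rounding practice varies):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_yen(amount: Decimal) -> Decimal:
    # Round to whole yen; half-up is used here purely for illustration.
    return amount.quantize(Decimal("1"), rounding=ROUND_HALF_UP)

rate = Decimal("0.10")  # 10% consumption tax
invoice_bases = [Decimal("1005"), Decimal("1005"), Decimal("1005")]

# Tax rounded per invoice, then summed: 101 + 101 + 101
per_invoice_tax = sum(round_yen(base * rate) for base in invoice_bases)

# Tax recalculated once on the per-rate total of the summary invoice: 3015 * 10%
summary_tax = round_yen(sum(invoice_bases) * rate)

# Difference covered by the automatically created tax-adjustment invoice
adjustment = summary_tax - per_invoice_tax
```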
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/l10n-japan/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/l10n-japan/issues/new?body=module:%20l10n_jp_summary_invoice%0Aversion:%2016.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
~~~~~~~
* Quartile
Contributors
~~~~~~~~~~~~
* `Quartile <https://www.quartile.co>`_:
* Aung Ko Ko Lin
* Yoshi Tashiro
Maintainers
~~~~~~~~~~~
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
This module is part of the `OCA/l10n-japan <https://github.com/OCA/l10n-japan/tree/16.0/l10n_jp_summary_invoice>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| null | Quartile, Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 16.0",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Development Status :: 3 - Alpha"
] | [] | https://github.com/OCA/l10n-japan | null | >=3.10 | [] | [] | [] | [
"odoo-addon-account-billing<16.1dev,>=16.0dev",
"odoo-addon-report-alternative-layout<16.1dev,>=16.0dev",
"odoo<16.1dev,>=16.0a"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T13:22:08.296582 | odoo_addon_l10n_jp_summary_invoice-16.0.1.3.2-py3-none-any.whl | 43,934 | 65/27/f953a5939d9768aae64d890ae4db3fee76b1e3af3aae9dc8bff7700daeb2/odoo_addon_l10n_jp_summary_invoice-16.0.1.3.2-py3-none-any.whl | py3 | bdist_wheel | null | false | fdb4ae0bdeb61131797fe4a9a8240ab5 | b6cd2ea25c0fbfbb11016dfdfa0bba9649e8e460f77dab7295efae86d303a31a | 6527f953a5939d9768aae64d890ae4db3fee76b1e3af3aae9dc8bff7700daeb2 | null | [] | 88 |
2.4 | opentelemetry-instrumentation-sagemaker | 0.52.4 | OpenTelemetry SageMaker instrumentation | # OpenTelemetry SageMaker Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-sagemaker/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-sagemaker.svg">
</a>
This library allows tracing calls to models deployed on Amazon SageMaker and invoked via [Boto3](https://github.com/boto/boto3).
## Installation
```bash
pip install opentelemetry-instrumentation-sagemaker
```
## Example usage
```python
from opentelemetry.instrumentation.sagemaker import SageMakerInstrumentor
SageMakerInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs SageMaker endpoint request bodies and responses to span attributes**. This gives you clear visibility into how your LLM application is working and makes it easy to debug and evaluate the quality of its outputs.
However, you may want to disable this logging for privacy reasons, as these attributes may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
| text/markdown | null | Bobby Lindsey <bwlind@amazon.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"boto3; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-sagemaker"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:22:08.040568 | opentelemetry_instrumentation_sagemaker-0.52.4.tar.gz | 35,687 | 70/3a/c3c1cb79058801507327bf1be10bd2dffd4ad8612bfe12913c7121e6c1d5/opentelemetry_instrumentation_sagemaker-0.52.4.tar.gz | source | sdist | null | false | c7aeb488e017179b5ab07b082e91ce64 | c70d7a107ef82811363a3c1c64560fa3ea2d105321cdbab4fbb4e7064498f0b7 | 703ac3c1cb79058801507327bf1be10bd2dffd4ad8612bfe12913c7121e6c1d5 | Apache-2.0 | [] | 53,322 |
2.4 | opentelemetry-instrumentation-replicate | 0.52.4 | OpenTelemetry Replicate instrumentation | # OpenTelemetry Replicate Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-replicate/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-replicate.svg">
</a>
This library allows tracing Replicate prompts and image generation sent with the official [replicate library](https://github.com/replicate/replicate-python).
## Installation
```bash
pip install opentelemetry-instrumentation-replicate
```
## Example usage
```python
from opentelemetry.instrumentation.replicate import ReplicateInstrumentor
ReplicateInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working and makes it easy to debug and evaluate the quality of its outputs.
However, you may want to disable this logging for privacy reasons, as these attributes may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
| text/markdown | null | Kartik Prajapati <kartik@ktklab.org> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"replicate; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-replicate"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:22:06.969096 | opentelemetry_instrumentation_replicate-0.52.4.tar.gz | 61,118 | e6/7b/ab818daace3291b4faf5f161b4a2fb8b70ec687b9305c2affdf451300159/opentelemetry_instrumentation_replicate-0.52.4.tar.gz | source | sdist | null | false | efa92ff96a5d9f5d6b9775ad848abf99 | 3bb2685b0658889ff8173dbd5f6da19b9ac0f7e30abc417cc2e1d43bc63c362c | e67bab818daace3291b4faf5f161b4a2fb8b70ec687b9305c2affdf451300159 | Apache-2.0 | [] | 53,312 |
2.4 | opentelemetry-instrumentation-qdrant | 0.52.4 | OpenTelemetry Qdrant instrumentation | # OpenTelemetry Qdrant Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-qdrant/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-qdrant.svg">
</a>
This library allows tracing client-side calls to Qdrant vector DB sent with the official [Qdrant client library](https://github.com/qdrant/qdrant-client).
## Installation
```bash
pip install opentelemetry-instrumentation-qdrant
```
## Example usage
```python
from opentelemetry.instrumentation.qdrant import QdrantInstrumentor
QdrantInstrumentor().instrument()
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com>, Tomer Friedman <tomer@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.9 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"qdrant-client; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-qdrant"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:22:05.898974 | opentelemetry_instrumentation_qdrant-0.52.4.tar.gz | 75,046 | 3b/05/fd4198327df0856bd011a5b468dee7c6fac756937cc204f48d1033907940/opentelemetry_instrumentation_qdrant-0.52.4.tar.gz | source | sdist | null | false | 629a255b0fce251719baba4693a91a85 | 85d145d37f5b1cc1529241876be1e54664556c1e9cbe080ac6c21aeb0d206443 | 3b05fd4198327df0856bd011a5b468dee7c6fac756937cc204f48d1033907940 | Apache-2.0 | [] | 53,659 |
2.4 | opentelemetry-instrumentation-pinecone | 0.52.4 | OpenTelemetry Pinecone instrumentation | # OpenTelemetry Pinecone Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-pinecone/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-pinecone.svg">
</a>
This library allows tracing client-side calls to Pinecone vector DB sent with the official [Pinecone library](https://github.com/pinecone-io/pinecone-python-client).
## Installation
```bash
pip install opentelemetry-instrumentation-pinecone
```
## Example usage
```python
from opentelemetry.instrumentation.pinecone import PineconeInstrumentor
PineconeInstrumentor().instrument()
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com>, Tomer Friedman <tomer@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"pinecone-client; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-pinecone"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:22:04.820205 | opentelemetry_instrumentation_pinecone-0.52.4.tar.gz | 124,745 | e9/c8/5be4e5bd6b6f40fc478f9f4b83d6c2b3b0650cf667a0b8c729af8363afe6/opentelemetry_instrumentation_pinecone-0.52.4.tar.gz | source | sdist | null | false | cb9ae11c96f99d7e879f7fc1d12dffc9 | 600b1c0330440e736e5ed96f5d5f1554d8176dfa7ced1ff68eef0f150bb04777 | e9c85be4e5bd6b6f40fc478f9f4b83d6c2b3b0650cf667a0b8c729af8363afe6 | Apache-2.0 | [] | 53,275 |
2.4 | opentelemetry-instrumentation-openai-agents | 0.52.4 | OpenTelemetry OpenAI Agents instrumentation | # OpenTelemetry OpenAI Agents Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-openai-agents/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-openai-agents.svg">
</a>
This library enables tracing of agentic workflows implemented using the [OpenAI Agents framework](https://github.com/openai/openai-agents-python), allowing visibility into agent reasoning, tool usage, and decision-making steps.
## Installation
```bash
pip install opentelemetry-instrumentation-openai-agents
```
## Example usage
```python
from opentelemetry.instrumentation.openai_agents import OpenAIAgentsInstrumentor
OpenAIAgentsInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working and makes it easy to debug and evaluate the quality of its outputs.
However, you may want to disable this logging for privacy reasons, as these attributes may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"openai-agents; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-openai-agents"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:22:03.566012 | opentelemetry_instrumentation_openai_agents-0.52.4.tar.gz | 286,486 | 4e/8f/a03ab03b0c0b87bca58a558f39b32edb02d9c39f933cb7d1e4709f15fe6e/opentelemetry_instrumentation_openai_agents-0.52.4.tar.gz | source | sdist | null | false | 8d14c65cfa37f41f6632c6dc45dbe665 | 82ed24c3fc5cdcb9127cbf1739061942638959796ec598b28be090ca6d3f45bb | 4e8fa03ab03b0c0b87bca58a558f39b32edb02d9c39f933cb7d1e4709f15fe6e | Apache-2.0 | [] | 30,508 |
2.4 | opentelemetry-instrumentation-openai | 0.52.4 | OpenTelemetry OpenAI instrumentation | # OpenTelemetry OpenAI Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-openai/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-openai.svg">
</a>
This library allows tracing OpenAI prompts and completions sent with the official [OpenAI library](https://github.com/openai/openai-python).
## Installation
```bash
pip install opentelemetry-instrumentation-openai
```
## Example usage
```python
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
OpenAIInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working and makes it easy to debug and evaluate the quality of its outputs.
However, you may want to disable this logging for privacy reasons, as these attributes may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com>, Tomer Friedman <tomer@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"openai; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-openai"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:22:02.051358 | opentelemetry_instrumentation_openai-0.52.4.tar.gz | 6,978,369 | 37/91/1f72807c84cfaf5b3386011f56357ea304f9e6c0039857d3fa38a13e4d03/opentelemetry_instrumentation_openai-0.52.4.tar.gz | source | sdist | null | false | a8f31f00f8608a633e718bd00ee3c9e6 | 690b9c14d68b50c87f24006122165e97819557c13ff7f520e2043e2f69f2789c | 37911f72807c84cfaf5b3386011f56357ea304f9e6c0039857d3fa38a13e4d03 | Apache-2.0 | [] | 43,145 |
2.4 | opentelemetry-instrumentation-ollama | 0.52.4 | OpenTelemetry Ollama instrumentation | # OpenTelemetry Ollama Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-ollama/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-ollama.svg">
</a>
This library allows tracing calls to any of Ollama's endpoints sent with the official [Ollama Python Library](https://github.com/ollama/ollama-python).
## Installation
```bash
pip install opentelemetry-instrumentation-ollama
```
## Example usage
```python
from opentelemetry.instrumentation.ollama import OllamaInstrumentor
OllamaInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working and makes it easy to debug and evaluate the quality of its outputs.
However, you may want to disable this logging for privacy reasons, as these attributes may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"ollama; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-ollama"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:22:00.383258 | opentelemetry_instrumentation_ollama-0.52.4.tar.gz | 171,928 | 23/b7/7432820acc3fa44090897a447b6a8c5d9d082c93781f1c6dea13e4f0541b/opentelemetry_instrumentation_ollama-0.52.4.tar.gz | source | sdist | null | false | 2b1dfe4ecca0b7bad1fa7d54dd20a418 | 2f2bdea49b73d075610e4d698d65044e624d11037d8e0cd8c2ba6ec908563267 | 23b77432820acc3fa44090897a447b6a8c5d9d082c93781f1c6dea13e4f0541b | Apache-2.0 | [] | 53,450 |
2.4 | opentelemetry-instrumentation-mistralai | 0.52.4 | OpenTelemetry Mistral AI instrumentation | # OpenTelemetry Mistral AI Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-mistralai/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-mistralai.svg">
</a>
This library allows tracing calls to any of Mistral AI's endpoints sent with the official [Mistral AI library](https://github.com/mistralai-ai/mistralai-python).
## Installation
```bash
pip install opentelemetry-instrumentation-mistralai
```
## Example usage
```python
from opentelemetry.instrumentation.mistralai import MistralAiInstrumentor
MistralAiInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working and makes it easy to debug and evaluate the quality of its outputs.
However, you may want to disable this logging for privacy reasons, as these attributes may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"mistralai; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-mistralai"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:21:59.183918 | opentelemetry_instrumentation_mistralai-0.52.4.tar.gz | 101,044 | 32/0e/b66f2a2c496828af539c5760930858dd9d3c0a599739f7a1a30ab5c8a9ec/opentelemetry_instrumentation_mistralai-0.52.4.tar.gz | source | sdist | null | false | 91a7c5c82df2d9bc80e8de197ba4edf3 | 2ffd3deb39b136cc61bed6faa291e34eaeadfeb70c3effbb64713a3f82416e38 | 320eb66f2a2c496828af539c5760930858dd9d3c0a599739f7a1a30ab5c8a9ec | Apache-2.0 | [] | 53,386 |
2.4 | opentelemetry-instrumentation-milvus | 0.52.4 | OpenTelemetry Milvus instrumentation | # OpenTelemetry Milvus Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-milvus/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-milvus.svg">
</a>
This library allows tracing client-side calls to Milvus vector DB sent with the official [Milvus library](https://github.com/milvus-io/milvus).
## Installation
```bash
pip install opentelemetry-instrumentation-milvus
```
## Example usage
```python
from opentelemetry.instrumentation.milvus import MilvusInstrumentor
MilvusInstrumentor().instrument()
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com>, Tomer Friedman <tomer@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.9 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"pymilvus; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-milvus"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:21:58.049117 | opentelemetry_instrumentation_milvus-0.52.4.tar.gz | 70,514 | 4f/7c/8d90dd50a0db96b636be2a418e7f08213767ca2c5d01f7c5c912936ad470/opentelemetry_instrumentation_milvus-0.52.4.tar.gz | source | sdist | null | false | b391a1217eec4938f97d3ec68665c483 | 8b02a72324fe75f7146738ed8a44e2024effc21bbb0a8090fd0aa2cf158c7066 | 4f7c8d90dd50a0db96b636be2a418e7f08213767ca2c5d01f7c5c912936ad470 | Apache-2.0 | [] | 53,387 |
2.4 | opentelemetry-instrumentation-mcp | 0.52.4 | OpenTelemetry MCP instrumentation | # OpenTelemetry MCP Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-mcp/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-mcp.svg">
</a>
This library allows tracing of agentic workflows implemented with the MCP framework via the [MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk).
## Installation
```bash
pip install opentelemetry-instrumentation-mcp
```
## Example usage
```python
from opentelemetry.instrumentation.mcp import McpInstrumentor
McpInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application's tool usage is working, and makes it easy to debug and evaluate it.
However, you may want to disable this logging for privacy reasons, as the traces may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
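The opt-out described above can also be applied from Python before instrumenting. A minimal sketch of the convention (the helper function below is a hypothetical illustration, not the package's actual internals): content tracing is on unless `TRACELOOP_TRACE_CONTENT` is explicitly set to `false`.

```python
import os

def should_trace_content() -> bool:
    # Hypothetical helper mirroring the documented convention:
    # any value other than "false" (case-insensitive) keeps content tracing on.
    return os.getenv("TRACELOOP_TRACE_CONTENT", "true").strip().lower() != "false"

os.environ["TRACELOOP_TRACE_CONTENT"] = "false"
print(should_trace_content())  # → False
```

Set the variable before calling `.instrument()` so the setting is picked up when tracing starts.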
| text/markdown | null | Felix George <felix.george@ibm.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"mcp; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-mcp"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:21:56.938854 | opentelemetry_instrumentation_mcp-0.52.4.tar.gz | 119,769 | b5/60/67f4ccd22ff8bd70463153773da0d0c9c22517e4cf4980dd345dc095e3b7/opentelemetry_instrumentation_mcp-0.52.4.tar.gz | source | sdist | null | false | 882ae2d73a087356f4b7b0611db82691 | d02866151d6e02af1d66deddec08258df76fb9f494eafe6a7094e550de112fb8 | b56067f4ccd22ff8bd70463153773da0d0c9c22517e4cf4980dd345dc095e3b7 | Apache-2.0 | [] | 54,066 |
2.4 | opentelemetry-instrumentation-marqo | 0.52.4 | OpenTelemetry Marqo instrumentation | # OpenTelemetry Marqo Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-marqo/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-marqo.svg">
</a>
This library allows tracing client-side calls to the Marqo vector DB made with the official [Marqo library](https://github.com/marqo-ai/marqo).
## Installation
```bash
pip install opentelemetry-instrumentation-marqo
```
## Example usage
```python
from opentelemetry.instrumentation.marqo import MarqoInstrumentor
MarqoInstrumentor().instrument()
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com>, Tomer Friedman <tomer@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"marqo; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-marqo"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:21:55.802157 | opentelemetry_instrumentation_marqo-0.52.4.tar.gz | 48,923 | 4a/e6/16a1dcd98a6e2459909a4fcb544b6271ce966f905f0cc0a6f697850153a3/opentelemetry_instrumentation_marqo-0.52.4.tar.gz | source | sdist | null | false | 44310812300b0e3e4e2be8c9b0b45434 | ae151561c46e34f6390872935c28b3fc12a50f8cb2a7b8e77b5450cb7c1d2e14 | 4ae616a1dcd98a6e2459909a4fcb544b6271ce966f905f0cc0a6f697850153a3 | Apache-2.0 | [] | 53,244 |
2.4 | opentelemetry-instrumentation-llamaindex | 0.52.4 | OpenTelemetry LlamaIndex instrumentation | # OpenTelemetry LlamaIndex Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-llamaindex/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-llamaindex.svg">
</a>
This library allows tracing complete LLM applications built with [LlamaIndex](https://github.com/run-llama/llama_index).
## Installation
```bash
pip install opentelemetry-instrumentation-llamaindex
```
## Example usage
```python
from opentelemetry.instrumentation.llamaindex import LlamaIndexInstrumentor
LlamaIndexInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working, and makes it easy to debug and evaluate the quality of its outputs.
However, you may want to disable this logging for privacy reasons, as the traces may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com>, Tomer Friedman <tomer@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"inflection<0.6.0,>=0.5.1",
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"llama-index; extra == \"instruments\"",
"llama-parse; extra == \"llamaparse\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-llamaindex"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:21:54.632247 | opentelemetry_instrumentation_llamaindex-0.52.4.tar.gz | 1,274,118 | 6e/21/71a4d6bcf63b4bbda5f30dc56a8a7c72b9199ce2e38ddec5a7e986a14580/opentelemetry_instrumentation_llamaindex-0.52.4.tar.gz | source | sdist | null | false | 4e03f9a0340ae12bcb1bae938d76748e | 3e955caf88fa7549e84b61c8fa08fe19f471c71e74923d74ca5d81507aa81489 | 6e2171a4d6bcf63b4bbda5f30dc56a8a7c72b9199ce2e38ddec5a7e986a14580 | Apache-2.0 | [] | 53,915 |
2.4 | opentelemetry-instrumentation-langchain | 0.52.4 | OpenTelemetry Langchain instrumentation | # OpenTelemetry Langchain Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-langchain/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-langchain.svg">
</a>
This library allows tracing complete LLM applications built with [Langchain](https://github.com/langchain-ai/langchain).
## Installation
```bash
pip install opentelemetry-instrumentation-langchain
```
## Example usage
```python
from opentelemetry.instrumentation.langchain import LangchainInstrumentor
LangchainInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working, and makes it easy to debug and evaluate the quality of its outputs.
However, you may want to disable this logging for privacy reasons, as the traces may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com>, Tomer Friedman <tomer@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"langchain; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-langchain"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:21:53.186174 | opentelemetry_instrumentation_langchain-0.52.4.tar.gz | 386,054 | 9b/bd/e640b8ba317285957fad7af645e43506f6d56218d8dd82cc213cd4c32594/opentelemetry_instrumentation_langchain-0.52.4.tar.gz | source | sdist | null | false | b76b1f558dfd42ddc5bab93af27c7e9a | 5de90e20781d96b57574a33ef76dacae7f915e48ffafa60f79206ff62d6279f3 | 9bbde640b8ba317285957fad7af645e43506f6d56218d8dd82cc213cd4c32594 | Apache-2.0 | [] | 59,961 |
2.4 | opentelemetry-instrumentation-lancedb | 0.52.4 | OpenTelemetry Lancedb instrumentation | # OpenTelemetry LanceDB Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-lancedb/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-lancedb.svg">
</a>
This library allows tracing client-side calls to LanceDB made with the official [LanceDB library](https://github.com/lancedb/lancedb).
## Installation
```bash
pip install opentelemetry-instrumentation-lancedb
```
## Example usage
```python
from opentelemetry.instrumentation.lancedb import LanceInstrumentor
LanceInstrumentor().instrument()
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com>, Tomer Friedman <tomer@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"lancedb; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-lancedb"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:21:51.995559 | opentelemetry_instrumentation_lancedb-0.52.4.tar.gz | 53,076 | ae/88/56411e73982f65fcecdbccd6da80755f4b5734db0c96ea514d7a040a4fe8/opentelemetry_instrumentation_lancedb-0.52.4.tar.gz | source | sdist | null | false | 2599942a71755f28766292b0082a8143 | 226a98c6bc4834ade56c3cfd69df9d38418af85615f8e00a4b6393016ae4515c | ae8856411e73982f65fcecdbccd6da80755f4b5734db0c96ea514d7a040a4fe8 | Apache-2.0 | [] | 53,250 |
2.4 | ibm-platform-services | 0.73.3 | Python client library for IBM Cloud Platform Services | [](https://github.com/IBM/platform-services-python-sdk/actions/workflows/build.yaml)
[](https://github.com/IBM/platform-services-python-sdk/releases/latest)
[](https://pypi.org/project/ibm-platform-services/)
[](https://pypi.org/project/ibm-platform-services/)

[](https://opensource.org/licenses/Apache-2.0)
[](https://github.com/semantic-release/semantic-release)
[](https://cla-assistant.io/IBM/platform-services-python-sdk)
# IBM Cloud Platform Services Python SDK Version 0.73.3
Python client library to interact with various
[IBM Cloud Platform Service APIs](https://cloud.ibm.com/docs?tab=api-docs&category=platform_services).
## Table of Contents
<!--
The TOC below is generated using the `markdown-toc` node package.
https://github.com/jonschlinkert/markdown-toc
You should regenerate the TOC after making changes to this file.
npx markdown-toc -i README.md
-->
<!-- toc -->
- [Overview](#overview)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Using the SDK](#using-the-sdk)
- [Questions](#questions)
- [Issues](#issues)
- [Open source @ IBM](#open-source--ibm)
- [Contributing](#contributing)
- [License](#license)
<!-- tocstop -->
## Overview
The IBM Cloud Platform Services Python SDK allows developers to programmatically interact with the following
IBM Cloud services:
Service Name | Module Name | Service Class Name
--- | --- | ---
[Case Management](https://cloud.ibm.com/apidocs/case-management?code=python) | case_management_v1 | CaseManagementV1
[Catalog Management](https://cloud.ibm.com/apidocs/resource-catalog/private-catalog?code=python) | catalog_management_v1 | CatalogManagementV1
[Context Based Restrictions](https://cloud.ibm.com/apidocs/context-based-restrictions?code=python) | context_based_restrictions_v1 | ContextBasedRestrictionsV1
[Enterprise Billing Units](https://cloud.ibm.com/apidocs/enterprise-apis/billing-unit?code=python) | enterprise_billing_units_v1 | EnterpriseBillingUnitsV1
[Enterprise Management](https://cloud.ibm.com/apidocs/enterprise-apis/enterprise?code=python) | enterprise_management_v1 | EnterpriseManagementV1
[Enterprise Usage Reports](https://cloud.ibm.com/apidocs/enterprise-apis/resource-usage-reports?code=python) | enterprise_usage_reports_v1 | EnterpriseUsageReportsV1
[Global Catalog](https://cloud.ibm.com/apidocs/resource-catalog/global-catalog?code=python) | global_catalog_v1 | GlobalCatalogV1
[Global Search](https://cloud.ibm.com/apidocs/search?code=python) | global_search_v2 | GlobalSearchV2
[Global Tagging](https://cloud.ibm.com/apidocs/tagging?code=python) | global_tagging_v1 | GlobalTaggingV1
[IAM Access Groups](https://cloud.ibm.com/apidocs/iam-access-groups?code=python) | iam_access_groups_v2 | IamAccessGroupsV2
[IAM Identity Service](https://cloud.ibm.com/apidocs/iam-identity-token-api?code=python) | iam_identity_v1 | IamIdentityV1
[IAM Policy Management](https://cloud.ibm.com/apidocs/iam-policy-management?code=python) | iam_policy_management_v1 | IamPolicyManagementV1
[IBM Cloud Shell](https://cloud.ibm.com/apidocs/cloudshell?code=python) | ibm_cloud_shell_v1 | IbmCloudShellV1
[Open Service Broker](https://cloud.ibm.com/apidocs/resource-controller/ibm-cloud-osb-api?code=python) | open_service_broker_v1 | OpenServiceBrokerV1
[Partner Management APIs](https://cloud.ibm.com/apidocs/partner-apis/partner?code=python) | partner_management_v1 | PartnerManagementV1
[Resource Controller](https://cloud.ibm.com/apidocs/resource-controller/resource-controller?code=python) | resource_controller_v2 | ResourceControllerV2
[Resource Manager](https://cloud.ibm.com/apidocs/resource-controller/resource-manager?code=python) | resource_manager_v2 | ResourceManagerV2
[Usage Metering](https://cloud.ibm.com/apidocs/usage-metering?code=python) | usage_metering_v4 | UsageMeteringV4
[Usage Reports](https://cloud.ibm.com/apidocs/metering-reporting?code=python) | usage_reports_v4 | UsageReportsV4
[User Management](https://cloud.ibm.com/apidocs/user-management?code=python) | user_management_v1 | UserManagementV1
The following services have been relocated to a different SDK project.
Please consult the documentation for each service to determine the new location:
Service Name | Module Name | Service Class Name
--- | --- | ---
[Configuration Governance](https://cloud.ibm.com/apidocs/security-compliance/config?code=python) | configuration_governance_v1 | ConfigurationGovernanceV1
[Posture Management](https://cloud.ibm.com/apidocs/security-compliance/posture?code=python) | posture_management_v1 | PostureManagementV1
## Prerequisites
[ibm-cloud-onboarding]: https://cloud.ibm.com/registration
* An [IBM Cloud][ibm-cloud-onboarding] account.
* An IAM API key to allow the SDK to access your account. Create one [here](https://cloud.ibm.com/iam/apikeys).
* Python 3.10 or above.
## Installation
To install, use `pip`:
```bash
python -m pip install --upgrade ibm-platform-services
```
Then in your code, you can import the appropriate service like this:
```python
from ibm_platform_services.<service-module-name> import *
```
where `<service-module-name>` is the service's module name from the table above.
## Using the SDK
For general SDK usage information, please see [this link](https://github.com/IBM/ibm-cloud-sdk-common/blob/main/README.md).
## Questions
If you are having difficulties using this SDK or have a question about the IBM Cloud services,
please ask a question at
[Stack Overflow](http://stackoverflow.com/questions/ask?tags=ibm-cloud).
## Issues
If you encounter an issue with the project, you are welcome to submit a
[bug report](https://github.com/IBM/platform-services-python-sdk/issues).
Before that, please search for similar issues. It's possible that someone has already reported the problem.
## Open source @ IBM
Find more open source projects on the [IBM GitHub page](http://ibm.github.io/).
## Contributing
See [CONTRIBUTING.md](https://github.com/IBM/platform-services-python-sdk/blob/main/CONTRIBUTING.md).
## License
This SDK is released under the Apache 2.0 license.
The license's full text can be found in [LICENSE](https://github.com/IBM/platform-services-python-sdk/blob/main/LICENSE).
| text/markdown | null | IBM <devxsdk@us.ibm.com> | null | null | null | ibm, cloud, ibm cloud services, ibm cloud platform services | [
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Development Status... | [] | null | null | >=3.10 | [] | [] | [] | [
"ibm_cloud_sdk_core<4.0.0,>=3.24.4",
"coverage<8.0.0,>=7.9.0; extra == \"dev\"",
"pylint<4.0.0,>=3.3.7; extra == \"dev\"",
"pytest<8.0.0,>=7.4.4; extra == \"dev\"",
"pytest-cov<5.0.0,>=4.1.0; extra == \"dev\"",
"responses<1.0.0,>=0.25.7; extra == \"dev\"",
"black<26.0.0,>=25.0.0; extra == \"dev\"",
"b... | [] | [] | [] | [
"Repository, https://github.com/IBM/platform-services-python-sdk",
"Documentation, https://github.com/IBM/platform-services-python-sdk/blob/main/README.md",
"Issues, https://github.com/IBM/platform-services-python-sdk/issues",
"Changelog, https://github.com/IBM/platform-services-python-sdk/blob/main/CHANGELOG... | twine/6.2.0 CPython/3.13.12 | 2026-02-19T13:21:51.052344 | ibm_platform_services-0.73.3.tar.gz | 361,560 | 96/d8/fecda74020bea99ee392155f12cdaf679a51c860ac2f441e505028aba966/ibm_platform_services-0.73.3.tar.gz | source | sdist | null | false | deaa838b1408846bba3e299a45e1d2db | 5b11d9291c8598175d2a57d2509eddb7899ee1a67a96b0d939dce3d2d6001c19 | 96d8fecda74020bea99ee392155f12cdaf679a51c860ac2f441e505028aba966 | null | [
"LICENSE"
] | 18,620 |
2.4 | opentelemetry-instrumentation-haystack | 0.52.4 | OpenTelemetry Haystack instrumentation | # OpenTelemetry Haystack Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-haystack/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-haystack.svg">
</a>
This library allows tracing complete LLM applications built with [Haystack](https://github.com/deepset-ai/haystack).
## Installation
```bash
pip install opentelemetry-instrumentation-haystack
```
## Example usage
```python
from opentelemetry.instrumentation.haystack import HaystackInstrumentor
HaystackInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working, and makes it easy to debug and evaluate the quality of its outputs.
However, you may want to disable this logging for privacy reasons, as the traces may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com>, Tomer Friedman <tomer@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"haystack-ai; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-haystack"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:21:50.891356 | opentelemetry_instrumentation_haystack-0.52.4.tar.gz | 78,006 | db/ca/9f34df2934cbc55407c8427b033a28b5110d99c62741ebbb1fecfbc1ff63/opentelemetry_instrumentation_haystack-0.52.4.tar.gz | source | sdist | null | false | 4d61c08121f2f50bd60eaaa04c06ae02 | fae564adf9148040c5b3cd291df6f53b4beb06750468e86d5c865ccfbe13b4ad | dbca9f34df2934cbc55407c8427b033a28b5110d99c62741ebbb1fecfbc1ff63 | Apache-2.0 | [] | 53,289 |
2.4 | opentelemetry-instrumentation-groq | 0.52.4 | OpenTelemetry Groq instrumentation | # OpenTelemetry Groq Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-groq/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-groq.svg">
</a>
This library allows tracing Groq prompts and completions sent with the official [Groq SDK](https://github.com/groq/groq-python).
## Installation
```bash
pip install opentelemetry-instrumentation-groq
```
## Example usage
```python
from opentelemetry.instrumentation.groq import GroqInstrumentor
GroqInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working, and makes it easy to debug and evaluate the quality of its outputs.
However, you may want to disable this logging for privacy reasons, as the traces may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"groq; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-groq"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:21:49.275557 | opentelemetry_instrumentation_groq-0.52.4.tar.gz | 129,167 | 2a/33/8da77bba20285fa78690b86eca5a365709c969f0d1295f13edb061853488/opentelemetry_instrumentation_groq-0.52.4.tar.gz | source | sdist | null | false | 89635a48ecec151006b577938181ba42 | 22ed27ec863af00c0a0939c43248fe7c796cb12c330298fd448db9ac57001fa5 | 2a338da77bba20285fa78690b86eca5a365709c969f0d1295f13edb061853488 | Apache-2.0 | [] | 30,621 |
2.4 | opentelemetry-instrumentation-google-generativeai | 0.52.4 | OpenTelemetry Google Generative AI instrumentation | # OpenTelemetry Google Generative AI Instrumentation
<a href="https://pypi.org/project/opentelemetry-instrumentation-google-generativeai/">
<img src="https://badge.fury.io/py/opentelemetry-instrumentation-google-generativeai.svg">
</a>
This library allows tracing Google Gemini prompts and completions sent with the official [Google Generative AI library](https://github.com/google-gemini/generative-ai-python).
## Installation
```bash
pip install opentelemetry-instrumentation-google-generativeai
```
## Example usage
```python
from opentelemetry.instrumentation.google_generativeai import GoogleGenerativeAiInstrumentor
GoogleGenerativeAiInstrumentor().instrument()
```
## Privacy
**By default, this instrumentation logs prompts, completions, and embeddings to span attributes**. This gives you clear visibility into how your LLM application is working, and makes it easy to debug and evaluate the quality of its outputs.
However, you may want to disable this logging for privacy reasons, as the traces may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces.
To disable logging, set the `TRACELOOP_TRACE_CONTENT` environment variable to `false`.
```bash
TRACELOOP_TRACE_CONTENT=false
```
| text/markdown | null | Gal Kleinman <gal@traceloop.com>, Nir Gazit <nir@traceloop.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"opentelemetry-api<2,>=1.38.0",
"opentelemetry-instrumentation>=0.59b0",
"opentelemetry-semantic-conventions-ai<0.5.0,>=0.4.13",
"opentelemetry-semantic-conventions>=0.59b0",
"google-genai; extra == \"instruments\""
] | [] | [] | [] | [
"Repository, https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-google-generativeai"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:21:48.229899 | opentelemetry_instrumentation_google_generativeai-0.52.4.tar.gz | 68,587 | 69/6c/0192b9db5f1d7af09b7d7ed959a154fadff9bbcb151986c8020698c542e6/opentelemetry_instrumentation_google_generativeai-0.52.4.tar.gz | source | sdist | null | false | da3f40b28c34e53bb06d269d15f05d41 | 378b006a709376a02b715c055ceea43872efc09ba63e3a0f5cff278885a61854 | 696c0192b9db5f1d7af09b7d7ed959a154fadff9bbcb151986c8020698c542e6 | Apache-2.0 | [] | 32,460 |