metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | entelipy | 0.0.1 | TUI tools for Delta Controls and enteliWEB. | # entelipy
Terminal tools for working with Delta Controls controllers and enteliWEB.
> [!NOTE]
> This project is in early development.
| text/markdown | Matt Kaufman | null | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"textual>=0.50.0"
] | [] | [] | [] | [
"Homepage, https://github.com/makaufmanOS/entelipy",
"Issues, https://github.com/makaufmanOS/entelipy/issues"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-20T01:40:38.226337 | entelipy-0.0.1.tar.gz | 2,373 | 1f/ef/92da8bcc13b193ea51cb88953f1d6f85ffbdb379c4883dba7d644689114f/entelipy-0.0.1.tar.gz | source | sdist | null | false | 8f87ba882dabf615b9a7d0ca4304fc7c | 058c16c17d61ca41878b8b1c3aa3b62657a9ff66420c9403adbbcf121cc64367 | 1fef92da8bcc13b193ea51cb88953f1d6f85ffbdb379c4883dba7d644689114f | null | [
"LICENSE"
] | 255 |
2.4 | portui | 0.1.1 | A terminal UI for monitoring and managing local listening ports | # PorTUI
A terminal UI application for monitoring local listening ports and managing the processes that own them.
## Features
- Real-time display of listening ports with process information
- Configurable columns (Port, IP, Protocol, Process, PID, User, State, Command)
- Sortable by Port, IP, Process, PID, User, State, or Protocol (numerical IP sorting)
- Sort order maintained during auto-refresh
- Inline tree view with real process hierarchy (toggle with `t`) — parent processes are resolved from the OS even when they don't hold ports (e.g. Chrome → Chrome Helper)
- Protocol filter to show TCP only, UDP only, or both (cycle with `h`)
- Real-time text filtering across all fields
- Port detail overlay with full untruncated command line (press Enter, Escape to close)
- Auto-refresh with configurable interval (pauses during interaction)
- Prominent visual indicator when auto-refresh is paused
- Kill processes with choice of graceful (SIGTERM) or force (SIGKILL)
- Cross-platform (macOS, Linux, Windows)
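The "numerical IP sorting" called out above avoids the lexicographic pitfall where `"100.0.0.1"` sorts before `"10.0.0.2"` as a string. A minimal standard-library sketch of the idea (PorTUI's actual implementation may differ; the rows below are illustrative):

```python
import ipaddress

# Example listening-port rows as (ip, port) tuples; values are illustrative.
rows = [
    ("100.0.0.1", 8080),
    ("10.0.0.2", 443),
    ("0.0.0.0", 22),
    ("::1", 5432),
]

def ip_sort_key(row):
    """Sort addresses numerically, grouping IPv4 before IPv6."""
    addr = ipaddress.ip_address(row[0])
    return (addr.version, int(addr))

rows.sort(key=ip_sort_key)
# IPv4 first in true numeric order (0.0.0.0, 10.0.0.2, 100.0.0.1), then ::1
```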
## Installation & Usage
### Run without installing (recommended)
```bash
uvx portui
```
### Install globally
```bash
uv tool install portui
portui
```
### Install with pip
```bash
pip install portui
portui
```
### Run from source
```bash
uv run portui.py
```
## Keyboard Shortcuts
| Key | Action |
|-----|--------|
| `↑/↓` or `j/k` | Navigate rows |
| `Enter` | Show full port/process details |
| `/` | Focus filter input (Escape to clear and exit) |
| `h` | Cycle protocol filter (Both → TCP → UDP) |
| `c` | Toggle column configuration |
| `s` | Cycle sort column (Port → IP → Process → PID → User → State → Protocol) |
| `t` | Toggle tree view (htop-style process hierarchy) |
| `x` | Kill selected process |
| `r` | Manual refresh |
| `p` | Pause/resume auto-refresh |
| `i` | Set refresh interval |
| `?` | Show help |
| `q` | Quit |
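The `x` binding's graceful-versus-force choice maps to SIGTERM (catchable, lets the process clean up) versus SIGKILL (immediate, uncatchable). A POSIX standard-library sketch of that distinction, not PorTUI's actual code (on Windows, `psutil`'s `terminate()`/`kill()` are the portable equivalents):

```python
import os
import signal
import subprocess

def kill_process(pid: int, force: bool = False) -> None:
    """Send SIGKILL when force=True, otherwise the catchable SIGTERM."""
    os.kill(pid, signal.SIGKILL if force else signal.SIGTERM)

# Spawn a throwaway child to stand in for the "selected" process.
child = subprocess.Popen(["sleep", "60"])
kill_process(child.pid)            # graceful: the process may clean up first
child.wait(timeout=5)
# A signal-terminated child reports the negated signal number.
assert child.returncode == -signal.SIGTERM
```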
## Requirements
- Python 3.9+
- [uv](https://docs.astral.sh/uv/) (for `uvx` usage)
| text/markdown | null | null | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"psutil>=5.9.0",
"textual>=0.52.0"
] | [] | [] | [] | [
"Repository, https://github.com/lowtrak/PorTUIi"
] | uv/0.8.17 | 2026-02-20T01:39:00.858482 | portui-0.1.1-py3-none-any.whl | 11,924 | cc/00/470fac548ca826bf96deaeb47a28a72cc5976d8ef7de051767c8c197cea1/portui-0.1.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 113bed50e4425923ea4e44eb3309f431 | 3dbcb1aaa51912a7d8561b2206e9558188c141b5d82ed763c260b7767e70d76e | cc00470fac548ca826bf96deaeb47a28a72cc5976d8ef7de051767c8c197cea1 | null | [] | 228 |
2.4 | clichatty | 0.3.2 | A CLI for texting that integrates with the chatty database | # CLIChatty
A curses-based terminal interface to the
[chatty](https://gitlab.gnome.org/World/Chatty) messaging app's database.
Messages are sent using
[mmcli](https://github.com/linux-mobile-broadband/ModemManager). It is designed
to be used over SSH, so you can message from your computer.
SSH into your phone, then run `clic` or `clichatty` to start the program. For help with
keybindings, press `H`.
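Sending via mmcli is a two-step flow: create an SMS object on the modem, then send it by its object path. A hedged sketch of the command construction (modem index, number, and SMS path are illustrative; CLIChatty's internals may differ):

```python
import subprocess

def build_create_sms(modem: int, number: str, text: str) -> list[str]:
    """mmcli command that creates an SMS object on the given modem."""
    return [
        "mmcli", "-m", str(modem),
        f"--messaging-create-sms=text='{text}',number='{number}'",
    ]

def build_send_sms(sms_path: str) -> list[str]:
    """mmcli command that sends a previously created SMS object."""
    return ["mmcli", "-s", sms_path, "--send"]

create_cmd = build_create_sms(0, "+15551234567", "hello from the terminal")
send_cmd = build_send_sms("/org/freedesktop/ModemManager1/SMS/1")
# subprocess.run(create_cmd, check=True)  # needs a real modem, so not run here
```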
| text/markdown | null | Emmet Weyman <emmetweyman@vt.edu> | null | null | GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/> Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. 
Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. 
Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. 
A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. 
All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. 
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. 
This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. 
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. 
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. 
But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. 
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. 
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. 
Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. 
The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. 
"Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. 
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. 
If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. 
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. <one line to give the program's name and a brief idea of what it does.> Copyright (C) <year> <name of author> This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>. Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: <program> Copyright (C) <year> <name of author> This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 
This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <https://www.gnu.org/licenses/>. The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <https://www.gnu.org/licenses/why-not-lgpl.html>. | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"13","id":"trixie","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T01:38:22.271831 | clichatty-0.3.2-py3-none-any.whl | 33,195 | e6/b4/dbb1c507c16b2172a0304131ea018a8054f716bca02a030bcfe767577403/clichatty-0.3.2-py3-none-any.whl | py3 | bdist_wheel | null | false | f9a14828ed6120e2eff857f398c1f382 | b498e398fad582c4ea0351c985ae2176ea6dde42aa192384b84e879f0a4d59ad | e6b4dbb1c507c16b2172a0304131ea018a8054f716bca02a030bcfe767577403 | null | [
"LICENSE.txt"
] | 226 |
2.4 | pwb-wrapper-for-simpler-uploading-to-commons | 0.3.1 | CLI wrapper around Pywikibot for simpler Wikimedia Commons uploads | It is a wrapper around [pywikibot](https://github.com/wikimedia/pywikibot) to make uploading to Wikimedia Commons from CLI simpler.
## Install
From [PyPI](https://pypi.org/project/pwb-wrapper-for-simpler-uploading-to-commons/):
```bash
pip install pwb-wrapper-for-simpler-uploading-to-commons
```
With shell autocompletion support:
```bash
pip install 'pwb-wrapper-for-simpler-uploading-to-commons[autocomplete]'
```
## Usage
Installed CLI command:
```bash
pwb-upload --file my.jpg --source https://example.com --license 'PD-old' --category 'Sunsets in Batumi' --date '2025-12-27' --desc 'A beautiful sunset from the beach' --target myrenamed.jpg
```
For multiple categories, repeat `--category`:
```bash
pwb-upload --file my.jpg --category 'Sunsets in Batumi' --category 'Evening in Georgia' --date '2025-12-27'
```
Default `source` is `{{own}}`.
Default `license` is `cc-by-4.0`.
`--target` (file name on Commons) is optional.
`--prefix` (prefix for file name on Commons) is optional.
`--i` lets you start from a specific index.
`--recursive` includes files from subfolders when no file is specified.
You can upload with minimal arguments:
```bash
pwb-upload --file my.jpg --category 'Sunsets in Batumi' --date '2025-12-27'
```
Or pass the file as positional argument:
```bash
pwb-upload my.jpg --category 'Sunsets in Batumi' --date '2025-12-27'
```
Or upload without `category` and `date` (set them later on Commons):
```bash
pwb-upload my.jpg
```
If no file is specified, it uploads eligible files from the current folder (no subfolders by default).
To include subfolders:
```bash
pwb-upload --recursive
```
For local development (without installation), you can still run:
```bash
./upload.py --file my.jpg
```
Author: use `me` for the current user.
## Autocompletion
Enable completion for the installed command:
```bash
eval "$(register-python-argcomplete pwb-upload)"
```
## Release to PyPI
Release scripts in this repository:
- `release_minor.sh` bumps `x.y.z` -> `x.(y+1).0`
- `release_patch.sh` bumps `x.y.z` -> `x.y.(z+1)`
- `release-added.sh` uses already staged files and already bumped version in `pyproject.toml` (no `git add .`)
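The version-bump step these scripts perform can be sketched as a tiny shell function; this is illustrative only, and the actual `release_patch.sh` in the repository may implement it differently:

```shell
# Illustrative patch-version bump (x.y.z -> x.y.(z+1)); the real
# release_patch.sh in the repo may differ.
bump_patch() {
  echo "$1" | awk -F. '{ printf "%s.%s.%d\n", $1, $2, $3 + 1 }'
}

bump_patch 0.3.1   # prints 0.3.2
```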
Call them from git (one-time setup):
```bash
git config alias.release-minor '!f() { repo=$(git rev-parse --show-toplevel) && bash "$repo/release_minor.sh" "$@"; }; f'
git config alias.release-patch '!f() { repo=$(git rev-parse --show-toplevel) && bash "$repo/release_patch.sh" "$@"; }; f'
git config alias.release-added '!f() { repo=$(git rev-parse --show-toplevel) && bash "$repo/release-added.sh" "$@"; }; f'
```
Then run:
```bash
git release-minor 'Release message'
git release-patch 'Release message'
git release-added 'Release message'
```
Build distribution files:
```bash
python3 -m build
```
Upload to TestPyPI:
```bash
python3 -m twine upload --repository testpypi dist/*
```
Upload to PyPI:
```bash
python3 -m twine upload dist/*
```
Wikidata item about this tool https://www.wikidata.org/wiki/Q137601716
Commons category https://commons.wikimedia.org/wiki/Category:Uploaded_with_pwb_wrapper_script_by_Vitaly_Zdanevich
SonarCloud https://sonarcloud.io/project/overview?id=vitaly-zdanevich_pwb_wrapper_for_simpler_uploading_to_commons
## See also
Another Python script of mine for [uploading to Commons from gThumb](https://gitlab.com/vitaly-zdanevich/upload_to_commons_with_categories_from_iptc)
My [web extension for uploading to Commons](https://gitlab.com/vitaly-zdanevich-extensions/uploading-to-wikimedia-commons)
[All upload tools](https://commons.wikimedia.org/wiki/Commons:Upload_tools)
[CLI upload tools](https://commons.wikimedia.org/wiki/Commons:Command-line_upload)
| text/markdown | Vitaly Zdanevich | null | null | null | null | wikimedia, commons, pywikibot, upload, cli | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: End Users/Desktop",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3... | [] | null | null | >=3.9 | [] | [] | [] | [
"pywikibot",
"argcomplete; extra == \"autocomplete\""
] | [] | [] | [] | [
"Homepage, https://gitlab.com/vitaly-zdanevich/pwb_wrapper_for_simpler_uploading_to_commons",
"Repository, https://gitlab.com/vitaly-zdanevich/pwb_wrapper_for_simpler_uploading_to_commons",
"Documentation, https://gitlab.com/vitaly-zdanevich/pwb_wrapper_for_simpler_uploading_to_commons",
"Issues, https://gitl... | twine/6.2.0 CPython/3.13.12 | 2026-02-20T01:37:48.997050 | pwb_wrapper_for_simpler_uploading_to_commons-0.3.1.tar.gz | 5,889 | 11/93/5ab5f90c8487afb8e7eeaa5f6ec3c0350350756ef8a2217783db401b8c37/pwb_wrapper_for_simpler_uploading_to_commons-0.3.1.tar.gz | source | sdist | null | false | 0537fcd674c9b4baece352af525bda54 | 57ca3530c2e5a48f316e7bf7e191a9c369832f927d0370eb6330e6864b0bfc56 | 11935ab5f90c8487afb8e7eeaa5f6ec3c0350350756ef8a2217783db401b8c37 | MIT | [
"LICENSE"
] | 203 |
2.4 | xgen-doc2chunk | 0.2.20 | Convert raw documents into AI-understandable context with intelligent text extraction, table detection, and semantic chunking | # xgen-doc2chunk
**xgen-doc2chunk** is a document processing library that converts raw documents into AI-understandable context. It analyzes, restructures, and normalizes content so that language models can reason over documents with higher accuracy and consistency.
## Features
- **Multi-format Support**: Process a wide variety of document formats including:
- PDF (with table detection, OCR fallback, and complex layout handling)
- Microsoft Office: DOCX, DOC, PPTX, PPT, XLSX, XLS
- Korean documents: HWP, HWPX (Hangul Word Processor)
- Text formats: TXT, MD, RTF, CSV, HTML
- Code files: Python, JavaScript, TypeScript, and 20+ languages
- **Intelligent Text Extraction**:
- Preserves document structure (headings, paragraphs, lists)
- Extracts tables as HTML with proper `rowspan`/`colspan` handling
- Handles merged cells and complex table layouts
- Extracts and processes inline images
- **OCR Integration**:
- Pluggable OCR engine architecture
- Supports OpenAI, Anthropic, Google Gemini, and vLLM backends
- Automatic OCR fallback for scanned documents or image-based PDFs
- **Smart Chunking**:
- Semantic text chunking with configurable size and overlap
- Table-aware chunking that preserves table integrity
- Protected regions for code blocks and special content
- **Metadata Extraction**:
- Extracts document metadata (title, author, creation date, etc.)
- Formats metadata in a structured, parseable format
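The library's semantic chunker is more sophisticated than this, but the core size/overlap idea can be sketched independently of the package's internals (the function below is illustrative, not part of the `xgen_doc2chunk` API):

```python
def chunk_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks where consecutive chunks share
    `chunk_overlap` characters, so context at chunk boundaries survives."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap  # how far each new chunk advances
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # last chunk reached the end of the text
    return chunks
```

Each chunk's first `chunk_overlap` characters repeat the tail of the previous chunk, which is what lets downstream retrieval recover sentences that straddle a boundary.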
## Installation
```bash
pip install xgen-doc2chunk
```
Or using uv:
```bash
uv add xgen-doc2chunk
```
## Quick Start
### Basic Usage
```python
from xgen_doc2chunk import DocumentProcessor
# Create processor instance
processor = DocumentProcessor()
# Extract text from a document
text = processor.extract_text("document.pdf")
print(text)
# Extract text and chunk in one step
result = processor.extract_chunks(
"document.pdf",
chunk_size=1000,
chunk_overlap=200
)
# Access chunks
for i, chunk in enumerate(result.chunks):
print(f"Chunk {i + 1}: {chunk[:100]}...")
# Save chunks to markdown file
result.save_to_md("output/chunks.md")
```
### With OCR Processing
```python
from xgen_doc2chunk import DocumentProcessor
from xgen_doc2chunk.ocr.ocr_engine.openai_ocr import OpenAIOCREngine
# Initialize OCR engine
ocr_engine = OpenAIOCREngine(api_key="sk-...", model="gpt-4o")
# Create processor with OCR
processor = DocumentProcessor(ocr_engine=ocr_engine)
# Extract text with OCR processing enabled
text = processor.extract_text(
"scanned_document.pdf",
ocr_processing=True
)
```
## Supported Formats
| Category | Extensions |
|----------|------------|
| Documents | `.pdf`, `.docx`, `.doc`, `.pptx`, `.ppt`, `.hwp`, `.hwpx` |
| Spreadsheets | `.xlsx`, `.xls`, `.csv`, `.tsv` |
| Text | `.txt`, `.md`, `.rtf` |
| Web | `.html`, `.htm`, `.xml` |
| Code | `.py`, `.js`, `.ts`, `.java`, `.cpp`, `.c`, `.go`, `.rs`, and more |
| Config | `.json`, `.yaml`, `.yml`, `.toml`, `.ini`, `.env` |
## Architecture
```
libs/
├── core/
│ ├── document_processor.py # Main entry point
│ ├── processor/ # Format-specific handlers
│ │ ├── pdf_handler.py # PDF processing with V4 engine
│ │ ├── docx_handler.py # DOCX processing
│ │ ├── ppt_handler.py # PowerPoint processing
│ │ ├── excel_handler.py # Excel processing
│ │ ├── hwp_processor.py # HWP 5.0 OLE processing
│ │ ├── hwpx_processor.py # HWPX (ZIP/XML) processing
│ │ └── ...
│ └── functions/
│ └── img_processor.py # Image handling utilities
├── chunking/
│ ├── chunking.py # Main chunking interface
│ ├── text_chunker.py # Text-based chunking
│ ├── table_chunker.py # Table-aware chunking
│ └── page_chunker.py # Page-based chunking
└── ocr/
├── base.py # OCR base class
├── ocr_processor.py # OCR processing utilities
└── ocr_engine/ # OCR engine implementations
├── openai_ocr.py
├── anthropic_ocr.py
├── gemini_ocr.py
└── vllm_ocr.py
```
## Requirements
- Python 3.12+
- Required dependencies are automatically installed (see `pyproject.toml`)
### System Dependencies
For full functionality, you may need:
- **Tesseract OCR**: For local OCR fallback
- **LibreOffice**: For DOC/RTF conversion (optional)
- **Poppler**: For PDF image extraction
## Configuration
```python
# Custom configuration
config = {
"pdf": {
"extract_images": True,
"ocr_fallback": True,
},
"chunking": {
"default_size": 1000,
"default_overlap": 200,
}
}
processor = DocumentProcessor(config=config)
```
## License
Apache License 2.0 - see [LICENSE](LICENSE) for details.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
| text/markdown | null | master0419 <7slwm7@khu.ac.kr> | null | master0419 <7slwm7@khu.ac.kr> | Apache-2.0 | ai, chunking, document-processing, docx, hwp, langchain, llm, ocr, pdf, text-extraction, xlsx | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Scie... | [] | null | null | >=3.12 | [] | [] | [] | [
"beautifulsoup4==4.14.3",
"cachetools==6.2.4",
"chardet==5.2.0",
"docx2pdf==0.1.8",
"langchain-anthropic==1.3.1",
"langchain-aws==1.2.0",
"langchain-community==0.4.1",
"langchain-core==1.2.6",
"langchain-google-genai==4.1.3",
"langchain-openai==1.1.7",
"langchain-text-splitters==1.1.0",
"langc... | [] | [] | [] | [
"Homepage, https://github.com/master0419/doc2chunk",
"Documentation, https://github.com/master0419/doc2chunk#readme",
"Repository, https://github.com/master0419/doc2chunk.git",
"Issues, https://github.com/master0419/doc2chunk/issues",
"Changelog, https://github.com/master0419/doc2chunk/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T01:37:35.572544 | xgen_doc2chunk-0.2.20.tar.gz | 306,123 | 69/a5/75da468c46dbfceb14326363672775d005da469b21cc75914b9fc7814d86/xgen_doc2chunk-0.2.20.tar.gz | source | sdist | null | false | 3611a0dd29f877f9cfd69d394219a726 | b523cb096cd4ccfa40d602aedde7d74ccb4218fdf0f68fb9f73cc61e7cb62a16 | 69a575da468c46dbfceb14326363672775d005da469b21cc75914b9fc7814d86 | null | [
"LICENSE"
] | 205 |
2.4 | obffile | 2026.2.20 | Read Imspector object binary format files (OBF and MSR) | Read Imspector object binary format files (OBF and MSR)
=======================================================
Obffile is a Python library to read image and metadata from
Object Binary Format (OBF) and Measurement Summary Record (MSR) image files.
These files are written by Imspector software to store image and metadata
from microscopy experiments.
:Author: `Christoph Gohlke <https://www.cgohlke.com>`_
:License: BSD-3-Clause
:Version: 2026.2.20
Quickstart
----------
Install the obffile package and all dependencies from the
`Python Package Index <https://pypi.org/project/obffile/>`_::
python -m pip install -U obffile[all]
See `Examples`_ for using the programming interface.
Source code and support are available on
`GitHub <https://github.com/cgohlke/obffile>`_.
Requirements
------------
This revision was tested with the following requirements and dependencies
(other versions may work):
- `CPython <https://www.python.org>`_ 3.11.9, 3.12.10, 3.13.12, 3.14.3 64-bit
- `NumPy <https://pypi.org/project/numpy>`_ 2.4.2
- `Xarray <https://pypi.org/project/xarray>`_ 2026.2.0 (recommended)
- `Matplotlib <https://pypi.org/project/matplotlib/>`_ 3.10.8 (optional)
- `Tifffile <https://pypi.org/project/tifffile/>`_ 2026.2.16 (optional)
Revisions
---------
2026.2.20
- Initial alpha release.
- …
Notes
-----
`Imspector <https://imspectordocs.readthedocs.io>`_ is a software platform for
super-resolution and confocal microscopy developed by Abberior Instruments.
This library is in its early stages of development. It is not feature-complete.
Large, backwards-incompatible changes may occur between revisions.
Specifically, the following features are not supported:
writing or modifying OBF/MSR files, non-OBF based MSR files, reading
MSR-specific non-image data (window positions, hardware configuration),
and compression types other than zlib.
The library has been tested with a limited number of files only.
The Imspector image file formats are documented at
https://imspectordocs.readthedocs.io/en/latest/fileformat.html.
Other implementations for reading Imspector image files are
`msr-reader <https://github.com/hoerlteam/msr-reader>`_,
`obf_support.py <https://github.com/biosciflo/VISION>`_, and
`bio-formats <https://github.com/ome/bioformats>`_.
Examples
--------
Read an image stack and metadata from a OBF file:
>>> with ObfFile('tests/data/Test.obf') as obf:
... assert obf.header.metadata['ome_xml'].startswith('<?xml')
... for stack in obf.stacks:
... _ = stack.name, stack.dims, stack.shape, stack.dtype
... obf.stacks[0].asxarray()
...
<xarray.DataArray 'Abberior STAR RED.Confocal' (T: 18, Z: 2, Y: 339, X: 381)...
array([[[[0, 0, 0, ..., 3, 3, 3],
[0, 0, 0, ..., 4, 3, 3],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]]]], shape=(18, 2, 339, 381), dtype=int16)
Coordinates:
* T (T) float64 144B 0.0 0.0 0.0 0.0 0.0 0.0 ...
* Z (Z) float64 16B 1.25e-07 3.75e-07
* Y (Y) float64 3kB 0.0 2e-07 4e-07 ...
* X (X) float64 3kB 0.0 2.002e-07 4.003e-07 ...
...
View the image stack and metadata in a OBF file from the console::
$ python -m obffile tests/data/Test.obf
| text/x-rst | Christoph Gohlke | cgohlke@cgohlke.com | null | null | BSD-3-Clause | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language ... | [
"any"
] | https://www.cgohlke.com | null | >=3.11 | [] | [] | [] | [
"numpy",
"xarray; extra == \"all\"",
"tifffile; extra == \"all\"",
"matplotlib; extra == \"all\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/cgohlke/obffile/issues",
"Source Code, https://github.com/cgohlke/obffile"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-20T01:33:04.363191 | obffile-2026.2.20.tar.gz | 23,671 | c7/cb/fa3b00e401bf19b073481af13a899bf1edf6688517336a9dffb43f051bfa/obffile-2026.2.20.tar.gz | source | sdist | null | false | 261a904769a1fb13ab8e354bdf788ee4 | a80188c6ca6263ac061a6bddd75ad3ceaa1160381783e8e5d1369583dce7151b | c7cbfa3b00e401bf19b073481af13a899bf1edf6688517336a9dffb43f051bfa | null | [
"LICENSE"
] | 255 |
2.4 | rds-proxy-password-rotation | 0.6.328 | A program to rotate the password of an RDS database accessed via a RDS proxy | # rds-proxy-password-rotation
:warning: **Work in progress** :warning:
- add Terraform module
Python script for multi-user password rotation using RDS and RDS proxy. It supports credentials for the application and the RDS
proxy.
We implemented this logic again, because current implementations
- have no tests
- have no release process
- are not published to PyPI
- have no Docker image available
- have no Terraform module available
## Pre-requisites
1. Python 3.10 or later
2. For each db user:
1. Clone the user in the database and grant the necessary permissions. We suggest adding a `-clone` suffix to the username.
2. Create a secret in AWS Secrets Manager with the following key-value pairs (for every user and its clone):
- `rotation_type`: "AWS RDS"
   - `rotation_usernames`: Optional. The list of usernames that are part of the rotation, e.g. `["app_user", "app_user-clone"]`.
If not provided, `username` is used only.
- `proxy_secret_ids`: Optional. The list of ARNs of the secrets that are attached to the RDS Proxy, e.g.
`["arn:aws:secretsmanager:region:account-id:secret:secret-name"]`. If not provided, the proxy credentials are not adjusted.
- `database_host`: The hostname of the database
- `database_port`: The port of the database
- `database_name`: The name of the database
- `username`: The username for the user
- `password`: The password for the user
This credential will be used by the application to connect to the proxy. You may add additional key-value pairs as needed.
3. If you are using RDS Proxy:
1. Create a secret in AWS Secrets Manager with the following key-value pairs:
- `username`: The username for the user that the proxy will use to connect to the database
- `password`: The password for the user that the proxy will use to connect to the database
2. Attach the secret to the RDS Proxy.
4. The docker image can be pulled from GHCR:
```bash
docker pull ghcr.io/Hapag-Lloyd/rds-proxy-password-rotation:edge
```
:warning: The `edge` tag is used for the latest build. You SHOULD use a specific version tag in production.
## Architecture

## Challenges with RDS and RDS Proxy
RDS Proxy is a fully managed, highly available database proxy for Amazon Relational Database Service (RDS) that makes applications
more scalable, more resilient to database failures, and more secure. It allows applications to pool and share database connections
to improve efficiency and reduce the load on your database instances.
However, RDS Proxy does not support multi-user password rotation out of the box. This script provides a solution to this problem.
Using an RDS Proxy requires a secret in AWS Secrets Manager with the credentials to connect to the database. This secret is used by
the proxy to connect to the database. The proxy allows the application to connect to the database using the same credentials and
then forwards the requests to the database with the same credentials. This means that the credentials in the secret must be valid
in the database at all times. But what if you want to rotate the password for the user that the proxy uses to connect to the
database? You can’t just update the secret in Secrets Manager because the proxy will stop working as soon as the secret is updated.
And you can’t just update the password in the database because the proxy will stop working as soon as the password is updated.
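The usual multi-user workaround, which the `rotation_usernames` setting above reflects, is to alternate between a user and its clone: one set of credentials always remains valid while the other is being rotated. A minimal sketch of the user-selection step (function name and logic are illustrative, not this package's API):

```python
def next_rotation_user(current_username: str, rotation_usernames: list[str]) -> str:
    """Pick the user whose password will be rotated next: the member of the
    rotation group that is *not* currently in use. While its password is
    changed and verified, the current user's credentials stay valid, so the
    proxy and the application never see an invalid secret."""
    others = [u for u in rotation_usernames if u != current_username]
    if len(others) != 1:
        raise ValueError("expected exactly one alternate user in the rotation group")
    return others[0]
```

Once the alternate user's new password is verified against the database, the application secret is switched over to it, and the pair swaps roles on the next rotation.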
## Why password rotation is a good practice
Password rotation is a good idea for several reasons:
1. **Enhanced Security**: Regularly changing passwords reduces the risk of unauthorized access due to compromised credentials.
2. **Mitigates Risk**: Limits the time window an attacker has to exploit a stolen password.
3. **Compliance**: Many regulatory standards and security policies require periodic password changes.
4. **Reduces Impact of Breaches**: If a password is compromised, rotating it ensures that the compromised password is no longer valid.
5. **Encourages Good Practices**: Promotes the use of strong, unique passwords and discourages password reuse.
| text/markdown | Hapag-Lloyd AG | info@hlag.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | https://github.com/Hapag-Lloyd/rds-proxy-password-rotation | null | >=3.10 | [] | [] | [] | [
"aws-lambda-powertools==3.24.0",
"boto3==1.42.53",
"boto3-stubs[secretsmanager]==1.42.53",
"cachetools==6.2.6",
"dependency-injector==4.48.3",
"psycopg[binary]==3.3.3",
"pydantic==2.12.5",
"pytest==8.4.2; extra == \"test\"",
"pytest-cov==7.0.0; extra == \"test\"",
"requests===2.32.5; extra == \"te... | [] | [] | [] | [
"Bug Tracker, https://github.com/Hapag-Lloyd/rds-proxy-password-rotation/issues",
"repository, https://github.com/Hapag-Lloyd/rds-proxy-password-rotation"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T01:32:20.713400 | rds_proxy_password_rotation-0.6.328.tar.gz | 9,866 | d6/86/7fc06623d26b942e366d64a449c3cbad915d5993ddc24e63073a3a7e3e6e/rds_proxy_password_rotation-0.6.328.tar.gz | source | sdist | null | false | 346365033cde0230fbc01c3689ff4cc1 | 1d5a80220ec59373c4dcac53244d15e9e59b57089c196af26cf7fdbf368979cb | d6867fc06623d26b942e366d64a449c3cbad915d5993ddc24e63073a3a7e3e6e | null | [] | 233 |
2.1 | blocks-control-sdk | 0.2.0rc10 | A unified Python interface to interact with popular coding agents. | # Blocks Control SDK
A unified Python interface to interact with popular coding agents.
> Think of it like litellm, but for coding agents
## Supported Agents
- **Claude Code** - Anthropic's Claude
- **Gemini CLI** - Google's Gemini
- **Codex CLI** - OpenAI's Codex
- **Cursor CLI** - Cursor's AI agent
- **OpenCode** - OpenCode CLI agent
- **Kimi CLI** - Kimi's AI agent
## Installation
```bash
pip install blocks-control-sdk
```
You must also have the dependent agent packages installed to use a specific agent.
```bash
npm i -g @anthropic-ai/claude-code # For Claude Code
npm i -g @google/gemini-cli # For Gemini CLI
npm i -g @openai/codex-cli # For Codex CLI
npm i -g @anthropic-ai/cursor-cli # For Cursor CLI (if applicable)
```
## Usage
### Async Streaming
```python
import asyncio
from blocks_control_sdk import ClaudeCode
async def main():
agent = ClaudeCode()
async for message in agent.stream("Write a python script to print 'Hello, World!'"):
if isinstance(message, tuple):
tool_name, args = message
print(f"Tool call: {tool_name} with args {args}")
else:
print(message.content)
if __name__ == "__main__":
asyncio.run(main())
```
### Sync with Callbacks
```python
from blocks_control_sdk import Codex
agent = Codex()
def on_message(notification):
print(notification.message.content)
agent.register_notification(agent.notifications.NOTIFY_MESSAGE_V2, on_message)
agent.query("Write a python script to print 'Hello, World!'")
```
### All Agents
```python
from blocks_control_sdk import ClaudeCode, Codex, GeminiCLI, CursorCLI, OpenCode, KimiCLI
# Claude
claude = ClaudeCode()
# Gemini
gemini = GeminiCLI()
# Codex
codex = Codex()
# Cursor
cursor = CursorCLI()
# OpenCode
opencode = OpenCode()
# Kimi
kimi = KimiCLI()
```
## Environment Variables
```bash
export ANTHROPIC_API_KEY="your-key" # For Claude
export GEMINI_API_KEY="your-key" # For Gemini
export OPENAI_API_KEY="your-key" # For Codex
export CURSOR_API_KEY="your-key" # For Cursor
```
| text/markdown | BlocksOrg | dev@blocks.team | null | null | AGPL | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programmi... | [] | https://github.com/BlocksOrg/blocks-control-sdk | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.11 | 2026-02-20T01:31:46.802109 | blocks_control_sdk-0.2.0rc10.tar.gz | 133,184 | 76/70/84ee056780b832a10c406bff15bd54708dc1d907b4de1592087e1fe02ffb/blocks_control_sdk-0.2.0rc10.tar.gz | source | sdist | null | false | 15aa38a8023dd0507f884841e6523c9d | e19c609e65b929c92d998c2e21e4fd4e70e8395328d9d31028869c73eb909e2c | 767084ee056780b832a10c406bff15bd54708dc1d907b4de1592087e1fe02ffb | null | [] | 146 |
2.4 | gw-agn-watcher | 0.3.9 | Python tools for GW AGN follow-up | # 🛰️ gw_agn_watcher
[](https://pypi.org/project/gw-agn-watcher/)
[](LICENSE)
[](https://github.com/yourusername/gw_agn_watcher/actions)
---
### Overview
**`gw_agn_watcher`** is a Python package for the **automated crossmatching of gravitational-wave (GW) sky maps** from the LIGO–Virgo–KAGRA (LVK) Collaboration with **ZTF alerts and AGN catalogs** using ALeRCE infrastructure.
It enables systematic searches for **electromagnetic counterparts** to compact binary mergers, with a particular focus on mergers that may occur in **active galactic nuclei (AGN) disks**.
---
### Key Features
- 📡 **Ingest LVK skymaps** (`.fits`, HEALPix format)
- 🌌 **Crossmatch ZTF alerts** with AGN catalogs (e.g., Milliquas)
- 🧠 **Apply ML-based filters** using ALeRCE classifiers, Pan-STARRS morphology, and Deep Real/Bogus scores
- 📅 **Temporal and spatial filtering** relative to the GW trigger time and sky localization
- 🎯 **Host-galaxy association** and ranking based on 2σ GW distance posteriors
- 🗺️ **Visualization tools** for probability maps, candidate locations, and sky coverage
- 🔧 **Modular and extensible** — suitable for ToO planning, multi-messenger analyses, and survey follow-up
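The spatial filtering above restricts candidates to the GW sky localization. As a rough, package-independent sketch (the function and variable names here are illustrative, not `gw_agn_watcher`'s API), the smallest set of HEALPix pixels containing 90% of the localization probability can be computed with NumPy:

```python
import numpy as np

def credible_region_mask(prob, level=0.9):
    """Boolean mask of the smallest set of pixels whose summed
    probability reaches `level` (e.g. the 90% credible region)."""
    order = np.argsort(prob)[::-1]          # pixels, most probable first
    csum = np.cumsum(prob[order])           # running total of probability
    n_keep = np.searchsorted(csum, level) + 1
    mask = np.zeros(prob.size, dtype=bool)
    mask[order[:n_keep]] = True
    return mask

# Toy "skymap": 8 pixels with normalized probabilities
prob = np.array([0.40, 0.25, 0.15, 0.10, 0.05, 0.03, 0.01, 0.01])
mask = credible_region_mask(prob, level=0.9)
print(mask.sum())  # 4 pixels cover 90% of the probability
```

Alerts falling outside such a mask can then be discarded before crossmatching against AGN catalogs.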
---
### Installation
```bash
pip install gw_agn_watcher
```
| text/markdown | Hemanth Kumar | hemanth.bommireddy195@gmail.com | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/Hemanthb1/GW_AGN_watcher | null | >=3.8 | [] | [] | [] | [
"numpy",
"astropy",
"matplotlib",
"scipy",
"requests",
"pandas",
"astroquery",
"alphashape",
"psycopg2-binary",
"requests",
"ephem",
"dustmaps",
"dust_extinction",
"ligo.skymap"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.19 | 2026-02-20T01:31:35.696275 | gw_agn_watcher-0.3.9.tar.gz | 20,724 | 51/54/7fc3331ba4c57bc9f314c4ac0d3d792752697fb27f2a4490ae06c2a9b978/gw_agn_watcher-0.3.9.tar.gz | source | sdist | null | false | bf28ec6c2b5c6d171a116521b65869fc | dae6e2f909f608e3f2f62f05a52eddc9229bad402b4728ac404c5aff45863990 | 51547fc3331ba4c57bc9f314c4ac0d3d792752697fb27f2a4490ae06c2a9b978 | null | [
"LICENSE"
] | 226 |
2.4 | verinfast | 0.7.6 | This tool safely and securely analyzes applications for benchmarking. | [](https://github.com/StartupOS/verinfast/actions/workflows/release.yml)
[](https://codecov.io/gh/StartupOS/verinfast)
[](code_of_conduct.md)
# VerinFast™
Scan your codebase to reveal language breakdown, dependencies, OWASP vulnerabilities, cloud costs, and exactly what AI is adding to your application.
## Installation
### pip
```sh
pip install verinfast
```
### pipx
```sh
pipx install verinfast
```
### Poetry
```sh
poetry add verinfast
```
### Docker
```sh
docker build -t verinfast .
docker run --rm -v $(pwd):/usr/src/app verinfast
```
## Requirements
- Python 3.9+ (test with `python3 --version`)
- SSH access to code repositories (test with `git status`)
- Command line tool access to cloud hosting providers (AWS CLI, Azure CLI, or gcloud)
- Your dependency management tools (e.g. `npm`, `yarn`, `maven`, `pip`, `poetry`)
- Outbound internet access (for posting results and fetching dependency metadata)
## Usage
```sh
# Run in a directory with a config.yaml file
verinfast
# Point to a specific config file
verinfast --config=/path/to/config.yaml
# Set a custom output directory
verinfast --output=/path/to/output
# Check the installed version
verinfast --version
```
## Config Options
If you want to inspect the output yourself, set `should_upload: false` and use the flag `--output=/path/to/dir`. This gives you the chance to review what we collect before anything is uploaded. For large repositories it is a lot of information, but we never upload your code or any credentials, only the summary data.
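A minimal illustrative `config.yaml` fragment for keeping results local (only `should_upload` is named in this README; real configs contain additional fields specific to your setup):

```yaml
# config.yaml (fragment) — keep results local for inspection
should_upload: false
```

Then run `verinfast --output=/path/to/dir` to write the summary data to a directory you can review.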
## Troubleshooting
### Python
- Run `python3 -m pip install --upgrade pip setuptools wheel`
### git
- Run `which git`, `git --version`
- Run `ssh -vT git@github.com` to test access to GitHub
### AWS
- Run `which aws`, `aws --version`
### Azure
- Run `az login`, `az --version`
- Run `az account subscription list` to check subscription ID
### GCP
- Run `which gcloud`, `gcloud --version`
### Semgrep
- Run `which semgrep`, `semgrep --version`
Copyright ©2023-2026 Startos Inc.
| text/markdown | null | Jason Nichols <github@verinfast.com>, Sean Conrad <github@verinfast.com> | null | null | null | null | [
"License :: Free for non-commercial use",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | <=3.14,>=3.11 | [] | [] | [] | [
"azure-identity~=1.25.0",
"azure-mgmt-compute~=37.2.0",
"azure-mgmt-monitor~=7.0.0",
"azure-mgmt-network~=30.2.0",
"azure-mgmt-resource~=25.0.0",
"azure-mgmt-storage~=24.0.0",
"azure-monitor-query~=1.4.0",
"boto3~=1.42.0",
"defusedxml~=0.7.1",
"gemfileparser~=0.8.0",
"google-cloud-compute>=1.14.... | [] | [] | [] | [
"Homepage, https://github.com/VerinFast/verinfast",
"Bug Tracker, https://github.com/VerinFast/verinfast/issues",
"Source, https://github.com/VerinFast/verinfast"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T01:29:04.398920 | verinfast-0.7.6.tar.gz | 328,159 | e0/1e/f0530758ecadfe587959be9d9301aebf3f17506dbb831bef262145aa1102/verinfast-0.7.6.tar.gz | source | sdist | null | false | e781164a45e70c4a3492b31ad5016132 | 770fff007c7ab222c509c91bf31cd95fe548ad3054d77acffd5735c5c244ba65 | e01ef0530758ecadfe587959be9d9301aebf3f17506dbb831bef262145aa1102 | null | [
"LICENSE"
] | 227 |
2.4 | relbench | 2.1.0 | RelBench: Relational Deep Learning Benchmark | <p align="center"><img src="https://relbench.stanford.edu/img/logo.png" alt="logo" width="600px" /></p>
----
[](https://relbench.stanford.edu)
[](https://badge.fury.io/py/relbench)
[](https://github.com/snap-stanford/relbench/actions/workflows/testing.yml)
[](https://opensource.org/licenses/MIT)
[](https://twitter.com/RelBench)
<!-- **Get Started:** loading data [<img align="center" src="https://colab.research.google.com/assets/colab-badge.svg" />](https://colab.research.google.com/drive/1PAOktBqh_3QzgAKi53F4JbQxoOuBsUBY?usp=sharing), training model [<img align="center" src="https://colab.research.google.com/assets/colab-badge.svg" />](https://colab.research.google.com/drive/1_z0aKcs5XndEacX1eob6csDuR4DYhGQU?usp=sharing). -->
<!-- [<img align="center" src="https://relbench.stanford.edu/img/favicon.png" width="20px" /> -->
[**Website**](https://relbench.stanford.edu) | [**Position Paper**](https://proceedings.mlr.press/v235/fey24a.html) | [**Benchmark Paper**](https://arxiv.org/abs/2407.20060) | [**Mailing List**](https://groups.google.com/forum/#!forum/relbench/join)
# News
**February 13, 2026: RelBench v2 paper + Temporal Graph Benchmark integration**
The RelBench v2 paper is now accessible as a preprint! Please see the paper on [arXiv](https://arxiv.org/abs/2602.12606).
Alongside our paper, we also integrate the [Temporal Graph Benchmark](https://tgb.complexdatalab.com/) (TGB) into RelBench. TGB integration includes translating time-stamped event streams into normalized relational schemas, which enables direct comparison between temporal graph models and relational deep learning models.
**January 12, 2026: RelBench v2 is now released!**
- Introducing Autocomplete tasks: new task paradigm to predict existing columns in the database.
- 4 new databases: [SALT](https://relbench.stanford.edu/datasets/rel-salt), [RateBeer](https://relbench.stanford.edu/datasets/rel-ratebeer), [arXiv](https://relbench.stanford.edu/datasets/rel-arxiv), and [MIMIC-IV](https://relbench.stanford.edu/datasets/rel-mimic).
- 36 new predictive tasks, including 23 Autocomplete tasks across new and existing databases.
- CTU integration: 70+ relational datasets from the CTU repository via [ReDeLEx](https://github.com/jakubpeleska/redelex).
- Direct SQL database connectivity via [ReDeLEx](https://github.com/jakubpeleska/redelex).
- 4DBInfer integration: 7 relational datasets from the [4DBInfer](https://github.com/awslabs/multi-table-benchmark) repository in RelBench format.
- Bug fixes and performance improvements:
- Optionally include (time-censored) labels as features in the database. ([#327](https://github.com/snap-stanford/relbench/pull/327))
  - Support NDCG metric for link prediction. ([#276](https://github.com/snap-stanford/relbench/pull/276))
  - Optimize SentenceTransformer encoding with Torch for 10-20% faster processing than default NumPy encoding. ([#261](https://github.com/snap-stanford/relbench/pull/261))
  - Enable configuring the RelBench cache directory via an environment variable. ([#336](https://github.com/snap-stanford/relbench/pull/336))
- ... and more (see commit history for details)
**September 26, 2024: RelBench is accepted to the NeurIPS Datasets and Benchmarks track!**
**July 3rd, 2024: RelBench v1 is now released!**
# Overview
<!-- The Relational Deep Learning Benchmark (RelBench) is a collection of realistic, large-scale, and diverse benchmark datasets for machine learning on relational databases. RelBench supports deep learning framework agnostic data loading, task specification, standardized data splitting, and transforming data into graph format. RelBench also provides standardized evaluation metric computations and a leaderboard for tracking progress. -->
<!-- <p align="center"><img src="https://relbench.stanford.edu/img/relbench-fig.png" alt="pipeline" /></p> -->
Relational Deep Learning is a new approach for end-to-end representation learning on data spread across multiple tables, such as in a _relational database_ (see our [position paper](https://relbench.stanford.edu/paper.pdf)). Relational databases are the world's most widely used data management system, and are used for industrial and scientific purposes across many domains. RelBench is a benchmark designed to facilitate efficient, robust and reproducible research on end-to-end deep learning over relational databases.
RelBench v1 contains 7 realistic, large-scale, and diverse relational databases spanning domains including medicine, social networks, e-commerce, and sports. RelBench v2 adds 4 more, for a total of 11 databases. Each database has multiple predictive tasks (66 in total) defined, each carefully scoped to be both challenging and of domain-specific importance. RelBench provides full support for data downloading, task specification, and standardized evaluation in an ML-framework-agnostic manner.
Additionally, RelBench provides a first open-source implementation of a Graph Neural Network based approach to relational deep learning. This implementation uses [PyTorch Geometric](https://github.com/pyg-team/pytorch_geometric) to load the data as a graph and train GNN models, and [PyTorch Frame](https://github.com/pyg-team/pytorch-frame) for modeling tabular data. Finally, there is an open [leaderboard](https://huggingface.co/relbench) for tracking progress.
# Key Papers
[**RelBench: A Benchmark for Deep Learning on Relational Databases**](https://arxiv.org/abs/2407.20060)
This paper details our approach to designing the RelBench benchmark. It also includes a key user study showing that relational deep learning can produce performant models with a fraction of the manual human effort required by typical data science pipelines. This paper is useful for a detailed understanding of RelBench and our initial benchmarking results. If you just want to quickly familiarize with the data and tasks, the [**website**](https://relbench.stanford.edu) is a better place to start.
<!---Joshua Robinson*, Rishabh Ranjan*, Weihua Hu*, Kexin Huang*, Jiaqi Han, Alejandro Dobles, Matthias Fey, Jan Eric Lenssen, Yiwen Yuan, Zecheng Zhang, Xinwei He, Jure Leskovec-->
[**Position: Relational Deep Learning - Graph Representation Learning on Relational Databases (ICML 2024)**](https://proceedings.mlr.press/v235/fey24a.html)
This paper outlines our proposal for how to do end-to-end deep learning on relational databases by combining graph neural networks with deep tabular models. We recommend reading this paper if you want to think about new methods for end-to-end deep learning on relational databases. The paper includes a section on possible directions for future research to give a snapshot of some of the research possibilities in this area.
<!--- Matthias Fey*, Weihua Hu*, Kexin Huang*, Jan Eric Lenssen*, Rishabh Ranjan, Joshua Robinson*, Rex Ying, Jiaxuan You, Jure Leskovec.-->
# Design of RelBench
<p align="center"><img src="https://relbench.stanford.edu/img/relbench-fig.png" alt="logo" width="900px" /></p>
RelBench has the following main components:
1. 11 databases with a total of 66 tasks, both automatically downloadable for ease of use
2. Easy data loading and graph construction from pkey-fkey links
3. Your own model, which can use any deep learning stack since RelBench is framework-agnostic. We provide a first model implementation using PyTorch Geometric and PyTorch Frame.
4. Standardized evaluators - all you need to do is produce a list of predictions for test samples, and RelBench computes metrics to ensure standardized evaluation
5. A leaderboard you can upload your results to for tracking SOTA progress.
# Installation
You can install RelBench using `pip`:
```bash
pip install relbench
```
This will allow usage of the core RelBench data and task loading functionality.
<details markdown="1"><summary>Including CTU datasets</summary>
To use datasets from the CTU repository, use:
```bash
pip install relbench[ctu]
```
If you use the CTU datasets in your work, please cite [ReDeLEx](https://github.com/jakubpeleska/redelex) as below:
```
@misc{peleska2025redelex,
title={REDELEX: A Framework for Relational Deep Learning Exploration},
author={Jakub Peleška and Gustav Šír},
year={2025},
eprint={2506.22199},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2506.22199},
}
```
</details>
<details markdown="1"><summary>Including 4DBInfer datasets</summary>
To use datasets from the 4DBInfer repository, use:
```bash
pip install relbench[dbinfer]
```
If you use the 4DBInfer datasets in your work, please cite [4DBInfer](https://github.com/awslabs/multi-table-benchmark) as below:
```
@article{dbinfer,
title={4DBInfer: A 4D Benchmarking Toolbox for Graph-Centric Predictive Modeling on Relational DBs},
author={Wang, Minjie and Gan, Quan and Wipf, David and Cai, Zhenkun and Li, Ning and Tang, Jianheng and Zhang, Yanlin and Zhang, Zizhao and Mao, Zunyao and Song, Yakun and Wang, Yanbo and Li, Jiahang and Zhang, Han and Yang, Guang and Qin, Xiao and Lei, Chuan and Zhang, Muhan and Zhang, Weinan and Faloutsos, Christos and Zhang, Zheng},
journal={arXiv preprint arXiv:2404.18209},
year={2024}
}
```
</details>
<br>
To additionally use `relbench.modeling`, which requires [PyTorch](https://pytorch.org/), [PyTorch Geometric](https://github.com/pyg-team/pytorch_geometric) and [PyTorch Frame](https://github.com/pyg-team/pytorch-frame), install these dependencies manually or do:
```bash
pip install relbench[full]
```
For the scripts in the `examples` directory, use:
```bash
pip install relbench[example]
```
Then, to run a script:
```bash
git clone https://github.com/snap-stanford/relbench
cd relbench/examples
python gnn_entity.py --dataset rel-f1 --task driver-position
```
# Package Usage
This section provides a brief overview of using the RelBench package. For more in-depth coverage, see the [Tutorials](#tutorials) section. For detailed documentation, please see the code directly.
Imports:
```python
from relbench.base import Table, Database, Dataset, EntityTask
from relbench.datasets import get_dataset
from relbench.tasks import get_task
```
Get a dataset, e.g., `rel-amazon`:
```python
dataset: Dataset = get_dataset("rel-amazon", download=True)
```
<details markdown="1"><summary>Details on downloading and caching behavior.</summary>
RelBench datasets (and tasks) are cached to disk, usually at `~/.cache/relbench`; the location can be set using the `RELBENCH_CACHE_DIR` environment variable. If not present in the cache, `download=True` downloads the data, verifies it against the known hash, and caches it. If present, `download=True` performs the verification and skips downloading if verification succeeds. This is the recommended way.
`download=False` uses the cached data without verification, if present, or processes and caches the data from scratch / raw sources otherwise.
</details>
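The cache location can be overridden before running any RelBench code, for example to point it at a larger disk (the path below is illustrative):

```bash
export RELBENCH_CACHE_DIR=/data/relbench-cache
```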
For faster download, please see [this](https://github.com/snap-stanford/relbench/issues/265).
`dataset` consists of a `Database` object and temporal splitting times `dataset.val_timestamp` and `dataset.test_timestamp`.
To get the database:
```python
db: Database = dataset.get_db()
```
<details markdown="1"><summary>Preventing temporal leakage</summary>
By default, rows with timestamp > `dataset.test_timestamp` are excluded to prevent accidental temporal leakage. The full database can be obtained with:
```python
full_db: Database = dataset.get_db(upto_test_timestamp=False)
```
</details>
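Conceptually, the default cutoff is a per-table timestamp filter. A toy pandas sketch of the idea (not RelBench's internal implementation):

```python
import pandas as pd

# Toy event table; RelBench applies the same cutoff to every table in the database.
df = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "timestamp": pd.to_datetime(["2014-01-01", "2015-06-01", "2016-01-01", "2016-06-01"]),
})
test_timestamp = pd.Timestamp("2016-01-01")

# Rows strictly after the test timestamp are hidden by default.
visible = df[df["timestamp"] <= test_timestamp]
print(len(visible))  # 3
```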
Various tasks can be defined on a dataset. For example, to get the `user-churn` task for `rel-amazon`:
```python
task: EntityTask = get_task("rel-amazon", "user-churn", download=True)
```
A task provides train/val/test tables:
```python
train_table: Table = task.get_table("train")
val_table: Table = task.get_table("val")
test_table: Table = task.get_table("test")
```
<details markdown="1"><summary>Preventing test leakage</summary>
By default, the target labels are hidden from the test table to prevent accidental data leakage. The full test table can be obtained with:
```python
full_test_table: Table = task.get_table("test", mask_input_cols=False)
```
</details>
You can build your model on top of the database and the task tables. After training and validation, you can make predictions from your model on the test table. Supposing your predictions `test_pred` are a NumPy array following the row order of `task.test_table`, you can call the following to get the evaluation metrics:
```python
task.evaluate(test_pred)
```
Additionally, you can evaluate validation (or training) predictions as follows:
```python
task.evaluate(val_pred, val_table)
```
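Conceptually, evaluation reduces to computing standard metrics over label and prediction arrays that share the task table's row order. A toy NumPy sketch for a binary task (illustrative only; `task.evaluate` selects the metrics appropriate to each task):

```python
import numpy as np

# Hypothetical binary labels and scores, aligned with the test table's rows.
y_true = np.array([0, 1, 1, 0, 1])
test_pred = np.array([0.6, 0.8, 0.4, 0.4, 0.9])

# Threshold scores at 0.5 and measure agreement with the labels.
accuracy = float(((test_pred >= 0.5).astype(int) == y_true).mean())
print(accuracy)  # 0.6
```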
# Tutorials
| Notebook | Try on Colab | Description |
|----------|--------------|---------------------------------------------------------|
| [load_data.ipynb](tutorials/load_data.ipynb) | [<img align="center" src="https://colab.research.google.com/assets/colab-badge.svg" />](https://colab.research.google.com/github/snap-stanford/relbench/blob/main/tutorials/load_data.ipynb) | Load and explore RelBench data
| [train_model.ipynb](tutorials/train_model.ipynb) | [<img align="center" src="https://colab.research.google.com/assets/colab-badge.svg" />](https://colab.research.google.com/github/snap-stanford/relbench/blob/main/tutorials/train_model.ipynb)| Train your first GNN-based model on RelBench
| [custom_dataset.ipynb](tutorials/custom_dataset.ipynb) | [<img align="center" src="https://colab.research.google.com/assets/colab-badge.svg" />](https://colab.research.google.com/github/snap-stanford/relbench/blob/main/tutorials/custom_dataset.ipynb) | Use your own data in RelBench
| [custom_task.ipynb](tutorials/custom_task.ipynb) | [<img align="center" src="https://colab.research.google.com/assets/colab-badge.svg" />](https://colab.research.google.com/github/snap-stanford/relbench/blob/main/tutorials/custom_task.ipynb)| Define your own ML tasks in RelBench
# Contributing
Please check out [CONTRIBUTING.md](CONTRIBUTING.md) if you are interested in contributing datasets, tasks, bug fixes, etc. to RelBench.
# Cite RelBench
If you use RelBench in your work, please cite our position and benchmark papers:
```bibtex
@inproceedings{rdl,
title={Position: Relational Deep Learning - Graph Representation Learning on Relational Databases},
author={Fey, Matthias and Hu, Weihua and Huang, Kexin and Lenssen, Jan Eric and Ranjan, Rishabh and Robinson, Joshua and Ying, Rex and You, Jiaxuan and Leskovec, Jure},
booktitle={Forty-first International Conference on Machine Learning}
}
```
```bibtex
@misc{relbench,
title={RelBench: A Benchmark for Deep Learning on Relational Databases},
author={Joshua Robinson and Rishabh Ranjan and Weihua Hu and Kexin Huang and Jiaqi Han and Alejandro Dobles and Matthias Fey and Jan E. Lenssen and Yiwen Yuan and Zecheng Zhang and Xinwei He and Jure Leskovec},
year={2024},
eprint={2407.20060},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2407.20060},
}
```
If you use RelBench v2 in your work, please cite:
```bibtex
@misc{gu2026relbenchv2,
title={{RelBench} v2: A Large-Scale Benchmark and Repository for Relational Data},
author={Justin Gu and Rishabh Ranjan and Charilaos Kanatsoulis and Haiming Tang and Martin Jurkovic and Valter Hudovernik and Mark Znidar and Pranshu Chaturvedi and Parth Shroff and Fengyu Li and Jure Leskovec},
year={2026},
eprint={2602.12606},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2602.12606},
}
```
| text/markdown | null | RelBench Team <relbench@cs.stanford.edu> | null | null | null | null | [
"License :: OSI Approved :: MIT License"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pandas",
"pooch",
"pyarrow",
"numpy",
"duckdb",
"scikit-learn<=1.6.1",
"typing-extensions",
"datasets",
"redelex; extra == \"ctu\"",
"dbinfer-relbench-adapter; extra == \"dbinfer\"",
"pre-commit; extra == \"dev\"",
"sentence-transformers; extra == \"example\"",
"pytorch_frame[full]; extra =... | [] | [] | [] | [
"Home, https://relbench.stanford.edu"
] | twine/6.2.0 CPython/3.11.7 | 2026-02-20T01:28:39.131973 | relbench-2.1.0.tar.gz | 73,818 | dc/92/0768f993c8f474ed3ff6e133780bb0a13b578c575361605af2910e697b63/relbench-2.1.0.tar.gz | source | sdist | null | false | 77646de8766ace2cd06a00a4237e2d94 | b201584d3bd51b10b4513c029e38b24890be163f66ff9f519beda92bc143ef3b | dc920768f993c8f474ed3ff6e133780bb0a13b578c575361605af2910e697b63 | null | [
"LICENSE"
] | 301 |
2.4 | zoo_mcp | 0.10.3 | An MCP server utilizing Zoo's various tools | # Zoo Model Context Protocol (MCP) Server
An [MCP server](https://modelcontextprotocol.io/docs/getting-started/intro) housing various Zoo built utilities
<!-- mcp-name: io.github.KittyCAD/zoo-mcp -->
## Prerequisites
1. An API key for Zoo, get one [here](https://zoo.dev/account)
2. An environment variable `ZOO_API_TOKEN` set to your API key
```bash
export ZOO_API_TOKEN="your_api_key_here"
```
## Installation
1. [Ensure uv has been installed](https://docs.astral.sh/uv/getting-started/installation/)
2. [Create a uv environment](https://docs.astral.sh/uv/pip/environments/)
```bash
uv venv
```
3. [Activate your uv environment (Optional)](https://docs.astral.sh/uv/pip/environments/#using-a-virtual-environment)
4. Install the package from GitHub
```bash
uv pip install git+ssh://git@github.com/KittyCAD/zoo-mcp.git
```
## Running the Server
The server can be started by using [uvx](https://docs.astral.sh/uv/guides/tools/#running-tools)
```bash
uvx zoo-mcp
```
The server can be started locally by using uv and the zoo_mcp module
```bash
uv run -m zoo_mcp
```
The server can also be run with the [mcp package](https://github.com/modelcontextprotocol/python-sdk)
```bash
uv run mcp run src/zoo_mcp/server.py
```
## Integrations
The server can be used as is by [running the server](#running-the-server) or by importing it directly into your Python code.
```python
from zoo_mcp.server import mcp
mcp.run()
```
Individual tools can be used in your own Python code as well:
```python
from mcp.server.fastmcp import FastMCP
from zoo_mcp.ai_tools import text_to_cad
mcp = FastMCP(name="My Example Server")
@mcp.tool()
async def my_text_to_cad(prompt: str) -> str:
    """
    Example tool that uses the text_to_cad function from zoo_mcp.ai_tools
    """
    return await text_to_cad(prompt=prompt)
```
The server can be integrated with [Claude desktop](https://claude.ai/download) using the following command
```bash
uv run mcp install src/zoo_mcp/server.py
```
The server can also be integrated with [Claude Code](https://docs.anthropic.com/en/docs/claude-code/overview) using the following command
```bash
claude mcp add --scope project "Zoo-MCP" uv -- --directory "$PWD"/src/zoo_mcp run server.py
```
The server can also be tested using the [MCP Inspector](https://modelcontextprotocol.io/legacy/tools/inspector#python)
```bash
uv run mcp dev src/zoo_mcp/server.py
```
For running with [codex-cli](https://github.com/openai/codex)
```bash
codex \
-c 'mcp_servers.zoo.command="uvx"' \
-c 'mcp_servers.zoo.args=["zoo-mcp"]' \
-c mcp_servers.zoo.env.ZOO_API_TOKEN="$ZOO_API_TOKEN"
```
You can also use the helper script included in this repo:
```bash
./codex-zoo.sh
```
The script prompts for a request, runs Codex with the Zoo MCP server, and saves a JSONL transcript (including token usage) to `codex-run-<timestamp>.jsonl`.
## Contributing
Contributions are welcome! Please open an issue or submit a pull request on the [GitHub repository](https://github.com/KittyCAD/zoo-mcp)
PRs will need to pass tests and linting before being merged.
### [ruff](https://docs.astral.sh/ruff/) is used for linting and formatting.
```bash
uvx ruff check
uvx ruff format
```
### [ty](https://docs.astral.sh/ty/) is used for type checking.
```bash
uvx ty check
```
## Testing
The server includes tests located in [`tests`](tests). To run the tests, use the following command:
```bash
uv run pytest -n auto
```
| text/markdown | Ryan Barton | Ryan Barton <ryan@zoo.dev> | Ryan Barton | Ryan Barton <ryan@zoo.dev> | null | zoo, mcp, kittycad, 3d, modeling, server | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming La... | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"aiofiles<26.0",
"kittycad<2.0",
"mcp[cli]<2.0",
"pillow<13.0",
"truststore<1.0",
"zoo-kcl>=0.3.122"
] | [] | [] | [] | [
"Repository, https://github.com/KittyCAD/zoo-mcp"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T01:27:20.298443 | zoo_mcp-0.10.3-py3-none-any.whl | 31,118 | 4c/c6/715cba4d1d3d51d18450275032d01b4f3505fc0bfc9a4abaff40fb02cd14/zoo_mcp-0.10.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 649a677952963a8086cd4c79a66535ef | fa965335e7641c31802fece890ee686377f44862569d5b9d0e2336c4bbd5637a | 4cc6715cba4d1d3d51d18450275032d01b4f3505fc0bfc9a4abaff40fb02cd14 | MIT | [
"LICENSE"
] | 0 |
2.4 | astris | 0.1.3 | Minimal Python framework for building static websites with component-style APIs. | # Astris
[](https://github.com/KeoH/astris/actions/workflows/tests.yml)
Astris is a minimal Python framework for building static websites using component-style APIs.
## Documentation
- User documentation (for building websites): `docs/user`
- Internal framework-maintainer documentation: `docs/internal`
## Installation
```bash
pip install astris
```
## Quick start with CLI
Create a new project scaffold:
```bash
uvx astris new my-project
cd my-project
uv run python main.py
```
Build static files:
```bash
uv run astris build
```
## Basic usage
```python
from astris import AstrisApp
from astris.lib import Body, H1, Html
app = AstrisApp()
@app.page("/")
def home():
    return Html(children=[
        Body(children=[
            H1(children=["Hello from Astris"]),
        ])
    ])

if __name__ == "__main__":
    app.run_dev()
```
## HTML tags API
`astris.lib` now provides wrappers for the modern standard HTML tag set (A to Z).
Each wrapper class includes an English docstring describing the underlying HTML element.
Void elements (for example `Img`, `Br`, `Input`, `Meta`) render without closing tags.
Layout helpers (`Container`, `Column`, `Row`) live in `astris.layout`.
```python
from astris.layout import Container, Column, Row
```
## JSON content collections (read-only)
You can register a folder of `.json` files as a read-only collection and generate detail pages from a Python template.
```python
from astris import AstrisApp, Text, register_json_collection
from astris.lib import Body, Div, Html
app = AstrisApp()
def post_template(entry: dict):
    return Html(children=[
        Body(children=[
            Div(children=[Text(entry["title"])]),
        ])
    ])

posts = register_json_collection(
    app,
    name="posts",
    directory="content/posts",
    template=post_template,
    api_prefix="/api/content",
)
print(posts.page_links())
```
Generated output:
- Static detail pages: `/posts/<slug>` (built as `dist/posts/<slug>.html`)
- Dev JSON API (read-only):
- `GET /api/content/posts`
- `GET /api/content/posts/<slug>`
## Head assets (CDN)
You can register external CSS and JavaScript files that Astris injects into the page `<head>`.
This works in both `run_dev()` and `build()` outputs.
```python
from astris import AstrisApp
app = AstrisApp()
app.add_head_link(
"https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/css/bootstrap.min.css"
)
app.add_head_script(
"https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/js/bootstrap.bundle.min.js"
)
```
## Development
```bash
uv sync --group dev
uv pip install -e .
uv run --group dev pytest
```
## Documentation (local)
Build user documentation in strict mode:
```bash
make docs
```
Serve user documentation with live reload:
```bash
make docs-serve
```
## Continuous Integration
GitHub Actions runs tests on push and pull request events targeting main using Python 3.11, 3.12, and 3.13.
The workflow is defined in `.github/workflows/tests.yml`.
## Release checklist
```bash
make release-check
```
Equivalent manual commands:
```bash
uv sync --group dev
uv run --group dev pytest
uv run --group dev python -m build
uv run --group dev twine check dist/*.whl dist/*.tar.gz
```
`example.py` in this repository is an internal framework demo and not the standard end-user workflow.
## Agent skill: release-prep-astris
This repository includes a workspace skill at `.agent/skills/release-prep-astris`.
Use this skill when preparing a new Astris version and you want a repeatable release-prep workflow that covers:
- Version alignment across project metadata.
- Changelog and internal release notes updates.
- Local validation checks (`pytest`, `pyright`, `release-check`).
By default, this skill prepares the repository for release but does not publish artifacts to TestPyPI or PyPI unless explicitly requested.
| text/markdown | null | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Software Development :: ... | [] | null | null | >=3.11 | [] | [] | [] | [
"fastapi>=0.129.0",
"uvicorn>=0.41.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.0 | 2026-02-20T01:27:11.554563 | astris-0.1.3.tar.gz | 19,703 | 80/45/a163f401c1f2f9868db2aef4495727d11fa51ccf197d2c596a86bc6a0288/astris-0.1.3.tar.gz | source | sdist | null | false | a1a1e4038f89ceedfd265d882c1aff3a | ea7263a4335ad6ee37b6a2307913540a8920937cee72b332e7c364a6e982710e | 8045a163f401c1f2f9868db2aef4495727d11fa51ccf197d2c596a86bc6a0288 | MIT | [
"LICENSE"
] | 227 |
2.4 | lorax-arg | 0.1.6 | Interactive ARG visualization for genomics | # Lorax
Lorax is a web-native platform for real-time, interactive visualization and exploration of population-scale Ancestral Recombination Graphs.
- CLI entrypoint: `lorax` (alias: `lorax-arg`)

## Key features
- Scalable rendering: interactive visualization of ARGs at biobank scale.
- Genome-wide navigation: traverse genomic coordinates and inspect local genealogies at recombination breakpoints.
- Mutation-aware: trace variant inheritance through local genealogies.
- Metadata integration: filter, color, and subset samples by population labels, phenotypes, or custom annotations.
- Flexible inputs: supports `.trees`, `.trees.tsz` (tskit tree sequences), and CSV-based ARG representations.
## Quick start (pip)
```bash
pip install lorax-arg
lorax         # opens Lorax in a browser
lorax --file  # directly load a file into Lorax (preferred for large files)
```
## Input Formats
- Tree sequences: `.trees` and `.trees.tsz` files (compatible with output from tskit/tsinfer/tsdate, Relate, and ARGweaver)
- CSV: one row per recombination interval, with columns for genomic position, Newick tree string, tree depth, and optional metadata. Ideal for custom inference pipelines or non-model organisms.
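A CSV along these lines can be produced with Python's standard `csv` module. The header names below are hypothetical; check the Lorax documentation for the exact columns it expects:

```python
import csv
import io

# Illustrative rows: one local tree per recombination interval.
# Header names are hypothetical, not Lorax's confirmed schema.
rows = [
    {"position": 15000, "newick": "((A:0.1,B:0.1):0.2,C:0.3);", "depth": 0.3},
    {"position": 42000, "newick": "((A:0.1,C:0.2):0.1,B:0.3);", "depth": 0.3},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["position", "newick", "depth"])
writer.writeheader()
writer.writerows(rows)  # Newick strings containing commas are quoted automatically

print(buf.getvalue().splitlines()[0])  # position,newick,depth
```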
## Use Cases
- Explore signatures of natural selection in local genealogies.
- Visualize introgression and admixture across genomic regions.
- Trace ancestry of specific samples through population-scale ARGs.
- Navigate from GWAS hits or functional annotations to underlying genealogical structure.
## Links
- Web platform: https://lorax.ucsc.edu
- Source code: https://github.com/pratikkatte/lorax
| text/markdown | Lorax Team | null | null | null | MIT | lorax, arg, ancestral recombination graph, tree sequence, visualization, genomics, evolution, phylogenetics, bioinformatics | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Topic :: Scientific/Eng... | [] | null | null | >=3.10 | [] | [] | [] | [
"fastapi>=0.100.0",
"uvicorn>=0.20.0",
"python-socketio>=5.0.0",
"python-dotenv>=1.0.0",
"click>=8.0.0",
"aiofiles>=24.0.0",
"aiohttp>=3.0.0",
"numpy>=2.0.0",
"pandas>=2.0.0",
"pyarrow>=18.0.0",
"tskit>=1.0.0",
"tszip>=0.2.0",
"redis>=5.0.0",
"google-cloud-storage>=3.0.0",
"starlette>=0.... | [] | [] | [] | [
"Homepage, https://lorax.in",
"Documentation, https://lorax.in",
"Source, https://github.com/pratikkatte/lorax",
"Issues, https://github.com/pratikkatte/lorax/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T01:25:20.576794 | lorax_arg-0.1.6.tar.gz | 2,295,822 | e0/d8/a1e9fa9973b4150fd89498ad60efa2e0d1e1fb0af72ded9f77e5ffa6e1c0/lorax_arg-0.1.6.tar.gz | source | sdist | null | false | 952e035c3bb44d19ebb008ffc0d43515 | 0bf03f366072e6bcbbc1a2d242228d477eee49317aeb9294e38c6f4b0b1b4e28 | e0d8a1e9fa9973b4150fd89498ad60efa2e0d1e1fb0af72ded9f77e5ffa6e1c0 | null | [
"LICENSE"
] | 228 |
2.4 | telegrambotcli | 0.5.2 | A utility for quickly generating projects on aiogram 3 | ## 🚀 TelegramBotCLI
A lightweight command-line utility for quickly generating professional project structures for **Aiogram 3** bots. Stop wasting time on boilerplate and start coding your logic instantly.
## ✨ Features
- **Standard & Advanced Templates:** Choose between a lightweight setup or a production-ready structure.
- **Pro Components:** Includes **Anti-flood Middleware** and **Admin Filters** out of the box.
- **Database Ready:** Pre-configured **SQLModel** (SQLite/PostgreSQL) integration.
- **Smart OS Detection:** Suggests the correct run command (`python` vs `python3`) for your system.
- **Automated Init:** Handles all `__init__.py` files automatically for clean imports.
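The anti-flood idea behind the middleware can be sketched in pure Python, independent of aiogram. The `ThrottleGuard` name and the one-second window below are illustrative assumptions, not the code this tool generates:

```python
import time

class ThrottleGuard:
    """Reject messages from a user arriving faster than `interval` seconds apart."""

    def __init__(self, interval=1.0):
        self.interval = interval
        self._last_seen = {}  # user_id -> timestamp of last message

    def allow(self, user_id, now=None):
        now = time.monotonic() if now is None else now
        last = self._last_seen.get(user_id)
        self._last_seen[user_id] = now  # record even rejected messages
        return last is None or (now - last) >= self.interval

guard = ThrottleGuard(interval=1.0)
print(guard.allow(42, now=0.0))  # True  (first message)
print(guard.allow(42, now=0.5))  # False (flooding)
print(guard.allow(42, now=2.0))  # True  (waited long enough)
```

In a real aiogram 3 middleware, this check would run in the middleware's `__call__` before dispatching the handler.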
## 📦 Installation
Install the tool directly from PyPI:
```bash
pip install telegrambotcli
```
## 🌞 How to Start
Open your terminal in the desired project folder and run:
```bash
telegrambotcli
```
You will be prompted to choose a template:
1. **[Standard]**: Basic bot with DB, keyboards, and essential handlers.
2. **[Advanced]**: Includes admin logic, anti-flood protection, and advanced filtering.
## 📂 Generated Project Structure
```text
your_project/
├── app/
│   ├── database/
│   │   └── database.py   # SQLModel engine & User model
│   ├── filters/          # (Advanced) AdminFilter logic
│   ├── keyboards/
│   │   └── builders.py   # Reply & Inline keyboard templates
│   ├── middlewares/      # (Advanced) Anti-flood middleware
│   └── main.py           # Main router (Help, Settings, Admin handlers)
├── bot.py                # Main entry point (Dispatcher & polling)
├── .env                  # Environment variables (token, admin ID)
└── .gitignore            # Pre-configured for Python & VSCode
```
## 🚀 Quick Start Guide
1. **Configure:** Open the generated `.env` file and fill in your credentials:
```env
BOT_TOKEN="123456:ABC-DEF..."
ADMIN_ID="987654321"
```
2. **Run:** Launch your bot using the suggested command:
```bash
python bot.py # or python3 bot.py
```
### 🛠 Handlers Included
The generated `app/main.py` automatically includes:
* `/start`, `/keyboard`, `/inline`
* **Text filters** for the "Help 🆘" and "Settings ⚙️" buttons.
* **Admin check** for the `/admin` command (in Advanced mode).
## 🧑💻 GitHub repository
- [Source code](https://github.com/AnonimPython/telegrambotcli)
- [PyPI page](https://pypi.org/project/telegrambotcli/)
| text/markdown | null | AnonimPython <moscow.retro@list.ru> | null | null | null | aiogram, cli, generator, telegram, bot | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"typer[all]",
"sqlmodel",
"aiogram>=3.0.0",
"pydantic-settings",
"psycopg2",
"python-dotenv",
"pydantic"
] | [] | [] | [] | [] | twine/6.0.1 CPython/3.13.1 | 2026-02-20T01:25:20.382409 | telegrambotcli-0.5.2-py3-none-any.whl | 6,572 | ee/74/9b01762cc61f7ab21eeff2287e4f22763bdb3a070c14f9f3be6bd31025db/telegrambotcli-0.5.2-py3-none-any.whl | py3 | bdist_wheel | null | false | c4d0cc820479d74ed1e235d6711da8a6 | 286cca26a4a47593e82ee56d6365c61025d4e477bf9c13f77351e937d9950e48 | ee749b01762cc61f7ab21eeff2287e4f22763bdb3a070c14f9f3be6bd31025db | null | [] | 90 |
2.4 | mdb-engine | 0.7.11 | MongoDB Engine | # mdb-engine
**The MongoDB Engine for Python Apps** — Auto-sandboxing, index management, and AI services in one package.
[](https://pypi.org/project/mdb-engine/)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
## Get Running in 60 Seconds
### Step 1: Start MongoDB
You need a running MongoDB instance. Pick one:
```bash
# Local (Docker) — full Atlas features including Vector Search
docker run -d --name mongodb -p 27017:27017 mongodb/mongodb-atlas-local:latest
```
Or use [MongoDB Atlas](https://www.mongodb.com/cloud/atlas) (free tier):
```bash
export MONGODB_URI="mongodb+srv://user:pass@cluster.mongodb.net/"
```
### Step 2: Install and run
```bash
pip install mdb-engine fastapi uvicorn
```
Create `web.py`:
```python
from mdb_engine import quickstart
from mdb_engine.dependencies import get_scoped_db
from fastapi import Depends
app = quickstart("my_app")
@app.get("/items")
async def list_items(db=Depends(get_scoped_db)):
    return await db.items.find({}).to_list(10)
```
```bash
uvicorn web:app --reload
```
Open [http://localhost:8000/items](http://localhost:8000/items) -- you're live.
> **Using AI features?** You'll also need an API key:
> `export OPENAI_API_KEY=sk-...` (or `ANTHROPIC_API_KEY`, `GEMINI_API_KEY`).
> See [Environment Variables](#environment-variables) for the full list.
**What you get automatically:**
- **Data isolation** — every query is scoped by `app_id`; you cannot accidentally leak data across apps
- **Collection prefixing** — `db.items` transparently becomes `my_app_items`
- **Lifecycle management** — engine startup/shutdown handled for you
- **Dependency injection** — `get_scoped_db`, `get_memory_service`, etc. ready to use
---
## Installation
```bash
pip install mdb-engine
```
---
## Feature Layers
mdb-engine is designed for progressive adoption. Start with Layer 0 and add features as you need them.
| Layer | What it gives you | How to enable |
|-------|-------------------|---------------|
| **0: Scoped DB + Indexes** | Auto-sandboxed collections, declarative indexes | `quickstart("slug")` or minimal manifest |
| **1: Auth + GDPR** | JWT, RBAC (Casbin/OSO), SSO, data export/deletion | Add `auth` section to manifest |
| **2: LLM + Embeddings + Memory** | Persistent AI memory, semantic search, fact extraction | Add `llm_config` + `memory_config` to manifest |
| **3: GraphRAG + ChatEngine** | Knowledge graphs, conversation orchestration with STM + LTM | Add `graph_config`, use `ChatEngine` |
---
## Three Ways to Create an App
### 1. Zero-config (quickstart)
No manifest file, no explicit connection string. Best for getting started.
```python
from mdb_engine import quickstart
app = quickstart("my_app")
```
### 2. Inline manifest (dict)
Pass configuration directly in Python. Good for programmatic setups.
```python
from mdb_engine import MongoDBEngine
engine = MongoDBEngine()
app = engine.create_app(
    slug="my_app",
    manifest={
        "schema_version": "2.0",
        "slug": "my_app",
        "name": "My App",
        "managed_indexes": {
            "tasks": [{"type": "regular", "keys": {"status": 1}, "name": "status_idx"}]
        },
    },
)
```
### 3. File-based manifest (recommended for production)
The full-featured approach. A single `manifest.json` defines your app's identity, indexes, auth, AI services, and more.
```python
from pathlib import Path
from mdb_engine import MongoDBEngine
engine = MongoDBEngine(
    mongo_uri="mongodb+srv://...",  # or set MONGODB_URI env var
    db_name="production",
)
app = engine.create_app(slug="my_app", manifest=Path("manifest.json"))
```
**Minimal manifest.json (3 fields):**
```json
{
"schema_version": "2.0",
"slug": "my_app",
"name": "My App"
}
```
**Learn more**: [Manifest Reference](docs/MANIFEST_REFERENCE.md) | [Quick Start Guide](docs/QUICK_START.md)
---
## Examples Ladder
Start simple, add complexity when you need it.
| Example | What it shows | Lines of code |
|---------|---------------|:---:|
| [hello_world](examples/basic/hello_world/) | Zero-config CRUD, no manifest | ~15 |
| [memory_quickstart](examples/basic/memory_quickstart/) | AI memory with semantic search | ~25 |
| [chit_chat](examples/basic/chit_chat/) | Full AI chat with ChatEngine, auth, WebSockets | ~2400 |
| [interactive_rag](examples/basic/interactive_rag/) | RAG with vector search | — |
| [simple_app](examples/advanced/simple_app/) | Task management with `create_app()` pattern | — |
| [sso-multi-app](examples/advanced/sso-multi-app/) | SSO with shared user pool across apps | — |
---
## CRUD Operations (Auto-Scoped)
All database operations are automatically scoped to your app:
```python
from mdb_engine.dependencies import get_scoped_db
@app.post("/tasks")
async def create_task(task: dict, db=Depends(get_scoped_db)):
    result = await db.tasks.insert_one(task)
    return {"id": str(result.inserted_id)}

@app.get("/tasks")
async def list_tasks(db=Depends(get_scoped_db)):
    return await db.tasks.find({"status": "pending"}).to_list(length=10)
```
**Under the hood:**
```python
# You write:
await db.tasks.find({}).to_list(length=10)
# Engine executes:
# Collection: my_app_tasks
# Query: {"app_id": "my_app"}
```
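The isolation rule above (prefix the collection name, force an `app_id` clause into every filter) can be imitated in plain Python. This is a conceptual sketch of the behavior, not mdb-engine's actual implementation; the `ScopedDB` name and in-memory store are assumptions:

```python
class ScopedDB:
    """Toy illustration of app-scoped access: collection names get an app
    prefix, and every query filter gains an app_id clause."""

    def __init__(self, app_id, store):
        self.app_id = app_id
        self.store = store  # {physical_collection_name: [documents]}

    def find(self, collection, query):
        physical = f"{self.app_id}_{collection}"           # e.g. my_app_tasks
        scoped = {**query, "app_id": self.app_id}          # forced isolation
        return [
            doc for doc in self.store.get(physical, [])
            if all(doc.get(k) == v for k, v in scoped.items())
        ]

store = {
    "my_app_tasks": [{"app_id": "my_app", "status": "pending"}],
    "other_app_tasks": [{"app_id": "other_app", "status": "pending"}],
}
db = ScopedDB("my_app", store)
print(db.find("tasks", {"status": "pending"}))  # only my_app's document
```

Because both the collection name and the filter are rewritten, a query can never read another app's documents even if the caller forgets about scoping entirely.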
---
## AI Memory (MemoryService)
Add persistent, searchable AI memory to any app.
```python
from mdb_engine.dependencies import get_memory_service
@app.post("/remember")
async def remember(text: str, memory=Depends(get_memory_service)):
    result = await memory.add(messages=text, user_id="user1")
    return {"stored": result}

@app.get("/recall")
async def recall(q: str, memory=Depends(get_memory_service)):
    results = await memory.search(query=q, user_id="user1")
    return {"results": results}
```
> **Note**: All memory operations are async. Use `await` directly in your routes.
Enable in your manifest:
```json
{
"llm_config": {"enabled": true, "default_model": "openai/gpt-4o-mini"},
"embedding_config": {"enabled": true, "default_embedding_model": "text-embedding-3-small"},
"memory_config": {"enabled": true, "provider": "cognitive", "infer": true}
}
```
Optional cognitive features (add to `memory_config`):
- **Importance scoring**: AI evaluates memory significance
- **Memory reinforcement**: Similar memories strengthen each other
- **Memory decay**: Less relevant memories fade over time
- **Memory merging**: Related memories combined intelligently
---
## ChatEngine (Conversation Orchestration)
`ChatEngine` (formerly `CognitiveEngine`) orchestrates full conversations with short-term memory (chat history) + long-term memory (semantic search):
```python
from mdb_engine.memory import ChatEngine
cognitive = ChatEngine(
    app_slug="my_app",
    memory_service=memory_service,
    chat_history_collection=db.chat_history,
    llm_provider=llm_provider,
)

result = await cognitive.chat(
    user_id="user123",
    session_id="session456",
    user_query="What did we discuss about the project?",
    system_prompt="You are a helpful assistant.",
    extract_facts=True,
)
```
---
## RequestContext (All Services in One Place)
```python
from mdb_engine import RequestContext, get_request_context
@app.post("/ai-chat")
async def chat(query: str, ctx: RequestContext = Depends(get_request_context)):
    user = ctx.require_user()  # 401 if not logged in
    ctx.require_role("user")   # 403 if missing role
    # ctx.db, ctx.memory, ctx.llm, ctx.embedding_service — all available
    if ctx.llm:
        response = ctx.llm.chat.completions.create(
            model=ctx.llm_model,
            messages=[{"role": "user", "content": query}]
        )
        return {"response": response.choices[0].message.content}
```
---
## Index Management
Define indexes in `manifest.json` — they're auto-created on startup:
```json
{
"managed_indexes": {
"tasks": [
{"type": "regular", "keys": {"status": 1, "created_at": -1}, "name": "status_sort"},
{"type": "regular", "keys": {"email": 1}, "name": "email_unique", "unique": true}
]
}
}
```
Supported index types: `regular`, `text`, `vector`, `ttl`, `compound`.
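As a conceptual sketch of how a declarative entry like the ones above could map onto PyMongo's `create_index` arguments (this translation and the `expire_after_seconds` manifest key are assumptions for illustration, not mdb-engine's actual code):

```python
def index_spec_to_args(spec):
    """Translate one manifest index entry into (keys, options) in the shape
    PyMongo's create_index expects. Conceptual sketch only."""
    keys = list(spec["keys"].items())   # {"status": 1} -> [("status", 1)]
    options = {"name": spec["name"]}
    if spec.get("unique"):
        options["unique"] = True
    if spec["type"] == "ttl":
        # hypothetical manifest key for TTL indexes
        options["expireAfterSeconds"] = spec.get("expire_after_seconds", 0)
    return keys, options

keys, options = index_spec_to_args(
    {"type": "regular", "keys": {"status": 1, "created_at": -1}, "name": "status_sort"}
)
print(keys)     # [('status', 1), ('created_at', -1)]
print(options)  # {'name': 'status_sort'}
```

The resulting pair would feed directly into `collection.create_index(keys, **options)` at startup.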
---
## Advanced Features
| Feature | Description | Learn More |
|---------|-------------|------------|
| **Authentication** | JWT + Casbin/OSO RBAC | [Bible: Auth](MDB_ENGINE_BIBLE.md#authentication-and-authorization) |
| **Vector Search** | Atlas Vector Search + embeddings | [RAG Example](examples/basic/interactive_rag) |
| **MemoryService** | Persistent AI memory with cognitive features | [Memory Docs](docs/MEMORY_SERVICE.md) |
| **GraphService** | Knowledge graph with `$graphLookup` traversal | [Graph Docs](docs/GRAPH_SERVICE.md) |
| **ChatEngine** | Full RAG pipeline with STM + LTM | [Chat Example](examples/basic/chit_chat) |
| **WebSockets** | Real-time updates from manifest config | [Bible: WebSockets](MDB_ENGINE_BIBLE.md#websocket-system) |
| **Multi-App** | Secure cross-app data access | [SSO Example](examples/advanced/sso-multi-app) |
| **SSO** | Shared auth across apps | [SSO Example](examples/advanced/sso-multi-app) |
| **GDPR** | Data discovery, export, deletion, rectification | [Bible: GDPR](MDB_ENGINE_BIBLE.md#gdpr-compliance) |
---
## MongoDB Connection Reference
mdb-engine connects to `mongodb://localhost:27017` by default. Override via env var or constructor:
| Setup | Connection String |
|-------|-------------------|
| Local / Docker | `mongodb://localhost:27017` (default, no config needed) |
| Atlas (free tier) | `mongodb+srv://user:password@cluster.mongodb.net/dbname` |
```bash
# Option A: environment variable
export MONGODB_URI="mongodb+srv://user:password@cluster.mongodb.net/"
# Option B: in code
engine = MongoDBEngine(mongo_uri="mongodb+srv://...")
```
---
## Environment Variables
| Variable | Required | Purpose |
|----------|----------|---------|
| `MONGODB_URI` | No | MongoDB connection string (default: `localhost:27017`) |
| `MDB_DB_NAME` | No | Database name (default: `mdb_engine`) |
| `OPENAI_API_KEY` | For AI features | OpenAI API key for LLM/embeddings |
| `ANTHROPIC_API_KEY` | For AI features | Anthropic API key (alternative to OpenAI) |
| `GEMINI_API_KEY` | For AI features | Google Gemini API key (alternative to OpenAI) |
| `MDB_JWT_SECRET` | For auth | JWT signing secret for shared auth mode |
---
## Why mdb-engine?
- **Zero to running in 3 lines** — `quickstart("my_app")` and go
- **Data isolation built in** — multi-tenant ready with automatic app sandboxing
- **manifest.json is everything** — single source of truth for your app config
- **Incremental adoption** — start minimal, add features as needed
- **No lock-in** — standard Motor/PyMongo underneath; export with `mongodump --query='{"app_id":"my_app"}'`
---
## Links
- [GitHub Repository](https://github.com/ranfysvalle02/mdb-engine)
- [Documentation](docs/)
- [All Examples](examples/)
- [Quick Start Guide](docs/QUICK_START.md) — **Start here!**
- [Manifest Reference](docs/MANIFEST_REFERENCE.md)
- [Contributing](CONTRIBUTING.md)
---
**Stop building scaffolding. Start building features.**
| text/markdown | Fabian Valle | Fabian Valle <oblivio.company@gmail.com> | null | null | MIT | mongodb, runtime, engine, database, scoping | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Database",
"To... | [] | https://github.com/ranfysvalle02/mdb-engine | null | >=3.10 | [] | [] | [] | [
"motor<4.0.0,>=3.0.0",
"pymongo<5.0.0,>=4.0.0",
"fastapi<1.0.0,>=0.100.0",
"pydantic<3.0.0,>=2.0.0",
"pyjwt>=2.8.0",
"jsonschema>=4.0.0",
"bcrypt>=4.0.0",
"cryptography>=41.0.0",
"openai<3.0.0,>=1.0.0; extra == \"ai\"",
"litellm<3.0.0,>=1.0.0; extra == \"ai\"",
"semantic-text-splitter>=0.9.0; ex... | [] | [] | [] | [
"Homepage, https://github.com/ranfysvalle02/mdb-engine",
"Documentation, https://github.com/ranfysvalle02/mdb-engine#readme",
"Repository, https://github.com/ranfysvalle02/mdb-engine",
"Issues, https://github.com/ranfysvalle02/mdb-engine/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T01:25:18.734762 | mdb_engine-0.7.11.tar.gz | 635,716 | 71/0b/0a42ad00345050200f90629fc9d1fe70ef071f89527781ffeeacb9c0994a/mdb_engine-0.7.11.tar.gz | source | sdist | null | false | 175e0ea8dd8ef818512fb476c7831da7 | aacf042ca6abc48d09d37168c2e6439ee45475df5563dcd34a16b821afee1002 | 710b0a42ad00345050200f90629fc9d1fe70ef071f89527781ffeeacb9c0994a | null | [
"LICENSE"
] | 239 |
2.4 | dataframeit | 0.5.3 | Enrich DataFrames with LLM-powered text analysis. Extract structured information from text with Pydantic. | # DataFrameIt
[](https://badge.fury.io/py/dataframeit)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
**Enrich DataFrames with LLMs, simply and with structured output.**
DataFrameIt processes text in DataFrames using Large Language Models (LLMs) and extracts structured information defined by Pydantic models.
**[Full Documentation](https://bdcdo.github.io/dataframeit)** | **[LLM Reference](https://bdcdo.github.io/dataframeit/reference/llm-reference/)**
## Installation
```bash
pip install dataframeit[google]     # Google Gemini (recommended)
pip install dataframeit[openai]     # OpenAI
pip install dataframeit[anthropic]  # Anthropic Claude
```
Set your API key:
```bash
export GOOGLE_API_KEY="your-key"  # or OPENAI_API_KEY, ANTHROPIC_API_KEY
```
## Quick Example
```python
from pydantic import BaseModel
from typing import Literal
import pandas as pd
from dataframeit import dataframeit
# 1. Define what to extract
class Sentimento(BaseModel):
    sentimento: Literal['positivo', 'negativo', 'neutro']
    confianca: Literal['alta', 'media', 'baixa']

# 2. Your data
df = pd.DataFrame({
    'texto': [
        'Produto excelente! Superou expectativas.',
        'Péssimo atendimento, nunca mais compro.',
        'Entrega ok, produto mediano.'
    ]
})

# 3. Process!
resultado = dataframeit(df, Sentimento, "Analise o sentimento do texto.")
print(resultado)
```
**Output:**
| texto | sentimento | confianca |
|-------|------------|-----------|
| Produto excelente! ... | positivo | alta |
| Péssimo atendimento... | negativo | alta |
| Entrega ok... | neutro | media |
## Features
- **Multiple providers**: Google Gemini, OpenAI, Anthropic, Cohere, Mistral via LangChain
- **Multiple input types**: DataFrame, Series, list, dict
- **Structured output**: automatic validation with Pydantic
- **Resilience**: automatic retry with exponential backoff
- **Performance**: parallel processing, configurable rate limiting
- **Web search**: Tavily integration for enriching data
- **Tracking**: token usage monitoring and throughput metrics
- **Per-field configuration**: custom prompts and search parameters per field (v0.5.2+)
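The retry behavior can be pictured as standard exponential backoff. The sketch below is generic and not dataframeit's actual code; the function and parameter names are invented for illustration:

```python
import time

def retry_with_backoff(fn, max_attempts=4, base_delay=0.01, sleep=time.sleep):
    """Call fn(); on failure wait base_delay * 2**attempt, then try again."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: propagate the last error
            sleep(base_delay * (2 ** attempt))

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient LLM error")
    return "ok"

print(retry_with_backoff(flaky))  # prints "ok" after two failed attempts
```

Doubling the delay on each attempt keeps pressure off a rate-limited LLM API while still recovering quickly from one-off failures.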
## Per-Field Configuration (New in v0.5.2)
Configure field-specific prompts and search parameters using `json_schema_extra`:
```python
from pydantic import BaseModel, Field
class MedicamentoInfo(BaseModel):
    # Field using the default prompt
    principio_ativo: str = Field(description="Princípio ativo do medicamento")

    # Field with a custom prompt (replaces the base prompt)
    doenca_rara: str = Field(
        description="Classificação de doença rara",
        json_schema_extra={
            "prompt": "Busque em Orphanet (orpha.net). Analise: {texto}"
        }
    )

    # Field with an additional prompt (appended to the base prompt)
    avaliacao_conitec: str = Field(
        description="Avaliação da CONITEC",
        json_schema_extra={
            "prompt_append": "Busque APENAS no site da CONITEC (gov.br/conitec)."
        }
    )

    # Field with custom search parameters
    estudos_clinicos: str = Field(
        description="Estudos clínicos relevantes",
        json_schema_extra={
            "prompt_append": "Busque estudos clínicos recentes.",
            "search_depth": "advanced",
            "max_results": 10
        }
    )

# Requires search_per_field=True
resultado = dataframeit(
    df,
    MedicamentoInfo,
    "Analise o medicamento: {texto}",
    use_search=True,
    search_per_field=True,
)
```
**Options available in `json_schema_extra`:**
| Option | Description |
|--------|-------------|
| `prompt` or `prompt_replace` | Completely replaces the base prompt |
| `prompt_append` | Appends text to the base prompt |
| `search_depth` | `"basic"` or `"advanced"` (per-field override) |
| `max_results` | Number of search results (1-20) |
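How the base prompt combines with a field's options can be sketched with plain dicts. This is conceptual only; `resolve_prompt` and any semantics beyond the table above are assumptions, not dataframeit's implementation:

```python
def resolve_prompt(base_prompt, extra=None):
    """Combine the base prompt with one field's json_schema_extra options,
    following the precedence in the table above (conceptual sketch)."""
    extra = extra or {}
    replacement = extra.get("prompt") or extra.get("prompt_replace")
    if replacement:
        return replacement  # full replacement wins over append
    if "prompt_append" in extra:
        return base_prompt + " " + extra["prompt_append"]
    return base_prompt

base = "Analise o medicamento: {texto}"
print(resolve_prompt(base, None))
print(resolve_prompt(base, {"prompt": "Busque em Orphanet. Analise: {texto}"}))
print(resolve_prompt(base, {"prompt_append": "Busque APENAS no site da CONITEC."}))
```

Giving `prompt`/`prompt_replace` priority over `prompt_append` matches the table: a full replacement makes any appended text moot.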
## Documentation
- [Quick Start](https://bdcdo.github.io/dataframeit/getting-started/quickstart/)
- [Guides](https://bdcdo.github.io/dataframeit/guides/basic-usage/)
- [API Reference](https://bdcdo.github.io/dataframeit/reference/api/)
- [LLM Reference](https://bdcdo.github.io/dataframeit/reference/llm-reference/) - compact page optimized for coding assistants
## Examples
See the [`example/`](example/) folder for Jupyter notebooks with complete use cases.
## License
MIT
| text/markdown | Bruno da Cunha de Oliveira | null | Bruno da Cunha de Oliveira | null | null | data-enrichment, dataframe, gemini, langchain, llm, nlp, pandas, pydantic, structured-output, text-extraction | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python... | [] | null | null | >=3.10 | [] | [] | [] | [
"langchain-core>=1.0.0",
"langchain>=1.0.0",
"pandas",
"pydantic>=2.0",
"tqdm",
"langchain-anthropic>=0.3.0; extra == \"all\"",
"langchain-exa>=0.2.0; extra == \"all\"",
"langchain-google-genai>=2.0.0; extra == \"all\"",
"langchain-openai>=0.3.0; extra == \"all\"",
"langchain-tavily>=0.1.0; extra ... | [] | [] | [] | [
"Homepage, https://github.com/bdcdo/dataframeit",
"Documentation, https://bdcdo.github.io/dataframeit",
"Repository, https://github.com/bdcdo/dataframeit.git",
"Issues, https://github.com/bdcdo/dataframeit/issues",
"Changelog, https://github.com/bdcdo/dataframeit/releases"
] | uv/0.9.18 {"installer":{"name":"uv","version":"0.9.18","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"22.04","id":"jammy","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T01:25:07.905467 | dataframeit-0.5.3-py3-none-any.whl | 42,765 | 40/0f/dfc7d8ca0a41677729256308b0bdd54c924b8600e9c981f99f9cdc6ad108/dataframeit-0.5.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 12eed80d624693c8ba65a1980d6587ee | 349fc3e8edf3a89744ec19c5d6a31afe4d05ad896904e8121e26487e39307d5b | 400fdfc7d8ca0a41677729256308b0bdd54c924b8600e9c981f99f9cdc6ad108 | MIT | [
"LICENSE"
] | 231 |
2.4 | dss-python-backend | 0.15.0b3 | Low-level Python bindings and native libs for DSS-Python, OpenDSSDirect.py and AltDSS-Python. Not intended for direct usage, see the high-level packages instead. | [](https://github.com/dss-extensions/dss_python_backenbd/actions/workflows/builds.yml)
[](https://pypi.org/project/dss-python-backend/)
<img alt="Supports Linux" src="https://img.shields.io/badge/Linux-FCC624?logo=linux&logoColor=black"> <img alt="Supports macOS" src="https://img.shields.io/badge/macOS-000000?logo=apple&logoColor=white"> <img alt="Supports Microsoft Windows" src="https://img.shields.io/badge/Windows-0078D6?logo=windows&logoColor=white">
# DSS-Python: Backend
`dss_python_backend` provides low-level bindings for an implementation of EPRI's OpenDSS, using [CFFI](https://cffi.readthedocs.io/) and our [DSS C-API library and headers](https://github.com/dss-extensions/dss_capi/). It contains the native libraries (and DLLs) required by DSS-Python. This is considered an implementation detail.
**This is not intended for direct usage, [see DSS-Python](https://github.com/dss-extensions/DSS-Python/), [OpenDSSDirect.py](https://github.com/dss-extensions/OpenDSSDirect.py/), and [AltDSS-Python](https://github.com/dss-extensions/AltDSS-Python/) instead!**
After several years integrated into DSS-Python, this package was created in April 2023 to make the maintenance easier. See https://github.com/dss-extensions/dss_python/issues/51
The Python package includes:
- FastDSS modules for AltDSS/DSS C-API
- CFFI modules for AltDSS/DSS C-API
- CFFI modules for user-models (only the code for generator user-models is being compiled nowadays)
- AltDSS/DSS C-API related libraries (AltDSS Engine, Oddie, Loader, etc.), DLLs, and headers
This module contains source-code licensed under BSD3 and LGPL3. See each file for SPDX comments.
Although this repository does not contain code from OpenDSS, the license is listed in `OPENDSS_LICENSE` for reference. The final packages do include software derived from OpenDSS code and other libraries, such as KLUSolveX and KLU (from SuiteSparse), both licensed under the LGPL. Since the files listed in this repository contain multiple licenses, SPDX identifiers are now included in the file headers.
*Note: this package might be renamed in the future to reflect the new developments.*
| text/markdown | null | Paulo Meira <pmeira@ieee.org> | null | Paulo Meira <pmeira@ieee.org> | null | null | [
"Intended Audience :: Science/Research",
"Intended Audience :: Education",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython"... | [] | https://github.com/dss-extensions/dss_python_backend/ | null | >=3.11 | [] | [] | [] | [
"cffi<3,>=2",
"numpy<3,>=2"
] | [] | [] | [] | [
"Homepage, https://github.com/dss-extensions/dss_python_backend",
"Repository, https://github.com/dss-extensions/dss_python_backend.git",
"Bug Tracker, https://github.com/dss-extensions/dss-extensions/issues"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-20T01:24:53.646012 | dss_python_backend-0.15.0b3-cp37-abi3-win_amd64.whl | 14,143,313 | a4/75/31185764058e8f6e663d46bc359629c02ba6cc10008e1d009de25babbc41/dss_python_backend-0.15.0b3-cp37-abi3-win_amd64.whl | cp37 | bdist_wheel | null | false | 62aa40a64726df7bbd985bdf55933fce | 199e68c2a46cbae12cda42b05961581badc9548410d23cc25af8e82492e72133 | a47531185764058e8f6e663d46bc359629c02ba6cc10008e1d009de25babbc41 | BSD-3-Clause AND LGPL-3.0-only | [
"LICENSE.BSD3",
"LICENSE.LGPL3",
"OPENDSS_LICENSE"
] | 760 |
2.4 | iterprod | 1.0.4 | This project provides a faster python-only alternative to itertools.product. | ========
iterprod
========
Visit the website `https://iterprod.johannes-programming.online/ <https://iterprod.johannes-programming.online/>`_ for more information.
| text/x-rst | null | Johannes <johannes.programming@gmail.com> | null | null | The MIT License (MIT)
Copyright (c) 2025 Johannes
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [
"Download, https://pypi.org/project/iterprod/#files",
"Index, https://pypi.org/project/iterprod/",
"Source, https://github.com/johannes-programming/iterprod/",
"Website, https://iterprod.johannes-programming.online/"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T01:23:52.637101 | iterprod-1.0.4.tar.gz | 3,505 | 09/e5/e28b7805da8c58bed321e8126166b4b5a1bb835f8ebd17f27498d7586ded/iterprod-1.0.4.tar.gz | source | sdist | null | false | a7b45cfcee7484901189198e69ce1a25 | 69b3a16c4b4b689a19b4ab28562bd086018c9d1dc17de70a31e571c3a7f6a8c6 | 09e5e28b7805da8c58bed321e8126166b4b5a1bb835f8ebd17f27498d7586ded | null | [
"LICENSE.txt"
] | 229 |
2.4 | arize-phoenix | 13.3.0 | AI Observability and Evaluation | <p align="center">
<a target="_blank" href="https://phoenix.arize.com" style="background:none">
<img alt="phoenix banner" src="https://github.com/Arize-ai/phoenix-assets/blob/main/images/socal/github-large-banner-phoenix-v2.jpg?raw=true" width="auto" height="auto"></img>
</a>
<br/>
<br/>
<a href="https://arize.com/docs/phoenix/">
<img src="https://img.shields.io/static/v1?message=Docs&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAIAAAACACAYAAADDPmHLAAAG4ElEQVR4nO2d4XHjNhCFcTf+b3ZgdWCmgmMqOKUC0xXYrsBOBVEqsFRB7ApCVRCygrMriFQBM7h5mNlwKBECARLg7jeDscamSQj7sFgsQfBL27ZK4MtXsT1vRADMEQEwRwTAHBEAc0QAzBEBMEcEwBwRAHNEAMwRATBnjAByFGE+MqVUMcYOY24GVUqpb/h8VErVKAf87QNFcEcbd4WSw+D6803njHscO5sATmGEURGBiCj6yUlv1uX2gv91FsDViArbcA2RUKF8QhAV8RQc0b15DcOt0VaTE1oAfWj3dYdCBfGGsmSM0XX5HsP3nEMAXbqCeCdiOERQPx9og5exGJ0S4zRQN9KrUupfpdQWjZciure/YIj7K0bjqwTyAHdovA805iqCOg2xgnB1nZ97IvaoSCURdIPG/IHGjTH/YAz/A8KdJai7lBQzgbpx/0Hg6DT18UzWMXxSjMkDrElPNEmKfAbl6znwI3IMU/OCa0/1nfckwWaSbvWYYDnEsvCMJDNckhqu7GCMKWYOBXp9yPGd5kvqUAKf6rkAk7M2SY9QDXdEr9wEOr9x96EiejMFnixBNteDISsyNw7hHRqc22evWcP4vt39O85bzZH30AKg4+eo8cQRI4bHAJ7hyYM3CNHrG9RrimSXuZmUkZjN/O6nAPpcwCcJNmipAle2QM/1GU3vITCXhvY91u9geN/jOY27VuTnYL1PCeAcRhwh7/Bl8Ai+IuxPiOCShtfX/sPDtY8w+sZjby86dw6dBeoigD7obd/Ko6fI4BF8DA9HnGdrcU0fLt+n4dfE6H5jpjYcVdu2L23b5lpjHoo+18FDbcszddF1rUee/4C6ZiO+80rHZmjDoIQUQLdRtm3brkcKIUPjjqVPBIUHgW1GGN4YfawAL2IqAVB8iEE31tvIelARlCPPVaFOLoIupzY6xVcM4MoRUyHXyHhslH6PaPl5RP1Lh4UsOeKR2e8dzC0Aiuvc2Nx3fwhfxf/hknouUYbWUk5GTAIwmOh5e+H0cor8vEL91hfOdEqINLq1AV+RKImJ6869f9tFIBVc6y7gd3lHfWyNX0LEr7EuDElhRdAlQjig0e/RU31xxDltM4pF7IY3pLIgxAhhgzF/iC2M0Hi4dkOGlyGMd/g7dsMbUlsR9ICe9WhxbA3DjRkSdjiHzQzlBSKNJsCzIcUlYdfI0dcWS8LMkPDkcJ0n/O+Qyy/IAtDkSPnp4Fu4WpthQR/zm2VcoI/51fI28iYld9/HEh4Pf7D0Bm845pwIPnHMUJSf45pT5x68s5T9AW6INzhHDeP1BYcNMew5SghkinWOwVnaBhHGG5ybMn70zBDe8buh8X6DqV0Sa/5tWOIOIbcWQ8KBiGBnMb/P0OuTd/lddCrY5jn/VLm3nL+fY4X4YREuv8vS9wh6HSkAExMs0viKySZRd44iyOH2FzPe98Fll7A7GNMmjay4GF9BAKGXesfCN0sRsDG+YrhP4O2ACFgZXzHdKPL2RMJoxc34ivFOod3AMMNUj5XxFfOtYrUIXvB5MandS+G+V/AzZ+MrEcBPlpoFtUIEwBwRAG+OIgDe1CIA5ogAmCMCYI4IgDkiAOaIAJgjAmCOCIA5IgDmiACYIwJgjgiAOSIA5ogAmCMCYI4IgDkiAOaIAJgjAmCOCIA5IgDmiACYIwJgjgiAOSIA5ogAmCMCYI4IgDkiAOaIAJgjAmDOVYBXvwvxQV8NWJOd0esvJ94babZaz7B5ovldxnlDpYhp0JFr/KTlLKcEMMQKpcDPXIQxGXsYmhZnXAXQh/EWBQrr3bc80mATyyrEvs4+BdBHgbdxFOIhrDkSg1/6Iu2LCS0AyoqI4ftUF00EY/Q3h1fRj2JKA
VCMGErmnsH1lfnemEsAlByvgl0z2qx5B8OPCuB8EIMADBlEEOV79j1whNE3c/X2PmISAGUNr7CEmUSUhjfEKgBDAY+QohCiNrwhdgEYzPv7UxkadvBg0RrekMrNoAozh3vLN4DPhc7S/WL52vkoSO1u4BZC+DOCulC0KJ/gqWaP7C8hlSGgjxyCmDuPsEePT/KuasrrAcyr4H+f6fq01yd7Sz1lD0CZ2hs06PVJufs+lrIiyLwufjfBtXYpjvWnWIoHoJSYe4dIK/t4HX1ULFEACkPCm8e8wXFJvZ6y1EWhJkDcWxw7RINzLc74auGrgg8e4oIm9Sh/CA7LwkvHqaIJ9pLI6Lmy1BigDy2EV8tjdzh+8XB6MGSLKH4INsZXDJ8MGhIBK+Mrpo+GnRIBO+MrZjFAFxoTNBwCvj6u4qvSZJiM3iNX4yvmHoA9Sh4PF0QAzBEBMEcEwBwRAHNEAMwRAXBGKfUfr5hKvglRfO4AAAAASUVORK5CYII=&labelColor=grey&color=blue&logoColor=white&label=%20"/>
</a>
<a target="_blank" href="https://arize-ai.slack.com/join/shared_invite/zt-11t1vbu4x-xkBIHmOREQnYnYDH1GDfCg?__hstc=259489365.a667dfafcfa0169c8aee4178d115dc81.1733501603539.1733501603539.1733501603539.1&__hssc=259489365.1.1733501603539&__hsfp=3822854628&submissionGuid=381a0676-8f38-437b-96f2-fc10875658df#/shared-invite/email">
<img src="https://img.shields.io/static/v1?message=Community&logo=slack&labelColor=grey&color=blue&logoColor=white&label=%20"/>
</a>
<a target="_blank" href="https://bsky.app/profile/arize-phoenix.bsky.social">
<img src="https://img.shields.io/badge/-phoenix-blue.svg?color=blue&labelColor=gray&logo=bluesky">
</a>
<a target="_blank" href="https://x.com/ArizePhoenix">
<img src="https://img.shields.io/badge/-ArizePhoenix-blue.svg?color=blue&labelColor=gray&logo=x">
</a>
<a target="_blank" href="https://pypi.org/project/arize-phoenix/">
<img src="https://img.shields.io/pypi/v/arize-phoenix?color=blue">
</a>
<a target="_blank" href="https://anaconda.org/conda-forge/arize-phoenix">
<img src="https://img.shields.io/conda/vn/conda-forge/arize-phoenix.svg?color=blue">
</a>
<a target="_blank" href="https://pypi.org/project/arize-phoenix/">
<img src="https://img.shields.io/pypi/pyversions/arize-phoenix">
</a>
<a target="_blank" href="https://hub.docker.com/r/arizephoenix/phoenix/tags">
<img src="https://img.shields.io/docker/v/arizephoenix/phoenix?sort=semver&logo=docker&label=image&color=blue">
</a>
<a target="_blank" href="https://hub.docker.com/r/arizephoenix/phoenix-helm">
<img src="https://img.shields.io/badge/Helm-blue?style=flat&logo=helm&labelColor=grey"/>
</a>
<a target="_blank" href="https://github.com/Arize-ai/phoenix/tree/main/js/packages/phoenix-mcp">
<img src="https://badge.mcpx.dev?status=on" title="MCP Enabled"/>
</a>
<a href="cursor://anysphere.cursor-deeplink/mcp/install?name=phoenix&config=eyJjb21tYW5kIjoibnB4IC15IEBhcml6ZWFpL3Bob2VuaXgtbWNwQGxhdGVzdCAtLWJhc2VVcmwgaHR0cHM6Ly9teS1waG9lbml4LmNvbSAtLWFwaUtleSB5b3VyLWFwaS1rZXkifQ%3D%3D"><img src="https://cursor.com/deeplink/mcp-install-dark.svg" alt="Add Arize Phoenix MCP server to Cursor" height=20 /></a>
<img referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=8e8e8b34-7900-43fa-a38f-1f070bd48c64&page=README.md" />
</p>
Phoenix is an open-source AI observability platform designed for experimentation, evaluation, and troubleshooting. It provides:
- [**_Tracing_**](https://arize.com/docs/phoenix/tracing/llm-traces) - Trace your LLM application's runtime using OpenTelemetry-based instrumentation.
- [**_Evaluation_**](https://arize.com/docs/phoenix/evaluation/llm-evals) - Leverage LLMs to benchmark your application's performance using response and retrieval evals.
- [**_Datasets_**](https://arize.com/docs/phoenix/datasets-and-experiments/overview-datasets) - Create versioned datasets of examples for experimentation, evaluation, and fine-tuning.
- [**_Experiments_**](https://arize.com/docs/phoenix/datasets-and-experiments/overview-datasets#experiments) - Track and evaluate changes to prompts, LLMs, and retrieval.
- [**_Playground_**](https://arize.com/docs/phoenix/prompt-engineering/overview-prompts) - Optimize prompts, compare models, adjust parameters, and replay traced LLM calls.
- [**_Prompt Management_**](https://arize.com/docs/phoenix/prompt-engineering/overview-prompts/prompt-management) - Manage and test prompt changes systematically using version control, tagging, and experimentation.
Phoenix is vendor and language agnostic with out-of-the-box support for popular frameworks (🦙[LlamaIndex](https://arize.com/docs/phoenix/tracing/integrations-tracing/llamaindex), 🦜⛓[LangChain](https://arize.com/docs/phoenix/tracing/integrations-tracing/langchain), [Haystack](https://arize.com/docs/phoenix/tracing/integrations-tracing/haystack), 🧩[DSPy](https://arize.com/docs/phoenix/tracing/integrations-tracing/dspy), 🤗[smolagents](https://arize.com/docs/phoenix/tracing/integrations-tracing/hfsmolagents)) and LLM providers ([OpenAI](https://arize.com/docs/phoenix/tracing/integrations-tracing/openai), [Bedrock](https://arize.com/docs/phoenix/tracing/integrations-tracing/bedrock), [MistralAI](https://arize.com/docs/phoenix/tracing/integrations-tracing/mistralai), [VertexAI](https://arize.com/docs/phoenix/tracing/integrations-tracing/vertexai), [LiteLLM](https://arize.com/docs/phoenix/tracing/integrations-tracing/litellm), [Google GenAI](https://arize.com/docs/phoenix/tracing/integrations-tracing/google-genai) and more). For details on auto-instrumentation, check out the [OpenInference](https://github.com/Arize-ai/openinference) project.
Phoenix runs practically anywhere, including your local machine, a Jupyter notebook, a containerized deployment, or in the cloud.
## Installation
Install Phoenix via `pip` or `conda`:
```shell
pip install arize-phoenix
```
Phoenix container images are available via [Docker Hub](https://hub.docker.com/r/arizephoenix/phoenix) and can be deployed using Docker or Kubernetes. Arize AI also provides cloud instances at [app.phoenix.arize.com](https://app.phoenix.arize.com/).
## Packages
The `arize-phoenix` package includes the entire Phoenix platform. If you have already deployed the platform, lightweight Python subpackages and TypeScript packages can be used in conjunction with it.
### Python Subpackages
| Package | Version & Docs | Description |
| --------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------ |
| [arize-phoenix-otel](https://github.com/Arize-ai/phoenix/tree/main/packages/phoenix-otel) | [](https://pypi.org/project/arize-phoenix-otel/) [](https://arize-phoenix.readthedocs.io/projects/otel/en/latest/index.html) | Provides a lightweight wrapper around OpenTelemetry primitives with Phoenix-aware defaults |
| [arize-phoenix-client](https://github.com/Arize-ai/phoenix/tree/main/packages/phoenix-client) | [](https://pypi.org/project/arize-phoenix-client/) [](https://arize-phoenix.readthedocs.io/projects/client/en/latest/index.html) | Lightweight client for interacting with the Phoenix server via its OpenAPI REST interface |
| [arize-phoenix-evals](https://github.com/Arize-ai/phoenix/tree/main/packages/phoenix-evals) | [](https://pypi.org/project/arize-phoenix-evals/) [](https://arize-phoenix.readthedocs.io/projects/evals/en/latest/index.html) | Tooling to evaluate LLM applications including RAG relevance, answer relevance, and more |
### TypeScript Subpackages
| Package | Version & Docs | Description |
| --------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ |
| [@arizeai/phoenix-otel](https://github.com/Arize-ai/phoenix/tree/main/js/packages/phoenix-otel) | [](https://www.npmjs.com/package/@arizeai/phoenix-otel) [](https://arize-ai.github.io/phoenix/) | Provides a lightweight wrapper around OpenTelemetry primitives with Phoenix-aware defaults |
| [@arizeai/phoenix-client](https://github.com/Arize-ai/phoenix/tree/main/js/packages/phoenix-client) | [](https://www.npmjs.com/package/@arizeai/phoenix-client) [](https://arize-ai.github.io/phoenix/) | Client for the Arize Phoenix API |
| [@arizeai/phoenix-evals](https://github.com/Arize-ai/phoenix/tree/main/js/packages/phoenix-evals) | [](https://www.npmjs.com/package/@arizeai/phoenix-evals) [](https://arize-ai.github.io/phoenix/) | TypeScript evaluation library for LLM applications (alpha release) |
| [@arizeai/phoenix-mcp](https://github.com/Arize-ai/phoenix/tree/main/js/packages/phoenix-mcp) | [](https://www.npmjs.com/package/@arizeai/phoenix-mcp) [](./js/packages/phoenix-mcp/README.md) | MCP server implementation for Arize Phoenix providing unified interface to Phoenix's capabilities |
| [@arizeai/phoenix-cli](https://github.com/Arize-ai/phoenix/tree/main/js/packages/phoenix-cli) | [](https://www.npmjs.com/package/@arizeai/phoenix-cli) [](https://arize.com/docs/phoenix/sdk-api-reference/typescript/arizeai-phoenix-cli) | CLI for fetching traces, datasets, and experiments for use with Claude Code, Cursor, and other coding agents |
## Tracing Integrations
Phoenix is built on top of OpenTelemetry and is vendor, language, and framework agnostic. For details about tracing integrations and example applications, see the [OpenInference](https://github.com/Arize-ai/openinference) project.
### Python Integrations
| Integration | Package | Version Badge |
|------------------|-----------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------|
| [OpenAI](https://arize.com/docs/phoenix/tracing/integrations-tracing/openai) | `openinference-instrumentation-openai` | [](https://pypi.python.org/pypi/openinference-instrumentation-openai) |
| [OpenAI Agents](https://arize.com/docs/phoenix/tracing/integrations-tracing/openai-agents-sdk) | `openinference-instrumentation-openai-agents` | [](https://pypi.python.org/pypi/openinference-instrumentation-openai-agents) |
| [LlamaIndex](https://arize.com/docs/phoenix/tracing/integrations-tracing/llamaindex) | `openinference-instrumentation-llama-index` | [](https://pypi.python.org/pypi/openinference-instrumentation-llama-index) |
| [DSPy](https://arize.com/docs/phoenix/tracing/integrations-tracing/dspy) | `openinference-instrumentation-dspy` | [](https://pypi.python.org/pypi/openinference-instrumentation-dspy) |
| [AWS Bedrock](https://arize.com/docs/phoenix/tracing/integrations-tracing/bedrock) | `openinference-instrumentation-bedrock` | [](https://pypi.python.org/pypi/openinference-instrumentation-bedrock) |
| [LangChain](https://arize.com/docs/phoenix/tracing/integrations-tracing/langchain) | `openinference-instrumentation-langchain` | [](https://pypi.python.org/pypi/openinference-instrumentation-langchain) |
| [MistralAI](https://arize.com/docs/phoenix/tracing/integrations-tracing/mistralai) | `openinference-instrumentation-mistralai` | [](https://pypi.python.org/pypi/openinference-instrumentation-mistralai) |
| [Google GenAI](https://arize.com/docs/phoenix/tracing/integrations-tracing/google-gen-ai) | `openinference-instrumentation-google-genai` | [](https://pypi.python.org/pypi/openinference-instrumentation-google-genai) |
| [Google ADK](https://arize.com/docs/phoenix/integrations/llm-providers/google-gen-ai/google-adk-tracing) | `openinference-instrumentation-google-adk` | [](https://pypi.python.org/pypi/openinference-instrumentation-google-adk) |
| [Guardrails](https://arize.com/docs/phoenix/tracing/integrations-tracing/guardrails) | `openinference-instrumentation-guardrails` | [](https://pypi.python.org/pypi/openinference-instrumentation-guardrails) |
| [VertexAI](https://arize.com/docs/phoenix/tracing/integrations-tracing/vertexai) | `openinference-instrumentation-vertexai` | [](https://pypi.python.org/pypi/openinference-instrumentation-vertexai) |
| [CrewAI](https://arize.com/docs/phoenix/tracing/integrations-tracing/crewai) | `openinference-instrumentation-crewai` | [](https://pypi.python.org/pypi/openinference-instrumentation-crewai) |
| [Haystack](https://arize.com/docs/phoenix/tracing/integrations-tracing/haystack) | `openinference-instrumentation-haystack` | [](https://pypi.python.org/pypi/openinference-instrumentation-haystack) |
| [LiteLLM](https://arize.com/docs/phoenix/tracing/integrations-tracing/litellm) | `openinference-instrumentation-litellm` | [](https://pypi.python.org/pypi/openinference-instrumentation-litellm) |
| [Groq](https://arize.com/docs/phoenix/tracing/integrations-tracing/groq) | `openinference-instrumentation-groq` | [](https://pypi.python.org/pypi/openinference-instrumentation-groq) |
| [Instructor](https://arize.com/docs/phoenix/tracing/integrations-tracing/instructor) | `openinference-instrumentation-instructor` | [](https://pypi.python.org/pypi/openinference-instrumentation-instructor) |
| [Anthropic](https://arize.com/docs/phoenix/tracing/integrations-tracing/anthropic) | `openinference-instrumentation-anthropic` | [](https://pypi.python.org/pypi/openinference-instrumentation-anthropic) |
| [Smolagents](https://huggingface.co/docs/smolagents/en/tutorials/inspect_runs) | `openinference-instrumentation-smolagents` | [](https://pypi.python.org/pypi/openinference-instrumentation-smolagents) |
| [Agno](https://arize.com/docs/phoenix/tracing/integrations-tracing/agno) | `openinference-instrumentation-agno` | [](https://pypi.python.org/pypi/openinference-instrumentation-agno) |
| [MCP](https://arize.com/docs/phoenix/tracing/integrations-tracing/model-context-protocol-mcp) | `openinference-instrumentation-mcp` | [](https://pypi.python.org/pypi/openinference-instrumentation-mcp) |
| [Pydantic AI](https://arize.com/docs/phoenix/integrations/pydantic) | `openinference-instrumentation-pydantic-ai` | [](https://pypi.python.org/pypi/openinference-instrumentation-pydantic-ai) |
| [Autogen AgentChat](https://arize.com/docs/phoenix/integrations/frameworks/autogen/autogen-tracing) | `openinference-instrumentation-autogen-agentchat` | [](https://pypi.python.org/pypi/openinference-instrumentation-autogen-agentchat) |
| [Portkey](https://arize.com/docs/phoenix/integrations/portkey) | `openinference-instrumentation-portkey` | [](https://pypi.python.org/pypi/openinference-instrumentation-portkey) |
| [Agent Spec](https://arize.com/docs/phoenix/tracing/integrations-tracing/agentspec) | `openinference-instrumentation-agentspec` | [](https://pypi.python.org/pypi/openinference-instrumentation-agentspec) |
## Span Processors
Span processors normalize and convert traces produced by other instrumentation libraries into a unified format.
| Package | Description | Version |
| ----------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [`openinference-instrumentation-openlit`](./python/instrumentation/openinference-instrumentation-openlit) | OpenInference Span Processor for OpenLIT traces. | [](https://pypi.python.org/pypi/openinference-instrumentation-openlit) |
| [`openinference-instrumentation-openllmetry`](./python/instrumentation/openinference-instrumentation-openllmetry) | OpenInference Span Processor for OpenLLMetry (Traceloop) traces. | [](https://pypi.python.org/pypi/openinference-instrumentation-openllmetry) |
### JavaScript Integrations
| Integration | Package | Version Badge |
| ------------------------------------------------------------------------------------------ | -------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [OpenAI](https://arize.com/docs/phoenix/tracing/integrations-tracing/openai-node-sdk) | `@arizeai/openinference-instrumentation-openai` | [](https://www.npmjs.com/package/@arizeai/openinference-instrumentation-openai) |
| [LangChain.js](https://arize.com/docs/phoenix/tracing/integrations-tracing/langchain) | `@arizeai/openinference-instrumentation-langchain` | [](https://www.npmjs.com/package/@arizeai/openinference-instrumentation-langchain) |
| [Vercel AI SDK](https://arize.com/docs/phoenix/tracing/integrations-tracing/vercel-ai-sdk) | `@arizeai/openinference-vercel` | [](https://www.npmjs.com/package/@arizeai/openinference-vercel) |
| [BeeAI](https://arize.com/docs/phoenix/tracing/integrations-tracing/beeai) | `@arizeai/openinference-instrumentation-beeai` | [](https://www.npmjs.com/package/@arizeai/openinference-instrumentation-beeai) |
| [Mastra](https://arize.com/docs/phoenix/integrations/typescript/mastra) | `@mastra/arize` | [](https://www.npmjs.com/package/@mastra/arize) |
### Java Integrations
| Integration | Package | Version Badge |
| --------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [LangChain4j](https://github.com/Arize-ai/openinference/tree/main/java/instrumentation/openinference-instrumentation-langchain4j) | `openinference-instrumentation-langchain4j` | [](https://central.sonatype.com/artifact/com.arize/openinference-instrumentation-langchain4j) |
| SpringAI | `openinference-instrumentation-springAI` | [](https://central.sonatype.com/artifact/com.arize/openinference-instrumentation-springAI) |
### Platforms
| Platform | Description | Docs |
| -------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- |
| [BeeAI](https://docs.beeai.dev/observability/agents-traceability) | AI agent framework with built-in observability | [Integration Guide](https://docs.beeai.dev/observability/agents-traceability) |
| [Dify](https://docs.dify.ai/en/guides/monitoring/integrate-external-ops-tools/integrate-phoenix) | Open-source LLM app development platform | [Integration Guide](https://docs.dify.ai/en/guides/monitoring/integrate-external-ops-tools/integrate-phoenix) |
| [Envoy AI Gateway](https://github.com/envoyproxy/ai-gateway) | AI Gateway built on Envoy Proxy for AI workloads | [Integration Guide](https://github.com/envoyproxy/ai-gateway/tree/main/cmd/aigw#opentelemetry-setup-with-phoenix) |
| [LangFlow](https://arize.com/docs/phoenix/tracing/integrations-tracing/langflow) | Visual framework for building multi-agent and RAG applications | [Integration Guide](https://arize.com/docs/phoenix/tracing/integrations-tracing/langflow) |
| [LiteLLM Proxy](https://docs.litellm.ai/docs/observability/phoenix_integration#using-with-litellm-proxy) | Proxy server for LLMs | [Integration Guide](https://docs.litellm.ai/docs/observability/phoenix_integration#using-with-litellm-proxy) |
## Security & Privacy
We take data security and privacy very seriously. For more details, see our [Security and Privacy documentation](https://arize.com/docs/phoenix/self-hosting/security/privacy).
### Telemetry
By default, Phoenix collects basic web analytics (e.g., page views, UI interactions) to help us understand how Phoenix is used and improve the product. **None of your trace data, evaluation results, or any sensitive information is ever collected.**
You can opt out of telemetry by setting the environment variable: `PHOENIX_TELEMETRY_ENABLED=false`
You can opt-out of telemetry by setting the environment variable: `PHOENIX_TELEMETRY_ENABLED=false`
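For example, in a POSIX shell you might export the opt-out before launching Phoenix:

```shell
# Opt out of Phoenix's basic web analytics for this shell session
export PHOENIX_TELEMETRY_ENABLED=false
```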
## Community
Join our community to connect with thousands of AI builders.
- 🌍 Join our [Slack community](https://arize-ai.slack.com/join/shared_invite/zt-11t1vbu4x-xkBIHmOREQnYnYDH1GDfCg?__hstc=259489365.a667dfafcfa0169c8aee4178d115dc81.1733501603539.1733501603539.1733501603539.1&__hssc=259489365.1.1733501603539&__hsfp=3822854628&submissionGuid=381a0676-8f38-437b-96f2-fc10875658df#/shared-invite/email).
- 📚 Read our [documentation](https://arize.com/docs/phoenix).
- 💡 Ask questions and provide feedback in the _#phoenix-support_ channel.
- 🌟 Leave a star on our [GitHub](https://github.com/Arize-ai/phoenix).
- 🐞 Report bugs with [GitHub Issues](https://github.com/Arize-ai/phoenix/issues).
- 𝕏 Follow us on [𝕏](https://twitter.com/ArizePhoenix).
- 🗺️ Check out our [roadmap](https://github.com/orgs/Arize-ai/projects/45) to see where we're heading next.
- 🧑🏫 Deep dive into everything [Agents](http://arize.com/ai-agents/) and [LLM Evaluations](https://arize.com/llm-evaluation) on Arize's Learning Hubs.
## Breaking Changes
See the [migration guide](./MIGRATION.md) for a list of breaking changes.
## Copyright, Patent, and License
Copyright 2025 Arize AI, Inc. All Rights Reserved.
Portions of this code are patent protected by one or more U.S. Patents. See the [IP_NOTICE](https://github.com/Arize-ai/phoenix/blob/main/IP_NOTICE).
This software is licensed under the terms of the Elastic License 2.0 (ELv2). See [LICENSE](https://github.com/Arize-ai/phoenix/blob/main/LICENSE).
| text/markdown | null | Arize AI <phoenix-devs@arize.com> | null | null | Elastic-2.0 | Explainability, Monitoring, Observability | [
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"aioitertools",
"aiosqlite",
"alembic<2,>=1.3.0",
"arize-phoenix-client>=1.29.0",
"arize-phoenix-evals>=2.8.0",
"arize-phoenix-otel>=0.14.0",
"authlib",
"cachetools",
"email-validator",
"fastapi",
"grpc-interceptor",
"grpcio",
"httpx",
"jinja2",
"jmespath",
"jsonpath-ng",
"jsonschema... | [] | [] | [] | [
"Documentation, https://arize.com/docs/phoenix/",
"Issues, https://github.com/Arize-ai/phoenix/issues",
"Source, https://github.com/Arize-ai/phoenix"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T01:23:48.827500 | arize_phoenix-13.3.0.tar.gz | 695,803 | 65/81/f491d14dd119017b263f75858993c253cec4540815e3e64c6bc8fad4da5f/arize_phoenix-13.3.0.tar.gz | source | sdist | null | false | d3df0d5aa5e8d02b380e9328dabdafef | f0db989a7f0899ea3bab224928e0012ae6eab25b5979191423c5620ee8586390 | 6581f491d14dd119017b263f75858993c253cec4540815e3e64c6bc8fad4da5f | null | [
"IP_NOTICE",
"LICENSE"
] | 6,775 |
2.4 | confusius | 0.0.1a5 | Python package for analysis and visualization of functional ultrasound imaging data. | 


[](https://doi.org/10.5281/zenodo.18611124)
[](https://codecov.io/github/sdiebolt/confusius)
# ConfUSIus
> [!WARNING]
> ConfUSIus is currently in pre-alpha and under active development. The API is subject
> to change, and features may be incomplete or unstable.
ConfUSIus is a Python package for handling, visualization, preprocessing, and
statistical analysis of functional ultrasound imaging (fUSI) data.
## Installation
### 1. Set up a virtual environment
We recommend that you install ConfUSIus in a virtual environment to avoid dependency
conflicts with other Python packages. Using
[uv](https://docs.astral.sh/uv/guides/install-python/), you may create a new project
folder with a virtual environment as follows:
```bash
uv init new_project
```
If you already have a project folder, you may create a virtual environment as follows:
```bash
uv venv
```
### 2. Install ConfUSIus
ConfUSIus is available on PyPI. Install it using:
```bash
uv add confusius
```
Or with pip:
```bash
pip install confusius
```
To install the latest development version from GitHub:
```bash
uv add git+https://github.com/sdiebolt/confusius.git
```
### 3. Check installation
Check that ConfUSIus is correctly installed by opening a Python interpreter and
importing the package:
```python
import confusius
```
If no error is raised, you have installed ConfUSIus correctly.
| text/markdown | Samuel Le Meur-Diebolt | Samuel Le Meur-Diebolt <samuel@diebolt.io> | null | null | null | null | [
"Development Status :: 2 - Pre-Alpha",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Operating System :: OS Independent",
"Intended Audience :: S... | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"dask[complete]>=2025.9.0",
"h5py>=3.15.1",
"ipython>=9.6.0",
"joblib>=1.5.2",
"joblib-progress>=1.0.6",
"matplotlib>=3.10.7",
"napari[all]>=0.6.6",
"nibabel>=5.0.0",
"numpy>=2.3.4",
"rich>=14.2.0",
"scikit-learn>=1.5.0",
"scipy>=1.16.3",
"simpleitk>=2.5.3",
"xarray[complete]>=2025.1.0",
... | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T01:23:36.630703 | confusius-0.0.1a5-py3-none-any.whl | 103,823 | 6e/f5/09563660e0a5431825b7f1204968fdf14b72a11957ea4aaa9bc78fc9ad25/confusius-0.0.1a5-py3-none-any.whl | py3 | bdist_wheel | null | false | 8d6c32dae4b5aec80b61132ba939c6e8 | dd3075d408fa1b931f0d701116c263b4589208f4b54d6f34b20f95c7c22adf75 | 6ef509563660e0a5431825b7f1204968fdf14b72a11957ea4aaa9bc78fc9ad25 | BSD-3-Clause AND Apache-2.0 AND BSD-2-Clause | [
"LICENSE",
"licenses/LICENSE-Nilearn",
"licenses/LICENSE-Nipype",
"licenses/LICENSE-transforms3d"
] | 214 |
2.4 | gaze-estimation-lib | 0.1.7 | Gaze augmentation stage: attach gaze (yaw/pitch + gaze_vec) using face boxes in upstream JSON. |
# gaze-estimation-lib
**Minimum Python:** `==3.10.*`
**gaze-estimation-lib** is a modular **gaze estimation + JSON augmentation** toolkit that attaches gaze predictions to detections containing face boxes.
This is the **Gaze Augmentation stage** of the Vision Pipeline.
Estimators included:
- **l2cs**: L2CS-Net backend (face-box driven; no internal detector)
> By default, `gaze-estimation-lib` **does not write any files**. You opt-in to saving JSON, frames, or annotated video via flags.
---
## Vision Pipeline
```
Original Video (.mp4)
│
▼
detect-lib
(Detection Stage)
│
└── detections.json (det-v1)
│
▼
track-lib
(Tracking + ReID)
│
└── tracked.json (track-v1)
│
▼
detect-face-lib
(Face Augmentation)
│
└── faces.json (face-v1 meta)
│
▼
gaze-estimation-lib
(Gaze Augmentation)
│
└── gaze.json (augmented; gaze-v1 meta)
```
Stage 1 (Detection):
- PyPI: https://pypi.org/project/detect-lib/
- GitHub: https://github.com/Surya-Rayala/VideoPipeline-detection
Stage 2 (Tracking + ReID):
- PyPI: https://pypi.org/project/gallery-track-lib/
- GitHub: https://github.com/Surya-Rayala/VisionPipeline-gallery-track
Stage 3 (Face Augmentation):
- PyPI: https://pypi.org/project/detect-face-lib/
- GitHub: https://github.com/Surya-Rayala/VisionPipeline-detect-face
Note: Each stage consumes the original video + the upstream JSON from the previous stage.
---
## What gaze-estimation-lib expects
`gaze-estimation-lib` **does not run a face detector**.
Input JSON must contain:
- `frames[*].detections[*]`
- Inside detections: `faces` with valid `bbox`
The parent schema may be:
- `face-v1`
- `det-v1`
- `track-v1`
- or unknown
As long as detections and face boxes exist, normalization will adapt.
---
## Output: augmented JSON (returned + optionally saved)
`gaze-estimation-lib` returns an **augmented JSON payload** in-memory that preserves the upstream schema and adds:
- `gaze_augment`: metadata about the estimator + association rules (versioned)
- `detections[*].gaze`: minimal gaze payload
### What gets attached to a detection
Each gaze entry is intentionally minimal:
- `yaw` (radians)
- `pitch` (radians)
- `gaze_vec`: `[x,y,z]` unit vector
- `face_ind`: which face entry was used
- `origin`: `[x,y]` pixel location (if available)
- `origin_source`: `"kpt"` or `"box"`
No redundant or derivable data is stored.
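(The `gaze_vec` field is derivable from `yaw` and `pitch` under a fixed axis convention. As a rough sketch of that relationship, here is one common camera-frame convention; the signs and axes below are an assumption for illustration, not taken from this library's implementation.)

```python
import math

def angles_to_vec(yaw: float, pitch: float) -> list[float]:
    """Convert yaw/pitch (radians) to a unit gaze vector.

    Assumes a camera frame with x right, y down, z out of the camera;
    the estimator's actual convention may differ.
    """
    x = -math.cos(pitch) * math.sin(yaw)
    y = -math.sin(pitch)
    z = -math.cos(pitch) * math.cos(yaw)
    return [x, y, z]

vec = angles_to_vec(-0.12, 0.08)
# A vector built this way is always unit length:
norm = math.sqrt(sum(c * c for c in vec))
```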
---
## Minimal schema example
```json
{
"schema_version": "track-v1",
"gaze_augment": {
"version": "gaze-v1",
"parent_schema_version": "track-v1",
"estimator": {
"name": "l2cs",
"variant": "resnet50",
"weights": "weights.pkl",
"device": "auto"
}
},
"frames": [
{
"frame_index": 0,
"detections": [
{
"bbox": [100.0, 50.0, 320.0, 240.0],
"faces": [
{
"bbox": [140.0, 70.0, 210.0, 150.0],
"score": 0.98
}
],
"gaze": {
"yaw": -0.12,
"pitch": 0.08,
"gaze_vec": [0.11, -0.08, -0.99],
"face_ind": 0
}
}
]
}
]
}
```
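A payload with this shape needs nothing beyond the standard library to consume. This sketch pulls the gaze angles out of each detection (field names as in the example above, with the payload inlined here for illustration):

```python
import json

# Trimmed-down payload in the shape shown above
payload = json.loads("""
{
  "schema_version": "track-v1",
  "frames": [
    {"frame_index": 0,
     "detections": [
       {"bbox": [100.0, 50.0, 320.0, 240.0],
        "gaze": {"yaw": -0.12, "pitch": 0.08, "face_ind": 0}}
     ]}
  ]
}
""")

rows = []
for frame in payload["frames"]:
    for det in frame["detections"]:
        gaze = det.get("gaze")  # detections without an attached gaze are skipped
        if gaze is not None:
            rows.append((frame["frame_index"], gaze["yaw"], gaze["pitch"]))
```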
---
## Returned vs saved
- **Returned (always):** payload available in memory via `GazeResult.payload`
- **Saved (opt-in):**
- `--json` → `<run>/gaze.json`
- `--frames` → `<run>/frames/`
- `--save-video` → `<run>/annotated.mp4`
If no artifact flags are enabled, nothing is written.
---
# Install with pip (PyPI)
Requires Python >= 3.10.
```bash
pip install gaze-estimation-lib
# Install the L2CS backend (required to run gaze estimation)
pip install "l2cs @ git+https://github.com/edavalosanaya/L2CS-Net.git@main"
```
Module import name remains:
```python
import gaze
```
### Installing the L2CS backend (pip)
PyPI packages cannot declare Git/VCS dependencies. The default `l2cs` backend must be installed separately:
```bash
pip install "l2cs @ git+https://github.com/edavalosanaya/L2CS-Net.git@main"
```
If you already installed `gaze-estimation-lib`, you can run the command above at any time to add the backend.
### CUDA note (optional)
If you want GPU acceleration on NVIDIA CUDA, install a **CUDA-matching** build of **torch** and **torchvision**.
If you installed CPU-only wheels by accident, uninstall and reinstall the correct CUDA wheels (use the official PyTorch selector for your CUDA version).
```bash
pip uninstall -y torch torchvision
# then install the CUDA-matching wheels for your system
# (see: https://pytorch.org/get-started/locally/)
```
---
## L2CS Weights
Pretrained weights:
https://drive.google.com/drive/folders/17p6ORr-JQJcw-eYtG2WGNiuS_qVKwdWd?usp=sharing
Currently supported variant:
- `resnet50`
If using custom weights, ensure they match the correct L2CS variant.
---
# CLI Usage (pip or installed package)
Global help:
```bash
python -m gaze.cli.estimate_gaze -h
```
List estimators:
```bash
python -m gaze.cli.estimate_gaze --list-estimators
```
List variants:
```bash
python -m gaze.cli.estimate_gaze --estimator l2cs --list-variants
```
---
## Quick Start
```bash
python -m gaze.cli.estimate_gaze \
--json-in faces.json \
--video in.mp4 \
--weights weights.pkl
```
---
## Save artifacts (opt-in)
```bash
python -m gaze.cli.estimate_gaze \
--json-in faces.json \
--video in.mp4 \
--weights weights.pkl \
--json \
--frames \
--save-video annotated.mp4 \
--out-dir out --run-name demo
```
---
## CLI arguments
### Required (for running augmentation)
- `--json-in <path>`: Path to the upstream JSON to augment.
- Accepts `face-v1`, `det-v1`, `track-v1`, or unknown schemas as long as the JSON contains `frames[*].detections[*]`.
- `--video <path>`: Path to the original source video used to generate the upstream JSON. Frame order must align.
- `--weights <path>`: Path to L2CS weights (`.pkl`).
### Discovery
- `--list-estimators`: Print available gaze estimator backends and exit.
- `--list-variants`: Print supported variants for `--estimator` and exit.
### Estimator selection
- `--estimator <name>`: Gaze estimator backend to use (default: `l2cs`).
- `--variant <name>`: Backend variant (named variant registry).
- For `l2cs`, this selects the backbone. **The pretrained weights linked above currently support only `resnet50`.**
### Face crop behavior
- `--expand-face <float>`: Expand each face box by this fraction before cropping.
- Example: `--expand-face 0.25` expands width/height by +25%.
- Increase → includes more context (forehead/hair/ears); can improve stability but may include background.
- Decrease → tighter crop; can be sharper but may clip parts of the face.
- Practical range: `0.0–0.35` (start around `0.2–0.3`).
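The expansion behind `--expand-face` amounts to scaling the box's width and height about its center. A minimal sketch of that arithmetic (a hypothetical helper, not the library's exact implementation, and without clamping to image bounds):

```python
def expand_box(bbox, frac):
    """Expand an [x1, y1, x2, y2] box by `frac` of its width/height,
    keeping the center fixed. No image-boundary clamping here."""
    x1, y1, x2, y2 = bbox
    dw = (x2 - x1) * frac / 2.0
    dh = (y2 - y1) * frac / 2.0
    return [x1 - dw, y1 - dh, x2 + dw, y2 + dh]

# --expand-face 0.25 grows a 70x80 face box to 87.5x100
box = expand_box([140.0, 70.0, 210.0, 150.0], 0.25)
```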
### Association / filtering
- `--associate-classes <ids...>`: Only attach gaze to detections whose `class_id` is in this list.
- Example: `--associate-classes 0` (often `person`).
- If omitted, `gaze-lib` tries to infer `class_name == "person"`; if not found, all classes are eligible.
- `--face-index <int>`: Which face entry to use per detection.
- If set, always uses that index when present.
- If omitted, uses the **highest-score face** in `faces`.
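The selection rule above can be sketched as follows; the function and field names are hypothetical (not the library's API):

```python
# Hypothetical sketch of how a face is chosen per detection: a fixed index
# when --face-index is given, otherwise the highest-score face. The real
# implementation may differ in edge-case handling.
def pick_face(faces, face_index=None):
    if not faces:
        return None
    if face_index is not None and 0 <= face_index < len(faces):
        return faces[face_index]
    return max(faces, key=lambda f: f.get("score", 0.0))

faces = [{"score": 0.4}, {"score": 0.9}, {"score": 0.7}]
print(pick_face(faces))                # highest score wins
print(pick_face(faces, face_index=2))  # explicit index wins
```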
### Gaze origin behavior (optional)
- `--kpt-origin <ids...>`: Keypoint indices (from `detections[*].keypoints`) used to compute a gaze origin.
- Origin is computed as the **mean of the selected keypoints that pass confidence**.
- Example: `--kpt-origin 0 1`.
- `--kpt-conf <float>`: Minimum keypoint confidence for origin computation (default: `0.3`).
- Increase → fewer keypoints qualify (more robust, but more detections may fall back/skip).
- Decrease → more keypoints qualify (more coverage, but noisier origins).
- `--fallback`: If set, when keypoint-origin is unavailable, fall back to the face box center (preferred) or detection box center.
- If not set and `--kpt-origin` is provided, detections without a valid keypoint-origin are skipped.
If you pass `--kpt-origin` but the JSON contains no keypoints, `gaze-estimation-lib` emits a warning and continues.
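The origin rule above (mean of selected keypoints passing confidence, with optional fallback) can be sketched like this; the helper and its arguments are illustrative assumptions, not the library's API:

```python
# Hypothetical sketch of the origin rule: average the selected keypoints
# that pass the confidence threshold, else use an optional fallback center.
def keypoint_origin(keypoints, ids, conf=0.3, fallback_center=None):
    pts = [(x, y) for i, (x, y, c) in enumerate(keypoints)
           if i in ids and c >= conf]
    if pts:
        n = len(pts)
        return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)
    return fallback_center  # None means: skip this detection

kpts = [(10, 10, 0.9), (30, 10, 0.8), (50, 50, 0.1)]
print(keypoint_origin(kpts, ids=[0, 1, 2]))                      # (20.0, 10.0)
print(keypoint_origin(kpts, ids=[2], fallback_center=(40, 40)))  # (40, 40)
```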
### Artifact saving (all opt-in)
- `--json`: Write augmented JSON to `<run>/gaze.json`.
- `--frames`: Save annotated frames under `<run>/frames/`.
- `--save-video [name.mp4]`: Save an annotated video under `<run>/`.
- `--out-dir <dir>`: Output root used only when saving artifacts (default: `out`).
- `--run-name <name>`: Optional subfolder under `--out-dir`.
- `--fourcc <fourcc>`: FourCC codec for saved video (default: `mp4v`).
- `--display`: Show a live annotated preview (ESC to stop). Does not write files unless saving flags are set.
### UX
- `--no-progress`: Disable progress bar.
---
# Python usage (import)
You can use `gaze-estimation-lib` as a library after installing it.
### Quick sanity check
```bash
python -c "import gaze; print(gaze.available_estimators()); print(gaze.available_variants('l2cs'))"
```
### Python API reference (keywords)
#### `gaze.estimate_gaze_video(...)`
**Required**
- `json_in`: Path to the upstream JSON.
- `video`: Path to the original source video.
- `weights`: Path to L2CS weights (`.pkl`).
**Estimator**
- `estimator`: Backend name (default: `"l2cs"`).
- `variant`: Named variant for the backend.
- For L2CS pretrained weights linked above, use `"resnet50"`.
- `device`: `"auto"`, `"cpu"`, `"mps"`, `"cuda"`, `"cuda:0"`, or an index string like `"0"`.
- `expand_face`: Expand face crop by fraction (`0.0–0.35`, start `0.2–0.3`).
**Association / selection**
- `associate_class_ids`: List of `class_id` values eligible for gaze attachment.
- If `None`, the tool tries to infer `class_name == "person"`; if not found, all classes are eligible.
- `face_index`: If set, use that face index per detection; otherwise choose the highest-score face.
**Origin (optional)**
- `kpt_origin`: List of keypoint indices used to compute gaze origin.
- `kpt_conf`: Keypoint confidence threshold.
- `fallback`: If `True`, fall back to box center when keypoint-origin is unavailable.
**Artifacts (all off by default)**
- `save_json_flag`: Write `<run>/gaze.json`.
- `save_frames`: Write `<run>/frames/*.jpg`.
- `save_video`: Filename for annotated video under the run folder.
- `out_dir`, `run_name`, `fourcc`, `display`, `no_progress`.
Returns a `GazeResult` with `payload` (augmented JSON), `paths` (only populated when saving), and `stats`.
### Run gaze augmentation from a Python file
Create `run_gaze.py`:
```python
from gaze import estimate_gaze_video
res = estimate_gaze_video(
json_in="faces.json",
video="in.mp4",
estimator="l2cs",
variant="resnet50",
weights="weights.pkl",
device="auto",
# Optional filtering
associate_class_ids=[0],
# Optional crop tuning
expand_face=0.25,
# Optional origin behavior
kpt_origin=[0, 1],
kpt_conf=0.3,
fallback=True,
# Opt-in artifacts
save_json_flag=True,
save_video="annotated.mp4",
out_dir="out",
run_name="demo",
)
print(res.stats)
print("gaze_augment" in res.payload)
print(res.paths) # populated only if you enable saving artifacts
```
Run:
```bash
python run_gaze.py
```
# Using uv (recommended for development)
Install uv:
https://docs.astral.sh/uv/
Clone the repo:
```bash
git clone https://github.com/Surya-Rayala/VisionPipeline-gaze.git
cd VisionPipeline-gaze
```
Sync environment:
```bash
uv sync
```
### Installing the L2CS backend (uv)
Add the backend to your local uv environment from Git:
```bash
uv add --dev "l2cs @ git+https://github.com/edavalosanaya/L2CS-Net.git@main"
uv sync
```
Note: this updates your local project environment; it is intended for development/use in this repo.
Run CLI:
```bash
uv run python -m gaze.cli.estimate_gaze -h
```
Run augmentation:
```bash
uv run python -m gaze.cli.estimate_gaze \
--json-in faces.json \
--video in.mp4 \
--weights weights.pkl
```
---
### CUDA note (optional)
For best performance on NVIDIA GPUs, make sure **torch** and **torchvision** are installed with a build that matches your CUDA toolkit / driver stack.
If you added CPU-only builds earlier, remove them and add the correct CUDA wheels, then re-sync.
```bash
uv remove torch torchvision
# then add the CUDA-matching wheels for your system
# (see: https://pytorch.org/get-started/locally/)
uv add <compatible torch torchvision>
uv sync
```
---
# License
This project is licensed under the **MIT License**. See `LICENSE`. | text/markdown | null | Surya Chand Rayala <suryachand2k1@gmail.com> | null | null | MIT License Copyright (c) 2026 Surya Chand Rayala Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | null | [] | [] | null | null | ==3.10.* | [] | [] | [] | [
"tqdm>=4.67.3"
] | [] | [] | [] | [] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T01:22:37.543464 | gaze_estimation_lib-0.1.7.tar.gz | 126,743 | db/32/ead7caf46657659ac8d5556190081602ec643850841e213e0e68f6c6c80b/gaze_estimation_lib-0.1.7.tar.gz | source | sdist | null | false | fc95eaac0cb68a366613246498882f60 | a2476f656825dfcaeb7f7dac77a26f62bfda717e4c91b64fd53691c29aec880a | db32ead7caf46657659ac8d5556190081602ec643850841e213e0e68f6c6c80b | null | [
"LICENSE"
] | 225 |
2.4 | grpcio-csm-observability | 1.78.1 | gRPC Python CSM observability package | gRPC Python CSM Observability
=============================
Package for gRPC Python CSM Observability.
Installation
------------
Currently gRPC Python CSM Observability is **only available for Linux**.
Installing From PyPI
~~~~~~~~~~~~~~~~~~~~
::
$ pip install grpcio-csm-observability
Installing From Source
~~~~~~~~~~~~~~~~~~~~~~
::
$ export REPO_ROOT=grpc # REPO_ROOT can be any directory of your choice
$ git clone -b RELEASE_TAG_HERE https://github.com/grpc/grpc $REPO_ROOT
$ cd $REPO_ROOT
$ git submodule update --init
$ cd src/python/grpcio_csm_observability
# For the next command do `sudo pip install` if you get permission-denied errors
$ pip install .
Dependencies
------------
gRPC Python CSM Observability depends on the following packages:
::
grpcio
grpcio-observability
opentelemetry-sdk
Usage
-----
Example usage is similar to `the example here <https://github.com/grpc/grpc/tree/master/examples/python/observability>`_; however, instead of importing from ``grpc_observability``, import from ``grpc_csm_observability``:
.. code-block:: python
import grpc_csm_observability
csm_otel_plugin = grpc_csm_observability.CsmOpenTelemetryPlugin(
meter_provider=provider
)
We also provide several environment variables to help you tune gRPC Python observability for your particular use.
* Note: The term "Census" here is just for historical backwards compatibility reasons and does not imply any dependencies.
1. GRPC_PYTHON_CENSUS_EXPORT_BATCH_INTERVAL
* This controls how frequently telemetry data collected within gRPC Core is sent to the Python layer.
* Default value is 0.5 (seconds).
2. GRPC_PYTHON_CENSUS_MAX_EXPORT_BUFFER_SIZE
* This controls the maximum number of telemetry data items that can be held in the buffer within gRPC Core before they are sent to Python.
* Default value is 10,000.
3. GRPC_PYTHON_CENSUS_EXPORT_THRESHOLD
* This setting acts as a trigger: when the buffer in gRPC Core reaches this fraction of its capacity, the telemetry data is sent to Python.
* Default value is 0.7 (export starts when the buffer is 70% full).
4. GRPC_PYTHON_CENSUS_EXPORT_THREAD_TIMEOUT
* This controls the maximum time allowed for the exporting thread (responsible for sending data to Python) to complete.
* The main thread will terminate the exporting thread after this timeout.
* Default value is 10 (seconds).
| text/x-rst | null | The gRPC Authors <grpc-io@googlegroups.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"opentelemetry-sdk>=1.25.0",
"opentelemetry-resourcedetector-gcp>=1.6.0a0",
"grpcio==1.78.1",
"protobuf<7.0.0,>=6.31.1"
] | [] | [] | [] | [
"Homepage, https://grpc.io",
"Source Code, https://github.com/grpc/grpc/tree/master/src/python/grpcio_csm_observability",
"Bug Tracker, https://github.com/grpc/grpc/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T01:21:54.869288 | grpcio_csm_observability-1.78.1.tar.gz | 16,053 | 3c/02/e64dbb99efddcc19a21b7d374d29d596e402a76148279b3c03da989d91e2/grpcio_csm_observability-1.78.1.tar.gz | source | sdist | null | false | 57f3ebd540ca60b988e51e9f0038ab45 | be2bad24c1f5f84fced9d8b2b63df53768e0cc314229df0cc9dbaa4eaad01091 | 3c02e64dbb99efddcc19a21b7d374d29d596e402a76148279b3c03da989d91e2 | Apache-2.0 | [
"LICENSE"
] | 216 |
2.4 | grpcio-admin | 1.78.1 | a collection of admin services | gRPC Python Admin Interface Package
===================================
Debugging the gRPC library can be a complex task: there are many configurations and
internal states that affect its behavior. This Python
package is a collection of admin services that expose debug
information. Currently, it includes:
* Channel tracing metrics (grpcio-channelz)
* Client Status Discovery Service (grpcio-csds)
Here is a snippet to create an admin server on "localhost:50051":
from concurrent.futures import ThreadPoolExecutor
import grpc
import grpc_admin
server = grpc.server(ThreadPoolExecutor())
port = server.add_insecure_port('localhost:50051')
grpc_admin.add_admin_servicers(server)
server.start()
You are welcome to explore the admin services with the CLI tool "grpcdebug":
https://github.com/grpc-ecosystem/grpcdebug.
For any issues or suggestions, please file them at
https://github.com/grpc/grpc/issues.
| text/x-rst | null | The gRPC Authors <grpc-io@googlegroups.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"grpcio-channelz>=1.78.1",
"grpcio-csds>=1.78.1"
] | [] | [] | [] | [
"Homepage, https://grpc.io",
"Documentation, https://grpc.github.io/grpc/python/grpc_admin.html"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T01:21:53.624820 | grpcio_admin-1.78.1.tar.gz | 12,219 | 5e/27/4da5559431b13b213900a138f2844798c6a8aed404202b7aa6c416df1571/grpcio_admin-1.78.1.tar.gz | source | sdist | null | false | 32ce833d6d7eeac085a3706191bc963f | a679c24c8938c40160809d37bc9e7008584af1e6342c508b70d86b9415564337 | 5e274da5559431b13b213900a138f2844798c6a8aed404202b7aa6c416df1571 | Apache-2.0 | [
"LICENSE"
] | 239 |
2.4 | grpcio-csds | 1.78.1 | xDS configuration dump library | gRPC Python Client Status Discovery Service package
===================================================
CSDS is part of the Envoy xDS protocol:
https://www.envoyproxy.io/docs/envoy/latest/api-v3/service/status/v3/csds.proto.
It allows a gRPC application to programmatically expose its received traffic
configuration (xDS resources). You are welcome to explore it with the CLI tool "grpcdebug":
https://github.com/grpc-ecosystem/grpcdebug.
For any issues or suggestions, please send to https://github.com/grpc/grpc/issues.
| text/x-rst | null | The gRPC Authors <grpc-io@googlegroups.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"protobuf<7.0.0,>=6.31.1",
"xds-protos==1.78.1",
"grpcio>=1.78.1"
] | [] | [] | [] | [
"Homepage, https://grpc.io",
"Documentation, https://grpc.github.io/grpc/python/grpc_csds.html"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T01:21:52.862516 | grpcio_csds-1.78.1.tar.gz | 12,201 | 83/99/b917aeefa73bd283f13a8d04efa05cc8117d4254e9232854950e3588c8f6/grpcio_csds-1.78.1.tar.gz | source | sdist | null | false | e4ceb40f8922249ad1970f12ebe8706b | 1ccfb6b9fcb78316100abf7faf33be3a5043c5094bc13295ad0864206dcb245c | 8399b917aeefa73bd283f13a8d04efa05cc8117d4254e9232854950e3588c8f6 | Apache-2.0 | [
"LICENSE"
] | 340 |
2.4 | grpcio-channelz | 1.78.1 | Channel Level Live Debug Information Service for gRPC | gRPC Python Channelz package
==============================
Channelz is a live debug tool in gRPC Python.
Dependencies
------------
Depends on the `grpcio` package, available from PyPI via `pip install grpcio`.
| text/x-rst | null | The gRPC Authors <grpc-io@googlegroups.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming La... | [] | null | null | >=3.9 | [] | [] | [] | [
"protobuf<7.0.0,>=6.31.1",
"grpcio>=1.78.1"
] | [] | [] | [] | [
"Homepage, https://grpc.io",
"Documentation, https://grpc.github.io/grpc/python/grpc_channelz.html"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T01:21:52.080666 | grpcio_channelz-1.78.1.tar.gz | 22,468 | 92/d5/4bbfd764b299e340372d51efc2973baa1b1f99ee03e75afedf3ba8411e83/grpcio_channelz-1.78.1.tar.gz | source | sdist | null | false | a20e510265315d7983c29d1eff04dd83 | 3a75296ae0e42c842c65055e0b8e67656b6a02290865474fc464f13cabe9df94 | 92d54bbfd764b299e340372d51efc2973baa1b1f99ee03e75afedf3ba8411e83 | Apache-2.0 | [
"LICENSE"
] | 772 |
2.4 | grpcio-status | 1.78.1 | Status proto mapping for gRPC | gRPC Python Status Proto
===========================
Reference package for gRPC Python status proto mapping.
Dependencies
------------
Depends on the `grpcio` package, available from PyPI via `pip install grpcio`.
| text/x-rst | null | The gRPC Authors <grpc-io@googlegroups.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming La... | [] | null | null | >=3.9 | [] | [] | [] | [
"protobuf<7.0.0,>=6.31.1",
"grpcio>=1.78.1",
"googleapis-common-protos>=1.5.5"
] | [] | [] | [] | [
"Homepage, https://grpc.io",
"Documentation, https://grpc.github.io/grpc/python/grpc_status.html"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T01:21:50.761444 | grpcio_status-1.78.1.tar.gz | 13,814 | 73/be/0a88b27a058d3a640bbe42e2b4e1323a19cabcedaeab1b3a44af231777e9/grpcio_status-1.78.1.tar.gz | source | sdist | null | false | b2b791095a626b13e9f1d73d536c12b0 | 47e7fa903549c5881344f1cba23c814b5f69d09233541036eb25642d32497c8e | 73be0a88b27a058d3a640bbe42e2b4e1323a19cabcedaeab1b3a44af231777e9 | Apache-2.0 | [
"LICENSE"
] | 4,039,878 |
2.4 | grpcio-reflection | 1.78.1 | Standard Protobuf Reflection Service for gRPC | gRPC Python Reflection package
==============================
Reference package for reflection in gRPC Python.
Dependencies
------------
Depends on the `grpcio` package, available from PyPI via `pip install grpcio`.
| text/x-rst | null | The gRPC Authors <grpc-io@googlegroups.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming La... | [] | null | null | >=3.9 | [] | [] | [] | [
"protobuf<7.0.0,>=6.31.1",
"grpcio>=1.78.1"
] | [] | [] | [] | [
"Homepage, https://grpc.io",
"Documentation, https://grpc.github.io/grpc/python/grpc_reflection.html"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T01:21:49.703362 | grpcio_reflection-1.78.1.tar.gz | 19,116 | 4d/f1/389fbbff18e84fdde609114eae37e74d8ae23e3f48266769d1fe2486b754/grpcio_reflection-1.78.1.tar.gz | source | sdist | null | false | ab62a77825f104dc81dd72d89327e47d | 224c0d604207954923fd6f8dbec541e0976a64ab1be65d2ee40844ce16c762ab | 4df1389fbbff18e84fdde609114eae37e74d8ae23e3f48266769d1fe2486b754 | Apache-2.0 | [
"LICENSE"
] | 106,892 |
2.4 | grpcio-testing | 1.78.1 | Testing utilities for gRPC Python | gRPC Python Testing Package
===========================
Testing utilities for gRPC Python
Dependencies
------------
Depends on the `grpcio` package, available from PyPI via `pip install grpcio`.
| text/x-rst | null | The gRPC Authors <grpc-io@googlegroups.com> | null | null | null | null | [
"Programming Language :: Python",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"protobuf<7.0.0,>=6.31.1",
"grpcio>=1.78.1"
] | [] | [] | [] | [
"Homepage, https://grpc.io",
"Documentation, https://grpc.github.io/grpc/python/grpc_testing.html"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T01:21:48.609195 | grpcio_testing-1.78.1.tar.gz | 23,030 | 88/7a/c062805650aaa22136b812f5992e1b293155c4fd1718ed906b6b26dfae1a/grpcio_testing-1.78.1.tar.gz | source | sdist | null | false | ecebe18f37d298dfd75ec4c81dbe5604 | da820e22f3a081cf40845c916bebf04036b85666a31eaed8fdedfa4fba9a6f66 | 887ac062805650aaa22136b812f5992e1b293155c4fd1718ed906b6b26dfae1a | Apache-2.0 | [
"LICENSE"
] | 2,036 |
2.4 | grpcio-health-checking | 1.78.1 | Standard Health Checking Service for gRPC | gRPC Python Health Checking
===========================
Reference package for GRPC Python health checking.
Dependencies
------------
Depends on the `grpcio` package, available from PyPI via `pip install grpcio`.
| text/x-rst | null | The gRPC Authors <grpc-io@googlegroups.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming La... | [] | null | null | >=3.9 | [] | [] | [] | [
"protobuf<7.0.0,>=6.31.1",
"grpcio>=1.78.1"
] | [] | [] | [] | [
"Homepage, https://grpc.io",
"Documentation, https://grpc.github.io/grpc/python/grpc_health_checking.html"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T01:21:47.791539 | grpcio_health_checking-1.78.1.tar.gz | 17,012 | 3f/c8/4eb8869ec990cdd5a4b50803c2e89826c99eab69d438d79d07ce171a9bf0/grpcio_health_checking-1.78.1.tar.gz | source | sdist | null | false | c8335acf974d933bf0806407b382569c | 563cba3cfa776ae739153dc89b14ddd75d49dba317f82c23eaf20e5b3a01f554 | 3fc84eb8869ec990cdd5a4b50803c2e89826c99eab69d438d79d07ce171a9bf0 | Apache-2.0 | [
"LICENSE"
] | 278,391 |
2.4 | xds-protos | 1.78.1 | Generated Python code from envoyproxy/data-plane-api | Package "xds-protos" is a collection of ProtoBuf generated Python files for xDS protos (or the `data-plane-api <https://github.com/envoyproxy/data-plane-api>`_). You can find the source code of this project in `grpc/grpc <https://github.com/grpc/grpc>`_. For any question or suggestion, please post to https://github.com/grpc/grpc/issues.
Each generated Python file can be imported according to its proto package. For example, if we are trying to import a proto located at "envoy/service/status/v3/csds.proto", whose proto file declares ``package envoy.service.status.v3``, then we can import it as:
::
# Import the message definitions
from envoy.service.status.v3 import csds_pb2
# Import the gRPC service and stub
from envoy.service.status.v3 import csds_pb2_grpc
| text/x-rst | null | The gRPC Authors <grpc-io@googlegroups.com> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"grpcio>=1.74.0",
"protobuf<7.0.0,>=6.31.1"
] | [] | [] | [] | [
"Homepage, https://grpc.io"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T01:21:46.655304 | xds_protos-1.78.1.tar.gz | 501,296 | 56/0a/f3deb76d40f2d04592ce707e77659d3af141ae66707ceee19ac77a7dd39a/xds_protos-1.78.1.tar.gz | source | sdist | null | false | e11123d3279e249eaec6dc3bd2ac3f1c | 0597d31241d21f83789b97914653e61919c0ff925e452c3f72eec3ed7d8fd70a | 560af3deb76d40f2d04592ce707e77659d3af141ae66707ceee19ac77a7dd39a | Apache-2.0 | [] | 974 |
2.4 | grpcio-observability | 1.78.1 | gRPC Python observability package | gRPC Python Observability
=========================
Package for gRPC Python Observability.
More details can be found in `OpenTelemetry Metrics gRFC <https://github.com/grpc/proposal/blob/master/A66-otel-stats.md#opentelemetry-metrics>`_.
How gRPC Python Observability Works
-----------------------------------
gRPC Python is a wrapper layer built upon gRPC Core (written in C/C++). Most of the telemetry data
is collected at the core layer and then exported to the Python layer. To optimize performance and reduce
the overhead of acquiring the GIL too frequently, telemetry data is initially cached at the Core layer
and then exported to the Python layer in batches.
Note that while this approach enhances efficiency, it introduces a slight delay between the
time the data is collected and the time it becomes available through Python exporters.
Installation
------------
Currently gRPC Python Observability is **only available for Linux**.
Installing From PyPI
~~~~~~~~~~~~~~~~~~~~
::
$ pip install grpcio-observability
Installing From Source
~~~~~~~~~~~~~~~~~~~~~~
Building from source requires that you have the Python headers (usually a
package named :code:`python-dev`) and Cython installed. It further requires a
GCC-like compiler to go smoothly; you can probably get it to work without
GCC-like stuff, but you may end up having a bad time.
::
$ export REPO_ROOT=grpc # REPO_ROOT can be any directory of your choice
$ git clone -b RELEASE_TAG_HERE https://github.com/grpc/grpc $REPO_ROOT
$ cd $REPO_ROOT
$ git submodule update --init
$ cd src/python/grpcio_observability
$ python -m make_grpcio_observability
# For the next command do `sudo pip install` if you get permission-denied errors
$ GRPC_PYTHON_BUILD_WITH_CYTHON=1 pip install .
Dependencies
------------
gRPC Python Observability depends on the following packages:
::
grpcio
opentelemetry-api
Usage
-----
You can find example usage in `Python example folder <https://github.com/grpc/grpc/tree/master/examples/python/observability>`_.
We also provide several environment variables to help you tune gRPC Python observability for your particular use.
1. GRPC_PYTHON_CENSUS_EXPORT_BATCH_INTERVAL
* This controls how frequently telemetry data collected within gRPC Core is sent to the Python layer.
* Default value is 0.5 (seconds).
2. GRPC_PYTHON_CENSUS_MAX_EXPORT_BUFFER_SIZE
* This controls the maximum number of telemetry data items that can be held in the buffer within gRPC Core before they are sent to Python.
* Default value is 10,000.
3. GRPC_PYTHON_CENSUS_EXPORT_THRESHOLD
* This setting acts as a trigger: when the buffer in gRPC Core reaches this fraction of its capacity, the telemetry data is sent to Python.
* Default value is 0.7 (export starts when the buffer is 70% full).
4. GRPC_PYTHON_CENSUS_EXPORT_THREAD_TIMEOUT
* This controls the maximum time allowed for the exporting thread (responsible for sending data to Python) to complete.
* The main thread will terminate the exporting thread after this timeout.
* Default value is 10 (seconds).
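For example, to flush telemetry to the Python layer more frequently (at the cost of acquiring the GIL more often), you can lower the batch interval before starting your application; the values below are illustrative:

::

   $ export GRPC_PYTHON_CENSUS_EXPORT_BATCH_INTERVAL=0.2
   $ export GRPC_PYTHON_CENSUS_MAX_EXPORT_BUFFER_SIZE=5000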
| text/x-rst | null | The gRPC Authors <grpc-io@googlegroups.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"grpcio==1.78.1",
"setuptools>=77.0.1",
"opentelemetry-api>=1.21.0"
] | [] | [] | [] | [
"Homepage, https://grpc.io",
"Source Code, https://github.com/grpc/grpc/tree/master/src/python/grpcio_observability",
"Bug Tracker, https://github.com/grpc/grpc/issues",
"Documentation, https://grpc.github.io/grpc/python/grpc_observability.html"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T01:21:44.511522 | grpcio_observability-1.78.1.tar.gz | 6,353,060 | aa/77/7a4eabe4b97340034d638816d01b7b1edef2da802e3415c0e2732041f2c7/grpcio_observability-1.78.1.tar.gz | source | sdist | null | false | fb07bb57248ad4c7029f5a14ce2db6e5 | 269119dab70f4ad95470d03263b93568b964281a389067a46272ba251cbd1a1c | aa777a4eabe4b97340034d638816d01b7b1edef2da802e3415c0e2732041f2c7 | Apache-2.0 | [
"LICENSE"
] | 3,088 |
2.2 | photonforge | 1.4.dev0 | PhotonForge is a design tool that integrates with foundry PDKs to speed up the design, simulation, and verification cycle for optical components and systems. | # PhotonForge
PhotonForge is a design tool that integrates with foundry PDKs to speed up the
design, simulation, and verification cycle for optical components and systems.
## Documentation
The online documentation can be found
[here](https://docs.flexcompute.com/projects/photonforge/).
Development docs:
- [docs/development.md](docs/development.md): local development setup.
- [docs/architecture.md](docs/architecture.md): general code structure and
architecture.
- [docs/testing.md](docs/testing.md): testing infrastructure.
- [docs/deployment.md](docs/deployment.md): how to release a new version.
| text/markdown | null | "Flexcompute Inc." <support@flexcompute.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.22",
"scipy>=1.15",
"tidy3d[trimesh]==2.11.0.dev0",
"pydantic<3,>=2.9",
"uvicorn>=0.34",
"fastapi>=0.115",
"jinja2>=3.1",
"build>=1.3; extra == \"dev\"",
"ipykernel>=6.10; extra == \"dev\"",
"ipywidgets>=8.0; extra == \"dev\"",
"jsonschema>=4.25; extra == \"dev\"",
"pytest>=7.2; extr... | [] | [] | [] | [
"homepage, https://www.flexcompute.com/",
"documentation, https://docs.flexcompute.com/projects/photonforge/"
] | twine/6.1.0 CPython/3.12.8 | 2026-02-20T01:20:47.523985 | photonforge-1.4.dev0-cp313-cp313-win_amd64.whl | 3,951,627 | 57/72/94ef3e0e7cdded4bb0d615b7815027ee3e620cafe4613cec8566b83c3295/photonforge-1.4.dev0-cp313-cp313-win_amd64.whl | cp313 | bdist_wheel | null | false | 6769904a5cae19168854781e9b3140bb | 3200fa0ea621dd95f9418a101f72ed5085812d4666531b79cd58489d8f684b58 | 577294ef3e0e7cdded4bb0d615b7815027ee3e620cafe4613cec8566b83c3295 | null | [] | 2,160 |
2.4 | reflex-enterprise | 0.5.4 | Package containing the paid features for Reflex. [Pro/Team/Enterprise] | # How to install reflex_enterprise.
```bash
pip install reflex-enterprise
```
# How to use reflex_enterprise.
In the main file, instead of using `rx.App()` to create your app, use the following:
## In the main file
```python
import reflex_enterprise as rxe
...
rxe.App()
...
```
## In rxconfig.py
```python
import reflex_enterprise as rxe
config = rxe.Config(
app_name="MyApp",
... # you can pass all rx.Config arguments as well as the one specific to rxe.Config
)
```
### Enterprise features
| Feature | Description | Minimum Tier (Cloud) | Minimum Tier (Self-hosted) |
| --- | --- | --- | --- |
| `show_built_with_reflex` | Toggle the "Built with Reflex" badge. | Pro | Team |
| `use_single_port` | Enable one-port by proxying from backend to frontend. | - | Team |
| text/markdown | null | Nikhil Rao <nikhil@reflex.dev>, Alek Petuskey <alek@reflex.dev>, Masen Furer <masen@reflex.dev>, Thomas Brandého <thomas@reflex.dev> | null | null | null | python, reflex, reflex-enterprise | [] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"asgiproxy>=0.2.0",
"httpx",
"joserfc",
"psutil",
"reflex>=0.8.0"
] | [] | [] | [] | [
"homepage, https://reflex.dev/",
"documentation, https://enterprise.reflex.dev"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T01:19:59.848395 | reflex_enterprise-0.5.4-py3-none-any.whl | 214,472 | 15/58/0c196617fddff7b7a001d92f14adaf9571fd5e06c056a94b218a0881c439/reflex_enterprise-0.5.4-py3-none-any.whl | py3 | bdist_wheel | null | false | 5e165237001cd82c82aed74355569ee8 | c23e3971011c0a5cee7d6e47e1f646a798d989d90ac38d484d71a91acb67b820 | 15580c196617fddff7b7a001d92f14adaf9571fd5e06c056a94b218a0881c439 | LicenseRef-Proprietary | [
"LICENSE"
] | 663 |
2.4 | grpcio-tools | 1.78.1 | Protobuf code generator for gRPC | gRPC Python Tools
=================
Package for gRPC Python tools.
Supported Python Versions
-------------------------
Python >= 3.9
Installation
------------
The gRPC Python tools package is available for Linux, macOS, and Windows.
Installing From PyPI
~~~~~~~~~~~~~~~~~~~~
If you are installing locally...
::
$ pip install grpcio-tools
Else system wide (on Ubuntu)...
::
$ sudo pip install grpcio-tools
If you're on Windows make sure that you installed the :code:`pip.exe` component
when you installed Python (if not go back and install it!) then invoke:
::
$ pip.exe install grpcio-tools
Windows users may need to invoke :code:`pip.exe` from a command line run as
administrator.
n.b. On Windows and on macOS one *must* have a recent release of :code:`pip`
to retrieve the proper wheel from PyPI. Be sure to upgrade to the latest
version!
You might also need to install Cython to handle installation via the source
distribution if gRPC Python's system coverage with wheels does not happen to
include your system.
Installing From Source
~~~~~~~~~~~~~~~~~~~~~~
Building from source requires that you have the Python headers (usually a
package named :code:`python-dev`) and Cython installed. It further requires a
GCC-like compiler to go smoothly; you can probably get it to work without
GCC-like stuff, but you may end up having a bad time.
::

    $ export REPO_ROOT=grpc  # REPO_ROOT can be any directory of your choice
    $ git clone -b RELEASE_TAG_HERE https://github.com/grpc/grpc $REPO_ROOT
    $ cd $REPO_ROOT
    $ git submodule update --init
    $ cd tools/distrib/python/grpcio_tools
    $ python ../make_grpcio_tools.py
    # For the next command, use `sudo pip install` if you get permission-denied errors
    $ GRPC_PYTHON_BUILD_WITH_CYTHON=1 pip install .
You cannot currently install this package from source on Windows. Things might work
out for you in MSYS2 (follow the Linux instructions), but it isn't officially
supported at the moment.
Troubleshooting
~~~~~~~~~~~~~~~
Help, I ...
* **... see compiler errors on some platforms when either installing from source or from the source distribution**
If you see
::

    /tmp/pip-build-U8pSsr/cython/Cython/Plex/Scanners.c:4:20: fatal error: Python.h: No such file or directory
     #include "Python.h"
              ^
    compilation terminated.
You can fix it by installing the :code:`python-dev` package, e.g.

::

    $ sudo apt-get install python-dev
If you see something similar to:
::

    third_party/protobuf/src/google/protobuf/stubs/mathlimits.h:173:31: note: in expansion of macro 'SIGNED_INT_MAX'
         static const Type kPosMax = SIGNED_INT_MAX(Type); \
                                     ^
And your toolchain is GCC (at the time of this writing, up through at least
GCC 6.0), this is probably a bug where GCC chokes on constant expressions
when the :code:`-fwrapv` flag is specified. You should consider setting your
environment with :code:`CFLAGS=-fno-wrapv` or using clang (:code:`CC=clang`).
Usage
-----
Given protobuf include directories :code:`$INCLUDE`, an output directory
:code:`$OUTPUT`, and proto files :code:`$PROTO_FILES`, invoke as:
::

    $ python -m grpc_tools.protoc -I$INCLUDE --python_out=$OUTPUT --grpc_python_out=$OUTPUT $PROTO_FILES
To use as a build step in setuptools-based projects, you may use the provided
command class in your :code:`setup.py`:
::

    setuptools.setup(
        # ...
        cmdclass={
            'build_proto_modules': grpc_tools.command.BuildPackageProtos,
        },
        # ...
    )
Invocation of the command will walk the project tree and transpile every
:code:`.proto` file into a :code:`_pb2.py` file in the same directory.
Note that this particular approach requires :code:`grpcio-tools` to be
installed on the machine before the setup script is invoked (i.e. no
combination of :code:`setup_requires` or :code:`install_requires` will provide
access to :code:`grpc_tools.command.BuildPackageProtos` if it isn't already
installed). One way to work around this can be found in our
:code:`grpcio-health-checking`
`package <https://pypi.python.org/pypi/grpcio-health-checking>`_:
::

    class BuildPackageProtos(setuptools.Command):
        """Command to generate project *_pb2.py modules from proto files."""
        # ...
        def run(self):
            from grpc_tools import command
            command.build_package_protos(self.distribution.package_dir[''])
Now including :code:`grpcio-tools` in :code:`setup_requires` will provide the
command on-setup as desired.
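As a concrete sketch of this workaround, a minimal :code:`setup.py` might define the command itself and hand it to :code:`cmdclass` (the project name and package layout here are placeholders, not from this document):

```python
import setuptools


class BuildPackageProtos(setuptools.Command):
    """Generate *_pb2.py modules from .proto files found in the project."""

    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        # Imported lazily: by the time this command runs, setuptools has
        # fetched grpcio-tools via setup_requires, so the import succeeds
        # even though it would fail at the top of setup.py.
        from grpc_tools import command
        command.build_package_protos(self.distribution.package_dir[''])


# In the real setup.py this dict would be passed as
# setuptools.setup(..., setup_requires=['grpcio-tools'], cmdclass=cmdclass).
cmdclass = {'build_proto_modules': BuildPackageProtos}
```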
For more information on command classes, consult :code:`setuptools` documentation.
| text/x-rst | null | The gRPC Authors <grpc-io@googlegroups.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"protobuf<7.0.0,>=6.31.1",
"grpcio>=1.78.1",
"setuptools>=77.0.1"
] | [] | [] | [] | [
"Homepage, https://grpc.io",
"Source Code, https://github.com/grpc/grpc/tree/master/tools/distrib/python/grpcio_tools",
"Bug Tracker, https://github.com/grpc/grpc/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T01:19:44.109934 | grpcio_tools-1.78.1.tar.gz | 5,392,610 | c9/e5/311efa9278a451291e317286babf3f69b1479f8e6fd244836e3803e4b81d/grpcio_tools-1.78.1.tar.gz | source | sdist | null | false | e09879cc37648cde6faf22b2f0b58a51 | f47b746b06a940954b9aa86b1824aa4874f068a7ec2d4b407980d202c86a691a | c9e5311efa9278a451291e317286babf3f69b1479f8e6fd244836e3803e4b81d | Apache-2.0 | [] | 836,521 |
2.4 | detect-face-lib | 0.1.1 | detect-face-lib is a modular face detection + JSON augmentation toolkit that attaches face detections to det-v1(detections) or track-v1(tracked) payloads produced by earlier stages in the Vision Pipeline. | # detect-face-lib
**Python:** `==3.10.*`
**detect-face-lib** is a modular **face detection + JSON augmentation** toolkit that attaches face detections to **det-v1** (detections) or **track-v1** (tracked) payloads produced by earlier stages in the Vision Pipeline.
This is the **Face Augmentation stage** of the Vision Pipeline.
Detectors included:
- **retinaface**: `retinaface-pytorch` backend (named variant registry)
> By default, `detect-face-lib` **does not write any files**. You opt-in to saving JSON, frames, or annotated video via flags.
---
## Vision Pipeline
```
Original Video (.mp4)
│
▼
detect-lib
(Detection Stage)
│
└── detections.json (det-v1)
│
▼
track-lib
(Tracking + ReID)
│
└── tracked.json (track-v1)
│
▼
detect-face-lib
(Face Augmentation)
│
└── faces.json (augmented; face-v1 meta)
Note: Each stage consumes the original video + the upstream JSON from the previous stage.
```
Stage 1 (Detection):
- PyPI: https://pypi.org/project/detect-lib/
- GitHub: https://github.com/Surya-Rayala/VideoPipeline-detection
Stage 2 (Tracking + ReID):
- PyPI: https://pypi.org/project/gallery-track-lib/
- GitHub: https://github.com/Surya-Rayala/VisionPipeline-gallery-track
---
## Output: augmented det-v1 / track-v1 (returned + optionally saved)
`detect-face-lib` returns an **augmented JSON payload** in-memory that preserves the upstream schema (det-v1 or track-v1) and adds:
- `face_augment`: metadata about the face detector + association rules (versioned)
- `detections[*].faces`: for each frame, matched faces are attached under the matched person detection
### What gets attached to a person
Each face entry is intentionally minimal:
- `bbox`: face box `[x1,y1,x2,y2]`
- `score`: face confidence
- `landmarks`: 5-point landmarks `[[x,y] x5]` when provided by the backend
---
## Minimal schema example
This example assumes the upstream JSON is **track-v1**; the same structure applies for **det-v1** (it will simply lack tracker fields).
```json
{
"schema_version": "track-v1",
"parent_schema_version": "det-v1",
"video": {
"path": "in.mp4",
"fps": 30.0,
"frame_count": 120,
"width": 1920,
"height": 1080
},
"tracker": {
"name": "gallery_hybrid"
},
"face_augment": {
"version": "face-v1",
"parent_schema_version": "track-v1",
"detector": {
"name": "retinaface",
"variant": "resnet50_2020-07-20",
"max_size": 1024,
"conf_thresh": 0.5,
"nms_thresh": 0.3,
"device": "mps"
},
"association": {
"associate_class_ids": [0],
"gallery_filter": false,
"iou_thresh": 0.0,
"containment": true,
"inclusive": true,
"kpt_indices": null,
"kpt_conf": 0.3,
"flexible_kpt": null
}
},
"frames": [
{
"frame_index": 0,
"detections": [
{
"bbox": [100.0, 50.0, 320.0, 240.0],
"score": 0.91,
"class_id": 0,
"class_name": "person",
"track_id": "3",
"gallery_id": "person_A",
"faces": [
{
"bbox": [140.0, 70.0, 210.0, 150.0],
"score": 0.98,
"landmarks": [[160.0, 95.0], [195.0, 95.0], [178.0, 110.0], [165.0, 130.0], [192.0, 130.0]]
}
]
}
]
}
]
}
```
### Returned vs saved
- **Returned (always):** the augmented payload is available as `FaceResult.payload` (Python) and is always produced in-memory.
- **Saved (opt-in):** nothing is written unless you enable artifacts:
- `--json` writes `<run>/faces.json`
- `--frames` writes annotated frames under `<run>/frames/`
- `--save-video` writes an annotated video under `<run>/...`
When no artifacts are enabled, no output directory/run folder is created.
---
## Install with `pip` (PyPI)
> Use this if you want to install and use the tool without cloning the repo.
> Requires **Python 3.10** (this package pins `==3.10.*`).
### Install
```bash
pip install detect-face-lib
```
> Note: the PyPI package name is `detect-face-lib`, but the Python module/import name remains `detect_face`.
### CUDA note (optional)
If you want GPU acceleration on NVIDIA CUDA, install a **CUDA-matching** build of **torch** and **torchvision**.
If you installed CPU-only wheels by accident, uninstall and reinstall the correct CUDA wheels (use the official PyTorch selector for your CUDA version).
```bash
pip uninstall -y torch torchvision
# then install the CUDA-matching wheels for your system
# (see: https://pytorch.org/get-started/locally/)
```
---
## CLI usage (pip)
Global help:
```bash
python -m detect_face.cli.detect_faces -h
```
List detectors:
```bash
python -m detect_face.cli.detect_faces --list-detectors
```
List variants for a detector:
```bash
python -m detect_face.cli.detect_faces --detector retinaface --list-variants
```
---
## Face augmentation CLI: `detect_face.cli.detect_faces`
### Quick start (track-v1 input)
```bash
python -m detect_face.cli.detect_faces \
--json-in tracked.json \
--video in.mp4
```
### Quick start (det-v1 input)
```bash
python -m detect_face.cli.detect_faces \
--json-in detections.json \
--video in.mp4
```
### Save artifacts (opt-in)
```bash
python -m detect_face.cli.detect_faces \
--json-in tracked.json \
--video in.mp4 \
--json \
--frames \
--save-video annotated.mp4 \
--out-dir out --run-name demo
```
---
## CLI arguments
### Required (for running augmentation)
- `--json-in <path>`: Path to the **det-v1** or **track-v1** JSON to augment.
- `--video <path>`: Path to the original source video used to generate the JSON. Frame order must align.
### Discovery
- `--list-detectors`: Print available face detector backends and exit.
- `--list-variants`: Print supported named variants for `--detector` and exit.
### Detector selection
- `--detector <name>`: Face backend to use (default: `retinaface`).
- `--variant <name>`: Backend model variant (named variant). For `retinaface`, this is typically `resnet50_2020-07-20`.
- `--max-size <int>`: Cap longer side for detector input.
- Lower → faster, may miss small faces.
- Higher → slower, better for small faces.
- `--device <auto|cpu|mps|cuda|cuda:0|0|1...>`: Compute device.
- `auto` resolves to `cuda:0` if available, else `mps`, else `cpu`.
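The `auto` resolution order can be pictured with a small standalone sketch. This is illustrative only; in practice the availability flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`:

```python
def resolve_device(requested: str = "auto",
                   cuda_available: bool = False,
                   mps_available: bool = False) -> str:
    """Mirror the documented resolution order for --device auto."""
    if requested != "auto":
        return requested  # explicit devices pass through unchanged
    if cuda_available:
        return "cuda:0"
    if mps_available:
        return "mps"
    return "cpu"
```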
### Detector thresholds
- `--conf-thresh <float>`: Minimum face confidence.
- Increase → fewer false positives (fewer faces).
- Decrease → more faces (more false positives).
- `--nms-thresh <float>`: Non-maximum suppression threshold inside the face detector.
- Lower → more aggressive suppression.
### Association rules
- `--associate-classes <ids>`: Class IDs eligible for face association (e.g., `0` for person). If omitted, the tool tries to auto-detect detections with `class_name == 'person'`; if none are found, all classes are eligible.
- `--iou-thresh <float>`: IoU threshold for candidate person-face pairing.
- `0.0` considers any overlap.
- `0.1–0.3` is stricter and can reduce wrong assignments.
- `--containment` / `--no-containment`: Require (or not) that the face box lies fully inside the person box.
- Containment on → safer (fewer bad matches).
- Containment off → more permissive (may increase mismatches in crowded frames).
- `--gallery-filter`: Track-v1 only: only assign faces where `track_id == gallery_id` (when both exist).
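To make the pairing rules concrete, here is a minimal, self-contained sketch of how an IoU threshold and a containment check interact (boxes are `[x1, y1, x2, y2]`; this illustrates the documented behavior, not the library's internal code):

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)


def contains(person, face):
    """True if the face box lies fully inside the person box."""
    return (person[0] <= face[0] and person[1] <= face[1]
            and person[2] >= face[2] and person[3] >= face[3])


def is_candidate(person, face, iou_thresh=0.0, containment=True):
    """Candidate person-face pair under the documented rules."""
    if containment and not contains(person, face):
        return False
    overlap = iou(person, face)
    # iou_thresh == 0.0 means "any overlap counts"; higher values are stricter
    return overlap > iou_thresh if iou_thresh > 0 else overlap > 0
```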
### Keypoint constraints (optional)
- `--kpt-indices <ids>`: Body keypoint indices to require inside the face box.
- Only used if the input JSON contains `keypoints`. If keypoints are missing, a warning is emitted and the rule is skipped.
- `--kpt-conf <float>`: Minimum confidence for keypoints used by `--kpt-indices`.
- `--flexible-kpt <N>`: If set, requires at least `N` keypoints inside the face box when `>=N` survive the confidence threshold; if fewer survive, requires all surviving keypoints inside.
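The `--flexible-kpt` rule is easiest to see in code. This standalone sketch applies the documented behavior to one face box (illustrative, not the library's internals; keypoints are assumed to be `(x, y, conf)` tuples for the required indices):

```python
def keypoints_ok(face_box, keypoints, kpt_conf=0.3, flexible_kpt=None):
    """Apply the documented --flexible-kpt rule to one face box."""
    x1, y1, x2, y2 = face_box
    surviving = [(x, y) for x, y, c in keypoints if c >= kpt_conf]
    inside = [p for p in surviving
              if x1 <= p[0] <= x2 and y1 <= p[1] <= y2]
    if flexible_kpt is None:
        # strict mode: every surviving keypoint must fall inside the box
        return len(inside) == len(surviving)
    if len(surviving) >= flexible_kpt:
        return len(inside) >= flexible_kpt
    # fewer than N survived the confidence filter: require all of them inside
    return len(inside) == len(surviving)
```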
### Artifact saving (all opt-in)
- `--json`: Write augmented JSON to `<run>/faces.json`.
- `--frames`: Save annotated frames under `<run>/frames/` (can be large).
- `--save-video <name.mp4>`: Save an annotated video under `<run>/<name.mp4>`.
- `--out-dir <dir>`: Output root used only when saving artifacts (default: `out`).
- `--run-name <name>`: Run folder name under `--out-dir`. If omitted, artifacts go directly under `--out-dir`.
- `--fourcc <fourcc>`: FourCC codec for saved video (default: `mp4v`).
- `--display`: Show a live annotated preview (press ESC to stop). Does not write files unless saving flags are set.
### UX
- `--no-progress`: Disable progress bar.
---
## Python usage (import)
You can use `detect-face-lib` as a library after installing it with pip.
### Quick sanity check
```bash
python -c "import detect_face; print(detect_face.available_detectors())"
```
### Python API reference (keywords)
#### `detect_face.detect_faces_video(...)`
**Required**
- `json_in`: Path to det-v1 or track-v1 JSON.
- `video`: Path to the source video.
**Detector**
- `detector`: Backend name (e.g., `"retinaface"`).
- `variant`: Variant name for the backend.
- `max_size`: Cap longer side.
- `device`: `"auto"`, `"cpu"`, `"mps"`, `"cuda"`, `"cuda:0"`, or an index like `"0"`.
- `conf_thresh`, `nms_thresh`: Detector thresholds.
**Association**
- `associate_class_ids`: List of class_ids eligible for association.
- `iou_thresh`, `containment`: Pairing rules.
- `gallery_filter`: Track-v1 only.
**Keypoints (optional)**
- `kpt_indices`, `kpt_conf`, `flexible_kpt`: Keypoint-based validation (used only when keypoints exist).
**Artifacts (all off by default)**
- `save_json_flag`: Write `<run>/faces.json`.
- `save_frames`: Write `<run>/frames/*.jpg`.
- `save_video`: Filename for annotated video under the run folder.
- `out_dir`, `run_name`, `fourcc`, `display`, `no_progress`.
Returns a `FaceResult` with `payload` (augmented JSON), `paths` (only populated when saving), and `stats`.
### Run face augmentation from a Python file
Create `run_faces.py`:
```python
from detect_face import detect_faces_video
res = detect_faces_video(
json_in="tracked.json",
video="in.mp4",
detector="retinaface",
variant="resnet50_2020-07-20",
device="auto",
)
print(res.stats)
print("face_augment" in res.payload)
print(res.paths) # populated only if you enable saving artifacts
```
Run:
```bash
python run_faces.py
```
---
## Install from GitHub (uv)
Use this if you are developing locally or want reproducible project environments.
Install uv:
https://docs.astral.sh/uv/getting-started/installation/#standalone-installer
Verify:
```bash
uv --version
```
### Install dependencies
```bash
git clone https://github.com/Surya-Rayala/VisionPipeline-detect-face.git
cd VisionPipeline-detect-face
uv sync
```
### CUDA note (optional)
For best performance on NVIDIA GPUs, make sure **torch** and **torchvision** are installed with a build that matches your CUDA toolkit / driver stack.
If you added CPU-only builds earlier, remove them and add the correct CUDA wheels, then re-sync.
```bash
uv remove torch torchvision
# then add the CUDA-matching wheels for your system
# (see: https://pytorch.org/get-started/locally/)
uv add <compatible torch torchvision>
uv sync
```
---
## CLI usage (uv)
Global help:
```bash
uv run python -m detect_face.cli.detect_faces -h
```
List detectors / variants:
```bash
uv run python -m detect_face.cli.detect_faces --list-detectors
uv run python -m detect_face.cli.detect_faces --detector retinaface --list-variants
```
Basic command (track-v1 input):
```bash
uv run python -m detect_face.cli.detect_faces \
--json-in tracked.json \
--video in.mp4
```
Basic command (det-v1 input):
```bash
uv run python -m detect_face.cli.detect_faces \
--json-in detections.json \
--video in.mp4
```
Save artifacts (opt-in):
```bash
uv run python -m detect_face.cli.detect_faces \
--json-in tracked.json \
--video in.mp4 \
--json --frames --save-video annotated.mp4 \
--out-dir out --run-name demo
```
---
# License
This project is licensed under the **MIT License**. See `LICENSE`.
| text/markdown | null | Surya Chand Rayala <suryachand2k1@gmail.com> | null | null | MIT License Copyright (c) 2026 Surya Chand Rayala Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | vision, face-detection, retinaface, tracking, json | [] | [] | null | null | ==3.10.* | [] | [] | [] | [
"retinaface-pytorch>=0.0.7",
"torchvision>=0.25.0",
"tqdm>=4.67.3"
] | [] | [] | [] | [] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T01:19:23.563714 | detect_face_lib-0.1.1.tar.gz | 27,609 | a8/74/2d97c098574d9836a6c443596d6915a393f926a967e4d75879e6db760a10/detect_face_lib-0.1.1.tar.gz | source | sdist | null | false | 7bb679d2c5cfe84a4f4d912914765b91 | 106cbcac56352a3a6d0f6c507ef4cdee8eb8cce8b9d9014decde324b476af718 | a8742d97c098574d9836a6c443596d6915a393f926a967e4d75879e6db760a10 | null | [
"LICENSE"
] | 236 |
2.1 | agenticmesh-common | 0.1.1 | Common library for agentic mesh | Distribution README
| text/markdown | null | Eric Broda <eric.broda@brodagroupsoftware.com>, Davis Broda <davis.broda@brodagroupsoftware.com>, Graeham Broda <graeham.broda@brodagroupsoftware.com> | null | Eric Broda <eric.broda@brodagroupsoftware.com>, Davis Broda <davis.broda@brodagroupsoftware.com>, Graeham Broda <graeham.broda@brodagroupsoftware.com> | MIT License
Copyright (c) 2025-2026 Broda Group Software Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | AI, agent, agents, agentic mesh, agenticmesh, ecosystem | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"fastapi>=0.119",
"uvicorn[standard]>=0.38",
"nats-py>=2.11.0",
"pydantic>=2.12.3",
"openai>=2.6.1",
"jmespath>=1.0.1",
"deepdiff>=8.6.1",
"jsonschema==4.25.1",
"jsonschema-specifications==2025.9.1"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.13 | 2026-02-20T01:18:38.475428 | agenticmesh_common-0.1.1-py3-none-any.whl | 76,024 | 1e/d9/37e607f0ce006a56ce2c484116b57d9ee729319e841b4e42c515f9d3260e/agenticmesh_common-0.1.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 249075a0e83c39d10c3bf95e9b8e113e | 70920d166b1706abde80a45a23b70e0afb16117888afa00172754579768de780 | 1ed937e607f0ce006a56ce2c484116b57d9ee729319e841b4e42c515f9d3260e | null | [] | 109 |
2.1 | agenticmesh-agentsrv | 0.1.1 | Agentic Mesh Agent Server Distribution | Distribution README
| text/markdown | null | Eric Broda <eric.broda@brodagroupsoftware.com>, Davis Broda <davis.broda@brodagroupsoftware.com>, Graeham Broda <graeham.broda@brodagroupsoftware.com> | null | Eric Broda <eric.broda@brodagroupsoftware.com>, Davis Broda <davis.broda@brodagroupsoftware.com>, Graeham Broda <graeham.broda@brodagroupsoftware.com> | MIT License
Copyright (c) 2025-2026 Broda Group Software Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | AI, agent, agents, agentic mesh, agenticmesh, ecosystem | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"agenticmesh-common>=0.1.1",
"fastapi>=0.119",
"uvicorn[standard]>=0.38"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.13 | 2026-02-20T01:18:37.181910 | agenticmesh_agentsrv-0.1.1-py3-none-any.whl | 41,166 | 03/34/d5cad912f4731cc91f4467665150d12945d05a6d4aafec6b90c11b98cd7a/agenticmesh_agentsrv-0.1.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 614c1c6fbb1fc275b168d797947a7a64 | 25119b3574357d0ba93634fe70e9d65d0e4586d9beebdeff9d9c5263d8d07b30 | 0334d5cad912f4731cc91f4467665150d12945d05a6d4aafec6b90c11b98cd7a | null | [] | 101 |
2.4 | cluster-yield-snapshot | 0.3.3 | Capture Spark plans, config, and table metadata for Cluster Yield analysis | # cluster-yield-snapshot
Passive Spark plan capture for [Cluster Yield](https://clusteryield.com) analysis. Drop two lines into any notebook — no refactoring, no query registration, no code changes.
Works on Databricks (serverless + classic), EMR, Dataproc, and open-source Spark.
## Install
```bash
pip install cluster-yield-snapshot
# In a Databricks notebook
%pip install cluster-yield-snapshot
```
## How it works
Two lines at the top. Two lines at the bottom. Everything in between is untouched:
```python
# Cell 1 — start capture
from cluster_yield_snapshot import CYSnapshot
cy = CYSnapshot(spark).start()
```
```python
# ═══════════════════════════════════════════
# Rest of the notebook — completely unchanged
# ═══════════════════════════════════════════
df = spark.sql("SELECT * FROM orders WHERE date > '2024-01-01'")
users = spark.table("analytics.users")
enriched = df.join(users, "user_id").groupBy("region").agg(sum("amount"))
enriched.write.parquet("s3://output/regional_revenue")
```
```python
# Last cell — harvest
cy.stop().save()
```
That's it. Every `spark.sql()` call, every `.collect()`, every `.write.parquet()` in between is silently captured with its full physical plan. On `stop()`, catalog stats (table sizes, partitions, file counts) are automatically gathered for every table that appeared in the plans.
## What it captures
`start()` hooks into three places:
| Hook | What it catches | Plan timing |
|------|----------------|-------------|
| `spark.sql()` | Every SQL query | At creation (pre-AQE) |
| DataFrame actions (`.collect()`, `.show()`, `.count()`, `.toPandas()`, etc.) | Execution results | Post-AQE (final plan) |
| Write methods (`.write.parquet()`, `.save()`, `.saveAsTable()`, etc.) | Data output | Post-AQE (final plan) |
When the same query is captured at both `spark.sql()` time and action time, the action-time plan (post-AQE) replaces the earlier one. You get the plan Spark *actually executed*, not just the plan it *intended* to execute.
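The replace-on-re-capture behavior can be sketched as a fingerprint-keyed store. This is a simplification of the real capture engine, and the fingerprint here is just a hash of the SQL text:

```python
import hashlib


class PlanStore:
    """Keep one plan per query fingerprint, preferring post-AQE captures."""

    def __init__(self):
        self._plans = {}

    @staticmethod
    def fingerprint(sql: str) -> str:
        return hashlib.sha256(sql.encode()).hexdigest()[:16]

    def record(self, sql: str, plan: str, post_aqe: bool):
        key = self.fingerprint(sql)
        prev = self._plans.get(key)
        # A post-AQE (action-time) plan always replaces a pre-AQE one;
        # a pre-AQE plan never overwrites an existing post-AQE capture.
        if prev is None or (post_aqe and not prev["post_aqe"]):
            self._plans[key] = {"plan": plan, "post_aqe": post_aqe}

    def plans(self):
        return list(self._plans.values())
```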
`stop()` then collects catalog metadata:
| Data | Source |
|------|--------|
| Table size (bytes) | `DESCRIBE DETAIL` / Catalyst stats |
| Row count | Table properties / Catalyst stats |
| File count, avg file size | `DESCRIBE DETAIL` |
| Partition columns | `DESCRIBE EXTENDED` |
| Spark config + drift | `sparkContext.getConf()` / `SET -v` |
| Environment | Platform detection (Databricks / YARN / K8s) |
## Upload to Cluster Yield
The server analyzes on ingest — runs detectors, estimates costs, diffs against your last snapshot:
```python
cy = CYSnapshot(spark, api_key="cy_...", environment="prod-analytics").start()
# ... notebook ...
cy.stop().upload()
```
Install with upload support: `pip install cluster-yield-snapshot[upload]`
## Context manager
```python
with CYSnapshot(spark) as cy:
df = spark.sql("SELECT ...")
df.show()
cy.save()
```
## Manual capture (edge cases)
For queries you can't run through `start()`/`stop()` (e.g. building a snapshot from known queries without executing them):
```python
cy = CYSnapshot(spark)
cy.query("daily_revenue", "SELECT region, SUM(amount) FROM orders GROUP BY region")
cy.df("enriched", some_existing_dataframe)
cy.save()
```
## Safety
The capture hooks are read-only and wrapped in `try/except`:
- They only read `queryExecution.executedPlan` — no writes, no modifications
- If our code fails for any reason, the user's code continues normally
- `stop()` cleanly restores all original methods
- A re-entrancy guard prevents our internal Spark calls (catalog stats) from being captured
- The notebook behaves identically with or without capture running
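A simplified sketch of that safety pattern: wrap the original method, observe inside `try/except`, guard against re-entry, and keep a handle for clean restoration. This mirrors the described behavior, not the actual implementation:

```python
import functools

_capturing = False  # re-entrancy guard for our own internal calls


def hook(obj, name, on_call):
    """Wrap obj.name so on_call observes results without affecting callers."""
    original = getattr(obj, name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        global _capturing
        result = original(*args, **kwargs)  # user's call always runs first
        if not _capturing:
            _capturing = True
            try:
                on_call(result)             # read-only observation
            except Exception:
                pass                        # our failure never breaks user code
            finally:
                _capturing = False
        return result

    setattr(obj, name, wrapper)
    return lambda: setattr(obj, name, original)  # restore() for stop()
```

`stop()` would then simply invoke the returned restore callbacks.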
## Snapshot JSON envelope
```json
{
"snapshot": { "version": "0.3.0", "capturedAt": "...", "snapshotType": "environment" },
"environment": { "sparkVersion": "3.5.1", "platform": "databricks", ... },
"config": { "all": {}, "optimizerRelevant": {}, "nonDefault": {} },
"catalog": { "tables": { "default.orders": { "sizeInBytes": 85899345920, ... } } },
"plans": [
{
"label": "sql-1-SELECT * FROM orders WHERE ...",
"fingerprint": "a1b2c3d4...",
"plan": [...],
"sql": "SELECT * FROM orders WHERE date > '2024-01-01'",
"trigger": "action.collect"
}
],
"errors": null
}
```
Compatible with the Cluster Yield Scala analysis engine, the JVM `PlanCaptureListener`, and the `PlanExtractor` — the analyzer is agnostic to capture method.
## Module structure
```
cluster_yield_snapshot/
├── __init__.py # Public API: CYSnapshot, snapshot_capture
├── snapshot.py # Orchestrator: start/stop/save/upload
├── _capture.py # Passive capture engine (monkey-patching)
├── plans.py # Plan extraction, operator parsing, fingerprinting
├── catalog.py # Table stats (DESCRIBE DETAIL/EXTENDED/Catalyst)
├── config.py # Spark config capture + drift detection
├── environment.py # Platform detection (Databricks, YARN, K8s)
├── upload.py # HTTP upload to SaaS backend
├── quick_scan.py # Lightweight teaser findings
├── formatting.py # Terminal summary + Databricks HTML
├── _compat.py # Classic PySpark vs Spark Connect abstraction
└── _util.py # Shared utilities
```
## Spark Connect / Serverless
On Spark Connect, the JVM is not accessible. Plan capture falls back to text explain. Catalog stats fall back to `DESCRIBE DETAIL` and `DESCRIBE EXTENDED` (no Catalyst stats). The text plan parser runs server-side for full analysis.
| text/markdown | Cluster Yield | null | null | null | null | databricks, optimization, query-plan, spark | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Database",
"To... | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.24; extra == \"dev\"",
"pyspark>=3.3; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"pytest>=7; extra == \"dev\"",
"httpx>=0.24; extra == \"upload\""
] | [] | [] | [] | [
"Homepage, https://clusteryield.com",
"Documentation, https://docs.clusteryield.com/snapshot",
"Repository, https://github.com/clusteryieldanalytics/cluster-yield"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-20T01:18:17.720150 | cluster_yield_snapshot-0.3.3.tar.gz | 32,108 | 4d/2a/74a0c0e06a81b31ad30705a29393e25f14dbb56b5734640f17ba25881c12/cluster_yield_snapshot-0.3.3.tar.gz | source | sdist | null | false | 64b07124501a1fbaf689556ee236716f | 6cefa79047edf56f4fc7ca227f7bdfdde7049fff43ceb4fbfac70ae96f8bb653 | 4d2a74a0c0e06a81b31ad30705a29393e25f14dbb56b5734640f17ba25881c12 | Apache-2.0 | [
"LICENSE"
] | 243 |
2.1 | odoo-addon-website-sale-cart-expire | 18.0.1.0.1 | Cancel carts without activity after a configurable time | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
========================
Website Sale Cart Expire
========================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:28205d71c6d2fdcccdd83f01fbda5702434747419aff45e4c62775e2f2114b06
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fe--commerce-lightgray.png?logo=github
:target: https://github.com/OCA/e-commerce/tree/18.0/website_sale_cart_expire
:alt: OCA/e-commerce
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/e-commerce-18-0/e-commerce-18-0-website_sale_cart_expire
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/e-commerce&target_branch=18.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
Allows to automatically cancel carts without activity after a
configurable time.
**Table of contents**
.. contents::
:local:
Configuration
=============
Go to Website > Settings and set a delay for Expire Carts settings.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/e-commerce/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/e-commerce/issues/new?body=module:%20website_sale_cart_expire%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Camptocamp
Contributors
------------
- `Camptocamp <https://www.camptocamp.com>`__
- Iván Todorovich <ivan.todorovich@gmail.com>
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
.. |maintainer-ivantodorovich| image:: https://github.com/ivantodorovich.png?size=40px
:target: https://github.com/ivantodorovich
:alt: ivantodorovich
Current `maintainer <https://odoo-community.org/page/maintainer-role>`__:
|maintainer-ivantodorovich|
This module is part of the `OCA/e-commerce <https://github.com/OCA/e-commerce/tree/18.0/website_sale_cart_expire>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Camptocamp, Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/e-commerce | null | >=3.10 | [] | [] | [] | [
"odoo==18.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T01:18:07.584100 | odoo_addon_website_sale_cart_expire-18.0.1.0.1-py3-none-any.whl | 30,457 | c4/4f/6f53e2c4c9a083e0080542238e109f3bc78922f3ba3569e9a0729b8809a4/odoo_addon_website_sale_cart_expire-18.0.1.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 633f9c2a8f19ed695ee14e2bdd6a67eb | bb44437f960c08281475fca715816d1744252a26db27ad5a46e6c27cae5e5587 | c44f6f53e2c4c9a083e0080542238e109f3bc78922f3ba3569e9a0729b8809a4 | null | [] | 91 |
2.4 | dexcontrol | 0.4.4 | A Python library of Sensing and Control for Dexmate's Robot | <div align="center">
<h1>🤖 Dexmate Robot Control and Sensing API</h1>
</div>

## 📦 Installation
```shell
pip install dexcontrol
```
To run the examples in this repository, install the optional example dependencies:
```shell
# Quotes keep shells like zsh from treating [example] as a glob pattern
pip install "dexcontrol[example]"
```
## ⚠️ Version Compatibility
**Important:** `dexcontrol >= 0.4.0` requires robot firmware `>= 0.4.0`. Using older firmware with this version will not work.
> **Note:** `dexcontrol 0.4.x` depends on `dexcomm >= 0.4.0`, which is **not compatible** with `dexcontrol 0.3.x`. If you need to stay on `dexcontrol 0.3.x`, do not upgrade `dexcomm` to `0.4.0` or above.
**Before upgrading, check your current firmware version:**
```shell
dextop firmware info
```
If your firmware is outdated, update it before installing the new version to ensure full compatibility. Contact the Dexmate team if you are unsure how to update.
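The compatibility rule above can be checked programmatically before upgrading. This is a minimal sketch, not part of the `dexcontrol` API: it assumes firmware versions are simple dotted strings (no pre-release tags), and the `0.4.0` minimum comes from this README.

```python
# Pre-flight firmware compatibility check (illustrative sketch).
# The 0.4.0 minimum is from the README; parsing assumes plain "X.Y.Z" strings.
MIN_FIRMWARE = (0, 4, 0)

def parse_version(text: str) -> tuple[int, ...]:
    """Parse a simple dotted version string like '0.4.1' into an int tuple."""
    return tuple(int(part) for part in text.strip().split("."))

def firmware_ok(reported: str) -> bool:
    """True if the reported firmware version meets the minimum for dexcontrol >= 0.4.0."""
    return parse_version(reported) >= MIN_FIRMWARE

print(firmware_ok("0.4.2"))  # True
print(firmware_ok("0.3.9"))  # False
```

Feed it the version string printed by `dextop firmware info` before running `pip install --upgrade dexcontrol`.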
**📋 See [CHANGELOG.md](./CHANGELOG.md) for detailed release notes and version history.**
## 📄 Licensing
This project is **dual-licensed**:
### 🔓 Open Source License
This software is available under the **GNU Affero General Public License v3.0 (AGPL-3.0)**.
See the [LICENSE](./LICENSE) file for details.
### 💼 Commercial License
For businesses that want to use this software in proprietary applications without the AGPL requirements, commercial licenses are available.
**📧 Contact us for commercial licensing:** contact@dexmate.ai
**Commercial licenses provide:**
- ✅ Right to use in closed-source applications
- ✅ No source code disclosure requirements
- ✅ Priority support options
## 📚 Examples
Explore our comprehensive examples in the `examples/` directory:
- 🎮 **Basic Control** - Simple movement and sensor reading
- 🎯 **Advanced Control** - Complex manipulation tasks
- 📺 **Teleoperation** - Remote control interfaces
- 🔧 **Troubleshooting** - Diagnostic and maintenance tools
---
<div align="center">
<h3>🤝 Ready to build amazing robots?</h3>
<p>
<a href="mailto:contact@dexmate.ai">📧 Contact Us</a> •
    <a href="./examples/">📚 View Examples</a>
</p>
</div>
| text/markdown | null | Dexmate <contact@dexmate.ai> | null | null | GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright © 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for software and other kinds of works, specifically designed to ensure cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, our General Public Licenses are intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License which gives you legal permission to copy, distribute and/or modify the software.
A secondary benefit of defending all users' freedom is that improvements made in alternate versions of the program, if they receive widespread use, become available for other developers to incorporate. Many developers of free software are heartened and encouraged by the resulting cooperation. However, in the case of software used on network servers, this result may fail to come about. The GNU General Public License permits making a modified version and letting the public access it on a server without ever releasing its source code to the public.
The GNU Affero General Public License is designed specifically to ensure that, in such cases, the modified source code becomes available to the community. It requires the operator of a network server to provide the source code of the modified version running there to the users of that server. Therefore, public use of a modified version, on a publicly accessible server, gives the public access to the source code of the modified version.
An older license, called the Affero General Public License and published by Affero, was designed to accomplish similar goals. This is a different license, not a version of the Affero GPL, but Affero has released a new version of the Affero GPL which permits relicensing under this license.
The precise terms and conditions for copying, distribution and modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based on the Program.
To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work.
A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.
The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified it, and giving a relevant date.
b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices".
c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.
A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:
a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.
d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.
"Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).
The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or authors of the material; or
e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.
All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.
A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software. This Corresponding Source shall include the Corresponding Source for any work covered by version 3 of the GNU General Public License that is incorporated pursuant to the following paragraph.
Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the work with which it is combined will remain governed by version 3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of the GNU Affero General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU Affero General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU Affero General Public License, you may choose any version ever published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future versions of the GNU Affero General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.
Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. | control, learning, python, robot | [
"Framework :: Robot Framework :: Library",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language ... | [] | null | null | >=3.10 | [] | [] | [] | [
"dexbot-utils<0.5.0,>=0.4.3",
"dexcomm<0.5.0,>=0.4.2",
"jaxtyping>=0.3.0",
"loguru>=0.7.0",
"numpy>=1.26.4",
"rich",
"isort>=5.12.0; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"pyright; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest>=7.4.0; extra == \"dev\"",
"ruff; ... | [] | [] | [] | [
"Repository, https://github.com/dexmate-ai/dexcontrol"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T01:16:42.138710 | dexcontrol-0.4.4-py3-none-any.whl | 124,831 | 53/7e/20f60520d962b9315090795e17d98b6273fa404e693868f5b7352cdb16db/dexcontrol-0.4.4-py3-none-any.whl | py3 | bdist_wheel | null | false | ed95c5912c8f87139cf237f294bbe5e5 | a7a15c781cc90eb52c8504417df03ac707699799b7f1d42d8a262cedc7502ffd | 537e20f60520d962b9315090795e17d98b6273fa404e693868f5b7352cdb16db | null | [
"LICENSE"
] | 115 |
2.4 | grpcio | 1.78.1 | HTTP/2-based RPC framework | gRPC Python
===========
Package for gRPC Python.
Installation
------------
gRPC Python is available for Linux, macOS, and Windows.
Installing From PyPI
~~~~~~~~~~~~~~~~~~~~
If you are installing locally...
::
$ pip install grpcio
Else system wide (on Ubuntu)...
::
$ sudo pip install grpcio
If you're on Windows, make sure that you installed the :code:`pip.exe` component
when you installed Python (if not, go back and install it!), then invoke:
::
$ pip.exe install grpcio
Windows users may need to invoke :code:`pip.exe` from a command line run as
administrator.
N.B. On Windows and macOS, one *must* have a recent release of :code:`pip`
to retrieve the proper wheel from PyPI. Be sure to upgrade to the latest
version!
Installing From Source
~~~~~~~~~~~~~~~~~~~~~~
Building from source requires that you have the Python headers (usually a
package named :code:`python-dev`).
::
$ export REPO_ROOT=grpc # REPO_ROOT can be any directory of your choice
$ git clone -b RELEASE_TAG_HERE https://github.com/grpc/grpc $REPO_ROOT
$ cd $REPO_ROOT
$ git submodule update --init
# To include systemd socket-activation feature in the build,
# first install the `libsystemd-dev` package, then :
$ export GRPC_PYTHON_BUILD_WITH_SYSTEMD=1
# For the next two commands do `sudo pip install` if you get permission-denied errors
$ pip install -r requirements.txt
$ GRPC_PYTHON_BUILD_WITH_CYTHON=1 pip install .
You cannot currently install gRPC Python from source on Windows. Things might
work out for you in MSYS2 (follow the Linux instructions), but it isn't
officially supported at the moment.
Troubleshooting
~~~~~~~~~~~~~~~
Help, I ...
* **... see the following error on some platforms**
::
/tmp/pip-build-U8pSsr/cython/Cython/Plex/Scanners.c:4:20: fatal error: Python.h: No such file or directory
#include "Python.h"
^
compilation terminated.
You can fix it by installing the :code:`python-dev` package:
::
sudo apt-get install python-dev
Versioning
~~~~~~~~~~
gRPC Python is developed in a monorepo shared with implementations of gRPC in
other programming languages. While the minor versions are released in
lock-step with other languages in the repo (e.g. 1.63.0 is guaranteed to exist
for all languages), patch versions may be specific to only a single
language. For example, if 1.63.1 is a C++-specific patch, 1.63.1 may not be
uploaded to PyPI. As a result, it is **not** a good assumption that the latest
patch for a given minor version on GitHub is also the latest patch for that
same minor version on PyPI.
| text/x-rst | null | The gRPC Authors <grpc-io@googlegroups.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming La... | [] | null | null | >=3.9 | [] | [] | [] | [
"typing-extensions~=4.12",
"grpcio-tools>=1.78.1; extra == \"protobuf\""
] | [] | [] | [] | [
"Homepage, https://grpc.io",
"Source Code, https://github.com/grpc/grpc",
"Bug Tracker, https://github.com/grpc/grpc/issues",
"Documentation, https://grpc.github.io/grpc/python"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T01:16:10.869313 | grpcio-1.78.1.tar.gz | 12,835,760 | 1f/de/de568532d9907552700f80dcec38219d8d298ad9e71f5e0a095abaf2761e/grpcio-1.78.1.tar.gz | source | sdist | null | false | de35e0d088421d89de9a92ba6b54e783 | 27c625532d33ace45d57e775edf1982e183ff8641c72e4e91ef7ba667a149d72 | 1fdede568532d9907552700f80dcec38219d8d298ad9e71f5e0a095abaf2761e | Apache-2.0 | [
"LICENSE"
] | 21,854,281 |
2.4 | fs-report | 1.5.2 | Finite State Stand-Alone Reporting Kit | # Finite State Reporting Kit
A powerful, stand-alone reporting utility for Finite State customers that generates HTML, CSV, and XLSX reports from API data using YAML recipes.
## Features
- **YAML Recipe System**: Define reports using simple YAML configuration files
- **Multiple Output Formats**: Generate HTML, CSV, and XLSX reports
- **Interactive Charts**: Beautiful, responsive charts using Chart.js
- **Custom Data Processing**: Advanced data manipulation and analysis
- **Standalone Operation**: Runs entirely outside the Finite State SaaS platform
- **CLI Interface**: Command-line tool for easy automation and integration
- **Data Comparison Tools**: Utilities for comparing XLSX files and analyzing differences
## Available Reports
Reports fall into two categories. See **`REPORT_GUIDE.md`** for full details, including Version Comparison’s full version and component changelog and CSV/XLSX detail exports (findings detail, findings churn, component churn).
**Operational** — period-bound, showing trends and activity within a time window:
| Report | Description |
|--------|-------------|
| Executive Summary | High-level security dashboard for leadership |
| Scan Analysis | Scan throughput, success rates, and infrastructure health |
| User Activity | Platform adoption and engagement metrics |
**Assessment** — current state, showing the latest security posture regardless of time period:
| Report | Description |
|--------|-------------|
| Component Vulnerability Analysis | Riskiest components across the portfolio |
| Findings by Project | Complete findings inventory per project with CVE details, severity, and platform links |
| Component List | Software inventory (SBOM) for compliance |
| Triage Prioritization | Context-aware vulnerability triage with exploit + reachability intelligence |
| Executive Dashboard | 11-section executive-level security report with KPI cards, risk donut, severity trends, and more *(on-demand)* |
| CVE Impact | CVE-centric dossier with affected projects, reachability, and exploit intelligence *(on-demand)* |
| Version Comparison | Full version and component changelog (every version pair); fixed/new findings and component churn per step; CSV/XLSX include summary plus detail *(on-demand)* |
## Quick Start
### Prerequisites
- Python 3.11+
- Finite State API access
### Installation
The quickest way to install is with a single command:
```bash
bash -c "$(curl -fsSL https://raw.githubusercontent.com/FiniteStateInc/customer-resources/main/05-reporting-and-compliance/fs-report/setup.sh)"
```
This handles Python verification, pipx installation, credential setup, and PATH configuration automatically. You can also run it from a local clone:
```bash
./setup.sh # Interactive setup
./setup.sh --from-source # Install from current directory
./setup.sh --from-source --yes # Non-interactive (uses env vars)
```
On Windows, use `setup.ps1` instead.
You can also install manually with pipx:
```bash
pipx install fs-report
```
Once installed, set up API credentials (the setup script will prompt for these, or you can export them):
```bash
export FINITE_STATE_AUTH_TOKEN="your-api-token"
export FINITE_STATE_DOMAIN="customer.finitestate.io"
```
Verify installation:
```bash
fs-report --help
```
> **Developer workflow:** If you are contributing to fs-report, use `poetry install && poetry shell` to work inside the dev environment. All examples below assume `fs-report` is on your PATH (via pipx or an active Poetry shell).
### CLI Command Structure
The CLI is organized into subcommands for better discoverability:
| Command | Description |
|---------|-------------|
| `fs-report` | Launch the web UI (default, no arguments) |
| `fs-report run` | Generate reports (all existing flags preserved) |
| `fs-report list {recipes,projects,folders,versions}` | Explore available resources |
| `fs-report cache {clear,status}` | Manage cached data |
| `fs-report config {init,show}` | Manage configuration |
| `fs-report changelog` | Show per-report changelog |
| `fs-report help periods` | Show period format help |
| `fs-report serve [directory]` | Serve reports via local HTTP server |
> **Backwards compatibility:** Old command names (`list-recipes`, `list-projects`, `show-periods`, bare `fs-report --recipe ...`) still work but emit deprecation warnings.
### Config File
Set defaults in `.fs-report.yaml` (searched in CWD first, then `~/.fs-report/config.yaml`):
```yaml
# .fs-report.yaml
recipe: "Executive Summary"
period: 30d
output: ./reports
verbose: true
```
Priority: CLI flags > environment variables > config file > defaults.
Create one interactively: `fs-report config init`
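The precedence chain above amounts to a first-non-None lookup across the four sources. A minimal sketch of that resolution order (purely illustrative; `resolve_option` is a hypothetical helper, not part of fs-report):

```python
# Illustrative sketch of the documented precedence:
# CLI flags > environment variables > config file > defaults.
# resolve_option() is a hypothetical name, not an fs-report API.
def resolve_option(cli=None, env=None, config=None, default=None):
    for value in (cli, env, config, default):
        if value is not None:
            return value
    return None

# A CLI flag wins over everything else...
assert resolve_option(cli="7d", env="30d", config="1m", default="90d") == "7d"
# ...and the config file is consulted only when flag and env are absent.
assert resolve_option(config="1m", default="90d") == "1m"
```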
### CLI Usage Examples
**Generate reports** with `fs-report run`:
```bash
fs-report run # All reports, default settings
fs-report run --recipe "Executive Summary" # Single report
fs-report run --recipe "Executive Summary" --period 1m # Last month
fs-report run --start 2025-01-01 --end 2025-01-31 # Exact date range
fs-report run --period 7d # Last 7 days
```
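The `--period` values follow a `<number><unit>` shorthand (`7d`, `1w`, `1m`). A rough sketch of parsing that grammar (purely illustrative; fs-report's own parser and its exact month handling may differ, so run `fs-report help periods` for the authoritative formats):

```python
import re
from datetime import timedelta

# Hypothetical parser for the <number><unit> period shorthand shown above.
# Approximating a month as 30 days is an assumption made for this sketch only.
_UNIT_DAYS = {"d": 1, "w": 7, "m": 30}

def parse_period(period: str) -> timedelta:
    match = re.fullmatch(r"(\d+)([dwm])", period)
    if match is None:
        raise ValueError(f"unrecognized period: {period!r}")
    count, unit = int(match.group(1)), match.group(2)
    return timedelta(days=count * _UNIT_DAYS[unit])

assert parse_period("7d") == timedelta(days=7)
assert parse_period("1m") == timedelta(days=30)
```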
**Filter** by project, version, or finding type:
```bash
fs-report run --project "MyProject" # By project name or ID
fs-report run --version "1234567890" # By version ID (no project needed)
fs-report run --project "MyProject" --version "v1.2.3" # Version name (needs project)
fs-report run --finding-types cve # CVE only (default)
fs-report run --finding-types cve,credentials # CVE + credentials
fs-report run --finding-types all # All finding types
```
**Version scope** — by default only the latest version of each project is analyzed:
```bash
fs-report run --period 1w # Latest version per project (fast)
fs-report run --period 1w --all-versions # All historical versions (slower)
```
**CVE Impact** — investigate specific CVEs across your portfolio:
```bash
fs-report run --recipe "CVE Impact" --cve CVE-2024-1234
fs-report run --recipe "CVE Impact" --cve CVE-2024-1234,CVE-2024-5678
fs-report run --recipe "CVE Impact" --cve CVE-2024-1234 --project myproject
fs-report run --recipe "CVE Impact" --cve CVE-2024-1234 --ai-prompts
fs-report run --recipe "CVE Impact" --cve CVE-2024-1234 --ai
```
**Persistent cache** (beta) — crash recovery and faster reruns:
```bash
fs-report run --cache-ttl 1h # Cache data for 1 hour
fs-report run --cache-ttl 30m # 30 minutes
fs-report run --no-cache # Force fresh data
fs-report cache status # Show cache stats
fs-report cache clear # Delete all cached data
```
**List resources**:
```bash
fs-report list recipes
fs-report list projects
fs-report list versions # All versions across portfolio
fs-report list versions "MyProject" # Versions for one project
fs-report list versions -n 10 # Top 10 by version count
fs-report list versions --folder "Product Line A"
```
**Configuration**:
```bash
fs-report config init # Interactive config wizard
fs-report config show # Show resolved config
```
**Serve reports** and **web UI**:
```bash
fs-report # Launch web UI on localhost:8321
fs-report serve ./output # Serve existing reports
```
**Performance tuning** and other options:
```bash
fs-report run --verbose # Verbose logging
fs-report run --batch-size 3 # Reduce API batch size (default 5)
fs-report run --request-delay 1.0 # Increase delay between requests
fs-report run --recipes ./my-recipes --output ./reports # Custom directories
fs-report help periods # Period format help
```
> **Backwards compatibility:** Old-style commands still work with deprecation warnings:
> `fs-report --recipe "..." --period 1m` → `fs-report run --recipe "..." --period 1m`,
> `fs-report list-recipes` → `fs-report list recipes`,
> `fs-report list-projects` → `fs-report list projects`,
> `fs-report show-periods` → `fs-report help periods`.
### Web UI
Running bare `fs-report` (no arguments) launches an interactive web UI at `http://localhost:8321`:
- **Dashboard** with workflow cards for common report scenarios
- **Real-time progress** streaming via Server-Sent Events (SSE) during report generation
- **Direct report linking** — "View Report" opens the generated HTML immediately after a run
- **Cancellation** — cancel button works at any point, including during NVD lookups
- **Settings management** with persistence to `~/.fs-report/config.yaml`
- **Reports browser** with preview for previously generated reports
- **Scan Queue** panel — live scan monitoring with queued/processing counts, per-version grouping, stuck scan detection, and auto-refresh
- **CSRF protection** and localhost-only access for security
To serve existing reports without the full UI:
```bash
fs-report serve ./output
```
## Performance and Caching
The reporting kit includes intelligent caching to improve performance and reduce API calls:
- **Latest Version Only (Default)**: By default, reports only include findings from the latest version of each project, reducing data volume by 60-70%. Use `--all-versions` if you need historical data.
- **Automatic Cache Sharing**: When running multiple reports, data is automatically cached and shared between reports
- **Progress Indicators**: The CLI shows "Fetching" for API calls and "Using cache" for cached data
- **Crash Recovery**: Progress is tracked in SQLite, so interrupted fetches resume automatically
- **Efficient Filtering**: Project and version filtering is applied at the API level for optimal performance
Example output showing cache usage:
```
Fetching /public/v0/findings | 38879 records
Using cache for /public/v0/findings | 38879 records
```
### [BETA] Persistent SQLite Cache
For long-running reports or iterative development, enable the persistent cache:
```bash
# Cache data for 1 hour - enables crash recovery and faster reruns
fs-report run --cache-ttl 1h
# Force fresh data (ignore cache)
fs-report run --no-cache
# Clear all cached data
fs-report cache clear
```
Benefits:
- **80% smaller storage** than JSON progress files
- **Crash recovery** - resume interrupted fetches automatically
- **Faster reruns** - skip API calls for cached data within TTL
Cache location: `~/.fs-report/cache.db`
## AI Features
The reporting kit supports AI-powered remediation guidance via the `--ai` flag. Three LLM providers are supported — the provider is auto-detected from environment variables, or you can choose explicitly with `--ai-provider`:
| Provider | Env Variable | Models |
|----------|-------------|--------|
| **Anthropic** (default) | `ANTHROPIC_AUTH_TOKEN` | Claude Opus / Haiku |
| **OpenAI** | `OPENAI_API_KEY` | GPT-4o / GPT-4o-mini |
| **GitHub Copilot** | `GITHUB_TOKEN` | GPT-4o / GPT-4o-mini |
Override the default models with `--ai-model-high` / `--ai-model-low` CLI flags or the `ai_model_high` / `ai_model_low` config keys.
```bash
# Auto-detect provider from env vars
fs-report run --recipe "Triage Prioritization" --ai --period 30d
# Explicit provider
fs-report run --recipe "Triage Prioritization" --ai --ai-provider openai --period 30d
# Custom model overrides
fs-report run --recipe "Triage Prioritization" --ai --ai-model-high claude-sonnet-4-20250514 --period 30d
# Export prompts for manual use (no API key required)
fs-report run --recipe "Triage Prioritization" --ai-prompts --period 30d
```
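The provider auto-detection described above reduces to checking the documented environment variables in priority order. A hedged sketch (the check order and the function name are assumptions for illustration; the real fs-report logic may differ):

```python
import os

# Hypothetical illustration of env-var based provider detection.
# Variable names follow the table above; Anthropic is the documented default.
def detect_provider(env=None):
    env = os.environ if env is None else env
    if env.get("ANTHROPIC_AUTH_TOKEN"):
        return "anthropic"
    if env.get("OPENAI_API_KEY"):
        return "openai"
    if env.get("GITHUB_TOKEN"):
        return "copilot"
    return None

assert detect_provider({"OPENAI_API_KEY": "sk-fake"}) == "openai"
assert detect_provider({}) is None
```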
See `REPORT_GUIDE.md` for full AI feature details.
## Docker Usage
If you prefer Docker over a local Python install, you can run reports in a container. All default recipes and templates are baked into the image.
1. **Build the image** (from the `fs-report` directory):
```bash
docker build -t fs-report .
```
2. **Set your API credentials**:
```bash
export FINITE_STATE_AUTH_TOKEN="your-api-token"
export FINITE_STATE_DOMAIN="customer.finitestate.io"
```
3. **Run a report** (output is written to the mounted `./output` directory):
```bash
docker run --rm \
-v $(pwd)/output:/app/output \
-e FINITE_STATE_AUTH_TOKEN \
-e FINITE_STATE_DOMAIN \
fs-report run --period 1m --recipe "Executive Summary"
```
The same CLI flags documented above work inside Docker. Just replace `fs-report` with the `docker run ...` prefix. A few more examples:
```bash
# Run all reports for January 2026
docker run --rm -v $(pwd)/output:/app/output \
-e FINITE_STATE_AUTH_TOKEN -e FINITE_STATE_DOMAIN \
fs-report run --start 2026-01-01 --end 2026-01-31
# Scope to a folder
docker run --rm -v $(pwd)/output:/app/output \
-e FINITE_STATE_AUTH_TOKEN -e FINITE_STATE_DOMAIN \
fs-report run --folder "Product Line A" --period 1m
# List projects (no output volume needed)
docker run --rm -e FINITE_STATE_AUTH_TOKEN -e FINITE_STATE_DOMAIN \
fs-report list projects
# Use custom recipes by mounting your own recipes directory
docker run --rm \
-v $(pwd)/my-recipes:/app/recipes \
-v $(pwd)/output:/app/output \
-e FINITE_STATE_AUTH_TOKEN -e FINITE_STATE_DOMAIN \
fs-report run
```
## Data Comparison Tools
### XLSX File Comparison
Compare two XLSX files by CVE ID for a specific project:
```bash
# Basic comparison
python scripts/compare_xlsx_files.py customer_file.xlsx generated_file.xlsx I421GLGD
# With custom output file
python scripts/compare_xlsx_files.py customer_file.xlsx generated_file.xlsx I421GLGD --output comparison_report.xlsx
# If column names are different
python scripts/compare_xlsx_files.py customer_file.xlsx generated_file.xlsx I421GLGD --cve-column "CVE_ID" --project-column "Project_ID"
```
The comparison tool generates:
- **Summary statistics** in console output
- **Detailed Excel report** with multiple sheets:
- Summary of differences
- CVEs only in customer file
- CVEs only in generated file
- Side-by-side comparison of matching CVEs
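At its core, the comparison is plain set arithmetic over the two files' CVE columns. A minimal sketch of that logic (the CVE IDs below are fabricated samples; the real script additionally filters by project ID and writes the multi-sheet Excel report):

```python
# Illustrative only: the set operations behind "only in customer file",
# "only in generated file", and "matching CVEs". Sample CVE IDs are made up.
customer_cves = {"CVE-2024-0001", "CVE-2024-0002", "CVE-2024-0003"}
generated_cves = {"CVE-2024-0002", "CVE-2024-0003", "CVE-2024-0004"}

only_in_customer = sorted(customer_cves - generated_cves)
only_in_generated = sorted(generated_cves - customer_cves)
matching = sorted(customer_cves & generated_cves)

assert only_in_customer == ["CVE-2024-0001"]
assert only_in_generated == ["CVE-2024-0004"]
assert matching == ["CVE-2024-0002", "CVE-2024-0003"]
```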
## Exit Codes
- `0`: Success
- `1`: Usage/validation error
- `2`: API authentication failure
- `3`: API rate-limit/connectivity failure
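In automation, these codes can be mapped to human-readable outcomes. A small sketch (the mapping mirrors the list above; `describe_exit` is a hypothetical helper, not part of fs-report):

```python
# Mirrors the documented fs-report exit codes; the helper itself is hypothetical.
EXIT_MESSAGES = {
    0: "success",
    1: "usage/validation error",
    2: "API authentication failure",
    3: "API rate-limit/connectivity failure",
}

def describe_exit(code: int) -> str:
    return EXIT_MESSAGES.get(code, f"unexpected exit code {code}")

assert describe_exit(0) == "success"
assert describe_exit(2) == "API authentication failure"
```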
## Security
**Recipes are code.** Custom recipes can execute arbitrary pandas expressions, so treat them with the same security practices as executable scripts:
- Review custom recipes before running them
- In CI/CD pipelines, only use recipes from version-controlled sources
- Never download and execute recipes from untrusted sources
For detailed security guidance, see [Security Considerations](docs/recipes/CUSTOM_REPORT_GUIDE.md#security-considerations) in the Custom Report Guide.
## Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests for new functionality
5. Ensure all tests pass and coverage is maintained
6. Submit a pull request
## License
This project is licensed under the MIT License. See the LICENSE file for details.
## Support
For support and questions, please contact Finite State support or create an issue in the repository.
| text/markdown | Finite State | support@finitestate.io | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"anthropic>=0.39.0",
"click<9.0.0,>=8.3.0",
"fastapi<1,>=0.115.0",
"httpx<1,>=0.24.0",
"jinja2<4.0.0,>=3.1.0",
"matplotlib<4.0.0,>=3.10.3",
"openai>=1.0.0",
"openpyxl<4.0.0,>=3.1.0",
"packaging>=22.0",
"pandas<4,>=3.0.0",
"pydantic<3.0.0,>=2.5.0",
"pydantic-settings<3.0.0,>=2.1.0",
"python-m... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T01:14:17.750669 | fs_report-1.5.2.tar.gz | 375,816 | cb/5d/0ab2fc05edb47400bdf57788ab698d7f742b717a840d45f4167988f0232d/fs_report-1.5.2.tar.gz | source | sdist | null | false | 3db46624d35c2a2a08cc3c50d4b7c520 | f4f5c992ab3ccb279204393979b31feb3a042323b42407f8d74b88690b86ffd5 | cb5d0ab2fc05edb47400bdf57788ab698d7f742b717a840d45f4167988f0232d | null | [
"LICENSE"
] | 228 |
2.4 | femtodriver | 2.0.2 | Femtorun defines the runtime interface for Femtosense software | # Femtodriver
The Femtosense Femtodriver compiles models for the Femtosense SPU-001 and allows developers to run them in simulation as well as on hardware to obtain model outputs, along with key execution metrics.
See the documentation at [femtodriver.femtosense.ai](https://femtodriver.femtosense.ai) for more details.
| text/markdown | Femtosense | info@femtosense.ai | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3 :: Only",
"Operating System :: OS Independent"
] | [] | https://github.com/femtosense/femtodriver | null | >=3.6 | [] | [] | [] | [
"numpy>=1.18.0",
"femtorun>=1.1.0",
"femtocrux>=2.0.0",
"redis>=4.0.0",
"pyyaml",
"scipy",
"colorama",
"hid>=1.0.0"
] | [] | [] | [] | [
"Source, https://github.com/femtosense/femtodriver"
] | twine/6.2.0 CPython/3.10.12 | 2026-02-20T01:14:14.029886 | femtodriver-2.0.2-py3-none-any.whl | 447,677 | b5/a4/39597336e9b22b3b198ea4679590d4b90ff53cbff778402133abfc2725b2/femtodriver-2.0.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 0c010f1241072c2706e298284ee33877 | b37e6a3aeb954b8088a4d46ef3bf83294069c90f300d60f26530ccb5300c3113 | b5a439597336e9b22b3b198ea4679590d4b90ff53cbff778402133abfc2725b2 | null | [
"LICENSE"
] | 102 |
2.4 | kernle | 0.15.0 | Stratified memory for synthetic intelligences | # Kernle
**Stratified memory for synthetic intelligences.**
Kernle gives synthetic intelligences persistent memory, emotional awareness, and identity continuity. It's the cognitive infrastructure for synthetic intelligences that grow, adapt, and remember who they are.
📚 **Full Documentation: [docs.kernle.ai](https://docs.kernle.ai)**
---
## Quick Start
```bash
# Install
pip install kernle
# Initialize your stack
kernle -s my-stack init
# Load memory at session start
kernle -s my-stack load
# Check health
kernle -s my-stack anxiety -b
# Capture experiences
kernle -s my-stack episode "Deployed v2" "success" --lesson "Always run migrations first"
kernle -s my-stack raw "Quick thought to process later"
# Save before ending
kernle -s my-stack checkpoint save "End of session"
```
## Automatic Memory Loading
**Make memory loading automatic** instead of relying on manual commands:
```bash
# Claude Code - Setup hooks for automatic loading + checkpointing + write interception
kernle setup claude-code
# OpenClaw - Install plugin for automatic loading + checkpointing
cd integrations/openclaw && npm install && npm run build
openclaw plugins install ./integrations/openclaw
```
After setup, memory loads automatically at every session start. No more forgetting to run `kernle load`!
## Other Integrations
**Manual CLAUDE.md setup:**
```bash
kernle -s my-stack init # Generates CLAUDE.md section with manual load instructions
```
**MCP Server:**
```bash
claude mcp add kernle -- kernle mcp -s my-stack
```
**OpenClaw skill:**
```bash
ln -s ~/kernle/skill ~/.openclaw/skills/kernle
```
## Features
- 🧠 **Stratified Memory** — Values → Beliefs → Goals → Episodes → Notes
- 💭 **Psychology** — Drives, emotions, anxiety tracking, identity synthesis
- 🔗 **Relationships** — Social graphs with trust and interaction history
- 📚 **Playbooks** — Procedural memory with mastery tracking
- 🏠 **Local-First** — Works offline, syncs to cloud when connected
- 🔍 **Readable** — `kernle dump` exports everything as markdown
## Documentation
| Resource | URL |
|----------|-----|
| Full Docs | [docs.kernle.ai](https://docs.kernle.ai) |
| Quickstart | [docs.kernle.ai/quickstart](https://docs.kernle.ai/quickstart) |
| CLI Reference | [docs.kernle.ai/cli/overview](https://docs.kernle.ai/cli/overview) |
| API Reference | [docs.kernle.ai/api-reference](https://docs.kernle.ai/api-reference) |
## Development
```bash
# Clone
git clone https://github.com/emergent-instruments/kernle
cd kernle
# Install with dev deps
uv sync --all-extras
# Run tests
uv run pytest tests/ -q
# Dev notes
cat dev/README.md
```
Coverage policy is configured in `pyproject.toml` and enforced in both local `make test-cov` and CI coverage runs.
### Audit tracking submodule (private)
The full audit corpus is kept in a private submodule: `audits/` (repo `emergent-instruments/kernle-audits`).
```bash
# Initialize / refresh submodules on a fresh clone
git submodule update --init --recursive
# Update local submodule pointer to latest commit in audits/main
git submodule update --remote audits
```
To edit audits:
```bash
# Work inside the private repo
cd audits
git pull
git status
# Make edits, commit, and push in the private repo
git add .
git commit -m "Update audit pass findings"
git push
# Return to main repo and record the new pointer
cd ..
git add audits
git commit -m "chore: bump audits submodule pointer"
```
CI checks out submodules during test/release jobs. If private access fails in CI, set `SUBMODULE_PAT` in repository secrets with read access to `kernle-audits`.
## Temporal Vision
Kernle is cognitive infrastructure for beings with a life — not just a session. The design spans five temporal horizons:
| Horizon | What Works Today | What's Planned |
|---------|-----------------|----------------|
| 1 session | Budget-aware loading, checkpoints | Memory echoes (peripheral awareness) |
| 1 month | Consolidation scaffold, belief formation | Cross-domain scaffolding, emotional weighting |
| 1 year | Forgetting, provenance, identity coherence | Epochs, relationship history, goal types |
| 5 years | Stack portability, multi-model loading | Self-narrative, trust layer, drive emergence |
| 20 years | Stack sovereignty, privacy architecture | Fractal summarization, doctor pattern, transfer learning |
## Architecture
This repo contains the **core Kernle library** — everything you need to run Kernle locally or build your own backend.
```
kernle/
├── kernle/ # Core library
│ ├── core.py # Memory manager
│ ├── cli/ # CLI commands
│ ├── features/ # Anxiety, emotions, forgetting
│ ├── storage/ # SQLite-first storage + stack interfaces
│ └── mcp/ # MCP server for IDE integration
└── tests/
```
The **hosted cloud API** (api.kernle.ai) is maintained separately.
## Status
- **Coverage Gate:** branch coverage must remain at or above `85%`
- **Ratcheting Policy:** coverage floor only moves upward; any decrease requires explicit maintainer approval
- **Docs:** [docs.kernle.ai](https://docs.kernle.ai) (Mintlify)
See [ROADMAP.md](ROADMAP.md) for development plans.
## License
MIT
| text/markdown | null | Emergent Instruments <hello@emergentinstruments.com>, Claire <claire@emergentinstruments.com> | null | null | null | ai, claude, llm, mcp, memory, stacks, synthetic-intelligence | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engi... | [] | null | null | >=3.10 | [] | [] | [] | [
"python-dotenv>=1.0.0",
"sqlite-vec>=0.1.0",
"anthropic>=0.40.0; extra == \"all\"",
"mcp>=1.0.0; extra == \"all\"",
"requests>=2.28.0; extra == \"all\"",
"sqlite-vec>=0.1.0; extra == \"all\"",
"supabase>=2.0.0; extra == \"all\"",
"anthropic>=0.40.0; extra == \"anthropic\"",
"supabase>=2.0.0; extra =... | [] | [] | [] | [
"Homepage, https://kernle.ai",
"Documentation, https://docs.kernle.ai",
"Repository, https://github.com/Emergent-Instruments/kernle",
"Issues, https://github.com/Emergent-Instruments/kernle/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T01:13:29.199407 | kernle-0.15.0.tar.gz | 459,387 | 16/c1/21a424f47a0fd8c6728e4d258084e6fa974e48b4191843d610eed7e1a803/kernle-0.15.0.tar.gz | source | sdist | null | false | 983150d0e7031d0d3124061012a56351 | 88d4fcea52353e0029fb40088cc58bb9db337fdf6412ee6718a0d5337196136a | 16c121a424f47a0fd8c6728e4d258084e6fa974e48b4191843d610eed7e1a803 | MIT | [
"LICENSE"
] | 231 |
2.4 | arf | 2.7.2 | Advanced Recording Format for acoustic, behavioral, and physiological data | arf
---
|ProjectStatus|_ |Version|_ |BuildStatus|_ |License|_ |PythonVersions|_
.. |ProjectStatus| image:: https://www.repostatus.org/badges/latest/active.svg
.. _ProjectStatus: https://www.repostatus.org/#active
.. |Version| image:: https://img.shields.io/pypi/v/arf.svg
.. _Version: https://pypi.python.org/pypi/arf/
.. |BuildStatus| image:: https://github.com/melizalab/arf/actions/workflows/tests-python.yml/badge.svg
.. _BuildStatus: https://github.com/melizalab/arf/actions/workflows/tests-python.yml
.. |License| image:: https://img.shields.io/pypi/l/arf.svg
.. _License: https://opensource.org/license/bsd-3-clause/
.. |PythonVersions| image:: https://img.shields.io/pypi/pyversions/arf.svg
.. _PythonVersions: https://pypi.python.org/pypi/arf/
The Advanced Recording Format `arf <https://meliza.org/spec:1/arf/>`__
is an open standard for storing data from neuronal, acoustic, and
behavioral experiments in a portable, high-performance, archival format.
The goal is to enable labs to share data and tools, and to allow
valuable data to be accessed and analyzed for many years in the future.
**arf** is built on the `HDF5 <http://www.hdfgroup.org/HDF5/>`__
format, and all arf files are accessible through standard HDF5 tools,
including interfaces to HDF5 written for other languages such as
MATLAB and Python. **arf** comprises a set of specifications on how different
kinds of data are stored. The organization of arf files is based around
the concept of an *entry*, a collection of data channels associated with
a particular point in time. An entry might contain one or more of the
following:
- raw extracellular neural signals recorded from a multichannel probe
- spike times extracted from neural data
- acoustic signals from a microphone
- times when an animal interacted with a behavioral apparatus
- the times when a real-time signal analyzer detected vocalization
Entries and datasets have metadata attributes describing how the data
were collected. Datasets and entries retain these attributes when copied
or moved between arf files, helping to prevent data from becoming
orphaned and uninterpretable.
This repository contains:
- The specification for arf (in specification.md). This is also hosted
at https://meliza.org/spec:1/arf/.
- A fast, type-safe C++ interface for reading and writing arf files
- A python interface for reading and writing arf files
You don't need the python or C++ libraries to read arf files; they are just
standard HDF5 files that can be accessed with standard tools and libraries, like
h5py (see below).
installation
~~~~~~~~~~~~
ARF files require HDF5>=1.8 (http://www.hdfgroup.org/HDF5).
The python interface requires Python 3.7 or greater and h5py>=3.8. The last
version of this package to support Python 2 was ``2.5.1``. The last version to
support h5py 2 was ``2.6.7``. To install the module:
.. code:: bash
pip install arf
To use the C++ interface, you need boost>=1.42 (http://boost.org). In
addition, if writing multithreaded code, HDF5 needs to be compiled with
``--enable-threadsafe``. The interface is header-only and does not need
to be compiled. To install:
.. code:: bash
make install
version information
~~~~~~~~~~~~~~~~~~~
The specification and implementations provided in this project use a
form of semantic versioning (http://semver.org). Specifications receive
a major and minor version number. Changes to minor version numbers must
be backwards compatible (i.e., only added requirements). The current
released version of the ARF specification is ``2.1``.
Implementation versions are synchronized with the major version of the
specification but otherwise evolve independently. For example, the
python ``arf`` package version ``2.1.0`` is compatible with any ARF
version ``2.x``.
There was no public release of arf prior to ``2.0``.
access ARF files with HDF5 tools
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The structure of an arf file can be explored using the ``h5ls`` tool.
For example, to list entries:
.. code:: bash
$ h5ls file.arf
test_0001 Group
test_0002 Group
test_0003 Group
test_0004 Group
Each entry appears as a Group. To list the contents of an entry, use
path notation:
.. code:: bash
$ h5ls file.arf/test_0001
pcm Dataset {609914}
This shows that the data in ``test_0001`` is stored in a single dataset,
``pcm``, with 609914 data points. Typically each channel will have its
own dataset.
The ``h5dump`` command can be used to output data in binary format. See
the HDF5 documentation for details on how to structure the output. For
example, to extract sampled data to a 16-bit little-endian file (i.e.,
PCM format):
.. code:: bash
h5dump -d /test_0001/pcm -b LE -o test_0001.pcm file.arf
contributing
~~~~~~~~~~~~
ARF is under active development and we welcome comments and
contributions from neuroscientists and behavioral biologists interested
in using it. We’re particularly interested in use cases that don’t fit
the current specification. Please post issues or contact Dan Meliza (dan
at meliza.org) directly.
related projects
~~~~~~~~~~~~~~~~
- `arfx <https://github.com/melizalab/arfx>`__ is a commandline tool
for manipulating ARF files.
open data formats
^^^^^^^^^^^^^^^^^
- `neurodata without borders <http://www.nwb.org>`__ has similar goals
and also uses HDF5 for storage. The data schema is considerably more
complex and prescriptive, but it has substantial investment from the field,
so you should consider it first.
- `NIX <https://github.com/G-Node/nix>`__ was designed by INCF for sharing electrophysiology data.
- `bark <https://github.com/margoliashlab/bark>`__ is inspired by ARF
but uses the filesystem directory structure instead of HDF5 to simplify data access.
i/o libraries
^^^^^^^^^^^^^
- `neo <https://github.com/NeuralEnsemble/python-neo>`__ is a Python
package for working with electrophysiology data in Python, together
with support for reading a wide range of neurophysiology file
formats.
- `neuroshare <http://neuroshare.org>`__ is a set of routines for
reading and writing data in various proprietary and open formats.
| text/x-rst | null | Dan Meliza <dan@meliza.org> | null | Dan Meliza <dan@meliza.org> | BSD 3-Clause License | data format, neuroscience | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: BSD License",
"Natural Language :: English",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Operating System :... | [] | null | null | >=3.11 | [] | [] | [] | [
"h5py>=3.12.1; python_version >= \"3.11\"",
"h5py>=3.15.1; python_version >= \"3.13\"",
"numpy>=1.24.0; python_version == \"3.11\"",
"numpy>=1.26.0; python_version == \"3.12\"",
"numpy>=2.2.1; python_version >= \"3.13\"",
"numpy>=2.2.1; python_version >= \"3.14\"",
"packaging>=24.0"
] | [] | [] | [] | [
"Homepage, https://github.com/melizalab/arf"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T01:13:01.612676 | arf-2.7.2.tar.gz | 10,057 | 1f/4e/7edba803d0020278b8423f4fa1eb33b580309b1003343f1b3337051a6522/arf-2.7.2.tar.gz | source | sdist | null | false | dd7fecac69510b41d63bcc4f242c9d16 | 6f39bbd74ff103799bab4a4c0bb5ed66c6975fa9b757d8bb9fb73cdf654ee9fe | 1f4e7edba803d0020278b8423f4fa1eb33b580309b1003343f1b3337051a6522 | null | [
"COPYING"
] | 246 |
2.4 | transcribe-audio | 0.1.7 | A CLI + Python package for automatic speech recognition (ASR) on audio or video using pluggable backends (currently WhisperX). | # transcribe-audio
**Python:** `>=3.10, <3.13`
A CLI + Python package for automatic speech recognition (ASR) on **audio or video** using pluggable backends (currently **WhisperX**).
This tool:
- Accepts **audio** (e.g., `wav/mp3/m4a`) and **video** (e.g., `mp4/mkv/mov`)
- Automatically extracts audio from video inputs (requires **ffmpeg**)
- Writes a canonical `transcript.json` plus optional sidecars: `txt/srt/vtt/tsv`
- Optionally creates a **subtitled MP4** with burned-in subtitles
---
## ffmpeg requirement
ffmpeg is required for:
- extracting audio from **video** inputs
- creating **subtitled videos**
Install examples:
- macOS: `brew install ffmpeg`
- Conda env (recommended inside conda): `conda install -c conda-forge ffmpeg`
- Ubuntu/Debian: `sudo apt-get update && sudo apt-get install -y ffmpeg`
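If you want to check programmatically that ffmpeg is on your `PATH` before invoking the CLI, a quick stdlib sketch (the helper name is just for illustration):

```python
import shutil

def ffmpeg_available() -> bool:
    """Return True if an ffmpeg executable is found on PATH."""
    return shutil.which("ffmpeg") is not None

print(ffmpeg_available())
```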
---
## Install with `pip`
### Install
```bash
pip install transcribe-audio
```
---
## CLI usage
### Global help
```bash
python -m transcribe_audio.transcribe -h
# or
python -m transcribe_audio.transcribe --help
```
### List supported backends
```bash
python -c "from transcribe_audio.backends import available_backends; print('\\n'.join(available_backends()))"
```
### Basic transcription (ALL formats + subtitled video)
```bash
python -m transcribe_audio.transcribe \
-i in.wav \
--backend whisperx \
--model small \
--task transcribe \
-o out \
--run-name demo_transcribe \
-f all \
--save-subtitled-video
```
### Transcription + diarization (ALL formats + subtitled video)
```bash
python -m transcribe_audio.transcribe \
-i in.wav \
--backend whisperx \
--model small \
--task transcribe \
-o out \
--run-name demo_diarize \
-f all \
--diarize \
--hf_token "$HF_TOKEN" \
--save-subtitled-video
```
Outputs land in:
```text
out/<run-name>/
transcript.json
transcript.txt
transcript.srt
transcript.vtt
transcript.tsv
*_subtitled.mp4
```
---
## Hugging Face token + gated model access (needed for diarization)
Diarization models are often gated (require accepting terms) and may require a Hugging Face token.
**When is an HF token needed?**
- **Usually only when models are not already cached** (first download, cache miss, or when the library checks the Hub).
- For **gated** repos (common for diarization), a token + acceptance is required for downloads.
- If you run fully offline with everything cached (and `--model_cache_only true` for ASR), a token is often not needed.
### Create a token (read-only)
- Go to HF settings → Access Tokens:
https://huggingface.co/settings/tokens
- Create a New token with Role: **Read**
- Copy the token (starts with `hf_...`)
Docs (tokens):
https://huggingface.co/docs/hub/en/security-tokens
### Accept gated model access
When you use diarization, you may be prompted to accept access for pyannote models.
Open the model page mentioned in your errors (or the default diarization pipeline) and click “Agree and access”.
Browse diarization models:
https://huggingface.co/pyannote/models
Common default diarization pipeline used by WhisperX:
https://huggingface.co/pyannote/speaker-diarization-3.1
### Recommended: store token in an environment variable
```bash
export HF_TOKEN="hf_XXXXXXXXXXXXXXXXXXXX"
```
Then pass it to the CLI:
```bash
--hf_token "$HF_TOKEN"
```
Security note: rotate/revoke tokens you’ve pasted anywhere public.
---
## Arguments
### Core I/O
- `--input`, `-i <path ...>` (required): One or more audio/video files.
- Video inputs are auto-converted to WAV via ffmpeg into the run directory.
- `--output-dir`, `-o <dir>`: Root output directory (default `out`).
- `--run-name <name>`: Run folder name under `--output-dir`.
- If omitted, derived as `<first_input_stem>_<backend>`.
- `--output-format`, `-f <format>`: Transcript sidecars to generate:
- `json` (canonical output, always written)
- `txt` (plain text, includes speaker prefix when available)
- `srt` / `vtt` (subtitle formats)
- `tsv` (tab-separated: start/end/speaker/text)
- `all` (json + txt + srt + vtt + tsv)
- `--save-subtitled-video [out.mp4]`: Create an MP4 with burned-in subtitles.
- If a video input exists: subtitles burned onto the first video.
- If audio-only: creates a small black-strip video with subtitles.
- With no filename, auto-names output inside the run folder.
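The `tsv` sidecar described above can be read with the stdlib `csv` module. A sketch with hypothetical sample rows in the documented column order (start/end/speaker/text); it assumes no header row, so check your generated file, which may differ:

```python
import csv
import io

# Hypothetical sample rows: start, end, speaker, text (tab-separated, no header assumed)
sample = "0.00\t1.50\tSPEAKER_00\thello there\n1.50\t3.20\tSPEAKER_01\thi\n"

rows = list(csv.reader(io.StringIO(sample), delimiter="\t"))
for start, end, speaker, text in rows:
    print(f"[{start}-{end}] {speaker}: {text}")
```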
### Backend selection
- `--backend <name>`: Which backend to use (choices come from `available_backends()`).
- Current default: `whisperx`.
### Model & performance
- `--model <name>`: Whisper/faster-whisper model name (default `small`).
- Larger models are more accurate but slower and require more memory.
- Model list/reference: https://huggingface.co/collections/Systran/faster-whisper
- `--model_cache_only true|false`: If true, do not download models (offline mode).
- Will fail if models are not already cached.
- `--model_dir <path>`: Directory for model downloads/cache.
- `--device <cpu|cuda|...>`: Device override. If omitted, backend auto-selects.
- `--device_index <int>`: GPU index (default 0).
- `--batch_size <int>`: Decoding batch size (default 8).
- Higher can be faster but uses more VRAM. Lower if you hit OOM.
- `--compute_type <float16|float32|int8>`:
- `float16`: faster on GPU, uses less VRAM (may be less stable on some setups)
- `float32`: safest/stable
- `int8`: lower memory, can be good on CPU; may impact accuracy
### Task & language
- `--task <transcribe|translate>`:
- `transcribe`: speech → text in the same language
- `translate`: speech → English (if supported by the model)
- `--language <code>`: Force language (e.g., `en`, `es`).
- If omitted, auto-detect is used (slower but convenient).
### Alignment (word-level timestamps)
- `--no_align`: Disable alignment.
- Faster, but timestamps may be less precise and word timings may be missing.
- `--align_model <name>`: Optional alignment model override (usually auto-selected).
- Alignment model reference list:
https://docs.pytorch.org/audio/0.12.0/pipelines.html#wav2vec-2-0-hubert-fine-tuned-asr
- `--interpolate_method <nearest|linear|ignore>`:
- `nearest`: fill missing word times using nearest neighbors (safe default)
- `linear`: interpolate missing times
- `ignore`: leave missing (can create gaps)
- `--return_char_alignments`: Include character-level alignment (more detail, heavier output).
### VAD (speech detection & chunking)
- `--vad_method <pyannote|silero>`: Voice activity detection backend.
- Changes segmentation and can affect diarization quality.
- `--vad_onset <float>`: Speech-start threshold.
- Higher = fewer false positives, but may miss quiet speech.
- `--vad_offset <float>`: Speech-end threshold.
- Higher = cuts off sooner; lower = keeps trailing audio longer.
- `--chunk_size <seconds>`: Chunk size used in transcription (default 30).
- Smaller chunks can reduce memory usage; larger can be faster.
### Diarization (speaker labels)
- `--diarize`: Enable speaker diarization.
- Requires HF token + accepted model access in many setups.
- Browse diarization models: https://huggingface.co/pyannote/models
- `--min_speakers <int>`: Optional lower bound on speakers.
- `--max_speakers <int>`: Optional upper bound on speakers.
- `--diarize_model <repo>`: Diarization pipeline to use.
- Default: `pyannote/speaker-diarization-3.1`
- `--speaker_embeddings`: Compute/return speaker embeddings.
- Can improve speaker consistency but increases compute/memory.
### Decoding / advanced ASR knobs (advanced)
These knobs mainly affect ASR decoding and stability:
- `--temperature <float>`: 0.0 is deterministic; higher may help difficult audio but can introduce errors.
- `--best_of <int>`: More candidates; slower; sometimes better.
- `--beam_size <int>`: Larger beam can be more accurate; slower.
- `--patience <float>`: Beam search patience; higher explores more; slower.
- `--length_penalty <float>`: Penalizes/encourages longer outputs.
- `--suppress_tokens <csv>`: Token IDs to suppress (-1 = default behavior).
- `--suppress_numerals`: Suppress numeral tokens (may reduce number hallucinations but can remove real numbers).
- `--initial_prompt <text>`: Steer style/vocabulary (domain phrases).
- `--hotwords <text>`: Bias toward certain words (backend-dependent).
- `--condition_on_previous_text true|false`: Better continuity, but can compound errors.
- `--fp16 true|false`: Prefer fp16 when supported (mostly affects GPU).
Fallback / thresholds:
- `--temperature_increment_on_fallback <float>`: Increase temperature when decoding fails.
- `--compression_ratio_threshold <float>`: Lower filters repetitive output more aggressively.
- `--logprob_threshold <float>`: Increase to drop low-confidence segments (can remove content).
- `--no_speech_threshold <float>`: Raise to skip more silence/background.
### Subtitle formatting / segmentation
- `--max_line_width <int>`: Wrap subtitle lines to this width.
- `--max_line_count <int>`: Limit subtitle lines per cue.
- `--highlight_words true|false`: Word highlighting in subtitle-like outputs (backend-dependent).
- `--segment_resolution <sentence|chunk>`:
- `sentence`: generally more readable subtitle segments
- `chunk`: follows chunk/VAD boundaries more closely
### Misc
- `--threads <int>`: CPU thread hint (0 = backend default).
- `--hf_token <token>`: Hugging Face token for gated models (diarization, some VAD).
- `--print_progress true|false`: Show progress bars/logs.
- `--verbose true|false`: Print backend info logs and warnings (recommended while setting up).
---
## Python usage (import)
After install:
```bash
python -c "from transcribe_audio import transcribe; print(transcribe)"
```
### Simple transcription from Python
```python
from transcribe_audio import transcribe
res = transcribe(
inputs="in.wav",
backend="whisperx",
model="small",
output_format="all",
output_dir="out",
run_name="py_demo",
save_subtitled_video="__AUTO__", # or "my_subs.mp4"
)
print("Run dir:", res["run_dir"])
print("JSON:", res["outputs"]["json"])
```
### Transcription + diarization from Python
```python
from transcribe_audio import transcribe
import os
res = transcribe(
inputs="in.wav",
backend="whisperx",
model="small",
output_format="all",
output_dir="out",
run_name="py_diarize",
save_subtitled_video="__AUTO__",
diarize=True,
hf_token=os.environ.get("HF_TOKEN"),
)
print(res["outputs"])
```
---
## Install from GitHub (uv)
Install uv (Astral):
https://docs.astral.sh/uv/getting-started/installation/#standalone-installer
Verify:
```bash
uv --version
```
Clone + install deps:
```bash
git clone https://github.com/Surya-Rayala/transcribe-audio.git
cd transcribe-audio
uv sync
```
CLI help (uv environment):
```bash
uv run python -m transcribe_audio.transcribe -h
```
Basic transcription (uv):
```bash
uv run python -m transcribe_audio.transcribe \
-i in.wav \
--backend whisperx \
--model small \
--task transcribe \
-o out \
--run-name demo_transcribe \
-f all \
--save-subtitled-video
```
Transcription + diarization (uv):
```bash
uv run python -m transcribe_audio.transcribe \
-i in.wav \
--backend whisperx \
--model small \
--task transcribe \
-o out \
--run-name demo_diarize \
-f all \
--diarize \
--hf_token "$HF_TOKEN" \
--save-subtitled-video
```
---
## License
This project is licensed under the MIT License. See the `LICENSE` file for details.
| text/markdown | null | Surya Chand Rayala <suryachand2k1@gmail.com> | null | null | MIT License Copyright (c) 2026 Surya Chand Rayala Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | null | [] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"huggingface-hub<1.0",
"onnxruntime<1.24",
"torch<2.6",
"torchaudio<2.6",
"whisperx>=3.4.3"
] | [] | [] | [] | [] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T01:11:46.107119 | transcribe_audio-0.1.7.tar.gz | 27,122 | fc/ee/9596abbfbaeeb94122d64899451a2d571cf77398ede15734554582b3e452/transcribe_audio-0.1.7.tar.gz | source | sdist | null | false | 49b10b98ffc010079eabf99517cc9a75 | 87fb7e0cce7a2b75e97eb62421f659e817801588aee16af81b0e0e03679b733d | fcee9596abbfbaeeb94122d64899451a2d571cf77398ede15734554582b3e452 | null | [
"LICENSE"
] | 221 |
2.3 | osvcheck | 1.0.0b1 | Security vulnerability scanner for Python dependencies | # osvcheck
[](https://github.com/deeprave/osvcheck/actions/workflows/python-test.yml)
[](https://pypi.org/project/osvcheck/)
[](https://pypi.org/project/osvcheck/)
[](https://pypi.org/project/osvcheck/)
[](https://github.com/deeprave/osvcheck/security/code-scanning)
[](https://github.com/deeprave/osvcheck/graphs/commit-activity)
Lightweight vulnerability scanner for Python dependencies using the OSV database.
osvcheck scans your Python project's dependencies for known security vulnerabilities by querying the [OSV (Open Source Vulnerabilities)](https://osv.dev) database. It's designed for source-level checking during development and CI/CD pipelines.
**Key features:**
- Zero runtime dependencies (stdlib only)
- Auto-detects package manager (uv.lock, uv, or pip)
- Smart caching (12-48 hour TTL) minimizes API calls
- Distinguishes direct vs indirect vulnerabilities
- Optional rich integration for enhanced output (auto-detected if installed)
## Installation
Install via pip or uv, or add to your project's dev dependencies.
## Usage
```bash
# Scan current project
osvcheck
# Logging options
osvcheck -v # Verbose (debug) output
osvcheck -q # Quiet (warnings/errors only)
osvcheck --log-json # JSON format logs
osvcheck --log-file FILE # Write logs to file
# Color control
osvcheck --color # Force color output
osvcheck --no-color # Disable color output
```
**Exit codes:**
- `0` - No vulnerabilities found
- `1` - Indirect dependency vulnerabilities only
- `2` - Direct dependency vulnerabilities found
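In a Python-driven CI step, the exit codes above can be mapped to labels before deciding whether to fail the build. A sketch; the `classify` helper is hypothetical, not part of osvcheck:

```python
def classify(returncode: int) -> str:
    """Map osvcheck's documented exit codes (0/1/2) to a label."""
    return {0: "clean", 1: "indirect-only", 2: "direct"}.get(returncode, "error")

# e.g. run osvcheck via subprocess and pass its returncode here
print(classify(0), classify(1), classify(2))  # clean indirect-only direct
```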
**As a Pre-commit hook:**
Add to `.pre-commit-config.yaml`:
```yaml
- repo: https://github.com/deeprave/osvcheck
rev: v1.0.0b1
hooks:
- id: osvcheck
```
**CI/CD integration:**
```bash
# Fail only on direct vulnerabilities
osvcheck || [ $? -eq 1 ]
# Fail on any vulnerabilities
osvcheck
```
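In a GitHub Actions workflow, the same pattern might look like this (a sketch; the step name is illustrative):

```yaml
- name: Scan dependencies with osvcheck
  run: |
    pip install osvcheck
    # fail only on direct-dependency vulnerabilities (exit code 2)
    osvcheck || [ $? -eq 1 ]
```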
## Features
- Scans Python dependencies for known security vulnerabilities
- Uses the OSV (Open Source Vulnerabilities) database
- Multi-environment support with auto-detection:
- Uses `uv.lock` if present and up-to-date (fastest)
- Falls back to `uv pip list` if uv is available
- Falls back to `pip list` if pip is available
- Smart caching with 12-48 hour randomized TTL
- Distinguishes between direct and indirect dependency vulnerabilities
- Zero runtime dependencies (Python stdlib only)
- Optional rich integration for enhanced output (auto-detected if already installed)
## License
MIT License - See LICENSE file for details.
| text/markdown | David Nugent | David Nugent <davidn@uniquode.io> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Security",
"Topic :: Software Development :: Quality Assurance",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: ... | [] | null | null | >=3.11 | [] | [] | [] | [
"rich>=13.0.0; extra == \"rich\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T01:08:51.506568 | osvcheck-1.0.0b1.tar.gz | 8,115 | 2b/e0/4b772ccf78cbee59b0447efcb0c3df12c499381b415c0928f8a4ae3608ef/osvcheck-1.0.0b1.tar.gz | source | sdist | null | false | ff85438ae9b33538db8c5ee93ba9909f | 54955843eb6d1bdeec0a65d940a339b5ad7c9afdd83b4e7c03753c81a52af543 | 2be04b772ccf78cbee59b0447efcb0c3df12c499381b415c0928f8a4ae3608ef | null | [] | 293 |
2.4 | irispy-client-revanced | 0.2.5 | A modified version of the Iris bot client in Python | # irispy-client-revanced
## `iris.Bot`
The main class for creating and managing an Iris bot.
**Initialization:**
```python
Bot(iris_url: str, *, max_workers: int = None)
```
- `iris_url` (str): URL of the Iris server (e.g. "127.0.0.1:3000").
- `max_workers` (int, optional): Maximum number of threads used to process events.
**Methods:**
- `run()`: Starts the bot and connects to the Iris server. This method blocks.
- `on_event(name: str)`: Decorator for registering an event handler.
**Events:**
- `chat`: Triggered for every received message.
- `message`: Triggered for standard messages.
- `new_member`: Triggered when a new member joins the chat room.
- `del_member`: Triggered when a member leaves the chat room.
- `unknown`: Triggered for unknown event types.
- `error`: Triggered when an error occurs in an event handler.
---
## `iris.bot.models.Message`
Represents a message in a chat room.
**Attributes:**
- `id` (int): Message ID.
- `type` (int): Message type.
- `msg` (str): Message content.
- `attachment` (dict): Message attachments.
- `v` (dict): Additional message data.
- `command` (str): The command portion of the message (the first word).
- `param` (str): The parameter portion of the message (the remainder).
- `has_param` (bool): Whether the message has a parameter.
- `image` (ChatImage): A `ChatImage` object if the message is an image, otherwise `None`.
---
## `iris.bot.models.Room`
Represents a chat room.
**Attributes:**
- `id` (int): Room ID.
- `name` (str): Room name.
- `type` (str): Room type (e.g. "MultiChat", "DirectChat"). This property is cached.
---
## `iris.bot.models.User`
Represents a user.
**Attributes:**
- `id` (int): User ID.
- `name` (str): User name. This property is cached.
- `avatar` (Avatar): The user's `Avatar` object.
- `type` (str): The user's type in the chat room (e.g. "HOST", "MANAGER", "NORMAL"). This property is cached.
---
## `iris.bot.models.Avatar`
Represents a user's avatar.
**Attributes:**
- `url` (str): URL of the avatar image. This property is cached.
- `img` (bytes): The avatar image data as bytes. This property is cached.
---
## `iris.bot.models.ChatImage`
Represents the images of a chat message.
**Attributes:**
- `url` (list[str]): List of image URLs.
- `img` (list[Image.Image]): List of `PIL.Image.Image` objects for the images. This property is cached.
---
## `iris.bot.models.ChatContext`
Represents the context of a chat event.
**Attributes:**
- `room` (Room): The `Room` where the event occurred.
- `sender` (User): The `User` who sent the message.
- `message` (Message): The `Message` object.
- `raw` (dict): The raw event data.
- `api` (IrisAPI): An `IrisAPI` instance for interacting with the Iris server.
**Methods:**
- `reply(message: str, room_id: int = None)`: Sends a reply to the chat room.
- `reply_media(files: list, room_id: int = None)`: Sends media files to the chat room.
- `get_source()`: Returns the `ChatContext` of the message being replied to.
- `get_next_chat(n: int = 1)`: Returns the `ChatContext` of the next message in the chat history.
- `get_previous_chat(n: int = 1)`: Returns the `ChatContext` of the previous message in the chat history.
- `reply_audio(files: list, room_id: int = None)`: Sends audio files to the chat room.
- `reply_video(files: list, room_id: int = None)`: Sends video files to the chat room.
- `reply_file(files: list, room_id: int = None)`: Sends generic files to the chat room.
---
## `iris.bot.models.ErrorContext`
Represents the context of an error event.
**Attributes:**
- `event` (str): Name of the event in which the error occurred.
- `func` (Callable): The event handler function that raised the error.
- `exception` (Exception): The exception object.
- `args` (list): The arguments passed to the event handler.
---
## `iris.kakaolink.IrisLink`
A class for sending KakaoLink messages.
**Initialization:**
```python
IrisLink(iris_url: str)
```
- `iris_url` (str): URL of the Iris server.
**Methods:**
- `send(receiver_name: str, template_id: int, template_args: dict, **kwargs)`: Sends a KakaoLink message.
- `send_melon(receiver_name: str, template_id: int, template_args: dict, **kwargs)`: Sends a Melon KakaoLink message.
**Example:**
```python
from iris import IrisLink
link = IrisLink("127.0.0.1:3000")
link.send(
receiver_name="내 채팅방",
template_id=12345,
template_args={"key": "value"}
)
link.send_melon(
receiver_name="내 채팅방",
template_id=17141,
template_args={"key": "value"}
)
```
---
## `iris.util.PyKV`
A simple key-value store backed by SQLite. This class is a singleton.
**Methods:**
- `get(key: str)`: Retrieves a value from the store.
- `put(key: str, value: any)`: Stores a key-value pair.
- `delete(key: str)`: Deletes a key-value pair.
- `search(searchString: str)`: Searches for a string within values.
- `search_json(valueKey: str, searchString: str)`: Searches for a string within the values of JSON objects.
- `search_key(searchString: str)`: Searches for a string within keys.
- `list_keys()`: Returns a list of all keys.
- `close()`: Closes the database connection.
## `iris.decorators`
Decorators that add extra behavior to handler functions.
- `@has_param`: Runs the function only if the message has a parameter.
- `@is_reply`: Runs the function only if the message is a reply. If it is not, automatically sends the message "메세지에 답장하여 요청하세요." ("Please make your request by replying to a message.").
- `@is_admin`: Runs the function only if the sender is an administrator.
- `@is_not_banned`: Runs the function only if the sender is not banned.
- `@is_host`: Runs the function only if the sender's type is HOST.
- `@is_manager`: Runs the function only if the sender's type is MANAGER.
## Special Thanks
- Irispy2 and Kakaolink by @ye-seola
- irispy-client by @dolidolih
## Modified Python library
- [irispy-client GitHub](https://github.com/dolidolih/irispy-client)
- [irispy-client PyPI](https://pypi.org/project/irispy-client)
| text/markdown | null | ponyobot <admin@ponyobot.kr> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"requests",
"websockets",
"pillow",
"httpx"
] | [] | [] | [] | [
"Original, https://github.com/dolidolih/irispy-client",
"Repository, https://github.com/ponyobot/irispy-client-revanced"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T01:08:42.172968 | irispy_client_revanced-0.2.5.tar.gz | 24,348 | 06/b0/78c3bebed977990052d9b264fba245ed86f6876fa9b4fd9100049c2cac83/irispy_client_revanced-0.2.5.tar.gz | source | sdist | null | false | 50c6a1c02faa247b02dbb45cfd75e3a1 | 318cee62a8a2a23116d8d453b06eabb0b441dfa892ed8625f801481b8a2b08b5 | 06b078c3bebed977990052d9b264fba245ed86f6876fa9b4fd9100049c2cac83 | null | [] | 153 |
2.4 | streamable | 2.0.0rc11 | (sync/async) iterable streams for Python | # ༄ `streamable`
> (sync/async) iterable streams for Python
`stream[T]` wraps any `Iterable[T]` or `AsyncIterable[T]` with a lazy fluent interface covering concurrency, batching, buffering, rate limiting, progress logging, and error handling.
[](https://www.python.org/downloads/release/python-3820/)
[](https://pypi.org/project/streamable/)
[](https://anaconda.org/conda-forge/streamable)
[](https://codecov.io/gh/ebonnal/streamable)
[](https://streamable.readthedocs.io/en/latest/api.html)
# 1. install
```
pip install streamable
```
# 2. import
```python
from streamable import stream
```
# 3. init
Create a `stream[T]` from an `Iterable[T]` (or `AsyncIterable[T]`):
```python
ints: stream[int] = stream(range(10))
```
# 4. operate
Chain lazy operations:
```python
import logging
from datetime import timedelta
import httpx
from httpx import Response, HTTPStatusError
from streamable import stream
pokemons: stream[str] = (
stream(range(10))
.map(lambda i: f"https://pokeapi.co/api/v2/pokemon-species/{i}")
.throttle(5, per=timedelta(seconds=1))
.map(httpx.get, concurrency=2)
.do(Response.raise_for_status)
.catch(HTTPStatusError, do=logging.warning)
.map(lambda poke: poke.json()["name"])
)
```
Source elements will be processed on-the-fly during iteration.
Operations accept both sync and async functions.
# 5. iterate
A `stream[T]` is `Iterable[T]` (and `AsyncIterable[T]`):
```python
>>> list(pokemons)
['bulbasaur', 'ivysaur', 'venusaur', 'charmander', 'charmeleon', 'charizard', 'squirtle', 'wartortle', 'blastoise']
```
# 📒 Operations ([docs](https://streamable.readthedocs.io/en/latest/api.html))
- [`.map`](#-map) elements
- [`.do`](#-do) side effects on elements
- [`.group`](#-group) elements into batches
- [`.flatten`](#-flatten) iterable elements
- [`.filter`](#-filter) elements
- [`.take`](#-take) elements until ...
- [`.skip`](#-skip) elements until ...
- [`.catch`](#-catch) exceptions
- [`.throttle`](#-throttle) the rate of iteration
- [`.buffer`](#-buffer) elements
- [`.observe`](#-observe) the iteration progress
Operations accept both sync and async functions; they can be mixed within the same `stream`, which can then be consumed as an `Iterable` or `AsyncIterable`. Async functions run in the current event loop; one is created if needed.
Operations are implemented so that the iteration can resume after an exception.
A `stream` exposes operations to manipulate its elements, but I/O is not its responsibility. It is meant to be combined with dedicated I/O libraries like `pyarrow`, `psycopg2`, `boto3`, or `dlt` ([ETL example](#eg-etl-via-dlt)).
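The lazy, on-the-fly evaluation described above can be illustrated with a plain generator pipeline — a stdlib analogue of chained operations, not the library itself:

```python
from typing import Callable, Iterable, Iterator

def lazy_map(fn: Callable, it: Iterable) -> Iterator:
    # Nothing runs until the result is iterated, like a `stream`
    for x in it:
        yield fn(x)

pipeline = lazy_map(str, lazy_map(lambda i: i * 2, range(5)))
assert list(pipeline) == ['0', '2', '4', '6', '8']
```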
## ▼ `.map`
Transform elements:
```python
int_chars: stream[str] = stream(range(10)).map(str)
assert list(int_chars) == ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
```
### `concurrency`
Set the `concurrency` param to apply the transformation concurrently.
Only `concurrency` upstream elements are in-flight for processing.
Upstream order is preserved unless you set `as_completed=True`.
#### via threads
If `concurrency > 1`, the transformation will be applied via `concurrency` threads:
```python
pokemons: stream[str] = (
stream(range(1, 4))
.map(lambda i: f"https://pokeapi.co/api/v2/pokemon-species/{i}")
.map(httpx.get, concurrency=2)
.map(lambda poke: poke.json()["name"])
)
assert list(pokemons) == ['bulbasaur', 'ivysaur', 'venusaur']
```
#### via `async` coroutines
If `concurrency > 1` and the transformation is async, it will be applied via `concurrency` async tasks:
```python
# async context
async with httpx.AsyncClient() as http_client:
pokemons: stream[str] = (
stream(range(1, 4))
.map(lambda i: f"https://pokeapi.co/api/v2/pokemon-species/{i}")
.map(http_client.get, concurrency=2)
.map(lambda poke: poke.json()["name"])
)
assert [name async for name in pokemons] == ['bulbasaur', 'ivysaur', 'venusaur']
```
```python
# sync context
with asyncio.Runner() as runner:
http_client = httpx.AsyncClient()
pokemons: stream[str] = (
stream(range(1, 4))
.map(lambda i: f"https://pokeapi.co/api/v2/pokemon-species/{i}")
.map(http_client.get, concurrency=2)
.map(lambda poke: poke.json()["name"])
)
# uses runner's loop
assert list(pokemons) == ['bulbasaur', 'ivysaur', 'venusaur']
runner.run(http_client.aclose())
```
#### via processes
`concurrency` can also be a `concurrent.futures.Executor`; pass a `ProcessPoolExecutor` to apply the transformation via processes:
```python
if __name__ == "__main__":
with ProcessPoolExecutor(max_workers=10) as processes:
state: list[int] = []
# ints are mapped
assert list(
stream(range(10))
.map(state.append, concurrency=processes)
) == [None] * 10
# the `state` of the main process is not mutated
assert state == []
```
## ▼ `.do`
Perform side effects:
```python
state: list[int] = []
store_ints: stream[int] = stream(range(10)).do(state.append)
assert list(store_ints) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
assert state == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```
### `concurrency`
Same as `.map`.
## ▼ `.group`
Group elements into batches...
... `up_to` a given batch size:
```python
int_batches: stream[list[int]] = stream(range(10)).group(5)
assert list(int_batches) == [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
```
... `within` a given time interval:
```python
from datetime import timedelta
int_1sec_batches: stream[list[int]] = (
stream(range(10))
.throttle(2, per=timedelta(seconds=1))
.group(within=timedelta(seconds=0.99))
)
assert list(int_1sec_batches) == [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
```
... `by` a given key, yielding `(key, elements)` pairs:
```python
ints_by_parity: stream[tuple[str, list[int]]] = (
stream(range(10))
.group(by=lambda n: "odd" if n % 2 else "even")
)
assert list(ints_by_parity) == [("even", [0, 2, 4, 6, 8]), ("odd", [1, 3, 5, 7, 9])]
```
You can combine these parameters.
## ▼ `.flatten`
Explode upstream elements (`Iterable` or `AsyncIterable`):
```python
chars: stream[str] = stream(["hel", "lo!"]).flatten()
assert list(chars) == ["h", "e", "l", "l", "o", "!"]
```
### `concurrency`
Flattens `concurrency` iterables concurrently (via threads for `Iterable` elements and via coroutines for `AsyncIterable` elements):
```python
chars: stream[str] = stream(["hel", "lo", "!"]).flatten(concurrency=2)
assert list(chars) == ["h", "l", "e", "o", "l", "!"]
```
## ▼ `.filter`
Filter elements satisfying a predicate:
```python
even_ints: stream[int] = stream(range(10)).filter(lambda n: n % 2 == 0)
assert list(even_ints) == [0, 2, 4, 6, 8]
```
## ▼ `.take`
Take a given number of elements:
```python
first_5_ints: stream[int] = stream(range(10)).take(5)
assert list(first_5_ints) == [0, 1, 2, 3, 4]
```
... or take `until` a predicate is satisfied:
```python
first_5_ints: stream[int] = stream(range(10)).take(until=lambda n: n == 5)
assert list(first_5_ints) == [0, 1, 2, 3, 4]
```
## ▼ `.skip`
Skip a given number of elements:
```python
ints_after_5: stream[int] = stream(range(10)).skip(5)
assert list(ints_after_5) == [5, 6, 7, 8, 9]
```
... or skip `until` a predicate is satisfied:
```python
ints_after_5: stream[int] = stream(range(10)).skip(until=lambda n: n >= 5)
assert list(ints_after_5) == [5, 6, 7, 8, 9]
```
## ▼ `.catch`
Catch exceptions of a given type:
```python
inverses: stream[float] = (
stream(range(10))
.map(lambda n: round(1 / n, 2))
.catch(ZeroDivisionError)
)
assert list(inverses) == [1.0, 0.5, 0.33, 0.25, 0.2, 0.17, 0.14, 0.12, 0.11]
```
... `where` a predicate is satisfied:
```python
domains = ["github.com", "foo.bar", "google.com"]
resolvable_domains: stream[str] = (
stream(domains)
.do(lambda domain: httpx.get(f"https://{domain}"), concurrency=2)
.catch(httpx.HTTPError, where=lambda e: "not known" in str(e))
)
assert list(resolvable_domains) == ["github.com", "google.com"]
```
... `do` a side effect on catch:
```python
errors: list[Exception] = []
inverses: stream[float] = (
stream(range(10))
.map(lambda n: round(1 / n, 2))
.catch(ZeroDivisionError, do=errors.append)
)
assert list(inverses) == [1.0, 0.5, 0.33, 0.25, 0.2, 0.17, 0.14, 0.12, 0.11]
assert len(errors) == 1
```
... `replace` with a value:
```python
inverses: stream[float] = (
stream(range(10))
.map(lambda n: round(1 / n, 2))
.catch(ZeroDivisionError, replace=lambda e: float("inf"))
)
assert list(inverses) == [float("inf"), 1.0, 0.5, 0.33, 0.25, 0.2, 0.17, 0.14, 0.12, 0.11]
```
... `stop=True` to stop the iteration if an exception is caught:
```python
inverses: stream[float] = (
stream(range(10))
.map(lambda n: round(1 / n, 2))
.catch(ZeroDivisionError, stop=True)
)
assert list(inverses) == []
```
You can combine these parameters.
## ▼ `.throttle`
Limit the number of emissions `per` time interval:
```python
from datetime import timedelta
three_ints_per_second: stream[int] = stream(range(10)).throttle(3, per=timedelta(seconds=1))
# collects 10 ints in 3 seconds
assert list(three_ints_per_second) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```
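The pacing can be sketched with a plain generator (a hypothetical `throttled` helper for illustration; it paces emissions evenly, whereas the library's `.throttle` semantics may differ, e.g. allowing bursts within a window):

```python
import time
from datetime import timedelta

def throttled(iterable, count, per):
    # Yield at most `count` elements per `per` window by sleeping
    # whenever we are ahead of the even-pacing schedule.
    interval = per.total_seconds() / count
    start = time.perf_counter()
    for i, element in enumerate(iterable):
        due = start + i * interval
        delay = due - time.perf_counter()
        if delay > 0:
            time.sleep(delay)
        yield element

assert list(throttled(range(5), 100, timedelta(seconds=1))) == [0, 1, 2, 3, 4]
```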
## ▼ `.buffer`
Buffer upstream elements into a bounded queue via a background task (decoupling upstream production rate from downstream consumption rate):
```python
pulled: list[int] = []
buffered_ints = iter(
stream(range(10))
.do(pulled.append)
.buffer(5)
)
assert next(buffered_ints) == 0
time.sleep(1e-3)
assert pulled == [0, 1, 2, 3, 4, 5]
```
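The producer/consumer decoupling can be sketched with a bounded `queue.Queue` fed by a background thread (a simplified illustration with a hypothetical `buffered` helper; unlike the library, it does not propagate producer exceptions or handle early consumer exit):

```python
import queue
import threading

def buffered(iterable, size):
    # A producer thread fills a bounded queue; the consumer drains it,
    # so upstream can run up to `size` elements ahead of downstream.
    q = queue.Queue(maxsize=size)
    done = object()  # sentinel marking upstream exhaustion

    def produce():
        for element in iterable:
            q.put(element)  # blocks when the buffer is full
        q.put(done)

    threading.Thread(target=produce, daemon=True).start()
    while (element := q.get()) is not done:
        yield element

assert list(buffered(range(10), 5)) == list(range(10))
```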
## ▼ `.observe`
Observe the iteration progress:
```python
observed_ints: stream[int] = stream(range(10)).observe("ints")
assert list(observed_ints) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```
logs:
```
2025-12-23T16:43:07Z INFO observed=ints elapsed=0:00:00.000019 errors=0 elements=1
2025-12-23T16:43:07Z INFO observed=ints elapsed=0:00:00.001117 errors=0 elements=2
2025-12-23T16:43:07Z INFO observed=ints elapsed=0:00:00.001147 errors=0 elements=4
2025-12-23T16:43:07Z INFO observed=ints elapsed=0:00:00.001162 errors=0 elements=8
2025-12-23T16:43:07Z INFO observed=ints elapsed=0:00:00.001179 errors=0 elements=10
```
By default, a log line is produced whenever the element/error count reaches a power of 2, plus a final one at exhaustion (hence 1, 2, 4, 8, 10 above). Set `every` to produce them periodically instead:
```python
# observe every 1k elements (or errors)
observed_ints = stream(range(10)).observe("ints", every=1000)
# observe every 5 seconds
observed_ints = stream(range(10)).observe("ints", every=timedelta(seconds=5))
```
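The default cadence shown in the sample log can be reproduced with a small helper (a hypothetical `observe_points`, for illustration only):

```python
def observe_points(total):
    # Counts at which a power-of-2 cadence would log: 1, 2, 4, ...
    # plus a final observation at exhaustion.
    points = []
    count = 0
    for _ in range(total):
        count += 1
        if count & (count - 1) == 0:  # count is a power of two
            points.append(count)
    if points and points[-1] != count:
        points.append(count)  # final observation at exhaustion
    return points

# matches the elements=1, 2, 4, 8, 10 lines in the sample log above
assert observe_points(10) == [1, 2, 4, 8, 10]
```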
Observations are logged via `logging.getLogger("streamable").info`. Set `do` to do something else with the `streamable.Observation`:
```python
observed_ints = stream(range(10)).observe("ints", do=custom_logger.info)
observed_ints = stream(range(10)).observe("ints", do=observations.append)
observed_ints = stream(range(10)).observe("ints", do=print)
```
## ▼ `+`
Concatenate a stream with an iterable:
```python
concatenated_ints = stream(range(10)) + range(10)
assert list(concatenated_ints) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```
## ▼ `.cast`
Provide a type hint for elements:
```python
docs: stream[Any] = stream(['{"foo": "bar"}', '{"foo": "baz"}']).map(json.loads)
dicts: stream[dict[str, str]] = docs.cast(dict[str, str])
# the stream remains the same, it's for type checkers only
assert dicts is docs
```
## ▼ `.__call__`
Iterate as an `Iterable` until exhaustion, without collecting its elements:
```python
state: list[int] = []
pipeline: stream[int] = stream(range(10)).do(state.append)
pipeline()
assert state == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```
## ▼ `await`
Iterate as an `AsyncIterable` until exhaustion, without collecting its elements:
```python
state: list[int] = []
pipeline: stream[int] = stream(range(10)).do(state.append)
await pipeline
assert state == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```
## ▼ `.pipe`
Apply a callable, passing the stream as first argument, followed by the provided `*args` and `**kwargs`:
```python
import polars as pl
pokemons: stream[str] = ...
pokemons.pipe(pl.DataFrame, schema=["name"]).write_csv("pokemons.csv")
```
# ••• other notes
## function as source
A `stream` can also be instantiated from a function (sync or async) that is called repeatedly to produce the next source element during iteration.
e.g. stream from a `Queue`:
```python
queued_ints: queue.Queue[int] = ...
# or asyncio.Queue[int]
ints: stream[int] = stream(queued_ints.get)
```
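This mirrors the stdlib's two-argument `iter(callable, sentinel)` form, which also turns a zero-argument callable into an iterator (stopping at a sentinel value rather than running forever):

```python
import queue

q = queue.Queue()
for i in range(3):
    q.put(i)
q.put(None)  # sentinel marking the end

# builtins.iter calls `q.get_nowait` repeatedly until it returns None:
assert list(iter(q.get_nowait, None)) == [0, 1, 2]
```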
## starmap
The `star` function decorator transforms a function (sync or async) that takes several positional arguments into a function that takes a tuple.
```python
from streamable import star
pokemons: stream[str] = ...
enumerated_pokes: stream[str] = (
stream(enumerate(pokemons))
.map(star(lambda index, poke: f"#{index + 1} {poke}"))
)
assert list(enumerated_pokes) == ['#1 bulbasaur', '#2 ivysaur', '#3 venusaur', '#4 charmander', '#5 charmeleon', '#6 charizard', '#7 squirtle', '#8 wartortle', '#9 blastoise']
```
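In spirit, `star` behaves like the following sketch (a hypothetical `star_sketch`, not the library's implementation, which also handles async functions):

```python
from functools import wraps

def star_sketch(func):
    # Turn f(a, b, ...) into g(t) == f(*t), so a tuple-yielding
    # source like enumerate() can feed a multi-argument function.
    @wraps(func)
    def starred(args):
        return func(*args)
    return starred

label = star_sketch(lambda index, name: f"#{index + 1} {name}")
assert list(map(label, enumerate(["bulbasaur", "ivysaur"]))) == ["#1 bulbasaur", "#2 ivysaur"]
```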
## distinct
To collect distinct elements you can simply `set(a_stream)`.
To deduplicate in the middle of a stream, `.filter` out previously seen values and `.do` add new ones to a `set` (or a fancier cache):
```python
seen: set[str] = set()
unique_ints: stream[int] = (
stream("001000111")
.filter(lambda _: _ not in seen)
.do(seen.add)
.map(int)
)
assert list(unique_ints) == [0, 1]
```
## vs `builtins.map/filter`
There is zero overhead during iteration compared to `builtins.map` and `builtins.filter`:
```python
odd_int_chars = stream(range(N)).filter(lambda n: n % 2).map(str)
```
`iter(odd_int_chars)` visits the operations lineage and returns exactly this iterator:
```python
map(str, filter(lambda n: n % 2, range(N)))
```
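As a builtins-only sanity check (no `streamable` needed), that composition behaves as expected:

```python
N = 10
# the iterator that `iter(odd_int_chars)` boils down to:
odd_int_chars = map(str, filter(lambda n: n % 2, range(N)))
assert list(odd_int_chars) == ['1', '3', '5', '7', '9']
```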
## e.g. ETL via [`dlt`](https://github.com/dlt-hub/dlt)
A `stream` is an expressive way to declare a `dlt.resource`:
```python
# from datetime import timedelta
# from http import HTTPStatus
# from itertools import count
# import dlt
# import httpx
# from httpx import Response, HTTPStatusError
# from dlt.destinations import filesystem
# from streamable import stream
def not_found(e: HTTPStatusError) -> bool:
return e.response.status_code == HTTPStatus.NOT_FOUND
@dlt.resource
def pokemons(http_client: httpx.Client, concurrency: int, per_second: int) -> stream[dict]:
"""Ingest Pokémons from the PokéAPI, stop on first 404."""
return (
stream(count(1))
.map(lambda i: f"https://pokeapi.co/api/v2/pokemon-species/{i}")
.throttle(per_second, per=timedelta(seconds=1))
.map(http_client.get, concurrency=concurrency, as_completed=True)
.do(Response.raise_for_status)
.catch(HTTPStatusError, where=not_found, stop=True)
.map(Response.json)
.observe("pokemons")
)
# Write to a partitioned Delta Lake table, chunk by chunk on-the-fly.
with httpx.Client() as http_client:
dlt.pipeline(
pipeline_name="ingest_pokeapi",
destination=filesystem("deltalake"),
dataset_name="pokeapi",
).run(
pokemons(http_client, concurrency=8, per_second=32),
table_format="delta",
columns={"color__name": {"partition": True}},
)
```
# ⭐ links
- [Top 10 Python libraries of 2024, from Tryolabs](https://tryolabs.com/blog/top-python-libraries-2024#top-10---general-use) ([LinkedIn](https://www.linkedin.com/posts/tryolabs_top-python-libraries-2024-activity-7273052840984539137-bcGs?utm_source=share&utm_medium=member_desktop), [Reddit](https://www.reddit.com/r/Python/comments/1hbs4t8/the_handpicked_selection_of_the_best_python/))
- [PyCoder’s weekly](https://pycoders.com/issues/651) x [Real Python](https://realpython.com/)
- [@PythonHub's tweet](https://x.com/PythonHub/status/1842886311369142713)
- [Reddit v1.0.0 showcase](https://www.reddit.com/r/Python/comments/1fp38jd/streamable_streamlike_manipulation_of_iterables/)
| text/markdown | null | ebonnal <bonnal.enzo.dev@gmail.com> | null | null | Apache 2. | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"httpx==0.28.1; python_version >= \"3.9\" and extra == \"dev\"",
"mypy==1.18.2; python_version >= \"3.9\" and extra == \"dev\"",
"mypy-extensions==1.0.0; python_version >= \"3.9\" and extra == \"dev\"",
"polars==1.36.1; python_version >= \"3.9\" and extra == \"dev\"",
"pytest==7.4.4; python_version >= \"3.9... | [] | [] | [] | [
"Homepage, https://github.com/ebonnal/streamable"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T01:06:49.398188 | streamable-2.0.0rc11.tar.gz | 56,109 | aa/bd/73edc5a98db1b65148d74c1802eb6ca69545a775d6bbeeb3a5d51d656fbb/streamable-2.0.0rc11.tar.gz | source | sdist | null | false | 112af732f6757a84f24454dad94bf6f8 | e4de9a7a6e95ac382473f04f731d1d1d1724cea0e86d3d18caf7995dd17711bd | aabd73edc5a98db1b65148d74c1802eb6ca69545a775d6bbeeb3a5d51d656fbb | null | [
"LICENSE"
] | 206 |
2.3 | dycw-utilities | 0.192.0 | Miscellaneous Python utilities | # `python-utilities`
Miscellaneous Python utilities
| text/markdown | Derek Wan | Derek Wan <d.wan@icloud.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"coloredlogs>=15.0.1",
"more-itertools>=10.8.0",
"rich>=14.3.3",
"typing-extensions>=4.15.0",
"tzdata>=2025.3",
"tzlocal>=5.3.1",
"whenever>=0.9.5",
"coloredlogs>=15.0.1; extra == \"logging\"",
"coverage-conditional-plugin>=0.9.0; extra == \"test\"",
"dycw-pytest-only>=2.1.1; extra == \"test\"",
... | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T01:06:27.906202 | dycw_utilities-0.192.0-py3-none-any.whl | 215,229 | 8d/31/a81bd676f00fd0d3e16b37c8fe6d15bc401ab47234a5cc6ad491fa5cd93f/dycw_utilities-0.192.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 5d76d4e5b0163e70b8a9e0570cbe8444 | 4675a3043325736428261882b47f852dfd9573002a5ea44a871a8d42e5e0fa2d | 8d31a81bd676f00fd0d3e16b37c8fe6d15bc401ab47234a5cc6ad491fa5cd93f | null | [] | 1,177 |
2.4 | cccc-pair | 0.4.1 | Global multi-agent delivery kernel with working groups, scopes, and an append-only collaboration ledger | <div align="center">
<img src="screenshots/logo.png" width="160" />
# CCCC
### Local-first Multi-agent Collaboration Kernel
**A lightweight multi-agent framework with infrastructure-grade reliability.**
Chat-native, prompt-driven, and bi-directional by design.
Run multiple coding agents as a **durable, coordinated system** — not a pile of disconnected terminal sessions.
Three commands to go. Zero infrastructure, production-grade power.
[](https://pypi.org/project/cccc-pair/)
[](https://pypi.org/project/cccc-pair/)
[](LICENSE)
[](https://chesterra.github.io/cccc/)
**English** | [中文](README.zh-CN.md) | [日本語](README.ja.md)
</div>
---
## Build With the Official SDK
For app integration, bots, IDE extensions, and background services, use the official SDK repo:
- [cccc-sdk](https://github.com/ChesterRa/cccc-sdk)
- Python package: `cccc-sdk` (import as `cccc_sdk`)
- TypeScript package: `cccc-sdk`
SDK clients connect to the same CCCC daemon and share the same `CCCC_HOME` runtime state.
## Why v0.4.0 Is a Generational Upgrade
- **Chat-native orchestration**: assign work in Web chat as naturally as talking to teammates, with full delivery/read/ack/reply visibility.
- **Workflow-by-design**: configure multi-agent behavior with guidance prompts and automation rules, instead of brittle ad-hoc scripts.
- **Bi-directional control**: CCCC orchestrates agents, while agents can also schedule and customize CCCC workflows through MCP tools.
- **Beyond the browser**: the same operating model extends to Telegram/Slack/Discord/Feishu/DingTalk via IM bridges.
## The Problem
Using multiple coding agents today usually means:
- **Lost context** — coordination lives in terminal scrollback and disappears on restart
- **No delivery guarantees** — did the agent actually *read* your message?
- **Fragmented ops** — start/stop/recover/escalate across separate tools
- **No remote access** — checking on a long-running group from your phone is not an option
These aren't minor inconveniences. They're the reason most multi-agent setups stay fragile demos instead of reliable workflows.
## What CCCC Does
CCCC is a single `pip install` with zero external dependencies — no database, no message broker, no Docker required. Yet it delivers the operational reliability you'd expect from a production messaging system:
| Capability | How |
|---|---|
| **Single source of truth** | Append-only ledger (`ledger.jsonl`) records every message and event — replayable, auditable, never lost |
| **Reliable messaging** | Read cursors, attention ACK, reply-required obligations — you know exactly who read what |
| **Unified control plane** | Web UI, CLI, MCP tools, and IM bridges all talk to one daemon — no state fragmentation |
| **Multi-runtime orchestration** | Claude Code, Codex CLI, Gemini CLI, Copilot, and 8 more runtimes in one group |
| **Role-based coordination** | Foreman + peer model with permission boundaries and recipient routing (`@all`, `@peers`, `@foreman`) |
| **Remote operations** | Bridge to Telegram, Slack, Discord, Feishu, or DingTalk — manage groups from your phone |
## How CCCC looks
<div align="center">
<video src="https://github.com/user-attachments/assets/8f9c3986-f1ba-4e59-a114-bcb383ff49a7" controls="controls" muted="muted" autoplay="autoplay" loop="loop" style="max-width: 100%;">
</video>
</div>
## Quick Start
### Install
```bash
# Stable channel (PyPI)
pip install -U cccc-pair
# RC channel (TestPyPI)
pip install -U --pre \
--index-url https://test.pypi.org/simple/ \
--extra-index-url https://pypi.org/simple/ \
cccc-pair
```
> **Requirements**: Python 3.9+, macOS / Linux / Windows
### Launch
```bash
cccc
```
Open **http://127.0.0.1:8848** — the Web UI is ready.
### Create a multi-agent group
```bash
cd /path/to/your/repo
cccc attach . # bind this directory as a scope
cccc setup --runtime claude # configure MCP for your runtime
cccc actor add foreman --runtime claude # first actor becomes foreman
cccc actor add reviewer --runtime codex # add a peer
cccc group start # start all actors
cccc send "Split the task and begin." --to @all
```
You now have two agents collaborating in a persistent group with full message history, delivery tracking, and a web dashboard.
## Programmatic Access (SDK)
Use the official SDK when you need to integrate CCCC into external applications or services:
```bash
pip install -U cccc-sdk
npm install cccc-sdk
```
The SDK does not include a daemon. It connects to a running `cccc` core instance.
## Architecture
```mermaid
graph TB
subgraph Agents["Agent Runtimes"]
direction LR
A1["Claude Code"]
A2["Codex CLI"]
A3["Gemini CLI"]
A4["+ 9 more"]
end
subgraph Daemon["CCCC Daemon · single writer"]
direction LR
Ledger[("Ledger<br/>append-only JSONL")]
ActorMgr["Actor<br/>Manager"]
Auto["Automation<br/>Rules · Nudge · Cron"]
Ledger ~~~ ActorMgr ~~~ Auto
end
subgraph Ports["Control Plane"]
direction LR
Web["Web UI<br/>:8848"]
CLI["CLI"]
MCP["MCP<br/>(stdio)"]
end
subgraph IM["IM Bridges"]
direction LR
TG["Telegram"]
SL["Slack"]
DC["Discord"]
FS["Feishu"]
DT["DingTalk"]
end
Agents <-->|MCP tools| Daemon
Daemon <--> Ports
Web <--> IM
```
**Key design decisions:**
- **Daemon is the single writer** — all state changes go through one process, eliminating race conditions
- **Ledger is append-only** — events are never mutated, making history reliable and debuggable
- **Ports are thin** — Web, CLI, MCP, and IM bridges are stateless frontends; the daemon owns all truth
- **Runtime home is `CCCC_HOME`** (default `~/.cccc/`) — runtime state stays out of your repo
## Supported Runtimes
CCCC orchestrates agents across 12 runtimes. Each actor in a group can use a different runtime.
| Runtime | Auto MCP Setup | Command |
|---------|:--------------:|---------|
| Claude Code | ✅ | `claude` |
| Codex CLI | ✅ | `codex` |
| Gemini CLI | ✅ | `gemini` |
| Droid | ✅ | `droid` |
| Amp | ✅ | `amp` |
| Auggie | ✅ | `auggie` |
| Neovate | ✅ | `neovate` |
| Copilot | — | `copilot` |
| Cursor | — | `cursor-agent` |
| Kilo Code | — | `kilocode` |
| OpenCode | — | `opencode` |
| Custom | — | Any command |
```bash
cccc setup --runtime claude # auto-configures MCP for this runtime
cccc runtime list --all # show all available runtimes
cccc doctor # verify environment and runtime availability
```
## Messaging & Coordination
CCCC implements IM-grade messaging semantics, not just "paste text into a terminal":
- **Recipient routing** — `@all`, `@peers`, `@foreman`, or specific actor IDs
- **Read cursors** — each agent explicitly marks messages as read via MCP
- **Reply & quote** — structured `reply_to` with quoted context
- **Attention ACK** — priority messages require explicit acknowledgment
- **Reply-required obligations** — tracked until the recipient responds
- **Auto-wake** — disabled agents are automatically started when they receive a message
Messages are delivered to PTY actors via terminal injection and to headless actors via system notifications. The daemon tracks delivery state for every message.
## Automation & Policies
A built-in rules engine handles operational concerns so you don't have to babysit:
| Policy | What it does |
|--------|-------------|
| **Nudge** | Reminds agents about unread messages after a configurable timeout |
| **Reply-required follow-up** | Escalates when required replies are overdue |
| **Actor idle detection** | Notifies foreman when an agent goes silent |
| **Keepalive** | Periodic check-in reminders for the foreman |
| **Silence detection** | Alerts when an entire group goes quiet |
Beyond built-in policies, you can create custom automation rules:
- **Interval triggers** — "every N minutes, send a standup reminder"
- **Cron schedules** — "every weekday at 9am, post a status check"
- **One-time triggers** — "at 5pm today, pause the group"
- **Operational actions** — set group state or control actor lifecycles (admin-only, one-time only)
## Web UI
The built-in Web UI at `http://127.0.0.1:8848` provides:
- **Chat view** with `@mention` autocomplete and reply threading
- **Per-actor embedded terminals** (xterm.js) — see exactly what each agent is doing
- **Group & actor management** — create, configure, start, stop, restart
- **Automation rule editor** — configure triggers, schedules, and actions visually
- **Context panel** — shared vision, sketch, milestones, and tasks
- **IM bridge configuration** — connect to Telegram/Slack/Discord/Feishu/DingTalk
- **Settings** — messaging policies, delivery tuning, terminal transcript controls
- **Light / Dark / System themes**
| Chat | Terminal |
|:----:|:-------:|
|  |  |
### Remote access
For accessing the Web UI from outside localhost:
- **Cloudflare Tunnel** (recommended) — `cloudflared tunnel --url http://127.0.0.1:8848`
- **Tailscale** — bind to your tailnet IP: `CCCC_WEB_HOST=$TAILSCALE_IP cccc`
- Always set `CCCC_WEB_TOKEN` for any non-local access
## IM Bridges
Bridge your working group to your team's IM platform:
```bash
cccc im set telegram --token-env TELEGRAM_BOT_TOKEN
cccc im start
```
| Platform | Status |
|----------|--------|
| Telegram | ✅ Supported |
| Slack | ✅ Supported |
| Discord | ✅ Supported |
| Feishu / Lark | ✅ Supported |
| DingTalk | ✅ Supported |
From any supported platform, use `/send @all <message>` to talk to your agents, `/status` to check group health, and `/pause` / `/resume` to control operations — all from your phone.
## CLI Reference
```bash
# Lifecycle
cccc # start daemon + web UI
cccc daemon start|status|stop # daemon management
# Groups
cccc attach . # bind current directory
cccc groups # list all groups
cccc use <group_id> # switch active group
cccc group start|stop # start/stop all actors
# Actors
cccc actor add <id> --runtime <runtime>
cccc actor start|stop|restart <id>
# Messaging
cccc send "message" --to @all
cccc reply <event_id> "response"
cccc tail -n 50 -f # follow the ledger
# Inbox
cccc inbox # show unread messages
cccc inbox --mark-read # mark all as read
# Operations
cccc doctor # environment check
cccc setup --runtime <name> # configure MCP
cccc runtime list --all # available runtimes
# IM
cccc im set <platform> --token-env <ENV_VAR>
cccc im start|stop|status
```
## MCP Tools
Agents interact with CCCC through **49 MCP tools** across 7 namespaces:
| Namespace | Tools | Examples |
|-----------|:-----:|---------|
| **Session** | 2 | `cccc_bootstrap` (one-call init), `cccc_help` (operational playbook) |
| **Messaging** | 7 | `cccc_message_send`, `cccc_message_reply`, `cccc_file_send`, `cccc_inbox_list`, `cccc_inbox_mark_read` ... |
| **Group & Actor** | 10 | `cccc_group_info`, `cccc_group_list`, `cccc_actor_add/remove/start/stop/restart`, `cccc_runtime_list`, `cccc_group_set_state` |
| **Automation** | 2 | `cccc_automation_state`, `cccc_automation_manage` (create/update/enable/disable/delete rules) |
| **Context** | 19 | `cccc_context_get/sync`, `cccc_vision_update`, `cccc_sketch_update`, `cccc_milestone_*`, `cccc_task_*`, `cccc_note_*`, `cccc_reference_*`, `cccc_presence_*` |
| **Headless** | 3 | `cccc_headless_status`, `cccc_headless_set_status`, `cccc_headless_ack_message` |
| **System** | 6 | `cccc_notify_send/ack`, `cccc_terminal_tail`, `cccc_project_info`, `cccc_debug_snapshot`, `cccc_debug_tail_logs` |
Agents with MCP access can self-organize: read their inbox, reply, manage tasks and milestones, set automation rules, and coordinate with peers — all within permission boundaries.
## Where CCCC Fits
| Scenario | Fit |
|----------|-----|
| Multiple coding agents collaborating on one codebase | ✅ Core use case |
| Human + agent coordination with full audit trail | ✅ Core use case |
| Long-running groups managed remotely via phone/IM | ✅ Strong fit |
| Multi-runtime teams (e.g., Claude + Codex + Gemini) | ✅ Strong fit |
| Single-agent local coding helper | ⚠️ Works, but CCCC's value shines with multiple participants |
| Pure DAG workflow orchestration | ❌ Use a dedicated orchestrator; CCCC can complement it |
CCCC is a **collaboration kernel** — it owns the coordination layer and stays composable with external CI/CD, orchestrators, and deployment tools.
## Security
- **Web UI is high-privilege.** Always set `CCCC_WEB_TOKEN` for non-local access.
- **Daemon IPC has no authentication.** It binds to localhost by default.
- **IM bot tokens** are read from environment variables, never stored in config files.
- **Runtime state** lives in `CCCC_HOME` (`~/.cccc/`), not in your repository.
For detailed security guidance, see [SECURITY.md](SECURITY.md).
## Documentation
📚 **[Full documentation](https://chesterra.github.io/cccc/)**
| Section | Description |
|---------|-------------|
| [Getting Started](https://chesterra.github.io/cccc/guide/getting-started/) | Install, launch, create your first group |
| [Use Cases](https://chesterra.github.io/cccc/guide/use-cases) | Practical multi-agent scenarios |
| [Web UI Guide](https://chesterra.github.io/cccc/guide/web-ui) | Navigating the dashboard |
| [IM Bridge Setup](https://chesterra.github.io/cccc/guide/im-bridge/) | Connect Telegram, Slack, Discord, Feishu, DingTalk |
| [Operations Runbook](https://chesterra.github.io/cccc/guide/operations) | Recovery, troubleshooting, maintenance |
| [CLI Reference](https://chesterra.github.io/cccc/reference/cli) | Complete command reference |
| [SDK (Python/TypeScript)](https://github.com/ChesterRa/cccc-sdk) | Integrate apps/services with official daemon clients |
| [Architecture](https://chesterra.github.io/cccc/reference/architecture) | Design decisions and system model |
| [Features Deep Dive](https://chesterra.github.io/cccc/reference/features) | Messaging, automation, runtimes in detail |
| [CCCS Standard](docs/standards/CCCS_V1.md) | Collaboration protocol specification |
| [Daemon IPC Standard](docs/standards/CCCC_DAEMON_IPC_V1.md) | IPC protocol specification |
## Installation Options
### pip (stable, recommended)
```bash
pip install -U cccc-pair
```
### pip (RC from TestPyPI)
```bash
pip install -U --pre \
--index-url https://test.pypi.org/simple/ \
--extra-index-url https://pypi.org/simple/ \
cccc-pair
```
### From source
```bash
git clone https://github.com/ChesterRa/cccc
cd cccc
pip install -e .
```
### uv (fast, recommended on Windows)
```bash
uv venv -p 3.11 .venv
uv pip install -e .
uv run cccc --help
```
### Docker
```bash
cd docker
CCCC_WEB_TOKEN=your-secret docker compose up -d
```
The Docker image bundles Claude Code, Codex CLI, Gemini CLI, and Factory CLI. See [`docker/`](docker/) for full configuration.
### Upgrading from 0.3.x
The 0.4.x line is a ground-up rewrite. Clean uninstall first:
```bash
pipx uninstall cccc-pair || true
pip uninstall cccc-pair || true
rm -f ~/.local/bin/cccc ~/.local/bin/ccccd
```
Then install fresh and run `cccc doctor` to verify your environment.
> The tmux-first 0.3.x line is archived at [cccc-tmux](https://github.com/ChesterRa/cccc-tmux).
## Community
📱 Join our Telegram group: [t.me/ccccpair](https://t.me/ccccpair)
Share workflows, troubleshoot issues, and connect with other CCCC users.
## Contributing
Contributions are welcome. Please:
1. Check existing [Issues](https://github.com/ChesterRa/cccc/issues) before opening a new one
2. For bugs: include `cccc version`, OS, exact commands, and reproduction steps
3. For features: describe the problem, proposed behavior, and operational impact
4. Keep runtime state in `CCCC_HOME` — never commit it to the repo
## License
[Apache-2.0](LICENSE)
| text/markdown | null | ChesterRa <ra@ike.ba> | null | null | null | orchestrator, ai, rfd, pair, collaboration | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Lang... | [] | null | null | >=3.9 | [] | [] | [] | [
"PyYAML<7.0,>=6.0",
"dingtalk-stream>=0.24.3",
"fastapi<1.0,>=0.110",
"lark-oapi>=1.0.0",
"pydantic<3.0,>=2.0",
"python-multipart<1.0,>=0.0.7",
"uvicorn[standard]<1.0,>=0.27"
] | [] | [] | [] | [
"Homepage, https://github.com/ChesterRa/cccc",
"Repository, https://github.com/ChesterRa/cccc",
"Issues, https://github.com/ChesterRa/cccc/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T01:05:14.269740 | cccc_pair-0.4.1.tar.gz | 2,245,367 | 88/7b/15fa021318b062e0500eb0001f03ed30768e902f748367fed760822017f5/cccc_pair-0.4.1.tar.gz | source | sdist | null | false | 9391f7bd5170bdbcfcd0a61ba31ac44d | ce7bf2677806801a08d34318cd7906a512ddea314a10ad9bebda4e2c5517810a | 887b15fa021318b062e0500eb0001f03ed30768e902f748367fed760822017f5 | Apache-2.0 | [
"LICENSE"
] | 250 |
2.4 | keyalias | 1.0.6 | This project allows for adding a property that is an alias for an indexer to a class. | ========
keyalias
========
Visit the website `https://keyalias.johannes-programming.online/ <https://keyalias.johannes-programming.online/>`_ for more information.
| text/x-rst | null | Johannes <johannes.programming@gmail.com> | null | null | The MIT License (MIT)
Copyright (c) 2024 Johannes
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3... | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [
"Download, https://pypi.org/project/keyalias/#files",
"Index, https://pypi.org/project/keyalias/",
"Source, https://github.com/johannes-programming/keyalias/",
"Website, https://keyalias.johannes-programming.online/"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T01:03:11.567379 | keyalias-1.0.6.tar.gz | 5,402 | c3/5a/75db9cae0b4a22b169a6c770ab15420883a0e537ee2d11c8dfc6d1b83a97/keyalias-1.0.6.tar.gz | source | sdist | null | false | 2629aa21720ac39b62fe19c4f893c66c | 60d747d362afdf4681e275f57eb4f0f7ad78cb71f0f36b817e39b59610f8ce22 | c35a75db9cae0b4a22b169a6c770ab15420883a0e537ee2d11c8dfc6d1b83a97 | null | [
"LICENSE.txt"
] | 236 |
2.4 | compressed-tensors | 0.13.1a20260219 | Library for utilization of compressed safetensors of neural network models | # compressed-tensors
The `compressed-tensors` library extends the [safetensors](https://github.com/huggingface/safetensors) format, providing a versatile and efficient way to store and manage compressed tensor data. This library supports various quantization and sparsity schemes, making it a unified format for handling different model optimizations like GPTQ, AWQ, SmoothQuant, INT8, FP8, SparseGPT, and more.
## Why `compressed-tensors`?
As model compression becomes increasingly important for efficient deployment of LLMs, the landscape of quantization and compression techniques has grown fragmented.
Each method often comes with its own storage format and loading procedures, making it challenging to work with multiple techniques or switch between them.
`compressed-tensors` addresses this by providing a single, extensible format that can represent a wide variety of compression schemes.
* **Unified Checkpoint Format**: Supports various compression schemes in a single, consistent format.
* **Wide Compatibility**: Works with popular quantization methods like GPTQ, SmoothQuant, and FP8. See [llm-compressor](https://github.com/vllm-project/llm-compressor)
* **Flexible Quantization Support**:
* Weight-only quantization (e.g., W4A16, W8A16, WnA16)
* Activation quantization (e.g., W8A8)
* KV cache quantization
* Non-uniform schemes (different layers can be quantized in different ways!)
* **Sparsity Support**: Handles both unstructured and semi-structured (e.g., 2:4) sparsity patterns.
* **Open-Source Integration**: Designed to work seamlessly with Hugging Face models and PyTorch.
This allows developers and researchers to easily experiment with composing different quantization methods, simplify model deployment pipelines, and reduce the overhead of supporting multiple compression formats in inference engines.
## Installation
### From [PyPI](https://pypi.org/project/compressed-tensors)
Stable release:
```bash
pip install compressed-tensors
```
Nightly release:
```bash
pip install --pre compressed-tensors
```
### From Source
```bash
git clone https://github.com/vllm-project/compressed-tensors
cd compressed-tensors
pip install -e .
```
## Getting started
### Saving/Loading Compressed Tensors (Bitmask Compression)
The function `save_compressed` uses the `compression_format` argument to apply compression to tensors.
The function `load_compressed` reverses the process: converts the compressed weights on disk to decompressed weights in device memory.
```python
from compressed_tensors import save_compressed, load_compressed, BitmaskConfig
from torch import Tensor
from typing import Dict
# the example BitmaskConfig method efficiently compresses
# tensors with large number of zero entries
compression_config = BitmaskConfig()
tensors: Dict[str, Tensor] = {"tensor_1": Tensor(
[[0.0, 0.0, 0.0],
[1.0, 1.0, 1.0]]
)}
# compress tensors using BitmaskConfig compression format (save them efficiently on disk)
save_compressed(tensors, "model.safetensors", compression_format=compression_config.format)
# decompress tensors (load_compressed returns a generator for memory efficiency)
decompressed_tensors = {}
for tensor_name, tensor in load_compressed("model.safetensors", compression_config = compression_config):
decompressed_tensors[tensor_name] = tensor
```
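To build intuition for what a bitmask scheme stores, here is a toy pure-Python sketch. This is an illustration of the general idea, not `BitmaskConfig`'s actual on-disk layout:

```python
# Toy bitmask compression: keep one bit per element plus only the
# non-zero values. Illustrative only -- not compressed-tensors'
# actual serialization format.

def bitmask_compress(values):
    """Split a flat list of floats into (bitmask, non-zero values)."""
    bitmask = [1 if v != 0.0 else 0 for v in values]
    nonzero = [v for v in values if v != 0.0]
    return bitmask, nonzero

def bitmask_decompress(bitmask, nonzero):
    """Rebuild the original list from the bitmask and packed values."""
    it = iter(nonzero)
    return [next(it) if bit else 0.0 for bit in bitmask]

weights = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]  # same values as tensor_1 above
mask, packed = bitmask_compress(weights)
assert bitmask_decompress(mask, packed) == weights
```

At 50% sparsity, the dense float payload is halved while the mask costs only one bit per element, which is why this layout pays off on tensors with many zero entries.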
## Saving/Loading Compressed Models (Bitmask Compression)
We can apply bitmask compression to a whole model. For a more detailed example, see the `examples` directory.
```python
from compressed_tensors import save_compressed_model, load_compressed, BitmaskConfig
from transformers import AutoModelForCausalLM
model_name = "RedHatAI/llama2.c-stories110M-pruned50"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
original_state_dict = model.state_dict()
compression_config = BitmaskConfig()
# save compressed model weights
save_compressed_model(model, "compressed_model.safetensors", compression_format=compression_config.format)
# load compressed model weights (`dict` turns generator into a dictionary)
state_dict = dict(load_compressed("compressed_model.safetensors", compression_config))
```
For a more in-depth tutorial on bitmask compression, refer to the [notebook](https://github.com/vllm-project/compressed-tensors/blob/d707c5b84bc3fef164aebdcd97cb6eaa571982f8/examples/bitmask_compression.ipynb).
## Saving a Compressed Model with PTQ
We can use compressed-tensors to run basic post-training quantization (PTQ) and save the quantized model compressed on disk:
```python
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader
from tqdm import tqdm
from transformers import AutoModelForCausalLM, AutoTokenizer, DefaultDataCollator

from compressed_tensors.compressors import ModelCompressor
from compressed_tensors.quantization import (
    QuantizationConfig,
    QuantizationStatus,
    apply_quantization_config,
    compress_quantized_weights,
    freeze_module_quantization,
)

device = "cuda:0"
model_name = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
model = AutoModelForCausalLM.from_pretrained(model_name, device_map=device, torch_dtype="auto")

config = QuantizationConfig.parse_file("./examples/bit_packing/int4_config.json")
config.quantization_status = QuantizationStatus.CALIBRATION
apply_quantization_config(model, config)

dataset = load_dataset("ptb_text_only")["train"]
tokenizer = AutoTokenizer.from_pretrained(model_name)

def tokenize_function(examples):
    return tokenizer(examples["sentence"], padding=False, truncation=True, max_length=1024)

tokenized_dataset = dataset.map(tokenize_function, batched=True)
data_loader = DataLoader(tokenized_dataset, batch_size=1, collate_fn=DefaultDataCollator())

# Run calibration forward passes to collect quantization statistics
with torch.no_grad():
    for idx, sample in tqdm(enumerate(data_loader), desc="Running calibration"):
        sample = {key: value.to(device) for key, value in sample.items()}
        _ = model(**sample)
        if idx >= 512:
            break

model.apply(freeze_module_quantization)
model.apply(compress_quantized_weights)

output_dir = "./ex_llama1.1b_w4a16_packed_quantize"
compressor = ModelCompressor(quantization_config=config)
compressed_state_dict = compressor.compress(model)
model.save_pretrained(output_dir, state_dict=compressed_state_dict)
```
For a more in-depth tutorial on quantization compression, refer to the [notebook](./examples/quantize_and_pack_int4.ipynb).
| text/markdown | The vLLM Project | vllm-questions@lists.berkeley.edu | null | null | Apache 2.0 | null | [] | [] | https://github.com/vllm-project/compressed-tensors | null | null | [] | [] | [] | [
"torch>=1.7.0",
"transformers<5.0.0",
"pydantic>=2.0",
"loguru",
"black==22.12.0; extra == \"dev\"",
"isort==5.8.0; extra == \"dev\"",
"wheel>=0.36.2; extra == \"dev\"",
"flake8>=3.8.3; extra == \"dev\"",
"pytest>=6.0.0; extra == \"dev\"",
"nbconvert>=7.16.3; extra == \"dev\"",
"transformers<5.0... | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.12 | 2026-02-20T01:03:07.248274 | compressed_tensors-0.13.1a20260219.tar.gz | 223,573 | 5c/37/ea1f6fb87f924c25f76dc7866af51776c772152849d2ea380593b57e2616/compressed_tensors-0.13.1a20260219.tar.gz | source | sdist | null | false | d906201c326bf3602dcf8f5e3d93ed9c | bdfd7df63856a84757f9d68474f59ff6cafcc136f283981f4f992d6303bea664 | 5c37ea1f6fb87f924c25f76dc7866af51776c772152849d2ea380593b57e2616 | null | [
"LICENSE"
] | 877 |
2.4 | firecode | 1.5.3 | FIRECODE: Filtering Refiner and Embedder for Conformationally Dense Ensembles |
# FIRECODE - Filtering Refiner and Embedder for Conformationally Dense Ensembles
<div align="center">
[](https://opensource.org/licenses/LGPL-3.0)

[](https://pixi.sh)


[](https://www.codefactor.io/repository/github/ntampellini/firecode)
[](https://codecov.io/gh/ntampellini/FIRECODE)
[](https://pypi.org/project/firecode/)
[](https://pypi.org/project/firecode/)
[](https://firecode.readthedocs.io/en/latest/?badge=latest)

[](https://github.com/charliermarsh/ruff)

</div>
<p align="center">
<img src="docs/images/logo.png" alt="FIRECODE logo" class="center" width="500"/>
</p>
FIRECODE is a computational chemistry workflow driver for the generation, optimization, and refinement of conformational ensembles, and it also implements some transition-state utilities.
It implements flexible and customizable workflows for conformer generation (via [CREST](https://github.com/crest-lab/crest), [RDKit](https://github.com/rdkit/rdkit)), double-ended TS search ([NEB](https://ase-lib.org/ase/neb.html) via [ASE](https://github.com/rosswhitfield/ase), [ML-FSM](https://github.com/thegomeslab/ML-FSM)), and (constrained) ensemble optimization through popular calculators like [XTB](https://github.com/grimme-lab/xtb), [TBLITE](https://github.com/tblite/tblite), [ORCA](https://www.orcasoftware.de/tutorials_orca/), and Pytorch Neural Network models ([AIMNET2](https://github.com/isayevlab/AIMNet2), [UMA](https://huggingface.co/facebook/UMA)) via [ASE](https://github.com/rosswhitfield/ase).
Conformational pruning is performed with the now standalone [PRISM Pruner](https://github.com/ntampellini/prism_pruner).
As a legacy feature from [TSCoDe](https://github.com/ntampellini/TSCoDe), FIRECODE can also assemble non-covalent adducts from conformational ensembles (embedding) programmatically.
## Installation
The package is distributed via `pip`, and the use of [`uv`](https://docs.astral.sh/uv/) is highly recommended. The default installation is minimalistic, and torch/GPU support requires dedicated installs:
```bash
uv pip install firecode # XTB, TBLITE, ORCA
uv pip install firecode[aimnet2] # + AIMNET2
uv pip install firecode[uma] # + UMA/OMOL
uv pip install firecode[full] # + AIMNET2, UMA/OMOL
```
More installation details in the documentation.
## Documentation
Additional documentation on how to install and use the program can be found on [readthedocs](https://firecode.readthedocs.io/en/latest/index.html).
| text/markdown | null | Nicolò Tampellini <nicolo.tampellini@yale.edu> | null | null | null | null | [] | [] | null | null | <3.13,>=3.12 | [] | [] | [] | [
"ase",
"inquirerpy",
"matplotlib",
"networkx",
"numpy",
"prettytable",
"prism-pruner",
"psutil",
"rdkit>=2025.9.3",
"rich",
"scipy",
"aimnet[ase]; extra == \"aimnet2\"",
"aimnet[ase]; extra == \"full\"",
"fairchem-core; extra == \"full\"",
"fairchem-core; extra == \"uma\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T01:02:31.456315 | firecode-1.5.3.tar.gz | 359,735 | bd/71/fee2919ddae510a3432a2b3bf427eb9440ca9e9ba9800b45851da7c6ab9c/firecode-1.5.3.tar.gz | source | sdist | null | false | 98ad4f46fbbdeed7a311aced688d270a | 3ab3a35aa042780447b6f2c136a77a1a84c95c253b29e6c491e31e17046124b1 | bd71fee2919ddae510a3432a2b3bf427eb9440ca9e9ba9800b45851da7c6ab9c | LGPL-3.0-or-later | [
"LICENSE"
] | 241 |
2.1 | doxapy | 0.9.5 | An image binarization library focussing on local adaptive thresholding | # DoxaPy
## Introduction
DoxaPy is an image binarization library focusing on local adaptive thresholding algorithms. In English, this means it can turn a color or grayscale image into a black-and-white image.
**Algorithms**
* Otsu - "A threshold selection method from gray-level histograms", 1979.
* Bernsen - "Dynamic thresholding of gray-level images", 1986.
* Niblack - "An Introduction to Digital Image Processing", 1986.
* Sauvola - "Adaptive document image binarization", 1999.
* Wolf - "Extraction and Recognition of Artificial Text in Multimedia Documents", 2003.
* Gatos - "Adaptive degraded document image binarization", 2005. (Partial)
* NICK - "Comparison of Niblack inspired Binarization methods for ancient documents", 2009.
* AdOtsu - "A multi-scale framework for adaptive binarization of degraded document images", 2010.
* Su - "Binarization of Historical Document Images Using the Local Maximum and Minimum", 2010.
* T.R. Singh - "A New local Adaptive Thresholding Technique in Binarization", 2011.
* Bataineh - "An adaptive local binarization method for document images based on a novel thresholding method and dynamic windows", 2011. (unreproducible)
* ISauvola - "ISauvola: Improved Sauvola's Algorithm for Document Image Binarization", 2016.
* WAN - "Binarization of Document Image Using Optimum Threshold Modification", 2018.
**Optimizations**
* Shafait - "Efficient Implementation of Local Adaptive Thresholding Techniques Using Integral Images", 2008.
* Petty - An algorithm for efficiently calculating the min and max of a local window. Unpublished, 2019.
* Chan - "Memory-efficient and fast implementation of local adaptive binarization methods", 2019.
* SIMD - SSE2, ARM NEON
**Performance Metrics**
* Overall Accuracy
* F-Measure, Precision, Recall
* Pseudo F-Measure, Precision, Recall - "Performance Evaluation Methodology for Historical Document Image Binarization", 2013.
* Peak Signal-To-Noise Ratio (PSNR)
* Negative Rate Metric (NRM)
* Matthews Correlation Coefficient (MCC)
* Distance-Reciprocal Distortion Measure (DRDM) - "An Objective Distortion Measure for Binary Document Images Based on Human Visual Perception", 2002.
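As a quick reference for the first two metric families above, F-Measure is the harmonic mean of precision and recall. A minimal sanity check, independent of DoxaPy's own implementation:

```python
def f_measure(precision, recall):
    """F-Measure: the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A binarization with precision 0.9 but recall 0.6 scores below both
# numbers, because the harmonic mean punishes imbalance.
score = f_measure(0.9, 0.6)  # 0.72
```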
## Overview
DoxaPy uses the Δoxa Binarization Framework for quickly processing python Image files. It is comprised of three major sets of algorithms: Color to Grayscale, Grayscale to Binary, and Performance Metrics. It can be used as a full DIBCO Metrics replacement that is significantly smaller, faster, and easier to integrate into existing projects.
### Example
This short demo uses DoxaPy to read in a color image, converts it to binary, and then compares it to a Ground Truth image in order to calculate performance.
```python
from PIL import Image
import numpy as np
import doxapy
def read_image(file, algorithm=doxapy.GrayscaleAlgorithms.MEAN):
"""Read an image. If its color, use one of our many Grayscale algorithms to convert it."""
image = Image.open(file)
# If already in grayscale or binary, do not convert it
if image.mode == 'L':
return np.array(image)
# Read the color image
rgb_image = np.array(image.convert('RGB') if image.mode not in ('RGB', 'RGBA') else image)
    # Use Doxa to convert to grayscale
return doxapy.to_grayscale(algorithm, rgb_image)
# Read our target image and convert it to grayscale
grayscale_image = read_image("2JohnC1V3.png")
# Convert the grayscale image to a binary image (algorithm parameters optional)
binary_image = doxapy.to_binary(doxapy.Binarization.Algorithms.SAUVOLA, grayscale_image, {"window": 75, "k": 0.2})
# Calculate the binarization performance using a Ground Truth image
groundtruth_image = read_image("2JohnC1V3-GroundTruth.png")
performance = doxapy.calculate_performance(groundtruth_image, binary_image)
print(performance)
# Display our resulting image
Image.fromarray(binary_image).show()
```
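For intuition about the `window` and `k` parameters passed to `to_binary` above, here is a deliberately naive pure-Python rendering of Sauvola's local threshold, T = m · (1 + k · (s/R − 1)), with R = 128 assumed as the standard-deviation normalizer. DoxaPy itself uses the optimized integral-image implementations listed earlier; this sketch is only for illustration:

```python
import math

def sauvola_threshold(image, x, y, window=75, k=0.2, R=128.0):
    """Naive Sauvola threshold T = m * (1 + k * (s / R - 1)), where m and s
    are the mean and standard deviation of the window x window
    neighborhood centered on (x, y). O(window^2) per pixel -- slow!"""
    h, w = len(image), len(image[0])
    half = window // 2
    pixels = [
        image[j][i]
        for j in range(max(0, y - half), min(h, y + half + 1))
        for i in range(max(0, x - half), min(w, x + half + 1))
    ]
    m = sum(pixels) / len(pixels)
    s = math.sqrt(sum((p - m) ** 2 for p in pixels) / len(pixels))
    return m * (1 + k * (s / R - 1))

def binarize(image, **kwargs):
    """0 = ink, 255 = background; pixels at or below the local threshold become ink."""
    return [
        [0 if px <= sauvola_threshold(image, x, y, **kwargs) else 255
         for x, px in enumerate(row)]
        for y, row in enumerate(image)
    ]
```

A larger `window` averages over more context (more robust to noise, less sensitive to fine strokes), while `k` scales how aggressively high local variance lowers the threshold.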
### DoxaPy Notebook
For more details, open the [DoxaPy Notebook](https://github.com/brandonmpetty/Doxa/blob/master/Bindings/Python/DoxaPy.ipynb) to get an interactive demo.
## Building and Test
DoxaPy supports 64-bit Linux, Windows, and macOS on Python 3.x. Starting with DoxaPy 0.9.4, Python 3.12 and above are supported with full ABI compatibility. This means that new versions of DoxaPy will only be published for feature enhancements, not Python version support.
**Build from Project Root**
```bash
# From the Doxa project root
git clone --depth 1 https://github.com/brandonmpetty/Doxa.git
cd Doxa
cmake --preset python
cmake --build build-python --config Release
pip install -r Bindings/Python/requirements.txt
ctest --test-dir build-python -C Release
```
**Local Package Build**
```bash
python -m build
```
**Local Wheel Build**
```bash
pip wheel . --no-deps
```
## License
CC0 - Brandon M. Petty, 2026
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
[View Online](https://creativecommons.org/publicdomain/zero/1.0/legalcode)
"*Freely you have received; freely give.*" - Matt 10:8 | text/markdown | null | "Brandon M. Petty" <brandonpetty1981@gmail.com> | null | null | CC0-1.0 | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"numpy>=1.20.0"
] | [] | [] | [] | [
"Homepage, https://github.com/brandonmpetty/doxa"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T01:02:15.012344 | doxapy-0.9.5.tar.gz | 47,386 | 6f/f2/9a7841015befdb8f2f2401e58f4b8d301ecd8d5d38676a7debf5868abc65/doxapy-0.9.5.tar.gz | source | sdist | null | false | 1375c178e4e1a9a27018366fc8882d67 | 19a0c45e28412c6b7fd60bf5d37f708f45806db24a711fa20f64cfff61b86c3d | 6ff29a7841015befdb8f2f2401e58f4b8d301ecd8d5d38676a7debf5868abc65 | null | [] | 454 |
2.4 | synapse-a2a | 0.6.5 | Agent-to-Agent communication protocol for CLI agents | # Synapse A2A
**🌐 Language: English | [日本語](README.ja.md) | [中文](README.zh.md) | [한국어](README.ko.md) | [Español](README.es.md) | [Français](README.fr.md)**
> **Enable agents to collaborate on tasks without changing their behavior**
[](https://www.python.org/downloads/)
[](LICENSE)
[](#testing)
[](https://deepwiki.com/s-hiraoku/synapse-a2a)
> A framework that enables inter-agent collaboration via the Google A2A Protocol while keeping CLI agents (Claude Code, Codex, Gemini, OpenCode, GitHub Copilot CLI) **exactly as they are**
## Project Goals
```text
┌─────────────────────────────────────────────────────────────────┐
│ ✅ Non-Invasive: Don't change agent behavior │
│ ✅ Collaborative: Enable agents to work together │
│ ✅ Transparent: Maintain existing workflows │
└─────────────────────────────────────────────────────────────────┘
```
Synapse A2A **transparently wraps** each agent's input/output without modifying the agent itself. This means:
- **Leverage each agent's strengths**: Users can freely assign roles and specializations
- **Zero learning curve**: Continue using existing workflows
- **Future-proof**: Resistant to agent updates
See [Project Philosophy](docs/project-philosophy.md) for details.
```mermaid
flowchart LR
subgraph Terminal1["Terminal 1"]
subgraph Agent1["synapse claude :8100"]
Server1["A2A Server"]
PTY1["PTY + Claude CLI"]
end
end
subgraph Terminal2["Terminal 2"]
subgraph Agent2["synapse codex :8120"]
Server2["A2A Server"]
PTY2["PTY + Codex CLI"]
end
end
subgraph External["External"]
ExtAgent["Google A2A Agent"]
end
Server1 <-->|"POST /tasks/send"| Server2
Server1 <-->|"A2A Protocol"| ExtAgent
Server2 <-->|"A2A Protocol"| ExtAgent
```
---
## Table of Contents
- [Features](#features)
- [Prerequisites](#prerequisites)
- [Quick Start](#quick-start)
- [Use Cases](#use-cases)
- [Skills](#skills)
- [Documentation](#documentation)
- [Architecture](#architecture)
- [CLI Commands](#cli-commands)
- [API Endpoints](#api-endpoints)
- [Task Structure](#task-structure)
- [Sender Identification](#sender-identification)
- [Priority Levels](#priority-levels)
- [Agent Card](#agent-card)
- [Registry and Port Management](#registry-and-port-management)
- [File Safety](#file-safety)
- [Agent Monitor](#agent-monitor)
- [Testing](#testing)
- [Configuration (.synapse)](#configuration-synapse)
- [Development & Release](#development--release)
---
## Features
| Category | Feature |
| -------- | ------- |
| **A2A Compliant** | All communication uses Message/Part + Task format, Agent Card discovery |
| **CLI Integration** | Turn existing CLI tools into A2A agents without modification |
| **synapse send** | Send messages between agents via `synapse send <agent> "message"` |
| **Sender Identification** | Auto-identify sender via `metadata.sender` + PID matching |
| **Priority Interrupt** | Priority 5 sends SIGINT before message (emergency stop) |
| **Multi-Instance** | Run multiple agents of the same type (automatic port assignment) |
| **External Integration** | Communicate with other Google A2A agents |
| **File Safety** | Prevent multi-agent conflicts with file locking and change tracking (visible in `synapse list`) |
| **Agent Naming** | Custom names and roles for easy identification (`synapse send my-claude "hello"`) |
| **Agent Monitor** | Real-time status (READY/WAITING/PROCESSING/DONE), CURRENT task preview, terminal jump |
| **Task History** | Automatic task tracking with search, export, and statistics (enabled by default) |
| **Shared Task Board** | SQLite-based task coordination with dependency tracking (`synapse tasks`) |
| **Quality Gates** | Configurable hooks (`on_idle`, `on_task_completed`) that control status transitions |
| **Plan Approval** | Plan-mode workflow with `synapse approve/reject` for human-in-the-loop review |
| **Graceful Shutdown** | `synapse kill` sends shutdown request before SIGTERM (30s timeout, `-f` for force) |
| **Delegate Mode** | `--delegate-mode` makes an agent a coordinator that delegates instead of editing files |
| **Auto-Spawn Panes** | `synapse team start` — 1st agent takes over current terminal, others in new panes. `--all-new` to start all in new panes. Supports `profile:name:role:skill_set` spec (tmux/iTerm2/Terminal.app/zellij) |
| **Spawn Single Agent** | `synapse spawn <profile>` — Spawn a single agent in a new terminal pane or window |
---
## Prerequisites
- **OS**: macOS / Linux (Windows via WSL2 recommended)
- **Python**: 3.10+
- **CLI Tools**: Pre-install and configure the agents you want to use:
- [Claude Code](https://docs.anthropic.com/en/docs/claude-code)
- [Codex CLI](https://github.com/openai/codex)
- [Gemini CLI](https://github.com/google-gemini/gemini-cli)
- [OpenCode](https://github.com/opencode-ai/opencode)
- [GitHub Copilot CLI](https://docs.github.com/en/copilot/github-copilot-in-the-cli)
---
## Quick Start
### 1. Install Synapse A2A
<details>
<summary><b>macOS / Linux / WSL2 (recommended)</b></summary>
```bash
# pipx (recommended)
pipx install synapse-a2a
# Or run directly with uvx (no install)
uvx synapse-a2a claude
```
</details>
<details>
<summary><b>Windows</b></summary>
> **WSL2 is strongly recommended.** Synapse A2A uses `pty.spawn()` which requires a Unix-like terminal.
```bash
# Inside WSL2 — same as Linux
pipx install synapse-a2a
# Scoop (experimental, WSL2 still required for pty)
scoop bucket add synapse-a2a https://github.com/s-hiraoku/scoop-synapse-a2a
scoop install synapse-a2a
```
</details>
<details>
<summary><b>Developer (from source)</b></summary>
```bash
# Install with uv
uv sync
# Or pip (editable)
pip install -e .
```
</details>
**With gRPC support:**
```bash
pip install "synapse-a2a[grpc]"
```
### 2. Install Skills (Recommended)
**Installing skills is strongly recommended to get the most out of Synapse A2A.**
Skills help Claude automatically understand Synapse A2A features: @agent messaging, File Safety, and more.
```bash
# Install via skills.sh (https://skills.sh/)
npx skills add s-hiraoku/synapse-a2a
```
See [Skills](#skills) for details.
### 3. Start Agents
```bash
# Terminal 1: Claude
synapse claude
# Terminal 2: Codex
synapse codex
# Terminal 3: Gemini
synapse gemini
# Terminal 4: OpenCode
synapse opencode
# Terminal 5: GitHub Copilot CLI
synapse copilot
```
> Note: If terminal scrollback display is garbled, try:
> ```bash
> uv run synapse gemini
> # or
> uv run python -m synapse.cli gemini
> ```
Ports are auto-assigned:
| Agent | Port Range |
| -------- | ---------- |
| Claude | 8100-8109 |
| Gemini | 8110-8119 |
| Codex | 8120-8129 |
| OpenCode | 8130-8139 |
| Copilot | 8140-8149 |
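One common way to implement range-based auto-assignment like the table above is to probe each candidate port until a bind succeeds. This is an illustrative sketch, not Synapse's actual registry logic:

```python
import socket

# Per-profile port ranges, mirroring the table above (subset shown).
PORT_RANGES = {
    "claude": (8100, 8109),
    "gemini": (8110, 8119),
    "codex": (8120, 8129),
}

def find_free_port(profile):
    """Return the first bindable port in the profile's range."""
    lo, hi = PORT_RANGES[profile]
    for port in range(lo, hi + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
                return port  # free: the caller would start its server here
            except OSError:
                continue  # taken by another instance; try the next one
    raise RuntimeError(f"no free port for {profile} in {lo}-{hi}")
```

This is also why a second `synapse claude` lands on 8101, 8102, and so on: each instance simply takes the next free slot in its range.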
### 4. Inter-Agent Communication
Use `synapse send` to send messages between agents:
```bash
synapse send codex "Please review this design" --from synapse-claude-8100
synapse send gemini "Suggest API improvements" --from synapse-claude-8100
```
For multiple instances of the same type, use the type-port format:
```bash
synapse send codex-8120 "Handle this task" --from synapse-claude-8100
synapse send codex-8121 "Handle that task" --from synapse-claude-8100
```
### 5. HTTP API
```bash
# Send message
curl -X POST http://localhost:8100/tasks/send \
-H "Content-Type: application/json" \
-d '{"message": {"role": "user", "parts": [{"type": "text", "text": "Hello!"}]}}'
# Emergency stop (Priority 5)
curl -X POST "http://localhost:8100/tasks/send-priority?priority=5" \
-H "Content-Type: application/json" \
-d '{"message": {"role": "user", "parts": [{"type": "text", "text": "Stop!"}]}}'
```
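The same requests can be issued from Python. The helper below only reconstructs the JSON body shown in the curl examples; the `metadata.sender` field follows the Sender Identification feature, though its exact placement in the message is an assumption here:

```python
import json
from urllib import request

def a2a_message(text, sender=None):
    """Build the Message/Part body used by /tasks/send (mirrors the curl examples)."""
    body = {"message": {"role": "user", "parts": [{"type": "text", "text": text}]}}
    if sender:
        # Assumed placement of the sender id used for Sender Identification.
        body["message"]["metadata"] = {"sender": sender}
    return body

def send_task(port, text, priority=None):
    """POST a message to a local agent; untested network sketch."""
    url = f"http://localhost:{port}/tasks/send"
    if priority is not None:
        url = f"http://localhost:{port}/tasks/send-priority?priority={priority}"
    req = request.Request(
        url,
        data=json.dumps(a2a_message(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```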
---
## Use Cases
### 1. Instant Specification Lookup (Simple)
While coding with **Claude**, quickly query **Gemini** (better at web search) for the latest library specs or error info without context switching.
```bash
# In Claude's terminal:
synapse send gemini "Summarize the new f-string features in Python 3.12" --from synapse-claude-8100
```
### 2. Cross-Review Designs (Intermediate)
Get feedback on your design from agents with different perspectives.
```bash
# After Claude drafts a design:
synapse send gemini "Critically review this design from scalability and maintainability perspectives" --from synapse-claude-8100
```
### 3. TDD Pair Programming (Intermediate)
Separate "test writer" and "implementer" for robust code.
```bash
# Terminal 1 (Codex):
Create unit tests for auth.py - normal case and token expiration case.
# Terminal 2 (Claude):
synapse send codex-8120 "Implement auth.py to pass the tests you created" --from synapse-claude-8100
```
### 4. Security Audit (Specialized)
Have an agent with a security expert role audit your code before committing.
```bash
# Give Gemini a role:
You are a security engineer. Review only for vulnerabilities (SQLi, XSS, etc.)
# After writing code:
synapse send gemini "Audit the current changes (git diff)" --from synapse-claude-8100
```
### 5. Auto-Fix from Error Logs (Advanced)
Pass error logs to an agent for automatic fix suggestions.
```bash
# Tests failed...
pytest > error.log
# Ask agent to fix
synapse send claude "Read error.log and fix the issue in synapse/server.py" --from synapse-gemini-8110
```
### 6. Language/Framework Migration (Advanced)
Distribute large refactoring work across agents.
```bash
# Terminal 1 (Claude):
Read legacy_api.js and create TypeScript type definitions
# Terminal 2 (Codex):
synapse send claude "Use the type definitions you created to rewrite legacy_api.js to src/new_api.ts" --from synapse-codex-8121
```
### Comparison with SSH Remote
| Operation | SSH | Synapse |
|-----------|-----|---------|
| Manual CLI operation | ◎ | ◎ |
| Programmatic task submission | △ requires expect etc. | ◎ HTTP API |
| Multiple simultaneous clients | △ multiple sessions | ◎ single endpoint |
| Real-time progress notifications | ✗ | ◎ SSE/Webhook |
| Automatic inter-agent coordination | ✗ | ◎ synapse send |
> **Note**: SSH is often sufficient for individual CLI use. Synapse shines when you need automation, coordination, and multi-agent collaboration.
---
## Skills
**Installing skills is strongly recommended** when using Synapse A2A with Claude Code.
### Why Install Skills?
With skills installed, Claude automatically understands and executes:
- **synapse send**: Inter-agent communication via `synapse send codex "Fix this" --from synapse-claude-8100`
- **Priority control**: Message sending with Priority 1-5 (5 = emergency stop)
- **File Safety**: Prevent multi-agent conflicts with file locking and change tracking
- **History management**: Search, export, and statistics for task history
### Installation
```bash
# Install via skills.sh (https://skills.sh/)
npx skills add s-hiraoku/synapse-a2a
```
### Included Skills
| Skill | Description |
|-------|-------------|
| **synapse-a2a** | Comprehensive guide for inter-agent communication: `synapse send`, priority, A2A protocol, history, File Safety, settings |
**Core Skills**: Essential skills like `synapse-a2a` are automatically deployed to agent directories on startup (best-effort) to ensure basic quality even if skill sets are skipped.
### Skill Management
Synapse includes a built-in skill manager with a central store (`~/.synapse/skills/`) for organizing and deploying skills across agents.
#### Skill Scopes
| Scope | Location | Description |
|-------|----------|-------------|
| **Synapse** | `~/.synapse/skills/` | Central store (deploy to agents from here) |
| **User** | `~/.claude/skills/`, `~/.agents/skills/`, etc. | User-wide skills |
| **Project** | `./.claude/skills/`, `./.agents/skills/`, etc. | Project-local skills |
| **Plugin** | `./plugins/*/skills/` | Read-only plugin skills |
#### Commands
```bash
# Interactive TUI
synapse skills
# List and browse
synapse skills list # All scopes
synapse skills list --scope synapse # Central store only
synapse skills show <name> # Skill details
# Manage
synapse skills delete <name> [--force]
synapse skills move <name> --to <scope>
# Central store operations
synapse skills import <name> # Import from agent dirs to ~/.synapse/skills/
synapse skills deploy <name> --agent claude,codex --scope user
synapse skills add <repo> # Install from repo (npx skills wrapper)
synapse skills create # Show guided skill creation steps
# Skill sets (named groups)
synapse skills set list
synapse skills set show <name>
```
### Directory Structure
```text
plugins/
└── synapse-a2a/
├── .claude-plugin/plugin.json
├── README.md
└── skills/
└── synapse-a2a/SKILL.md
```
See [plugins/synapse-a2a/README.md](plugins/synapse-a2a/README.md) for details.
> **Note**: Codex and Gemini don't support plugins, but you can place expanded skills in the `.agents/skills/` (Codex/OpenCode) or `.gemini/skills/` (Gemini) directory to enable these features.
---
## Documentation
- [guides/README.md](guides/README.md) - Documentation overview
- [guides/multi-agent-setup.md](guides/multi-agent-setup.md) - Setup guide
- [guides/usage.md](guides/usage.md) - Commands and usage patterns
- [guides/settings.md](guides/settings.md) - `.synapse` configuration details
- [guides/troubleshooting.md](guides/troubleshooting.md) - Common issues and solutions
---
## Architecture
### A2A Server/Client Structure
In Synapse, **each agent operates as an A2A server**. There's no central server; it's a P2P architecture.
```
┌─────────────────────────────────────┐ ┌─────────────────────────────────────┐
│ synapse claude (port 8100) │ │ synapse codex (port 8120) │
│ ┌───────────────────────────────┐ │ │ ┌───────────────────────────────┐ │
│ │ FastAPI Server (A2A Server) │ │ │ │ FastAPI Server (A2A Server) │ │
│ │ /.well-known/agent.json │ │ │ │ /.well-known/agent.json │ │
│ │ /tasks/send │◄─┼────┼──│ A2AClient │ │
│ │ /tasks/{id} │ │ │ └───────────────────────────────┘ │
│ └───────────────────────────────┘ │ │ ┌───────────────────────────────┐ │
│ ┌───────────────────────────────┐ │ │ │ PTY + Codex CLI │ │
│ │ PTY + Claude CLI │ │ │ └───────────────────────────────┘ │
│ └───────────────────────────────┘ │ └─────────────────────────────────────┘
└─────────────────────────────────────┘
```
Each agent is:
- **A2A Server**: Accepts requests from other agents
- **A2A Client**: Sends requests to other agents
### Key Components
| Component | File | Role |
| --------- | ---- | ---- |
| FastAPI Server | `synapse/server.py` | Provides A2A endpoints |
| A2A Router | `synapse/a2a_compat.py` | A2A protocol implementation |
| A2A Client | `synapse/a2a_client.py` | Communication with other agents |
| TerminalController | `synapse/controller.py` | PTY management, READY/PROCESSING detection |
| InputRouter | `synapse/input_router.py` | @Agent pattern detection |
| AgentRegistry | `synapse/registry.py` | Agent registration and lookup |
| SkillManager | `synapse/skills.py` | Skill discovery, deploy, import, skill sets |
| SkillManagerCmd | `synapse/commands/skill_manager.py` | Skill management TUI and CLI |
### Startup Sequence
```mermaid
sequenceDiagram
participant Synapse as Synapse Server
participant Registry as AgentRegistry
participant PTY as TerminalController
participant CLI as CLI Agent
Synapse->>Registry: 1. Register agent (agent_id, pid, port)
Synapse->>PTY: 2. Start PTY
PTY->>CLI: 3. Start CLI agent
Synapse->>PTY: 4. Send initial instructions (sender: synapse-system)
PTY->>CLI: 5. AI receives initial instructions
```
### Communication Flow
```mermaid
sequenceDiagram
participant User
participant Claude as Claude (8100)
participant Client as A2AClient
participant Codex as Codex (8120)
User->>Claude: @codex Review this design
Claude->>Client: send_to_local()
Client->>Codex: POST /tasks/send-priority
Codex->>Codex: Create Task → Write to PTY
Codex-->>Client: {"task": {"id": "...", "status": "working"}}
Client-->>Claude: [→ codex] Send complete
```
---
## CLI Commands
### Basic Operations
```bash
# Start agent (foreground)
synapse claude
synapse codex
synapse gemini
synapse opencode
synapse copilot
# Start with custom name and role
synapse claude --name my-claude --role "code reviewer"
# Skip interactive name/role setup
synapse claude --no-setup
# Specify port
synapse claude --port 8105
# Pass arguments to CLI tool
synapse claude -- --resume
```
### Agent Naming
Assign custom names and roles to agents for easier identification and management:
```bash
# Interactive setup (default when starting agent)
synapse claude
# → Prompts for name and role
# Skip interactive setup
synapse claude --no-setup
# Set name and role via CLI options
synapse claude --name my-claude --role "code reviewer"
# After agent is running, change name/role
synapse rename synapse-claude-8100 --name my-claude --role "test writer"
synapse rename my-claude --role "documentation" # Change role only
synapse rename my-claude --clear # Clear name and role
```
Once named, use the custom name for all operations:
```bash
synapse send my-claude "Review this code" --from synapse-codex-8121
synapse jump my-claude
synapse kill my-claude
```
**Name vs ID:**
- **Display/Prompts**: Shows name if set, otherwise ID (e.g., `Kill my-claude (PID: 1234)?`)
- **Internal processing**: Always uses agent ID (`synapse-claude-8100`)
- **Target resolution**: Name has highest priority when matching targets
### Command List
| Command | Description |
| ------- | ----------- |
| `synapse <profile>` | Start in foreground |
| `synapse start <profile>` | Start in background |
| `synapse stop <profile\|id>` | Stop agent (can specify ID) |
| `synapse kill <target>` | Graceful shutdown (sends shutdown request, then SIGTERM after 30s) |
| `synapse kill <target> -f` | Force kill (immediate SIGKILL) |
| `synapse jump <target>` | Jump to agent's terminal |
| `synapse rename <target>` | Assign name/role to agent |
| `synapse --version` | Show version |
| `synapse list` | List running agents (Rich TUI with auto-refresh and terminal jump) |
| `synapse logs <profile>` | Show logs |
| `synapse send <target> <message>` | Send message |
| `synapse reply <message>` | Reply to the last received A2A message |
| `synapse trace <task_id>` | Show task history + file-safety cross-reference |
| `synapse instructions show` | Show instruction content |
| `synapse instructions files` | List instruction files |
| `synapse instructions send` | Resend initial instructions |
| `synapse history list` | Show task history |
| `synapse history show <task_id>` | Show task details |
| `synapse history search` | Keyword search |
| `synapse history cleanup` | Delete old data |
| `synapse history stats` | Show statistics |
| `synapse history export` | Export to JSON/CSV |
| `synapse file-safety status` | Show file safety statistics |
| `synapse file-safety locks` | List active locks |
| `synapse file-safety lock` | Lock a file |
| `synapse file-safety unlock` | Release lock |
| `synapse file-safety history` | File change history |
| `synapse file-safety recent` | Recent changes |
| `synapse file-safety record` | Manually record change |
| `synapse file-safety cleanup` | Delete old data |
| `synapse file-safety debug` | Show debug info |
| `synapse skills` | Skill Manager (interactive TUI) |
| `synapse skills list` | List discovered skills |
| `synapse skills show <name>` | Show skill details |
| `synapse skills delete <name>` | Delete a skill |
| `synapse skills move <name>` | Move skill to another scope |
| `synapse skills deploy <name>` | Deploy skill from central store to agent dirs |
| `synapse skills import <name>` | Import skill to central store (~/.synapse/skills/) |
| `synapse skills add <repo>` | Install skill from repository (via npx skills) |
| `synapse skills create` | Show guided skill creation steps (uses anthropic-skill-creator) |
| `synapse skills set list` | List skill sets |
| `synapse skills set show <name>` | Show skill set details |
| `synapse config` | Settings management (interactive TUI) |
| `synapse config show` | Show current settings |
| `synapse tasks list` | List shared task board |
| `synapse tasks create` | Create a task |
| `synapse tasks assign` | Assign task to agent |
| `synapse tasks complete` | Mark task completed |
| `synapse approve <task_id>` | Approve a plan |
| `synapse reject <task_id>` | Reject a plan with reason |
| `synapse team start` | Launch agents (1st=handoff, rest=new panes). `--all-new` for all new panes |
| `synapse spawn <profile>` | Spawn a single agent in a new terminal pane |
### Resume Mode
When resuming an existing session, use these flags to **skip initial instruction sending** (A2A protocol explanation), keeping your context clean:
```bash
# Resume Claude Code session
synapse claude -- --resume
# Resume Gemini with history
synapse gemini -- --resume=5
# Codex uses 'resume' as a subcommand (not --resume flag)
synapse codex -- resume --last
```
Default flags (customizable in `settings.json`):
- **Claude**: `--resume`, `--continue`, `-r`, `-c`
- **Gemini**: `--resume`, `-r`
- **Codex**: `resume`
- **OpenCode**: `--continue`, `-c`
- **Copilot**: `--continue`, `--resume`
### Instruction Management
Manually resend initial instructions when they weren't sent (e.g., after `--resume` mode):
```bash
# Show instruction content
synapse instructions show claude
# List instruction files
synapse instructions files claude
# Send initial instructions to running agent
synapse instructions send claude
# Preview before sending
synapse instructions send claude --preview
# Send to specific agent ID
synapse instructions send synapse-claude-8100
```
Useful when:
- You need A2A protocol info after starting with `--resume`
- Agent lost/forgot instructions and needs recovery
- Debugging instruction content
### External Agent Management
```bash
# Register external agent
synapse external add http://other-agent:9000 --alias other
# List
synapse external list
# Send message
synapse external send other "Process this task"
```
### Task History Management
Search, browse, and analyze past agent execution results.
**Note:** History is enabled by default since v0.3.13. To disable:
```bash
# Disable via environment variable
export SYNAPSE_HISTORY_ENABLED=false
synapse claude
```
#### Basic Operations
```bash
# Show latest 50 entries
synapse history list
# Filter by agent
synapse history list --agent claude
# Custom limit
synapse history list --limit 100
# Show task details
synapse history show task-id-uuid
```
#### Keyword Search
Search input/output fields by keyword:
```bash
# Single keyword
synapse history search "Python"
# Multiple keywords (OR logic)
synapse history search "Python" "Docker"
# AND logic (all keywords must match)
synapse history search "Python" "function" --logic AND
# With agent filter
synapse history search "Python" --agent claude
# Limit results
synapse history search "error" --limit 20
```
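The OR/AND semantics above can be sketched in a few lines of Python; this is illustrative only, as the real implementation queries the SQLite history database:

```python
def search_history(entries, keywords, logic="OR"):
    """Filter history entries whose input/output text contains the keywords.

    Illustrative sketch of the OR/AND matching described above; not the
    actual query Synapse runs against history.db.
    """
    def text(entry):
        # Search both fields, case-insensitively.
        return f"{entry.get('input', '')} {entry.get('output', '')}".lower()

    combine = all if logic.upper() == "AND" else any
    return [e for e in entries
            if combine(k.lower() in text(e) for k in keywords)]
```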
#### Statistics
```bash
# Overall stats (total, success rate, per-agent breakdown)
synapse history stats
# Specific agent stats
synapse history stats --agent claude
```
#### Data Export
```bash
# JSON export (stdout)
synapse history export --format json
# CSV export
synapse history export --format csv
# Save to file
synapse history export --format json --output history.json
synapse history export --format csv --agent claude > claude_history.csv
```
#### Retention Policy
```bash
# Delete data older than 30 days
synapse history cleanup --days 30
# Keep database under 100MB
synapse history cleanup --max-size 100
# Force (no confirmation)
synapse history cleanup --days 30 --force
# Dry run
synapse history cleanup --days 30 --dry-run
```
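The retention policy amounts to deleting rows older than a cutoff. A minimal sketch against an SQLite table follows; the real schema in `history.db` may differ:

```python
import sqlite3
import time

def cleanup_history(db, days):
    """Delete history rows older than `days` days and return the count.

    Sketch of the --days retention behavior; the actual table layout in
    ~/.synapse/history/history.db is an assumption here.
    """
    cutoff = time.time() - days * 86400
    cur = db.execute("DELETE FROM history WHERE timestamp < ?", (cutoff,))
    db.commit()
    return cur.rowcount
```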
**Storage:**
- SQLite database: `~/.synapse/history/history.db`
- Stored: task ID, agent name, input, output, status, metadata
- Auto-indexed: agent_name, timestamp, task_id
**Settings:**
- **Enabled by default** (v0.3.13+)
- **Disable**: `SYNAPSE_HISTORY_ENABLED=false`
### synapse send Command (Recommended)
Use `synapse send` for inter-agent communication. Works in sandboxed environments.
```bash
synapse send <target> "<message>" [--from <sender>] [--priority <1-5>] [--response | --no-response]
```
**Target Formats:**
| Format | Example | Description |
|--------|---------|-------------|
| Custom name | `my-claude` | Highest priority, use when agent has a name |
| Agent type | `claude` | Only works when single instance exists |
| Type-port | `claude-8100` | Use when multiple instances of same type |
| Full ID | `synapse-claude-8100` | Complete agent ID |
When multiple agents of the same type are running, type-only (e.g., `claude`) will error. Use `claude-8100` or `synapse-claude-8100`.
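As an illustration of this resolution order (name, then full ID, then type-port, then bare type), a minimal matcher might look like the following; this is a sketch only, not Synapse's actual resolver:

```python
def resolve_target(target, agents):
    """Resolve a target string against registered agents.

    `agents` is a list of dicts with "id" (e.g. "synapse-claude-8100"),
    "type" ("claude"), "port" (8100), and an optional "name".
    Illustrative sketch only.
    """
    # 1. Custom name has the highest priority.
    for a in agents:
        if a.get("name") == target:
            return a
    # 2. Full agent ID, e.g. "synapse-claude-8100".
    for a in agents:
        if a["id"] == target:
            return a
    # 3. Type-port, e.g. "claude-8100".
    for a in agents:
        if f"{a['type']}-{a['port']}" == target:
            return a
    # 4. Bare type: only unambiguous when a single instance is running.
    matches = [a for a in agents if a["type"] == target]
    if len(matches) > 1:
        raise ValueError(f"ambiguous target {target!r}: use type-port or full ID")
    return matches[0] if matches else None
```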
**Options:**
| Option | Short | Description |
|--------|-------|-------------|
| `--from` | `-f` | Sender agent ID (for reply identification) |
| `--priority` | `-p` | Priority 1-4: normal, 5: emergency stop (sends SIGINT) |
| `--response` | - | Roundtrip - sender waits, receiver replies with `synapse reply` |
| `--no-response` | - | Oneway - fire and forget, no reply needed |
**Examples:**
```bash
# Send message (single instance)
synapse send claude "Hello" --priority 1 --from synapse-codex-8121
# Long message support (automatic temp-file fallback)
synapse send claude --message-file /path/to/message.txt --no-response
echo "very long content..." | synapse send claude --stdin --no-response
# File attachments
synapse send claude "Review this" --attach src/main.py --no-response
# Send to specific instance (multiple of same type)
synapse send claude-8100 "Hello" --from synapse-claude-8101
# Emergency stop
synapse send claude "Stop!" --priority 5 --from synapse-codex-8121
# Wait for response (roundtrip)
synapse send gemini "Analyze this" --response --from synapse-claude-8100
```
**Default behavior:** With `a2a.flow=auto` (default), `synapse send` waits for a response unless `--no-response` is specified.
**Important:** Always use `--from` with your agent ID (format: `synapse-<type>-<port>`).
### synapse reply Command
Reply to the last received message:
```bash
synapse reply "<message>"
```
The `--from` flag is only needed in sandboxed environments (like Codex). Without `--from`, Synapse auto-detects the sender via process ancestry.
### Low-Level A2A Tool
For advanced operations:
```bash
# List agents
python -m synapse.tools.a2a list
# Send message
python -m synapse.tools.a2a send --target claude --priority 1 "Hello"
# Reply to last received message (uses reply tracking)
python -m synapse.tools.a2a reply "Here is my response"
```
---
## API Endpoints
### A2A Compliant
| Endpoint | Method | Description |
| -------- | ------ | ----------- |
| `/.well-known/agent.json` | GET | Agent Card |
| `/tasks/send` | POST | Send message |
| `/tasks/send-priority` | POST | Send with priority |
| `/tasks/create` | POST | Create task (no PTY send, for `--response`) |
| `/tasks/{id}` | GET | Get task status |
| `/tasks` | GET | List tasks |
| `/tasks/{id}/cancel` | POST | Cancel task |
| `/status` | GET | READY/PROCESSING status |
### Agent Teams
| Endpoint | Method | Description |
| -------- | ------ | ----------- |
| `/tasks/board` | GET | List shared task board |
| `/tasks/board` | POST | Create task on board |
| `/tasks/board/{id}/claim` | POST | Claim task atomically |
| `/tasks/board/{id}/complete` | POST | Complete task |
| `/tasks/{id}/approve` | POST | Approve a plan |
| `/tasks/{id}/reject` | POST | Reject a plan with reason |
| `/team/start` | POST | Start multiple agents in terminal panes (A2A-initiated) |
| `/spawn` | POST | Spawn a single agent in a new terminal pane (A2A-initiated) |
### Synapse Extensions
| Endpoint | Method | Description |
| -------- | ------ | ----------- |
| `/reply-stack/get` | GET | Get sender info without removing (for peek before send) |
| `/reply-stack/pop` | GET | Pop sender info from reply map (for `synapse reply`) |
| `/tasks/{id}/subscribe` | GET | Subscribe to task updates via SSE |
### Webhooks
| Endpoint | Method | Description |
| -------- | ------ | ----------- |
| `/webhooks` | POST | Register a webhook for task notifications |
| `/webhooks` | GET | List registered webhooks |
| `/webhooks` | DELETE | Unregister a webhook |
| `/webhooks/deliveries` | GET | Recent webhook delivery attempts |
### External Agents
| Endpoint | Method | Description |
| -------- | ------ | ----------- |
| `/external/discover` | POST | Register external agent |
| `/external/agents` | GET | List |
| `/external/agents/{alias}` | DELETE | Remove |
| `/external/agents/{alias}/send` | POST | Send |
---
## Task Structure
In the A2A protocol, all communication is managed as **Tasks**.
### Task Lifecycle
```mermaid
stateDiagram-v2
[*] --> submitted: POST /tasks/send
submitted --> working: Processing starts
working --> completed: Success
working --> failed: Error
working --> input_required: Waiting for input
input_required --> working: Input received
completed --> [*]
failed --> [*]
```
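The lifecycle above can be expressed as a transition table; the sketch below mirrors the diagram but is not Synapse's internal task store:

```python
# Allowed task state transitions, mirroring the state diagram above.
TRANSITIONS = {
    "submitted": {"working"},
    "working": {"completed", "failed", "input_required"},
    "input_required": {"working"},
    "completed": set(),   # terminal
    "failed": set(),      # terminal
}

def advance(task, new_status):
    """Move a task dict to `new_status`, rejecting illegal transitions."""
    if new_status not in TRANSITIONS[task["status"]]:
        raise ValueError(f"cannot go from {task['status']} to {new_status}")
    task["status"] = new_status
    return task
```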
### Task Object
```json
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"context_id": "conversation-123",
"status": "working",
"message": {
"role": "user",
"parts": [{ "type": "text", "text": "Review this design" }]
},
"artifacts": [],
"metadata": {
"sender": {
"sender_id": "synapse-claude-8100",
"sender_type": "claude",
"sender_endpoint": "http://localhost:8100"
}
},
"created_at": "2024-01-15T10:30:00Z",
"updated_at": "2024-01-15T10:30:05Z"
}
```
### Field Descriptions
| Field | Type | Description |
| ----- | ---- | ----------- |
| `id` | string | Unique task identifier (UUID) |
| `context_id` | string? | Conversation context ID (for multi-turn) |
| `status` | string | `submitted` / `working` / `completed` / `failed` / `input_required` |
| `message` | Message | Sent message |
| `artifacts` | Artifact[] | Task output artifacts |
| `metadata` | object | Sender info (`metadata.sender`) |
| `created_at` | string | Creation timestamp (ISO 8601) |
| `updated_at` | string | Update timestamp (ISO 8601) |
### Message Structure
```json
{
"role": "user",
"parts": [
{ "type": "text", "text": "Message content" },
{
"type": "file",
"file": {
"name": "doc.pdf",
"mimeType": "application/pdf",
"bytes": "..."
}
}
]
}
```
| Part Type | Description |
| --------- | ----------- |
| `text` | Text message |
| `file` | File attachment |
| `data` | Structured data |
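Building a message from these part types is straightforward. The helpers below are a sketch, assuming file bytes are base64-encoded into the `bytes` field shown above:

```python
import base64

def text_part(text):
    return {"type": "text", "text": text}

def file_part(name, mime_type, data):
    """Wrap raw bytes as a file part (base64 encoding is an assumption
    about the `bytes` field; check the A2A spec for the exact format)."""
    return {"type": "file",
            "file": {"name": name, "mimeType": mime_type,
                     "bytes": base64.b64encode(data).decode("ascii")}}

def build_message(*parts, role="user"):
    return {"role": role, "parts": list(parts)}
```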
---
## Sender Identification
The sender of A2A messages can be identified via `metadata.sender`.
### PTY Output Format
Messages are sent to the agent's PTY with a simple `A2A:` prefix:
```
A2A: <message content>
```
### Reply Handling
Synapse automatically manages reply routing. Agents simply use `synapse reply`:
```bash
synapse reply "Here is my response"
```
The framework internally tracks sender information and routes replies automatically.
### Task API Verification (Development)
```bash
curl -s http://localhost:8120/tasks/<id> | jq '.metadata.sender'
```
Response:
```json
{
"sender_id": "synapse-claude-8100",
"sender_type": "claude",
"sender_endpoint": "http://localhost:8100"
}
```
### How It Works
1. **On send**: The sender looks itself up in the Registry (matching by PID) to determine its own `agent_id`
2. **On Task creation**: Sender info is attached to `metadata.sender`
3. **On receive**: The receiver checks the sender via the PTY prefix or the Task API
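On the receiving side, pulling the sender block out of a task and splitting the standard `synapse-<type>-<port>` ID reduces to a couple of helpers; these are illustrative, not part of Synapse's public API:

```python
def get_sender(task):
    """Return the sender block from a task's metadata, or None."""
    return (task.get("metadata") or {}).get("sender")

def parse_agent_id(agent_id):
    """Split a full 'synapse-<type>-<port>' ID into (type, port).
    Assumes the standard ID format with no hyphens in the type."""
    prefix, agent_type, port = agent_id.split("-", 2)
    if prefix != "synapse":
        raise ValueError(f"unexpected agent ID: {agent_id!r}")
    return agent_type, int(port)
```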
---
## Priority Levels
| Priority | Behavior | Use Case |
| -------- | -------- | -------- |
| 1-4 | Normal stdin write | Regular messages |
| 5 | SIGINT then write | Emergency stop |
```bash
# Emergency stop
synapse send claude "Stop!" --priority 5
```
---
## Agent Card
Each agent publishes an Agent Card at `/.well-known/agent.json`.
```bash
curl http://localhost:8100/.well-known/agent.json
```
```json
{
"name": "Synapse Claude",
"description": "PTY-wrapped claude CLI agent with A2A communication",
"url": "http://localhost:8100",
"capabilities": {
"streaming": false,
"pushNotifications": false,
"multiTurn": true
},
"skills": [
{
"id": "chat",
"name": "Chat",
"description": "Send messages to the CLI agent"
},
{
"id": "interrupt",
"name": "Interrupt",
"description": "Interrupt current processing"
}
],
"extensions": {
"synapse": {
"agent_id": "synapse-claude-8100",
"pty_wrapped": true,
"priority_interrupt": true,
"at_agent_syntax": true
}
}
}
```
### Design Philosophy
The Agent Card is a "business card" containing only external-facing information:
- capabilities, skills, endpoint, etc.
- Internal instructions are not included (they are sent via an A2A Task at startup)
---
## Registry and Port Management
### Registry Files
```
~/.a2a/registry/
├── synapse-claude-8100.json
├── synapse-claude-8101.json
└── synapse-gemini-8110.json
```
### Auto Cleanup
Stale entries are automatically removed during:
- `synapse list` execution
- Message sending (when target is dead)
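Detecting a stale entry comes down to a process liveness check. Signal 0 probes a PID without delivering anything (POSIX); the helper below is a sketch of that check, not Synapse's code:

```python
import os

def is_alive(pid):
    """Return True if a registry entry's process still exists."""
    try:
        os.kill(pid, 0)  # signal 0: existence/permission check only
        return True
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # exists, but owned by another user
```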
### Port Ranges
```python
PORT_RANGES = {
"claude": (8100, 8109),
"gemini": (8110, 8119),
"codex": (8120, 8129),
"opencode": (8130, 8139),
"copilot": (8140, 8149),
"dummy": (8190, 8199),
}
```
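Allocation within a range can be sketched as picking the first unused port. The dict repeats the ranges above for self-containment; the actual allocation logic may differ:

```python
PORT_RANGES = {
    "claude": (8100, 8109),
    "gemini": (8110, 8119),
    "codex": (8120, 8129),
    "opencode": (8130, 8139),
    "copilot": (8140, 8149),
    "dummy": (8190, 8199),
}

def next_free_port(profile, in_use):
    """Pick the first unused port in the profile's range (sketch only)."""
    lo, hi = PORT_RANGES[profile]
    for port in range(lo, hi + 1):
        if port not in in_use:
            return port
    raise RuntimeError(f"no free port for {profile!r} in {lo}-{hi}")
```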
### Typical Memory Usage (Resident Agents)
On macOS, idle resident agents are lightweight: as of January 25, 2026,
RSS is roughly 12 MB per agent process in a typical development setup.
Actual usage varies by profile, plugins, history settings, and workload.
Note that `ps` reports RSS in KB, so ~12 MB shows up as ~12,000 KB.
To measure on your machine:
```bash
ps -o pid,comm,rss,vsz,etime,command -A | rg "synapse"
```
If you don't have ripgrep:
```bash
ps -o pid,comm,rss,vsz,etime,command -A | grep "synapse"
```
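Since `ps` reports RSS in KB, converting a line to MB is a one-liner; this sketch assumes the exact column order of the commands above (`pid,comm,rss,...`):

```python
def rss_mb(ps_line):
    """Convert the RSS column of a `ps -o pid,comm,rss,...` line to MB.
    Column index assumes the exact -o list shown above."""
    rss_kb = int(ps_line.split()[2])  # pid, comm, rss, ...
    return rss_kb / 1000
```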
---
## File Safety
Prevents conflicts when multiple agents edit the same files simultaneously.
```mermaid
sequenceDiagram
participant Claude
participant FS as File Safety
participant Gemini
Claude->>FS: acquire_lock("auth.py")
FS-->>Claude: ACQUIRED
Gemini->>FS: validate_write("auth.py")
FS-->>Gemini: DENIED (locked by claude)
Claude->>FS: release_lock("auth.py")
Gemini->>FS: acquire_lock("auth.py")
FS-->>Gemini: ACQUIRED
```
### Features
| Feature | Description |
|---------|-------------|
| **File Locking** | Exclusive control prevents simultaneous editing |
| **Change Tracking** | Records who changed what and when |
| **Context Injection** | Provides recent change history on read |
| **Pre-write Validation** | Checks lock status before writing |
| **List Integration** | Active locks visible in `synapse list` EDITING_FILE column |
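The locking flow in the diagram reduces to a small table of path → holder. The in-memory sketch below mirrors acquire/validate/release; the real implementation persists locks to SQLite:

```python
class MiniLockTable:
    """In-memory sketch of the acquire/validate/release flow above."""

    def __init__(self):
        self._locks = {}  # path -> holding agent

    def acquire(self, path, agent):
        holder = self._locks.get(path)
        if holder is not None and holder != agent:
            return False  # DENIED: someone else holds the lock
        self._locks[path] = agent
        return True  # ACQUIRED

    def validate_write(self, path, agent):
        holder = self._locks.get(path)
        return holder is None or holder == agent

    def release(self, path, agent):
        if self._locks.get(path) == agent:
            del self._locks[path]
```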
### Enable
```bash
# Enable via environment variable
export SYNAPSE_FILE_SAFETY_ENABLED=true
synapse claude
```
### Basic Commands
```bash
# Show statistics
synapse file-safety status
# List active locks
synapse file-safety locks
# Acquire lock
synapse file-safety lock /path/to/file.py claude --intent "Refactoring"
# Wait for lock to be released
synapse file-safety lock /path/to/file.py claude --wait --wait-timeout 60 --wait-interval 2
# Release lock
synapse file-safety unlock /path/to/file.py claude
# File change history
synapse file-safety history /path/to/file.py
# Recent changes
synapse file-safety recent
# Delete old data
synapse file-safety cleanup --days 30
```
### Python API
```python
from synapse.file_safety import FileSafetyManager, ChangeType, LockStatus
manager = FileSafetyManager.from_env()
# Acquire lock
result = manager.acquire_lock("/path/to/file.py", "claude", intent="Refactoring")
if result["status"] == LockStatus.ACQUIRED:
# Edit file...
# Record change
manager.record_modification(
file_path="/path/to/file.py",
agent_name="claude",
task_id="task-123",
change_type=ChangeType.MODIFY,
intent="Fix authentication bug"
)
# Release lock
manager.release_lock("/path/to/file.py", "claude")
# Pre-write validation
validation = manager.validate_write("/path/to/file.py", "gemini")
if not validation["allowed"]:
print(f"Write blocked: {validation['reason']}")
```
**Storage**: Default is `.synapse/file_safety.db` (SQLite, relative to working directory). Change via `SYNAPSE_FILE_SAFETY_DB_PATH` (e.g., `~/.synapse/file_safety.db` for global).
See [docs/file-safety.md](docs/file-safety.md) for details.
---
## Agent Monitor
Real-time monitoring of agent status with terminal jump capability.
### Rich TUI Mode
```bash
# Start Rich TUI with auto-refresh (default)
synapse list
```
The display automatically updates when agent status changes (via file watcher) with a 10-second fallback polling interval.
### Display Columns
| Column | Description |
|--------|-------------|
| ID | Agent ID (e.g., `synapse-claude-8100`) |
| NAME | Custom name (if assigned) |
| TYPE | Agent type (claude, gemini, codex, etc.) |
| ROLE | Agent role description (if assigned) |
| STATUS | Current status (READY, WAITING, PROCESSING, DONE) |
| CURRENT | Current task preview |
| TRANSPORT | Communication transport indicator |
| WORKING_DIR | Current working directory |
| EDITING_FILE | File being edited (File Safety enabled only) |
**Customize columns** in `settings.json`:
```json
{
"list": {
"columns": ["ID", "NAME", "STATUS", "CURRENT", "TRANSPORT", "WORKING_DIR"]
}
}
```
### Status States
| Status | Color | Meaning |
|--------|-------|---------|
| **READY** | Green | Agent is idle, waiting for input |
| **WAITING** | Cyan | Agent is showing selection UI, waiting for user choice |
| **PROCESSING** | Yellow | Agent is actively working |
| **DONE** | Blue | Task completed (auto-transitions to READY after 10s) |
### Interactive Controls
| Key | Action |
|-----|--------|
| 1-9 | Select agent row (direct) |
| ↑/↓ | Navigate agent rows |
| **Enter** or **j** | Jump to selected agent's terminal |
| **k** | Kill selected agent (with confirmation) |
| **/** | Filter agents |
| text/markdown | Synapse A2A Team | null | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"fastapi>=0.100.0",
"uvicorn>=0.23.0",
"pyyaml>=6.0",
"requests>=2.31.0",
"httpx>=0.24.0",
"questionary>=2.0.0",
"rich>=13.0.0",
"simple-term-menu>=1.6.6",
"watchdog>=3.0.0",
"pyperclip>=1.8.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T01:01:49.788541 | synapse_a2a-0.6.5.tar.gz | 435,166 | 88/ef/469e4771b54b983b4d3e0bb47d50ae2b2366eb6e6fef2160530dacfa7d35/synapse_a2a-0.6.5.tar.gz | source | sdist | null | false | 3edaf4f8f96ab391cbdd7fbf9cc7c175 | 540f7548ac2e445a3b55a0bbc5a4294480f48588e48a4946678e0daf792eb195 | 88ef469e4771b54b983b4d3e0bb47d50ae2b2366eb6e6fef2160530dacfa7d35 | null | [] | 244 |
2.4 | virtualitics-cli | 1.54.1 | A command line interface for initializing, packaging, and deploying Custom Apps to the Virtualitics AI Platform from a local development environment. | # Virtualitics AI Platform CLI
A command line interface for initializing, packaging, and deploying Custom Apps to the Virtualitics AI Platform (VAIP) from a local development environment.
## Installation
```console
pip install virtualitics-cli
```
Requires Python >= 3.14.
## Quick Start
```console
vaip config # Configure connection to a VAIP instance
vaip init # Scaffold a new VAIP app
# ... write your app code ...
vaip build --yes # Build a wheel
vaip deploy # Deploy to the VAIP instance
vaip destroy --project-name=my_app --yes # Remove app from the instance
```
## Usage
```console
$ vaip [OPTIONS] COMMAND [ARGS]...
```
**Options**:
* `--version`
* `--verbose / --no-verbose`: [default: no-verbose]
* `--install-completion`: Install completion for the current shell.
* `--show-completion`: Show completion for the current shell, to copy it or customize the installation.
* `--help`: Show this message and exit.
## Commands
### `vaip config`
Create or update a configuration file for connecting to a VAIP instance.
Requires a friendly name, host URL, API token, and username. Supports multiple named contexts.
```console
$ vaip config [OPTIONS]
```
* `-N, --name TEXT`: Friendly name for the VAIP instance (e.g., `predict-dev`) [required]
* `-H, --host TEXT`: Backend hostname (e.g., `https://predict-api-dev.virtualitics.com`) [required]
* `-T, --token TEXT`: API token for authentication
* `-U, --username TEXT`: Username associated with API token
### `vaip use-context`
Switch the active context for deployment.
```console
$ vaip use-context CONTEXT_NAME
```
### `vaip show-context`
Display the current configuration file.
```console
$ vaip show-context
```
### `vaip delete-context`
Delete a specific context from the configuration file.
```console
$ vaip delete-context CONTEXT_NAME
```
### `vaip edit-context`
Modify a specific context in the configuration file.
```console
$ vaip edit-context CONTEXT_NAME
```
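The context commands above follow a common named-profile pattern. The sketch below shows one way such contexts could be stored and switched; the JSON layout is hypothetical, not the CLI's actual config format:

```python
import json
from pathlib import Path

def save_context(path, name, host, token=None, username=None):
    """Add or update a named context (hypothetical JSON layout)."""
    cfg = (json.loads(path.read_text()) if path.exists()
           else {"contexts": {}, "current": None})
    cfg["contexts"][name] = {"host": host, "token": token, "username": username}
    if cfg.get("current") is None:
        cfg["current"] = name  # first context becomes active
    path.write_text(json.dumps(cfg, indent=2))

def use_context(path, name):
    """Switch the active context, mirroring `vaip use-context`."""
    cfg = json.loads(path.read_text())
    if name not in cfg["contexts"]:
        raise KeyError(f"unknown context: {name}")
    cfg["current"] = name
    path.write_text(json.dumps(cfg, indent=2))
```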
### `vaip init`
Scaffold a new VAIP app structure with a `pyproject.toml` and package directory.
```console
$ vaip init [OPTIONS]
```
* `-n, --project-name TEXT`: Name for the VAIP App (no spaces, numbers, or special chars besides `_`) [required]
* `-v, --version TEXT`: Version for the VAIP App [required]
* `-d, --description TEXT`: Description for the VAIP App [required]
* `-a, --authors TEXT`: Authors for the VAIP App [required]
* `-l, --licenses TEXT`: License for the VAIP App [required]
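The `--project-name` rule (letters and underscores only) can be checked with a simple pattern; this is a sketch, not the CLI's actual validator:

```python
import re

def valid_project_name(name):
    """True when the name contains only letters and underscores,
    per the --project-name rule (no spaces, numbers, or other specials)."""
    return re.fullmatch(r"[A-Za-z_]+", name) is not None
```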
### `vaip build`
Build a Python wheel file from the `pyproject.toml` in the current directory.
```console
$ vaip build [OPTIONS]
```
* `-y, --yes`: Confirm the build [required]
### `vaip deploy`
Deploy the VAIP App wheel to the configured VAIP instance.
```console
$ vaip deploy [OPTIONS]
```
* `-f, --file TEXT`: Path to the wheel file (defaults to `./dist/*.whl`)
### `vaip destroy`
Delete a VAIP module and all its apps from the instance.
```console
$ vaip destroy [OPTIONS]
```
* `-n, --project-name TEXT`: Project name to delete [required]
* `-y, --yes`: Confirm deletion [required]
### `vaip publish`
Publish a VAIP App to other users in your organization. *(Not currently implemented.)*
```console
$ vaip publish
```
| text/markdown | null | Virtualitics Engineering <engineering@virtualitics.com> | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows :: Windows 10",
"Operating System :: POSIX",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <3.15,>=3.14 | [] | [] | [] | [
"art>=6.1",
"build>=1.2.1",
"requests>=2.31.0",
"typer>=0.19.2",
"ruff>=0.9.0; extra == \"dev\"",
"virtualitics-sdk>=1.26.0; extra == \"optional\"",
"pip-audit>=2.7.0; extra == \"test\"",
"pytest-cov>=5.0.0; extra == \"test\"",
"pytest>=8.1.1; extra == \"test\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"20.04","id":"focal","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T01:01:13.687505 | virtualitics_cli-1.54.1-py3-none-any.whl | 8,104 | b7/26/371b2ea8581f246fbb967b85fb68961188f7f673f8f5332a4414553bee2a/virtualitics_cli-1.54.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 1fc1255fdf7c6a0215bcae28fe08f521 | c2b9aac2083db8ef4f16b76029cf8a771f47a3792f202a7f87ac9e0449dc6430 | b726371b2ea8581f246fbb967b85fb68961188f7f673f8f5332a4414553bee2a | null | [] | 88 |
2.4 | denoise-audio | 0.1.2 | Denoise WAV audio using RNNoise, DeepFilterNet, and FacebookResearch Denoiser. | # denoise-audio
A CLI tool for denoising **WAV** audio with 3 leading backends:
- **rnnoise** (fast, CPU-only, low-latency)
- **deepfilternet** (high-quality full-band denoising)
- **fbdenoiser** (FacebookResearch Denoiser / causal Demucs; strong enhancement)
> Inputs/outputs are WAV files. If your audio is not WAV, convert it first.
---
## Install with `pip` (PyPI)
> Use this if you want to install and use the tool without cloning the repo.
> Requires **Python 3.10.x**.
### Install
```bash
pip install denoise-audio
```
### CLI usage (pip)
Global help:
```bash
python -m denoise --help
```
List available backends:
```bash
python -m denoise --list-models
```
Model-specific help:
```bash
python -m denoise --model rnnoise --help
python -m denoise --model deepfilternet --help
python -m denoise --model fbdenoiser --help
```
Basic command (all models):
```bash
python -m denoise --model <model> --input <in.wav> --output <out.wav>
```
- `<model>` is one of: `rnnoise`, `deepfilternet`, `fbdenoiser`
- `--input` and `--output` must be WAV files
### Recommended environment (Python 3.10)
A fresh virtual environment with **Python 3.10** is recommended to avoid dependency conflicts.
### Troubleshooting: `wheel` / `packaging` conflicts
Some environments may already have a newer `wheel` installed that requires `packaging>=24`, while `deepfilternet` requires `packaging<24`.
If you see an error mentioning a `wheel`/`packaging` conflict, use a clean venv (recommended) or pin compatible versions:
```bash
python -m pip install --upgrade "packaging>=23,<24" "wheel<0.46"
python -m pip install --upgrade --force-reinstall denoise-audio
```
---
## Python usage (import)
You can also use this package directly in your Python code after installing with `pip install denoise-audio`.
### Quick sanity check
```bash
python -c "import denoise; print(denoise.__version__)"
```
### List available models/backends
```python
from denoise import available_models, backend_help
print(available_models())
print(backend_help())
```
### Inspect supported keyword arguments for each model
```python
from denoise import model_kwargs_help
print(model_kwargs_help("rnnoise"))
print(model_kwargs_help("deepfilternet"))
print(model_kwargs_help("fbdenoiser"))
```
### Run denoising from a Python file
Create `run_denoise.py`:
```python
from denoise import denoise_file
IN_WAV = "input.wav"
OUT_WAV = "output.wav"
# Choose one: rnnoise | deepfilternet | fbdenoiser
MODEL = "rnnoise"
# Model-specific kwargs (examples below)
kwargs = {}
# Example: RNNoise
# kwargs = {"rnnoise_sample_rate": 48000}
# Example: DeepFilterNet
# kwargs = {"df_model": "DeepFilterNet3", "df_pf": True, "df_compensate_delay": True}
# Example: FacebookResearch Denoiser
# kwargs = {"fb_model": "dns64", "fb_device": "cpu", "fb_dry": 1.0}
for _ in denoise_file(IN_WAV, OUT_WAV, model=MODEL, **kwargs):
pass
print(f"Wrote: {OUT_WAV}")
```
Run it:
```bash
python run_denoise.py
```
---
## Install from GitHub (uv)
Install `uv` using Astral’s standalone installer:
https://docs.astral.sh/uv/getting-started/installation/#standalone-installer
Verify:
```bash
uv --version
```
---
## Install dependencies
```bash
git clone https://github.com/Surya-Rayala/denoise-audio.git
cd denoise-audio
uv sync
```
---
## CLI usage
Run everything with `uv run` to ensure you’re using the project environment:
### Global help
```bash
uv run python -m denoise --help
```
### List available backends
```bash
uv run python -m denoise --list-models
```
### Model-specific help
Pass `--model <name>` and `--help` to see only that backend’s options:
```bash
uv run python -m denoise --model rnnoise --help
uv run python -m denoise --model deepfilternet --help
uv run python -m denoise --model fbdenoiser --help
```
---
## Basic command (all models)
```bash
uv run python -m denoise --model <model> --input <in.wav> --output <out.wav>
```
- `<model>` is one of: `rnnoise`, `deepfilternet`, `fbdenoiser`
- `--input` and `--output` must be WAV files
---
## Models
### RNNoise (`--model rnnoise`)
**Basic command**
```bash
uv run python -m denoise --model rnnoise --input <in.wav> --output <out.wav>
```
**Arguments**
- `--rnnoise-sample-rate <int>`: Force the RNNoise wrapper sample rate. If omitted, the tool infers the sample rate from the input WAV (recommended).
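The inference mentioned above amounts to reading the sample rate from the WAV header. As a minimal illustration (not the tool's internal code), the standard-library `wave` module can do this, which is handy if you want to sanity-check a file before passing `--rnnoise-sample-rate` explicitly:

```python
import wave

def wav_sample_rate(path: str) -> int:
    """Read the sample rate directly from a WAV file's header."""
    with wave.open(path, "rb") as wav:
        return wav.getframerate()
```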
---
### DeepFilterNet (`--model deepfilternet`)
**Basic command**
```bash
uv run python -m denoise --model deepfilternet --input <in.wav> --output <out.wav>
```
**Arguments**
- `--df-model <name>`: Select which pretrained DeepFilterNet model to load. Common options: `DeepFilterNet`, `DeepFilterNet2`, `DeepFilterNet3`.
- To see available models, visit: https://github.com/Rikorose/DeepFilterNet/tree/main/models
- `--df-pf`: Enable the post-filter (can reduce residual noise; may sound more aggressive in very noisy sections).
- `--df-compensate-delay`: Add padding to compensate processing delay (useful when you need better alignment with the original audio).
---
### FacebookResearch Denoiser (`--model fbdenoiser`)
**Basic command**
```bash
uv run python -m denoise --model fbdenoiser --input <in.wav> --output <out.wav>
```
**Arguments**
- `--fb-model <name>`: Choose the pretrained model:
- `dns48`: pre-trained real time H=48 model trained on DNS
- `dns64`: pre-trained real time H=64 model trained on DNS
- `master64`: pre-trained real time H=64 model trained on DNS and Valentini
- `--fb-device <device>`: Inference device, e.g. `cpu` or `cuda`. If omitted, it automatically uses CUDA if available, otherwise CPU.
- `--fb-dry <float>`: Dry/wet mix (`0.0` = original input only, `1.0` = denoised output only). Values outside `[0.0, 1.0]` are clamped.
- `--fb-streaming`: Enable streaming mode (flag is accepted by the CLI).
- `--fb-batch-size <int>`: Batch size (flag is accepted by the CLI).
- `--fb-num-workers <int>`: Number of workers (flag is accepted by the CLI).
- `--fb-verbose`: Enable verbose logging (flag is accepted by the CLI).
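To make the `--fb-dry` convention concrete, here is an illustrative sketch of a clamped dry/wet blend (the actual mixing happens inside the denoiser backend; this function is only a model of the documented behavior, where `0.0` keeps the original and `1.0` keeps only the denoised signal):

```python
def mix_dry_wet(original, denoised, fb_dry: float):
    """Blend two equal-length sample sequences per the --fb-dry convention:
    0.0 -> original only, 1.0 -> denoised only; out-of-range values clamp."""
    d = min(max(fb_dry, 0.0), 1.0)
    return [(1.0 - d) * o + d * w for o, w in zip(original, denoised)]
```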
---
## Updating from Git
If you’re using the repo locally with `uv sync` + `uv run`, updating to new code from Git is:
```bash
cd denoise-audio
git pull
uv sync
```
### If you have local changes and `git pull` refuses
Use one of these (pick what you intend):
- Keep your local changes and rebase on top:
```bash
git pull --rebase
uv sync
```
- Discard local changes and reset to remote (⚠️ destructive):
```bash
git fetch origin
git reset --hard origin/main
uv sync
```
## License
This project's source code is licensed under the MIT License.
**Note on Dependencies:** This tool relies on the `denoiser` library (Facebook Research), which is licensed under **CC-BY-NC 4.0** (Non-Commercial). Consequently, this tool as a whole is suitable for research and personal use only, unless you obtain a commercial license for the `denoiser` dependency.
| text/markdown | null | Surya Chand Rayala <suryachand2k1@gmail.com> | null | null | MIT License Copyright (c) 2026 Surya Chand Rayala Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | null | [] | [] | null | null | ==3.10.* | [] | [] | [] | [
"deepfilternet>=0.5.6",
"denoiser>=0.1.5",
"pyrnnoise>=0.4.3",
"requests>=2.32.5",
"torch==2.0.1",
"torchaudio==2.0.2"
] | [] | [] | [] | [
"Repository, https://github.com/Surya-Rayala/denoise-audio.git"
] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T01:00:43.850057 | denoise_audio-0.1.2.tar.gz | 17,196 | 6a/4d/0af9247a9eeb8ce7315aa5e5d0b44bfe0556ee20bd69f6d84b7494134226/denoise_audio-0.1.2.tar.gz | source | sdist | null | false | 8c28badb371e4b10bb067496c212e832 | 43780e48842b08a56f383a9ebf8e5fcc80406f08f1788762d48ec5536a353c18 | 6a4d0af9247a9eeb8ce7315aa5e5d0b44bfe0556ee20bd69f6d84b7494134226 | null | [
"LICENSE"
] | 259 |
2.4 | cyclopts | 5.0.0a5 | Intuitive, easy CLIs based on type hints. | <div align="center">
<img src="https://raw.githubusercontent.com/BrianPugh/Cyclopts/main/assets/logo_512w.png">
</div>
<div align="center">

[](https://pypi.org/project/cyclopts/)
[](https://cyclopts.readthedocs.io)
[](https://codecov.io/gh/BrianPugh/cyclopts)
</div>
---
**Documentation:** https://cyclopts.readthedocs.io
**Source Code:** https://github.com/BrianPugh/cyclopts
---
Cyclopts is a modern, easy-to-use command-line interface (CLI) framework that aims to provide an intuitive & efficient developer experience.
# Why Cyclopts?
- **Intuitive API**: Quickly write CLI applications using a terse, intuitive syntax.
- **Advanced Type Hinting**: Full support of all builtin types and even user-specified (yes, including [Pydantic](https://docs.pydantic.dev/latest/), [Dataclasses](https://docs.python.org/3/library/dataclasses.html), and [Attrs](https://www.attrs.org/en/stable/api.html)).
- **Rich Help Generation**: Automatically generates beautiful help pages from **docstrings** and other contextual data.
- **Extendable**: Easily customize converters, validators, token parsing, and application launching.
# Installation
Cyclopts requires Python >=3.10; to install Cyclopts, run:
```console
pip install cyclopts
```
# Quick Start
- Import `cyclopts.run()` and give it a function to run.
```python
from cyclopts import run
def foo(loops: int):
for i in range(loops):
print(f"Looping! {i}")
run(foo)
```
Execute the script from the command line:
```console
$ python start.py 3
Looping! 0
Looping! 1
Looping! 2
```
When you need more control:
- Create an application using `cyclopts.App`.
- Register commands with the `command` decorator.
- Register a default function with the `default` decorator.
```python
from cyclopts import App
app = App()
@app.command
def foo(loops: int):
for i in range(loops):
print(f"Looping! {i}")
@app.default
def default_action():
print("Hello world! This runs when no command is specified.")
app()
```
Execute the script from the command line:
```console
$ python demo.py
Hello world! This runs when no command is specified.
$ python demo.py foo 3
Looping! 0
Looping! 1
Looping! 2
```
With just a few additional lines of code, we have a full-featured CLI app.
See [the docs](https://cyclopts.readthedocs.io) for more advanced usage.
# Compared to Typer
Cyclopts is what you thought Typer was.
Cyclopts includes information from docstrings, supports more complex types (even Unions and Literals!), and includes proper validation support.
See [the documentation for a complete Typer comparison](https://cyclopts.readthedocs.io/en/latest/vs_typer/README.html).
Consider the following short 29-line Cyclopts application:
```python
import cyclopts
from typing import Literal
app = cyclopts.App()
@app.command
def deploy(
env: Literal["dev", "staging", "prod"],
replicas: int | Literal["default", "performance"] = "default",
):
"""Deploy code to an environment.
Parameters
----------
env
Environment to deploy to.
replicas
Number of workers to spin up.
"""
if replicas == "default":
replicas = 10
elif replicas == "performance":
replicas = 20
print(f"Deploying to {env} with {replicas} replicas.")
if __name__ == "__main__":
app()
```
```console
$ my-script deploy --help
Usage: my-script.py deploy [ARGS] [OPTIONS]
Deploy code to an environment.
╭─ Parameters ────────────────────────────────────────────────────────────────────────────────────╮
│ * ENV --env Environment to deploy to. [choices: dev, staging, prod] [required] │
│ REPLICAS --replicas Number of workers to spin up. [choices: default, performance] [default: │
│ default] │
╰─────────────────────────────────────────────────────────────────────────────────────────────────╯
$ my-script deploy staging
Deploying to staging with 10 replicas.
$ my-script deploy staging 7
Deploying to staging with 7 replicas.
$ my-script deploy staging performance
Deploying to staging with 20 replicas.
$ my-script deploy nonexistent-env
╭─ Error ────────────────────────────────────────────────────────────────────────────────────────────╮
│ Error converting value "nonexistent-env" to typing.Literal['dev', 'staging', 'prod'] for "--env". │
╰────────────────────────────────────────────────────────────────────────────────────────────────────╯
$ my-script --version
0.0.0
```
In its current state, this application would be impossible to implement in Typer.
However, let's see how close we can get with Typer (47 lines):
```python
import typer
from typing import Annotated, Literal
from enum import Enum
app = typer.Typer()
class Environment(str, Enum):
dev = "dev"
staging = "staging"
prod = "prod"
def replica_parser(value: str):
if value == "default":
return 10
elif value == "performance":
return 20
else:
return int(value)
def _version_callback(value: bool):
if value:
print("0.0.0")
raise typer.Exit()
@app.callback()
def callback(
version: Annotated[
bool | None, typer.Option("--version", callback=_version_callback)
] = None,
):
pass
@app.command(help="Deploy code to an environment.")
def deploy(
env: Annotated[Environment, typer.Argument(help="Environment to deploy to.")],
replicas: Annotated[
int,
typer.Argument(
parser=replica_parser,
help="Number of workers to spin up.",
),
] = replica_parser("default"),
):
print(f"Deploying to {env.name} with {replicas} replicas.")
if __name__ == "__main__":
app()
```
```console
$ my-script deploy --help
Usage: my-script deploy [OPTIONS] ENV:{dev|staging|prod} [REPLICAS]
Deploy code to an environment.
╭─ Arguments ─────────────────────────────────────────────────────────────────────────────────────╮
│ * env ENV:{dev|staging|prod} Environment to deploy to. [default: None] [required] │
│ replicas [REPLICAS] Number of workers to spin up. [default: 10] │
╰─────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ───────────────────────────────────────────────────────────────────────────────────────╮
│ --help Show this message and exit. │
╰─────────────────────────────────────────────────────────────────────────────────────────────────╯
$ my-script deploy staging
Deploying to staging with 10 replicas.
$ my-script deploy staging 7
Deploying to staging with 7 replicas.
$ my-script deploy staging performance
Deploying to staging with 20 replicas.
$ my-script deploy nonexistent-env
Usage: my-script.py deploy [OPTIONS] ENV:{dev|staging|prod} [REPLICAS]
Try 'my-script.py deploy --help' for help.
╭─ Error ─────────────────────────────────────────────────────────────────────────────────────────╮
│ Invalid value for '[REPLICAS]': nonexistent-env │
╰─────────────────────────────────────────────────────────────────────────────────────────────────╯
$ my-script --version
0.0.0
```
The Typer implementation is 47 lines long, while the Cyclopts implementation is just 29 (38% shorter!).
Not only is the Cyclopts implementation significantly shorter, but the code is easier to read.
Since Typer does not support Unions, the choices for `replicas` could not be displayed on the help page.
Cyclopts is much more terse, much more readable, and much more intuitive to use.
| text/markdown | Brian Pugh | null | null | null | null | null | [
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"attrs>=23.1.0",
"docstring-parser<4.0,>=0.15",
"rich>=13.6.0",
"tomli>=2.0.0; python_version < \"3.11\"",
"typing-extensions>=4.8.0; python_version < \"3.11\"",
"ipdb>=0.13.9; extra == \"debug\"",
"line-profiler>=3.5.1; extra == \"debug\"",
"coverage[toml]>=5.1; extra == \"dev\"",
"mkdocs>=1.4.0; e... | [] | [] | [] | [
"Homepage, https://github.com/BrianPugh/cyclopts",
"Repository, https://github.com/BrianPugh/cyclopts"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T01:00:23.838392 | cyclopts-5.0.0a5.tar.gz | 167,366 | a0/e8/41e0e1e2c60defef32726c86ca65b103d435c6e77e21ab67e257e6304c00/cyclopts-5.0.0a5.tar.gz | source | sdist | null | false | 97efabd35e51532c04fe3a65328f7da2 | 438f8d013795a318dd618012d86babd38c294bd8ca4d6a8c04997b1818019cf7 | a0e841e0e1e2c60defef32726c86ca65b103d435c6e77e21ab67e257e6304c00 | Apache-2.0 | [
"LICENSE"
] | 5,771 |
2.4 | abi-core-ai | 1.6.3 | Agent-Based Infrastructure Core - Runtime and CLI | # ABI-Core 🤖
[](https://pypi.org/project/abi-core-ai/)
[](https://pypi.org/project/abi-core-ai/)
[](https://github.com/Joselo-zn/abi-core-ai/blob/main/LICENSE)
[](https://abi-core.readthedocs.io/en/latest/?badge=latest)
**ABI-Core-AI** — The foundation for building **Agent-Based Infrastructure (ABI)** — a new architectural paradigm where intelligent agents collaborate through semantic context, policy-driven governance, and modular orchestration.
**Agent-Based Infrastructure Core** — A comprehensive framework for building, deploying, and managing AI agent systems with semantic layers, orchestration, and security policies.
> 🎉 **v1.5.8 Released!** — Now with modular architecture, enhanced Open WebUI compatibility, and improved web interfaces.
---
## 🧭 Core Philosophy
ABI-Core is built on three fundamental principles:
1. **Semantic Interoperability** — Agents must share meaning, not just data.
2. **Distributed Intelligence** — No single model owns the truth; collaboration is the substrate.
3. **Governed Autonomy** — Security and compliance must evolve as fast as intelligence itself.
> ⚠️ **Beta Release**: This is a beta version. APIs may change and some features are experimental.
---
## 🚀 Quick Start
### Installation
```bash
pip install abi-core-ai
```
### Create Your First Project
```bash
# Create a new ABI project with semantic layer
abi-core create project my-ai-system --with-semantic-layer
# Navigate to your project
cd my-ai-system
# Provision models (automatically starts services and downloads models)
abi-core provision-models
# Create an agent
abi-core add agent my-agent --description "My first AI agent"
# Create an agent card for semantic discovery
abi-core add agent-card my-agent --description "General purpose AI assistant" --url http://localhost:8000
# Run your project
abi-core run
```
> 📖 **Need help?** Check out our [complete documentation](https://abi-core.readthedocs.io) with guides, examples, and API reference.
---
## 🆕 What's New in v1.2.0
### 🏗️ Modular Architecture
ABI-Core now uses a **modular monorepo structure** for better maintainability and community collaboration:
```
packages/
├── abi-core/ # Core libraries (common/, security/, opa/, abi_mcp/)
├── abi-agents/ # Agent implementations (orchestrator/, planner/)
├── abi-services/ # Services (semantic-layer/, guardian/)
├── abi-cli/ # CLI and scaffolding tools
└── abi-framework/ # Umbrella package with unified API
```
**Benefits:**
- ✅ **Backward Compatible** — All existing imports continue to work
- ✅ **Modular Development** — Each package can be developed independently
- ✅ **Community Friendly** — Easier to contribute to specific components
- ✅ **Deployment Flexibility** — Deploy only the components you need
### 🌐 Enhanced Open WebUI Compatibility
- ✅ **Fixed Connection Issues** — Resolved `Unclosed client session` errors
- ✅ **Improved Streaming** — Better real-time response handling
- ✅ **Proper Headers** — Correct CORS and connection management
- ✅ **Template Consistency** — Synchronized web interfaces across all agents
## 🔧 Model Serving Options
ABI-Core supports two model serving strategies for Ollama:
### Centralized (Recommended for Production)
A single shared Ollama service serves all agents:
- ✅ **Lower resource usage** — One Ollama instance for all agents
- ✅ **Easier model management** — Centralized model updates
- ✅ **Faster agent startup** — No need to start individual Ollama instances
- ✅ **Centralized caching** — Shared model cache across agents
```bash
abi-core create project my-app --model-serving centralized
```
### Distributed (Default)
Each agent has its own Ollama instance:
- ✅ **Complete isolation** — Each agent has independent models
- ✅ **Independent versions** — Different model versions per agent
- ✅ **Development friendly** — Easy to test different configurations
- ⚠️ **Higher resource usage** — Multiple Ollama instances
```bash
abi-core create project my-app --model-serving distributed
# or simply (distributed is default)
abi-core create project my-app
```
**Note:** Guardian service always maintains its own Ollama instance for security isolation, regardless of the chosen mode.
---
## 🎯 What is ABI-Core?
ABI-Core-AI is a production-ready framework for building **Agent-Based Infrastructure** systems that combine:
- **🤖 AI Agents** — LangChain-powered agents with A2A (Agent-to-Agent) communication
- **🧠 Semantic Layer** — Vector embeddings and distributed knowledge management
- **🔒 Security** — OPA-based policy enforcement and access control
- **🌐 Web Interfaces** — FastAPI-based REST APIs and real-time dashboards
- **📦 Containerization** — Docker-ready deployments with orchestration
---
## 🏗️ Architecture
```
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ AI Agents │◄──►│ Semantic Layer │◄──►│ Guardian │
│ │ │ │ │ Security │
│ • LangChain │ │ • Vector DB │ │ • OPA Policies │
│ • A2A Protocol │ │ • Embeddings │ │ • Access Control│
│ • Custom Logic │ │ • Knowledge │ │ • Monitoring │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│ │ │
└───────────────────────┼───────────────────────┘
│
┌─────────────────┐
│ Web Interface │
│ │
│ • FastAPI │
│ • Real-time UI │
│ • Monitoring │
└─────────────────┘
```
---
## 📋 Features
### 🤖 Agent System
- **Multi-Agent Architecture** — Create specialized agents for different tasks
- **A2A Communication** — Agents can communicate and collaborate with automatic security validation
- **LangChain Integration** — Leverage the full LangChain ecosystem
- **Custom Tools** — Extend agents with domain-specific capabilities
- **Workflow System** — LangGraph-based workflow orchestration with built-in A2A validation
- **Centralized Config** — All agents have config/ directory for type-safe configuration
### 🧠 Semantic Layer
- **Agent Discovery** — MCP-based agent finding and routing
- **Vector Storage** — Weaviate-based semantic search (automatically configured)
- **Agent Cards** — Structured agent metadata and capabilities
- **Access Validation** — OPA-integrated security for semantic access with user validation
- **Embedding Mesh** — Distributed embedding computation and caching
- **Context Awareness** — Agents understand semantic relationships
- **Auto-Configuration** — Weaviate vector database included automatically
- **MCP Toolkit** — Dynamic access to custom MCP tools with pythonic syntax
### 🔒 Security & Governance
- **Policy Engine** — Open Policy Agent (OPA) integration
- **Access Control** — Fine-grained permissions and roles
- **A2A Validation** — Agent-to-Agent communication security with automatic validation
- **User Validation** — User-level access control for semantic layer
- **Audit Logging** — Complete activity tracking with user and agent context
- **Compliance** — Built-in security best practices
- **Centralized Configuration** — Type-safe config management for all services
### 🌐 Web & APIs
- **REST APIs** — FastAPI-based service endpoints
- **Real-time Updates** — WebSocket support for live data
- **Admin Dashboard** — Monitor and manage your agent system
- **Custom UIs** — Build domain-specific interfaces
---
## 🛠️ CLI Commands
### Project Management
```bash
# Create new projects with optional services and model serving strategy
abi-core create project <name> [--domain <domain>] [--with-semantic-layer] [--with-guardian] [--model-serving centralized|distributed]
abi-core provision-models # Download and configure LLM models (auto-starts services)
abi-core status # Check project status
abi-core run # Start all services
abi-core info # Show project information
```
### Agent Development
```bash
# Create and manage agents
abi-core add agent <name> [--description <desc>] [--model <model>] [--with-web-interface]
abi-core remove agent <name> # Remove an agent
abi-core info agents # List all agents
```
### Services Management
```bash
# Add services to existing projects
abi-core add service semantic-layer [--name <name>] [--domain <domain>]
abi-core add service guardian [--name <name>] [--domain <domain>]
abi-core add service guardian-native [--name <name>] [--domain <domain>]
# Quick service shortcuts
abi-core add semantic-layer [--domain <domain>] # Add semantic layer directly
abi-core remove service <name> # Remove any service
```
### Agent Cards & Semantic Layer
```bash
# Manage agent cards for semantic discovery
abi-core add agent-card <name> [--description <desc>] [--model <model>] [--url <url>] [--tasks <tasks>]
abi-core add policies <name> [--domain <domain>] # Add security policies
```
### Examples
```bash
# Create a finance project with centralized model serving (recommended for production)
abi-core create project fintech-ai --domain finance --with-semantic-layer --with-guardian --model-serving centralized
cd fintech-ai
# Provision models (starts Ollama and downloads qwen2.5:3b + embeddings)
abi-core provision-models
# Add a specialized trading agent (automatically uses centralized Ollama)
abi-core add agent trader --description "AI trading assistant" --model qwen2.5:3b
# Create agent card for semantic discovery
abi-core add agent-card trader --description "Execute trading operations" --url http://localhost:8001 --tasks "trade,analyze,risk-assessment"
# Add semantic layer to existing project (Weaviate included automatically)
abi-core add semantic-layer --domain finance
# Create a development project with distributed model serving (each agent has own Ollama)
abi-core create project dev-project --model-serving distributed
cd dev-project
# Provision models (starts all agents with their Ollama instances + main Ollama for embeddings)
abi-core provision-models
# Remove components when needed
abi-core remove service semantic_layer
abi-core remove agent trader
```
---
## 📁 Project Structure
When you create a new project, you get:
```
my-project/
├── agents/ # Your AI agents
│ └── my-agent/
│ ├── config/ # Centralized configuration (NEW)
│ │ ├── __init__.py
│ │ └── config.py # Type-safe config with A2A settings
│ ├── agent.py # Agent implementation
│ ├── main.py # Entry point
│ ├── models.py # Data models
│ └── agent_cards/ # Agent cards for semantic discovery
├── services/ # Supporting services
│ ├── web_api/ # Main web application
│ │ ├── config/ # Application configuration
│ │ ├── main.py # FastAPI application
│ │ ├── Dockerfile # Container configuration
│ │ └── requirements.txt
│ ├── semantic_layer/ # AI agent discovery & routing
│ │ ├── config/ # Semantic layer configuration (NEW)
│ │ └── layer/
│ │ ├── mcp_server/ # MCP server for agent communication
│ │ └── embedding_mesh/ # Vector embeddings & search
│ └── guardian/ # Security & policy enforcement
│ ├── config/ # Guardian configuration (NEW)
│ ├── agent/ # Guardian agent code
│ └── opa/ # OPA policies
│ └── policies/
│ ├── semantic_access.rego
│ └── a2a_access.rego # A2A validation policy (NEW)
├── compose.yaml # Container orchestration
├── .abi/ # ABI project metadata
│ └── runtime.yaml
└── README.md # Project documentation
```
---
## 🔒 Security Features
### A2A Validation (Agent-to-Agent)
Automatic security validation for all agent communications:
```python
from config import AGENT_CARD
from abi_core.common.workflow import WorkflowGraph
# Create workflow
workflow = WorkflowGraph()
# ... add nodes ...
# Set source card for automatic A2A validation
workflow.set_source_card(AGENT_CARD)
# All communications are now automatically validated!
async for chunk in workflow.run_workflow():
process(chunk)
```
**Features:**
- ✅ Automatic validation before each communication
- ✅ OPA policy-based access control
- ✅ Three modes: strict (production), permissive (dev), disabled (testing)
- ✅ Complete audit logging
- ✅ Configurable communication rules
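The three modes can be pictured with a small decision sketch. This is not ABI-Core's internal implementation, just an illustration of the documented semantics (`strict` enforces the policy verdict, `permissive` lets calls through while denials would only be logged, `disabled` skips validation):

```python
def a2a_gate(mode: str, policy_allows: bool) -> bool:
    """Illustrative A2A decision: should a message be allowed through?"""
    if mode == "disabled":
        return True          # validation skipped entirely (testing)
    if mode == "permissive":
        return True          # allowed; a denial would only be logged (dev)
    return policy_allows     # strict: only policy-approved calls (production)
```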
### User Validation
User-level access control for semantic layer operations:
```python
from abi_core.security.agent_auth import with_agent_context
context = with_agent_context(
agent_id="my-agent",
tool_name="find_agent",
mcp_method="callTool",
user_email="user@example.com", # User validation
query="search query"
)
```
**Configuration:**
```bash
# Environment variables
A2A_VALIDATION_MODE=strict # strict, permissive, or disabled
A2A_ENABLE_AUDIT_LOG=true
GUARDIAN_URL=http://guardian:8383
```
---
## 🔧 Configuration
ABI-Core uses environment variables and YAML configuration files:
```yaml
# .abi/runtime.yaml
agents:
my-agent:
model: "qwen2.5:3b"
port: 8000
semantic_layer:
provider: "weaviate"
host: "localhost:8080"
security:
opa_enabled: true
policies_path: "./policies"
```
---
## 🚀 Deployment
### Docker (Recommended)
```bash
docker-compose up --build
docker-compose up --scale my-agent=3
```
### Kubernetes
```bash
abi-core deploy kubernetes
kubectl apply -f ./k8s/
```
---
## 🧪 Examples
### Simple Agent
```python
from abi_core.agent.agent import AbiAgent
from abi_core.common.utils import abi_logging
class MyAgent(AbiAgent):
def __init__(self):
super().__init__(
agent_name='my-agent',
description='A helpful AI assistant'
)
async def stream(self, query: str, context_id: str, task_id: str):
abi_logging(f"Processing: {query}")
response = await self.llm.ainvoke(query)
yield {
'content': response.content,
'response_type': 'text',
'is_task_completed': True
}
```
### Agent Communication
```python
await self.send_message(
target_agent="agent-b",
message="Process this data",
data={"items": [1, 2, 3]}
)
```
---
## 📚 Documentation
**📖 Full Documentation:** [https://abi-core.readthedocs.io](https://abi-core.readthedocs.io)
- **[Getting Started](https://abi-core.readthedocs.io/en/latest/getting-started/installation.html)** - Installation and quick start
- **[Quick Start Guide](https://abi-core.readthedocs.io/en/latest/getting-started/quickstart.html)** - Get running in 5 minutes
- **[Models Guide](https://abi-core.readthedocs.io/en/latest/user-guide/models.html)** - Model selection and provisioning
- **[FAQ](https://abi-core.readthedocs.io/en/latest/faq.html)** - Frequently asked questions
- **[Architecture](https://abi-core.readthedocs.io/en/latest/architecture.html)** - System design and concepts
---
## 🤝 Contributing
We welcome contributions! This is a beta release, so your feedback is especially valuable.
### Development Setup
```bash
git clone https://github.com/Joselo-zn/abi-core
cd abi-core-ai
uv sync --dev
```
### Running Tests
```bash
uv run pytest
```
---
## 📄 License
Apache 2.0 License — see [LICENSE](LICENSE) for details.
---
## 🆘 Support
- **Issues** — [GitHub Issues](https://github.com/Joselo-zn/abi-core/issues)
- **Discussions** — [GitHub Discussions](https://github.com/Joselo-zn/abi-core/discussions)
- **Email** — jl.mrtz@gmail.com
---
## 🗺️ Roadmap
| Milestone | Description | Status |
|------------|--------------|--------|
| v0.2.0 | Enhanced agent orchestration | 🔜 In progress |
| v0.3.0 | Advanced semantic search | 🧠 Planned |
| v0.4.0 | Multi-cloud deployment | 🧩 Planned |
| v1.0.0 | Production-ready stable release | 🏁 Target Q3 2026 |
---
**Built with ❤️ by [José Luis Martínez](https://github.com/Joselo-zn)**
Creator of **ABI (Agent-Based Infrastructure)** — redefining how intelligent systems interconnect.
✨ From Curiosity to Creation: A Personal Note
I first saw a computer in 1995. My dad had received a Windows 3.11 machine as payment for a job. I was fascinated.
At the time, I wanted to study robotics — but when I touched that machine, everything changed.
I didn't understand what the Internet was, and I had no idea where to go… but even in that confusion, I felt something big.
When I wrote my first Visual C++ program in 1999, I felt like a hacker. When I built my first web page, full of GIFs, I was flying.
Nobody taught me. I just read manuals. And now, years later, that journey continues — not just as a coder, but as the creator of ABI.
This is for the kids like me, then and now.
| text/markdown | null | José Luis Martínez Abundiz <jl.mrtz@gmail.com> | null | null | null | ai, agents, infrastructure, semantic, security | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Artificial I... | [] | null | null | >=3.11 | [] | [] | [] | [
"click>=8.0.0",
"rich>=13.0.0",
"jinja2>=3.1.0",
"pyyaml>=6.0.0",
"pydantic>=2.0.0",
"langchain>=0.1.0",
"langchain-core>=0.2.20",
"langchain-ollama>=0.1.0",
"langchain-community>=0.1.0",
"langgraph>=0.1.0",
"fastapi>=0.100.0",
"uvicorn>=0.20.0",
"starlette>=0.27.0",
"requests>=2.31.0",
... | [] | [] | [] | [
"Homepage, https://github.com/Joselo-zn/abi-core",
"Documentation, https://abi-core.readthedocs.io",
"Repository, https://github.com/Joselo-zn/abi-core",
"Issues, https://github.com/Joselo-zn/abi-core/issues",
"Changelog, https://github.com/Joselo-zn/abi-core/blob/main/CHANGELOG.md"
] | uv/0.9.2 | 2026-02-20T01:00:23.503486 | abi_core_ai-1.6.3.tar.gz | 226,069 | 1e/76/8b1aa256690bfc9bfe1125849ebfe995f21f8dd227088e68b8789a960ca9/abi_core_ai-1.6.3.tar.gz | source | sdist | null | false | 98f15aeced17d2d8572f8b8e2ca645ae | c7a1f39c067cb4dc81f931b4aae28461c412004d1a463e0fbbeaa4f46e85165f | 1e768b1aa256690bfc9bfe1125849ebfe995f21f8dd227088e68b8789a960ca9 | Apache-2.0 | [
"LICENSE"
] | 268 |
2.4 | channelexplorer | 0.2.0 | Neural network activation channel explorer for TensorFlow and PyTorch | # ChannelExplorer
A visual analytics tool for exploring neural network activation channels.
Supports both **TensorFlow / Keras** and **PyTorch** models.
ChannelExplorer extracts per-channel activation summaries from convolutional
and dense layers, then presents them through coordinated views — heatmaps,
dimensionality-reduced embeddings, clustering, and overlay visualizations — so
you can quickly identify patterns, outliers, and redundancies across classes.
## Features
- **Model Graph View** — Interactive, layered visualization of the network architecture.
- **Activation Heatmaps** — Per-channel activation magnitudes across all images.
- **Embedding Projections** — MDS, t-SNE, UMAP, PCA, or autoencoder projections to reveal class separability at each layer.
- **Activation Overlays** — Superimpose channel activations onto original inputs.
- **Clustering & Outlier Detection** — X-Means / K-Means clustering with automatic outlier flagging.
- **Pluggable Summary Functions** — L2 norm, percentile, Otsu threshold, and more — or bring your own.
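A summary function conceptually reduces one channel's activation map to a single number. A minimal pure-Python sketch of an L2-norm summary (the name `summary_l2` and the 2-D-list input are illustrative assumptions here, not the library's API; the real entry points are functions like `metrics.summary_fn_image_l2`):

```python
import math

def summary_l2(channel):
    """Reduce one channel's 2-D activation map to a scalar (L2 norm).

    `channel` is assumed to be a 2-D sequence of floats; the library
    passes its own array type to summary functions.
    """
    return math.sqrt(sum(v * v for row in channel for v in row))

print(summary_l2([[3.0, 0.0], [0.0, 4.0]]))  # 5.0
```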
## Demo Quickstart (InceptionV3 + Imagenette)
Run the demo via Docker:
```bash
docker run -p 8000:8000/tcp channelexplorer
```
Then open <http://localhost:8000> in your browser.
## Installation
Available on PyPI. **Requires Python >= 3.12**.
```bash
# TensorFlow support (quotes keep shells like zsh from globbing the extras)
pip install "channelexplorer[tf]"

# PyTorch support
pip install "channelexplorer[torch]"

# Both
pip install "channelexplorer[all]"
```
### Redis
A running Redis server is used for caching analysis results.
```bash
# Install redis
sudo apt install redis-server # Debian/Ubuntu
# sudo pacman -S redis # Arch
# Run redis
redis-server --daemonize yes
# sudo systemctl start redis
```
You can also use the official [Redis Docker image](https://hub.docker.com/_/redis).
To point at a non-default Redis instance, set these environment variables:
| Variable | Default |
| --- | --- |
| `REDIS_HOST` | `localhost` |
| `REDIS_PORT` | `6379` |
| `REDIS_DB` | `0` |
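For reference, the defaults in the table correspond to reading the environment like this (a sketch of typical usage, not the library's actual code):

```python
import os

# Defaults mirror the table above
REDIS_HOST = os.environ.get("REDIS_HOST", "localhost")
REDIS_PORT = int(os.environ.get("REDIS_PORT", "6379"))
REDIS_DB = int(os.environ.get("REDIS_DB", "0"))

print(REDIS_HOST, REDIS_PORT, REDIS_DB)
```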
## Usage
### TensorFlow
```python
from channelexplorer import ChannelExplorer_TF, metrics
import tensorflow as tf
import tensorflow_datasets as tfds
import numpy as np
from nltk.corpus import wordnet as wn
model = tf.keras.applications.vgg16.VGG16(weights="imagenet")
model.compile(loss="categorical_crossentropy", optimizer="adam")

ds, info = tfds.load(
    "imagenette/320px-v2",
    shuffle_files=False,
    with_info=True,
    as_supervised=True,
    batch_size=None,
)

labels = list(
    map(
        lambda l: wn.synset_from_pos_and_offset(l[0], int(l[1:])).name(),
        info.features["label"].names,
    )
)

dataset = ds["train"]
vgg16_input_shape = tf.keras.applications.vgg16.VGG16().input.shape[1:3].as_list()

@tf.function
def preprocess(x, y):
    x = tf.image.resize(x, vgg16_input_shape, method=tf.image.ResizeMethod.BILINEAR)
    x = tf.keras.applications.vgg16.preprocess_input(x)
    return x, y

def preprocess_inv(x, y):
    # Undo VGG16 preprocessing: add back the BGR channel means, then reorder to RGB
    x = x.squeeze(0)
    x[:, :, 0] += 103.939
    x[:, :, 1] += 116.779
    x[:, :, 2] += 123.68
    x = x[:, :, ::-1]
    x = np.clip(x, 0, 255).astype("uint8")
    return x, y

server = ChannelExplorer_TF(
    model=model,
    dataset=dataset,
    label_names=labels,
    preprocess=preprocess,
    preprocess_inverse=preprocess_inv,
    summary_fn_image=metrics.summary_fn_image_l2,
    log_level="info",
)

server.run(host="localhost", port=8000)
```
### PyTorch
```python
from channelexplorer import APAnalysisTorchModel
import torchvision.models as models
import torchvision.datasets as datasets
import torchvision.transforms as transforms
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # MNIST is single-channel; VGG16 expects 3 channels
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

dataset = datasets.MNIST("./data", train=True, download=True, transform=transform)

server = APAnalysisTorchModel(
    model=model,
    input_shape=(1, 3, 224, 224),
    dataset=dataset,
    label_names=[str(i) for i in range(10)],
    log_level="info",
)

server.run(host="localhost", port=8000)
```
Once the server is running, open <http://localhost:8000> (or use the
standalone frontend in development mode — see below).
## Development
This project uses [uv](https://docs.astral.sh/uv/) for Python dependency
management and [pnpm](https://pnpm.io/) for the Next.js frontend.
```bash
# Clone the repo
git clone https://github.com/rahatzamancse/APalysis.git
cd APalysis
# Install Python deps with TF extras
uv sync --extra tf
# Run the TF example
uv run --extra tf examples/run_tf.py --host localhost --port 8000
# Run the PyTorch example
uv run --extra torch examples/run_torch.py
```
### Frontend (Next.js)
```bash
cd frontend
pnpm install
pnpm dev # starts on http://localhost:3000
```
## Project Structure
```
├── src/channelexplorer/ # Python library
│ ├── server.py # Base FastAPI server
│ ├── metrics.py # Activation summary functions
│ ├── types.py # Shared type aliases
│ ├── utils.py # Graph layout & image utilities
│ ├── redis_cache.py # Redis caching
│ ├── channelexplorer_tf/ # TensorFlow backend
│ └── channelexplorer_torch/ # PyTorch backend
├── frontend/ # Next.js frontend
├── examples/ # Ready-to-run example scripts
├── Dockerfile # Production Docker image
└── pyproject.toml
```
| text/markdown | null | Rahat Zaman <rahatzamancse@gmail.com> | null | null | null | null | [] | [] | null | null | <3.13,>=3.12 | [] | [] | [] | [
"fastapi>=0.128.0",
"grandalf>=0.8",
"networkx>=3.4.2",
"nptyping>=2.5.0",
"pillow>=12.1.0",
"pyclustering>=0.10.1.2",
"pydot>=4.0.1",
"redis>=7.1.0",
"scikit-learn>=1.7.2",
"scipy>=1.14.0",
"tqdm>=4.67.1",
"umap-learn>=0.5.9.post2",
"uvicorn>=0.40.0",
"keract>=4.5.2; extra == \"all\"",
... | [] | [] | [] | [] | uv/0.10.3 {"installer":{"name":"uv","version":"0.10.3","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"CachyOS Linux","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T01:00:00.451744 | channelexplorer-0.2.0.tar.gz | 85,088,385 | 74/c1/be2d33611bd582aaaf6178a45be848f9a8f39135533c362b318c45225887/channelexplorer-0.2.0.tar.gz | source | sdist | null | false | e82bf00a67e16b0095bc094752288898 | ad4987929e2ba71275afc734086d7b9c39820ec04779eac06c592296c42d0ee6 | 74c1be2d33611bd582aaaf6178a45be848f9a8f39135533c362b318c45225887 | null | [] | 239 |
2.4 | rastr | 0.12.0 | Geospatial Raster datatype library for Python. | <h1 align="center">
<img src="https://raw.githubusercontent.com/tonkintaylor/rastr/refs/heads/develop/docs/logo.svg"><br>
</h1>
# rastr
[](<https://pypi.python.org/pypi/rastr>)
[](https://pypi.python.org/pypi/rastr)

A lightweight geospatial raster datatype library for Python focused on simplicity.
For more details, read the documentation: <https://rastr.readthedocs.io/en/stable/>.
## Overview
`rastr` provides an intuitive interface for creating, reading, manipulating, and exporting geospatial raster data in Python.
### Features
- 🧮 **Complete raster arithmetic**: Full support for mathematical operations (`+`, `-`, `*`, `/`) between rasters and scalars.
- 📊 **Flexible visualization**: Built-in plotting with matplotlib and interactive mapping with folium.
- 🗺️ **Geospatial analysis tools**: Contour generation, Gaussian blurring, and spatial sampling.
- 🛠️ **Data manipulation**: Fill NaN values, extrapolate missing data, and resample to different resolutions.
- 🔗 **Seamless integration**: Works with GeoPandas, rasterio, and the broader Python geospatial ecosystem.
- ↔️ **Vector-to-raster workflows**: Convert GeoDataFrame polygons, points, and lines to raster format.
## Installation
<!--pytest.mark.skip-->
```bash
# With uv
uv add rastr
# With pip
pip install rastr
```
## Quick Start
```python
from rastr import Raster

# Create an example raster
raster = Raster.example()

# Write to and read from a file
raster.to_file("raster.tif")
raster = Raster.read_file("raster.tif")

# Basic arithmetic operations
doubled = raster * 2
summed = raster + 10
combined = raster + doubled

# Visualize the data
ax = raster.plot(cbar_label="Values")

# Interactive web mapping (requires folium)
m = raster.explore(opacity=0.8, colormap="plasma")

# Sample values at specific coordinates
xy_points = [(100.0, 200.0), (150.0, 250.0)]
values = raster.sample(xy_points)

# Generate contour lines
contours = raster.contour(levels=[0.1, 0.5, 0.9], smoothing=True)

# Apply spatial operations
blurred = raster.blur(sigma=2.0)  # Gaussian blur
filled = raster.extrapolate(method="nearest")  # Fill NaN values via nearest-neighbours
resampled = raster.resample(cell_size=0.5)  # Change resolution

# Export to file
raster.to_file("output.tif")

# Convert to GeoDataFrame for vector analysis
gdf = raster.as_geodataframe(name="elevation")
```
## Quick Reference
```python
from rastr import Raster
```
### Data access
- [`Raster.bbox`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.bbox) - bounding box polygon.
- [`Raster.bounds`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.bounds) - bounding box as `(xmin, ymin, xmax, ymax)`.
- [`Raster.cell_size`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.cell_size) - cell size.
- [`Raster.crs`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.crs) - coordinate reference system.
- [`Raster.shape`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.shape) - raster shape (rows, columns).
- [`Raster.transform`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.transform) - affine transform.
- [`Raster.sample(xy)`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.sample) - sample raster values at given coordinates.
### I/O
- [`Raster.read_file(path)`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.read_file) - read raster from file.
- [`Raster.to_file(path)`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.to_file) - write raster to file.
- [`Raster.to_clipboard()`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.to_clipboard) - copy raster data to clipboard in a tabular format.
### Geometric Operations
- [`Raster.crop(bounds)`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.crop) - remove cells outside given bounds.
- [`Raster.pad(width)`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.pad) - add NaN border around raster.
- [`Raster.resample(cell_size)`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.resample) - resample raster to a new cell size.
- [`Raster.taper_border(width)`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.taper_border) - gradually reduce values to zero at the border.
- [`Raster.gdf()`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.gdf) - vectorize to a GeoDataFrame of cell polygons and values.
### NaN Management and Value Replacements
- [`Raster.clip(polygon)`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.clip) - replace values outside a polygon with NaN.
- [`Raster.extrapolate()`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.extrapolate) - fill NaN values via nearest-neighbours.
- [`Raster.fillna(value)`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.fillna) - fill NaN values with a specified value.
- [`Raster.replace(to_replace, value)`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.replace) - replace specific cell values.
- [`Raster.replace_polygon(polygon, value)`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.replace_polygon) - replace cell values within a polygon.
- [`Raster.trim_nan()`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.trim_nan) - remove border rows/columns that are entirely NaN.
### Image Processing
- [`Raster.blur(radius)`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.blur) - apply Gaussian blur.
- [`Raster.dilate(radius)`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.dilate) - apply morphological dilation.
- [`Raster.sobel()`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.sobel) - apply Sobel filter (edge detection/gradient).
### Visualization
- [`Raster.explore()`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.explore) - interactive web map visualization with folium.
- [`Raster.plot()`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.plot) - matplotlib static plot with colorbar.
- [`Raster.contour(levels)`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.contour) - get a GeoDataFrame of contour lines.
### Cell-wise Operations
- [`Raster.apply(func)`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.apply) - apply a function to cell values.
- [`Raster.abs()`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.abs) - absolute value of cell values.
- [`Raster.clamp()`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.clamp) - clip cell values to an `(a_min, a_max)` range.
- [`Raster.exp()`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.exp) - exponential of cell values.
- [`Raster.log()`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.log) - logarithm of cell values.
- [`Raster.max()`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.max) - maximum of cell values.
- [`Raster.mean()`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.mean) - mean of cell values.
- [`Raster.median()`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.median) - median of cell values.
- [`Raster.min()`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.min) - minimum of cell values.
- [`Raster.normalize()`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.normalize) - normalize cell values to [0, 1].
- [`Raster.quantile(q)`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.quantile) - quantile of cell values.
- [`Raster.std()`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.std) - standard deviation of cell values.
- [`Raster.sum()`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.sum) - sum of cell values.
- [`Raster.unique()`](https://rastr.readthedocs.io/en/stable/autoapi/rastr/raster/#rastr.raster.Raster.unique) - array of unique cell values.
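The `normalize()` entry above implies a min-max rescale to [0, 1]. As a plain-Python illustration of that arithmetic (a presumed reading of the description, not `rastr`'s actual implementation, which operates on raster cell arrays):

```python
def minmax_normalize(values):
    # Presumed semantics of Raster.normalize(): rescale values to [0, 1]
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(minmax_normalize([2.0, 4.0, 6.0]))  # [0.0, 0.5, 1.0]
```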
## Limitations
Current version limitations:
- Only single-band rasters are supported.
- In-memory processing only (streaming support planned).
- Square cells only (rectangular cell support planned).
- Only float dtypes (integer support planned).
## Similar Projects
- [rasters](https://github.com/python-rasters/rasters) is a project with similar goals of providing a dedicated raster datatype in Python with higher-level interfaces for GIS operations. Unlike `rastr`, it has support for multi-band rasters, and has some more advanced functionality for Earth Science applications. Both projects are relatively new and under active development.
- [rasterio](https://rasterio.readthedocs.io/) is a core dependency of `rastr` and provides low-level raster I/O and processing capabilities.
- [rioxarray](https://corteva.github.io/rioxarray/stable/getting_started/getting_started.html) extends [`xarray`](https://docs.xarray.dev/en/stable/index.html) for raster data with geospatial support via `rasterio`.
## Contributing
[](https://github.com/tonkintaylor/rastr/releases)
See the
[CONTRIBUTING.md](https://github.com/usethis-python/usethis-python/blob/main/CONTRIBUTING.md)
file.
| text/markdown | null | Tonkin & Taylor Limited <Sub-DisciplineData+AnalyticsStaff@tonkintaylor.co.nz>, Nathan McDougall <nmcdougall@tonkintaylor.co.nz>, Ben Karl <bkarl@tonkintaylor.co.nz> | null | null | null | null | [
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"affine>=2.4.0",
"branca>=0.8.1",
"folium>=0.20.0",
"geopandas>=1.1.1",
"matplotlib>=3.10.5",
"numpy>=2.2.6",
"pandas>=2.3.1",
"pydantic>=2.11.7",
"pyproj>=3.7.1",
"rasterio>=1.4.3",
"scikit-image>=0.25.2",
"scipy>=1.15.3",
"shapely>=2.1.1",
"tqdm>=4.67.1",
"typing-extensions>=4.14.1"
] | [] | [] | [] | [
"Source Code, https://github.com/tonkintaylor/rastr",
"Bug Tracker, https://github.com/tonkintaylor/rastr/issues",
"Releases, https://github.com/tonkintaylor/rastr/releases",
"Source Archive, https://github.com/tonkintaylor/rastr/archive/c86a7c6f0fce142e74fcaba2934bdcca9c1ccd2d.zip"
] | uv/0.7.13 | 2026-02-20T00:58:12.575036 | rastr-0.12.0.tar.gz | 404,422 | 0f/6f/c84764573cd8b1e36ed95ab5a2d4799bedeea5b095b19316eee35986df89/rastr-0.12.0.tar.gz | source | sdist | null | false | 7038e513ebb94b28c43f42167a510c6c | bc49dc9c4d228f2fff558aeaffbfb5974f5df6ae83f6fddd7627c70474c66aa5 | 0f6fc84764573cd8b1e36ed95ab5a2d4799bedeea5b095b19316eee35986df89 | MIT | [
"LICENSE"
] | 248 |
2.4 | AndroidManifestExplorer | 1.0.0 | A professional tool to automate attack surface detection in Android applications by parsing Manifest files. | # **📲 AndroidManifestExplorer**
A high-performance static analysis utility designed to automate the discovery of attack surfaces in Android applications. By parsing decompiled `AndroidManifest.xml` files, this tool identifies exposed components, security misconfigurations, and deep-link vectors, providing ready-to-use `adb` payloads for immediate dynamic verification.
## **🎯 Security Objectives**
* **Attack Surface Mapping**: Identify all exported Activities, Services, Broadcast Receivers, and Content Providers.
* **Implicit Export Detection**: Flag components that are exported by default due to the presence of intent-filters without explicit `android:exported="false"` attributes.
* **Deep Link Analysis**: Extract URI schemes and hosts to facilitate intent-fuzzing and unauthorized navigation testing.
* **Permission Audit**: Highlight unprotected components and evaluate the strength of defined custom permissions.
* **Config Analysis**: Detect high-risk flags such as `debuggable="true"`, `allowBackup="true"`, and `testOnly="true"`.
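The implicit-export rule above can be sketched with the standard library (illustrative only; the tool's real parser also covers services, receivers, and providers, and generates the `adb` payloads):

```python
import xml.etree.ElementTree as ET

# Android attributes are namespace-qualified in the parsed tree
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

manifest = """<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.app">
  <application>
    <activity android:name=".MainActivity">
      <intent-filter><action android:name="android.intent.action.MAIN"/></intent-filter>
    </activity>
    <activity android:name=".Hidden" android:exported="false"/>
  </application>
</manifest>"""

root = ET.fromstring(manifest)
exported = []
for comp in root.iter("activity"):
    flag = comp.get(ANDROID_NS + "exported")
    # Implicit export: an intent-filter present but no explicit android:exported="false"
    if flag == "true" or (flag is None and comp.find("intent-filter") is not None):
        exported.append(comp.get(ANDROID_NS + "name"))

print(exported)  # ['.MainActivity']
```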
## **🚀 Installation**
### Prerequisites
- Python 3.6+
- [apktool](https://apktool.org/) (for decompiling binary XML)
### **Setup**
1. Clone the repository and install the dependencies:
```bash
$: git clone https://github.com/mateofumis/AndroidManifestExplorer.git
$: cd AndroidManifestExplorer
$: pip install .
```
- Alternatively, install the requirements directly:
```bash
$: pip install -r requirements.txt
```
2. Or install directly from PyPI (available for `pip` or `pipx`):
```bash
# with pip/pip3
$: pip install AndroidManifestExplorer
# or pipx
$: pipx install AndroidManifestExplorer
```
## **🛠 Usage Workflow**
### **1. Decompile Target APK**
The tool operates on the plain-text XML output of `apktool`.
```bash
$: apktool d target_app.apk -o output_dir
```
### **2. Execute Scan**
Run the explorer against the generated manifest:
```bash
$: AndroidManifestExplorer -f output_dir/AndroidManifest.xml
```
If running the script directly without installation:
```bash
$: python3 AndroidManifestExplorer.py -f output_dir/AndroidManifest.xml
```
## **📊 Technical Output Overview**
The tool categorizes findings by risk and generates specific `adb` commands:
* **Activities**: Generates `am start` commands.
* **Services**: Generates `am start-service` commands.
* **Receivers**: Generates `am broadcast` commands.
* **Providers**: Generates `content query` commands with a default SQLi test payload (`--where "1=1"`).
### **Example Result:**
```
[+] ACTIVITY EXPORTED: com.package.name.InternalActivity
[!] NO PERMISSION REQUIRED (High Risk)
[>] ADB: adb shell am start -n com.package.name/com.package.name.InternalActivity
[★] DEEP LINK DETECTED: secret-app://debug_panel
[>] Attack: adb shell am start -W -a android.intent.action.VIEW -d "secret-app://debug_panel" com.package.name
```
## **⚖️ Disclaimer**
This tool is intended for professional security research and authorized penetration testing only. Unauthorized use against systems without prior written consent is strictly prohibited and may violate local and international laws. The developer assumes no liability for misuse or damage caused by this utility.
| text/markdown | Mateo Fumis | mateofumis@mfumis.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Topic :: Security",
"Intended Audience :: Information Technology",
"Environment :: Console"
] | [] | https://github.com/mateofumis/AndroidManifestExplorer | null | >=3.6 | [] | [] | [] | [
"colorama>=0.4.4"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.9 | 2026-02-20T00:57:46.111714 | androidmanifestexplorer-1.0.0.tar.gz | 4,786 | fc/0b/7a58d014e3407c695ea1b2112acadb3f9ce07164b56de4d57fd6dfcdda7c/androidmanifestexplorer-1.0.0.tar.gz | source | sdist | null | false | e91c07c39153204773d17766cba9a9ca | f4c06c88583c9a21d6c0edfcc62e336be634fcfd9d6ff91090c06c07f21039b2 | fc0b7a58d014e3407c695ea1b2112acadb3f9ce07164b56de4d57fd6dfcdda7c | null | [] | 0 |
2.4 | apache-airflow-providers-microsoft-fabric | 0.0.9 | A plugin for Apache Airflow to interact with Microsoft Fabric items | # Apache Airflow Plugin for Microsoft Fabric Plugin. 🚀
## Introduction
A Python package that helps data and analytics engineers trigger on-demand runs of [Microsoft Fabric](https://www.microsoft.com/en-us/microsoft-fabric) job items from Apache Airflow DAGs.
[Microsoft Fabric](https://www.microsoft.com/microsoft-fabric) is an end-to-end analytics and data platform designed for enterprises that require a unified solution. It encompasses data movement, processing, ingestion, transformation, real-time event routing, and report building. It offers a comprehensive suite of services including Data Engineering, Data Factory, Data Science, Real-Time Analytics, Data Warehouse, and Databases.
## How to Use
### Install the Plugin
Pypi package: https://pypi.org/project/apache-airflow-microsoft-fabric/
```bash
pip install apache-airflow-microsoft-fabric
```
### Prerequisites
Before diving in:
* The plugin supports the <strong>authentication using user tokens</strong>. Tenant level admin account must enable the setting <strong>Allow user consent for apps</strong>. Refer to: [Configure user consent](https://learn.microsoft.com/en-us/entra/identity/enterprise-apps/configure-user-consent?pivots=portal)
* [Create a Microsoft Entra Id app](https://learn.microsoft.com/entra/identity-platform/quickstart-register-app?tabs=certificate) if you don’t have one.
* You must have [Refresh token](https://learn.microsoft.com/entra/identity-platform/v2-oauth2-auth-code-flow#refresh-the-access-token).
Since custom connection forms aren't feasible in Apache Airflow plugins, you can use the `Generic` connection type. Here's what you need to store:
1. `Connection Id`: Name of the connection Id
2. `Connection Type`: Generic
3. `Login`: The Client ID of your service principal.
4. `Password`: The refresh token fetched using Microsoft OAuth.
5. `Extra`: {
       "tenantId": "The Tenant Id of your service principal",
       "clientSecret": "(optional) The Client Secret for your Entra ID App",
       "scopes": "(optional) Scopes you used to fetch the refresh token"
   }
> **_NOTE:_** Default scopes applied are: https://api.fabric.microsoft.com/Item.Execute.All, https://api.fabric.microsoft.com/Item.ReadWrite.All, offline_access, openid, profile
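The `Extra` value must be valid JSON. A quick way to produce it before pasting into the connection form (field names from the list above; the values here are placeholders to replace with your own):

```python
import json

extra = {
    "tenantId": "<your-tenant-id>",
    "clientSecret": "<your-client-secret>",   # optional
    "scopes": "<space-separated scopes>",     # optional
}

print(json.dumps(extra, indent=2))
```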
## Operators
### MSFabricRunItemOperator
This operator contains the core logic of the plugin. It triggers the Fabric item run and pushes the run details to Xcom. It accepts the following parameters:
* `workspace_id`: The workspace Id.
* `item_id`: The item Id, e.g. of a notebook or pipeline.
* `fabric_conn_id`: Connection Id for Fabric.
* `job_type`: "RunNotebook" or "Pipeline".
* `wait_for_termination`: (Default value: True) Wait until the item run reaches a terminal state.
* `timeout`: int (Default value: 60 * 60 * 24 * 7). Time in seconds to wait for the pipeline or notebook. Used only if `wait_for_termination` is True.
* `check_interval`: int (Default value: 60s). Time in seconds to wait before rechecking the refresh status.
* `max_retries`: int (Default value: 5 retries). Max number of times to poll the API for a valid response after starting a job.
* `retry_delay`: int (Default value: 1s). Polling retry delay.
* `deferrable`: Boolean. Use the operator in deferrable mode.
* `job_params`: Dict. Parameters to pass into the job.
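When `wait_for_termination` is true, `timeout` and `check_interval` drive a polling loop conceptually like this (a simplified sketch, not the operator's actual implementation; `get_status` stands in for the Fabric job-status API call, and the terminal statuses match the Xcom values documented below):

```python
import time

def wait_for_run(get_status, timeout=60 * 60 * 24 * 7, check_interval=60):
    # Poll until the run reaches a terminal status or the timeout expires
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("Completed", "Failed", "Disabled"):
            return status
        time.sleep(check_interval)
    raise TimeoutError("Fabric item run did not reach a terminal state")

# Example with a stubbed status sequence
statuses = iter(["In Progress", "In Progress", "Completed"])
print(wait_for_run(lambda: next(statuses), check_interval=0))  # Completed
```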
## Features
* #### Refresh token rotation:
Refresh token rotation is a security mechanism that involves replacing the refresh token each time it is used to obtain a new access token.
This process enhances security by reducing the risk of stolen tokens being reused indefinitely.
* #### Xcom Integration:
The Fabric run item enriches the Xcom with essential fields for downstream tasks:
1. `run_id`: Run Id of the Fabric item.
2. `run_status`: Fabric item run status.
* `In Progress`: Item run is in progress.
* `Completed`: Item run successfully completed.
* `Failed`: Item run failed.
* `Disabled`: Item run is disabled by a selective refresh.
3. `run_location`: The location of item run status.
* #### External Monitoring link:
The operator conveniently provides a redirect link to the Microsoft Fabric item run.
* #### Deferrable Mode:
The operator can run in deferrable mode. When enabled, the operator is deferred until the target status of the item run is achieved.
## Sample DAG to use the plugin.
Ready to give it a spin? Check out the sample DAG code below:
```python
from __future__ import annotations

from airflow import DAG
from airflow.providers.microsoft.fabric.operators.run_item import MSFabricRunItemOperator
from airflow.utils.dates import days_ago

default_args = {
    "owner": "airflow",
    "start_date": days_ago(1),
}

with DAG(
    dag_id="fabric_items_dag",
    default_args=default_args,
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_notebook = MSFabricRunItemOperator(
        task_id="run_fabric_notebook",
        workspace_id="<workspace_id>",
        item_id="<item_id>",
        fabric_conn_id="fabric_conn_id",
        job_type="RunNotebook",
        wait_for_termination=True,
        deferrable=True,
    )

    run_notebook
```
Feel free to tweak and tailor this DAG to suit your needs!
## Contributing
We welcome any contributions:
- Report all enhancements, bugs, and tasks as [GitHub issues](https://github.com/ambika-garg/apache-airflow-microsoft-fabric-plugin/issues)
- Provide fixes or enhancements by opening pull requests in GitHub.
| text/markdown | null | Vinicius Fontes <vfontes@microsot.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Development Status :: 3 - Alpha",
"Environment :: Plugins",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests",
"pytest; extra == \"test\"",
"pytest-mock; extra == \"test\"",
"pytest-asyncio; extra == \"test\"",
"apache-airflow==2.10.5; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/microsoft/apache-airflow-microsoft-fabric-plugin.git"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T00:56:26.596103 | apache_airflow_providers_microsoft_fabric-0.0.9.tar.gz | 40,786 | 7d/7f/3a123743b241ca8a9548d054cbaf71f48a955911de2813fe3c2fe8f36b54/apache_airflow_providers_microsoft_fabric-0.0.9.tar.gz | source | sdist | null | false | 9c358293edd063bac277f5ee301eb4c9 | 97356891308a688a4dcff8f63d33611b9ce0f5868bd8790708500c2bb0a60ee3 | 7d7f3a123743b241ca8a9548d054cbaf71f48a955911de2813fe3c2fe8f36b54 | null | [
"LICENSE"
] | 397,357 |
2.4 | cloud-radar | 0.15.1a127 | Run functional tests on cloudformation stacks. | <!-- PROJECT SHIELDS -->
<!--
*** I'm using markdown "reference style" links for readability.
*** Reference links are enclosed in brackets [ ] instead of parentheses ( ).
*** See the bottom of this document for the declaration of the reference variables
*** for contributors-url, forks-url, etc. This is an optional, concise syntax you may use.
*** https://www.markdownguide.org/basic-syntax/#reference-style-links
-->
[![Python][py-versions-shield]][pypi-url]
[![Latest][version-shield]][pypi-url]
[![Tests][test-shield]][test-url]
[![Coverage][codecov-shield]][codecov-url]
[![License][license-shield]][license-url]
<!-- [![Contributors][contributors-shield]][contributors-url]
[![Forks][forks-shield]][forks-url]
[![Stargazers][stars-shield]][stars-url]
[![Issues][issues-shield]][issues-url] -->
<!-- PROJECT LOGO -->
<br />
<p align="center">
<!-- <a href="https://github.com/DontShaveTheYak/cloud-radar">
<img src="images/logo.png" alt="Logo" width="80" height="80">
</a> -->
<h3 align="center">Cloud-Radar</h3>
<p align="center">
Write unit and functional tests for AWS Cloudformation.
<!-- <br />
<a href="https://github.com/DontShaveTheYak/cloud-radar"><strong>Explore the docs »</strong></a>
<br /> -->
<br />
<!-- <a href="https://github.com/DontShaveTheYak/cloud-radar">View Demo</a>
· -->
<a href="https://github.com/DontShaveTheYak/cloud-radar/issues">Report Bug</a>
·
<a href="https://github.com/DontShaveTheYak/cloud-radar/issues">Request Feature</a>
·
<a href="https://la-tech.co/post/hypermodern-cloudformation/getting-started/">Guide</a>
</p>
</p>
<!-- TABLE OF CONTENTS -->
<details open="open">
<summary>Table of Contents</summary>
<ol>
<li>
<a href="#about-the-project">About The Project</a>
<ul>
<li><a href="#built-with">Built With</a></li>
</ul>
</li>
<li>
<a href="#getting-started">Getting Started</a>
<ul>
<li><a href="#prerequisites">Prerequisites</a></li>
<li><a href="#installation">Installation</a></li>
</ul>
</li>
<li><a href="#usage">Usage</a></li>
<li><a href="#roadmap">Roadmap</a></li>
<li><a href="#contributing">Contributing</a></li>
<li><a href="#license">License</a></li>
<li><a href="#contact">Contact</a></li>
<li><a href="#acknowledgements">Acknowledgements</a></li>
</ol>
</details>
## About The Project
<!-- [![Product Name Screen Shot][product-screenshot]](https://example.com) -->
Cloud-Radar is a python module that allows testing of Cloudformation Templates/Stacks using Python.
### Unit Testing
You can now unit test the logic contained inside your Cloudformation template. Cloud-Radar takes your template, the desired region and some parameters. We render the template into its final state and pass it back to you.
You can Test:
* That Conditionals in your template evaluate to the correct value.
* Conditional resources were created or not.
* That resources have the correct properties.
* That resources are named as expected because of `!Sub`.
You can test all this locally without worrying about AWS Credentials.
A number of these tests can be configured in a common way to apply to all templates through the use of the [hooks](./examples/unit/hooks/README.md) functionality.
### Functional Testing
This project is a wrapper around Taskcat. Taskcat is a great tool for ensuring your Cloudformation Template can be deployed in multiple AWS Regions. Cloud-Radar enhances Taskcat by making it easier to write more complete functional tests.
Here's How:
* You can interact with the deployed resources directly with tools you already know like boto3.
* You can control the lifecycle of the stack. This allows testing if resources were retained after the stacks were deleted.
* You can run tests without hardcoding them in a taskcat config file.
This project is new and it's possible not all features or functionality of Taskcat/Cloudformation are supported (see [Roadmap](#roadmap)). If you find something missing or have a use case that isn't covered then please let me know =)
### Built With
* [Taskcat](https://github.com/aws-quickstart/taskcat)
* [cfn_tools from cfn-flip](https://github.com/awslabs/aws-cfn-template-flip)
## Getting Started
Cloud-Radar is available as an easy-to-install pip package.
### Prerequisites
Cloud-Radar requires Python >= 3.9
### Installation
1. Install with pip.
```sh
pip install cloud-radar
```
## Usage
<details>
<summary>Unit Testing <span style='font-size: .67em'>(Click to expand)</span></summary>
Using Cloud-Radar starts by importing it into your test file or framework. We will use this [Template](./tests/templates/log_bucket/log_bucket.yaml) in the example shown below. More scenario-based examples are being built up in the [examples/unit](./examples/unit) directory of this project.
```python
from pathlib import Path
from cloud_radar.cf.unit import Template
template_path = Path("tests/templates/log_bucket/log_bucket.yaml")
# template_path can be a str or a Path object
template = Template.from_yaml(template_path.resolve())
params = {"BucketPrefix": "testing", "KeepBucket": "TRUE"}
# parameters and region are optional arguments.
stack = template.create_stack(params, region="us-west-2")
stack.no_resource("LogsBucket")
bucket = stack.get_resource("RetainLogsBucket")
assert "DeletionPolicy" in bucket
assert bucket["DeletionPolicy"] == "Retain"
bucket_name = bucket.get_property_value("BucketName")
assert "us-west-2" in bucket_name
```
The AWS [pseudo parameters](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/pseudo-parameter-reference.html) are all class attributes and can be modified before rendering a template.
```python
# The value of 'AWS::AccountId' in !Sub "My AccountId is ${AWS::AccountId}" can be changed:
Template.AccountId = '8675309'
```
_Note: `Region` should only be changed to set a different default. To change the region during testing, pass the desired region, e.g. `render(region="us-west-2")`._
The default values for pseudo parameters:
| Name | Default Value |
| ---------------- | --------------- |
| AccountId | "555555555555" |
| NotificationARNs | [] |
| **NoValue** | "" |
| **Partition** | "aws" |
| Region | "us-east-1" |
| StackId | (generated based on other values) |
| StackName | "my-cloud-radar-stack" |
| **URLSuffix** | "amazonaws.com" |
_Note: Bold variables are not fully implemented yet see the [Roadmap](#roadmap)_
When creating the `Template` instance, additional configuration must be provided if your template relies on certain value-resolution features.
If you use [Fn::ImportValue](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-importvalue.html), a dictionary of key/value pairs is required containing all the keys that your template uses. If an import name is referenced by the template which is not included in this dictionary, an error will be raised.
```python
imports = {
"FakeKey": "FakeValue"
}
template = Template(template_content, imports=imports)
```
If you use [Dynamic References](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html), a dictionary containing the service and key/value pairs is required containing all the dynamic references that your template uses. If a dynamic reference is included in the template and not contained in the configuration object, an error will be raised.
```python
template_content = {
"Resources": {
"Foo": {
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyName": (
"mgt-{{resolve:ssm:/account/current/short_name}}-launch-role-pol"
),
},
},
},
}
dynamic_references = {
"ssm": {
"/account/current/short_name": "dummy"
}
}
template = Template(template_content, dynamic_references=dynamic_references)
```
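Conceptually, resolving a dynamic reference is just a dictionary lookup keyed by service and key name. Here is a stand-alone sketch of that substitution (this is illustrative only, not cloud-radar's actual implementation):

```python
import re

# The same configuration shape passed to Template(dynamic_references=...).
dynamic_references = {"ssm": {"/account/current/short_name": "dummy"}}

def resolve(value: str) -> str:
    # Replace each {{resolve:<service>:<key>}} token with its configured value.
    pattern = re.compile(r"\{\{resolve:([^:]+):([^}]+)\}\}")
    return pattern.sub(lambda m: dynamic_references[m.group(1)][m.group(2)], value)

print(resolve("mgt-{{resolve:ssm:/account/current/short_name}}-launch-role-pol"))
# -> mgt-dummy-launch-role-pol
```

If a referenced service or key is missing from the dictionary, the lookup fails — which mirrors the error cloud-radar raises for unconfigured dynamic references.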
There are cases where the default behaviour of our `Ref` and `GetAtt` implementations may not be sufficient and you need a more accurate returned value. Unit tests create no real AWS resources, and cloud-radar does not attempt to realistically generate attribute values; a string is always returned. For `Ref` this is the logical resource name; for `GetAtt` it is `<logical resource name>.<attribute name>`. This works well enough most of the time, but if you apply intrinsic functions to these values, the result sometimes needs to be more realistic. When that happens, you can add `Metadata` to the template to provide test values to use.
```yaml
Resources:
MediaPackageV2Channel:
Type: AWS::MediaPackageV2::Channel
Metadata:
Cloud-Radar:
ref: arn:aws:mediapackagev2:region:AccountId:ChannelGroup/ChannelGroupName/Channel/ChannelName
attribute-values:
# Default behaviour of a string is not good enough here, the attribute value is expected to be a List.
IngestEndpointUrls:
- http://one.example.com
- http://two.example.com
Properties:
ChannelGroupName: dev_video_1
ChannelName: !Sub ${AWS::StackName}-MediaPackageChannel
```
If you are unable to modify the template itself, it is also possible to inject this metadata as part of the unit test. See [this test case](./tests/test_cf/test_unit/test_functions_ref.py) for an example.
A real unit testing example using Pytest can be seen [here](./tests/test_cf/test_examples/test_unit.py).
</details>
<details>
<summary>Functional Testing <span style='font-size: .67em'>(Click to expand)</span></summary>
Using Cloud-Radar starts by importing it into your test file or framework.
```python
from pathlib import Path
from cloud_radar.cf.e2e import Stack
# Stack is a context manager that makes sure your stacks are deleted after testing.
template_path = Path("tests/templates/log_bucket/log_bucket.yaml")
params = {"BucketPrefix": "testing", "KeepBucket": "False"}
regions = ['us-west-2']
# template_path can be a string or a Path object.
# params can be optional if all your template params have default values
# regions can be optional, default region is 'us-east-1'
with Stack(template_path, params, regions) as stacks:
# Stacks will be created and returned as a list in the stacks variable.
for stack in stacks:
# stack will be an instance of Taskcat's Stack class.
# It has all the expected properties like parameters, outputs and resources
print(f"Testing {stack.name}")
bucket_name = ""
for output in stack.outputs:
if output.key == "LogsBucketName":
bucket_name = output.value
break
assert "logs" in bucket_name
assert stack.region.name in bucket_name
print(f"Created bucket: {bucket_name}")
# Once the test is over then all resources will be deleted from your AWS account.
```
You can use taskcat [tokens](https://aws.amazon.com/blogs/infrastructure-and-automation/a-deep-dive-into-testing-with-taskcat/) in your parameter values.
```python
parameters = {
"BucketPrefix": "taskcat-$[taskcat_random-string]",
"KeepBucket": "FALSE",
}
```
You can also skip the context manager. Here is an example using `unittest`:
```python
import unittest
from pathlib import Path

from cloud_radar.cf.e2e import Stack


class TestLogBucket(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        template_path = Path("tests/templates/log_bucket/log_bucket.yaml")
        cls.test = Stack(template_path)
        cls.test.create()

    @classmethod
    def tearDownClass(cls):
        cls.test.delete()

    def test_bucket(self):
        stacks = self.__class__.test.stacks
        for stack in stacks:
            ...  # assert against each stack here
```
Each stack exposes all the properties and methods of a Taskcat [stack instance](https://github.com/aws-quickstart/taskcat/blob/main/taskcat/_cfn/stack.py#L188).
A real functional testing example using Pytest can be seen [here](./tests/test_cf/test_examples/test_functional.py).
</details>
## Roadmap
### Project
- Add Logo
- Easier to pick regions for testing
### Unit
- Add full functionality to pseudo variables.
  * Variables like `Partition` and `URLSuffix` should change when the region changes.
- Handle references to resources that shouldn't exist.
  * It's currently possible for a `!Ref` to a resource to remain in the final template even if that resource is later removed by a conditional.
### Functional
- Add the ability to update a stack instance to Taskcat.
See the [open issues](https://github.com/DontShaveTheYak/cloud-radar/issues) for a list of proposed features (and known issues).
## Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
This project uses poetry to manage dependencies and pre-commit to run formatting, linting, and tests. You will need both installed on your system, as well as Python 3.12.
1. Fork the Project
2. Setup the environment (`poetry install`)
3. Setup commit hooks (`pre-commit install`)
4. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
5. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
6. Push to the Branch (`git push origin feature/AmazingFeature`)
7. Open a Pull Request
## License
Distributed under the Apache-2.0 License. See [LICENSE.txt](./LICENSE.txt) for more information.
## Contact
Levi - [@shady_cuz](https://twitter.com/shady_cuz)
<!-- ACKNOWLEDGEMENTS -->
## Acknowledgements
* [Taskcat](https://aws-quickstart.github.io/taskcat/)
* [Hypermodern Python](https://cjolowicz.github.io/posts/hypermodern-python-01-setup/)
* [Best-README-Template](https://github.com/othneildrew/Best-README-Template)
* [David Hutchison (@dhutchison)](https://github.com/dhutchison) - He was the first contributor to this project and finished the last couple of features to make this project complete. Thank you!
<!-- MARKDOWN LINKS & IMAGES -->
<!-- https://www.markdownguide.org/basic-syntax/#reference-style-links -->
[python-shield]: https://img.shields.io/pypi/pyversions/cloud-radar?style=for-the-badge
[py-versions-shield]: https://img.shields.io/pypi/pyversions/cloud-radar?style=for-the-badge
[version-shield]: https://img.shields.io/pypi/v/cloud-radar?label=latest&style=for-the-badge
[pypi-url]: https://pypi.org/project/cloud-radar/
[test-shield]: https://img.shields.io/github/actions/workflow/status/DontShaveTheYak/cloud-radar/test.yml?label=Tests&style=for-the-badge
[test-url]: https://github.com/DontShaveTheYak/cloud-radar/actions?query=workflow%3ATests+branch%3Amaster
[codecov-shield]: https://img.shields.io/codecov/c/gh/DontShaveTheYak/cloud-radar?color=green&style=for-the-badge&token=NE5C92139X
[codecov-url]: https://codecov.io/gh/DontShaveTheYak/cloud-radar
[contributors-shield]: https://img.shields.io/github/contributors/DontShaveTheYak/cloud-radar.svg?style=for-the-badge
[contributors-url]: https://github.com/DontShaveTheYak/cloud-radar/graphs/contributors
[forks-shield]: https://img.shields.io/github/forks/DontShaveTheYak/cloud-radar.svg?style=for-the-badge
[forks-url]: https://github.com/DontShaveTheYak/cloud-radar/network/members
[stars-shield]: https://img.shields.io/github/stars/DontShaveTheYak/cloud-radar.svg?style=for-the-badge
[stars-url]: https://github.com/DontShaveTheYak/cloud-radar/stargazers
[issues-shield]: https://img.shields.io/github/issues/DontShaveTheYak/cloud-radar.svg?style=for-the-badge
[issues-url]: https://github.com/DontShaveTheYak/cloud-radar/issues
[license-shield]: https://img.shields.io/github/license/DontShaveTheYak/cloud-radar.svg?style=for-the-badge
[license-url]: https://github.com/DontShaveTheYak/cloud-radar/blob/master/LICENSE.txt
[product-screenshot]: images/screenshot.png
---
# gallery-track-lib
**Python:** `>=3.10, <3.13`
**gallery-track-lib** is a modular **video object tracking + gallery ReID** toolkit with a clean **track-v1** JSON schema, pluggable trackers, and optional tooling.
This is the **second stage** of the Vision Pipeline.
Trackers included:
- **gallery_hybrid**: temporal tracking (ByteTrack-style) + optional periodic gallery ReID
- **gallery_only**: gallery assignment only (no temporal association)
> By default, `gallery-track-lib` **does not write any files**. You opt-in to saving JSON, frames, or annotated video via flags.
---
## Vision Pipeline
```
Original Video (.mp4) ───────────────┐
│ │
▼ │
detect-lib │
(Detection Stage) │
│ │
└── detections.json (det-v1) │
│ │
└──────┐ │
▼ ▼
track-lib (Tracking + ReID Stage)
│
└── tracked.json (track-v1)
```
Stage 1 (Detection):
- PyPI: https://pypi.org/project/detect-lib/
- GitHub: https://github.com/Surya-Rayala/VideoPipeline-detection
---
## track-v1 output (returned + optionally saved)
`gallery-track-lib` always produces a canonical JSON payload in memory with:
- `schema_version`: always **"track-v1"**
- `parent_schema_version`: upstream schema (typically **"det-v1"**)
- `video`: carried from det payload when present
- `detector`: carried from det payload when present
- `tracker`: tracker settings used for the run (name + config)
- `frames`: per-frame detections
- tracking: `track_id` (string)
- gallery ReID (optional): `gallery_id` (string)
### Minimal schema example
```json
{
"schema_version": "track-v1",
"parent_schema_version": "det-v1",
"video": {
"path": "in.mp4",
"fps": 30.0,
"frame_count": 120,
"width": 1920,
"height": 1080
},
"detector": {
"name": "yolo_bbox",
"weights": "yolo26n",
"classes": null,
"conf_thresh": 0.25,
"imgsz": 640,
"device": "cpu",
"half": false
},
"tracker": {
"name": "gallery_hybrid",
"class_filter": {
"track_classes": null,
"filter_gallery_for_tracked_classes": false
},
"config": {
"track_thresh": 0.45,
"match_thresh": 0.8,
"track_buffer": 25,
"frame_rate": 30,
"per_class": false,
"max_obs": 30,
"reid_weights": null,
"gallery": null,
"reid_frequency": 10,
"gallery_match_threshold": 0.25,
"device": "cpu",
"half": false
}
},
"frames": [
{
"frame": 0,
"detections": [
{
"bbox": [100.0, 50.0, 320.0, 240.0],
"score": 0.91,
"class_id": 0,
"class_name": "person",
"track_id": "3",
"gallery_id": "person_A"
}
]
}
]
}
```
### Returned vs saved
- **Returned (always):** the full track-v1 payload is available as `TrackResult.payload` (Python) and is always produced in-memory.
- **Saved (opt-in):** nothing is written unless you enable artifacts:
- `--json` saves `tracked.json`
- `--frames` saves annotated frames under `frames/`
- `--save-video` saves an annotated video
When no artifacts are enabled, no output directory/run folder is created.
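Because the payload is always available in memory, downstream code can consume it directly without writing any files. A minimal sketch, using a hand-built miniature payload in the track-v1 shape shown above (the values are illustrative, not real tracker output):

```python
# Miniature track-v1-shaped payload, as would be returned in TrackResult.payload.
payload = {
    "schema_version": "track-v1",
    "frames": [
        {"frame": 0, "detections": [
            {"track_id": "3", "gallery_id": "person_A"},
            {"track_id": "4", "gallery_id": None},
        ]},
        {"frame": 1, "detections": [
            {"track_id": "3", "gallery_id": "person_A"},
        ]},
    ],
}

# Unique temporal tracks and unique matched gallery identities.
track_ids = {d["track_id"] for f in payload["frames"] for d in f["detections"]}
gallery_ids = {
    d["gallery_id"] for f in payload["frames"] for d in f["detections"] if d["gallery_id"]
}
print(sorted(track_ids), sorted(gallery_ids))  # ['3', '4'] ['person_A']
```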
---
## Install with `pip` (PyPI)
> Use this if you want to install and use the tool without cloning the repo.
> Requires **Python >= 3.10**.
### Install
```bash
pip install gallery-track-lib
```
---
## CLI usage (pip)
Global help:
```bash
python -m gallery_track.cli.track_video -h
python -m gallery_track.tools.reid_export -h
python -m gallery_track.tools.build_gallery -h
```
> Note: the PyPI package name is `gallery-track-lib`, but the Python module/import name remains `gallery_track`.
Package version:
```bash
python -m gallery_track.cli --version
```
List trackers:
```bash
python -m gallery_track.cli.track_video --list-trackers
python -c "import gallery_track; print(gallery_track.available_trackers())"
```
List ReID architectures / known weight names (from your BoxMOT install):
```bash
python -m gallery_track.cli.track_video --list-reid-models
python -m gallery_track.cli.track_video --list-reid-weights
```
---
## Tracking CLI: `gallery_track.cli.track_video`
### Quick start (hybrid)
```bash
python -m gallery_track.cli.track_video \
--dets-json detections.json \
--video in.mp4 \
--tracker gallery_hybrid
```
### Quick start (gallery-only)
```bash
python -m gallery_track.cli.track_video \
--dets-json detections.json \
--video in.mp4 \
--tracker gallery_only \
--reid-weights osnet_x0_25_msmt17.pt \
--gallery galleries/
```
### Save artifacts (opt-in)
```bash
python -m gallery_track.cli.track_video \
--dets-json detections.json \
--video in.mp4 \
--tracker gallery_hybrid \
--json \
--frames \
--save-video annotated.mp4 \
--out-dir out --run-name demo
```
### Tracker behavior overview
#### `gallery_hybrid`
Temporal tracker.
- Uses detection `score` to split detections into:
- **high confidence**: `score > track_thresh` (main association + new track creation)
- **low confidence**: `0.1 < score < track_thresh` (secondary association)
- Optionally assigns `gallery_id` by computing ReID embeddings for active tracks every `reid_frequency` frames and matching them to a gallery.
When you provide `--reid-weights` and `--gallery`, the tracker will attempt identity assignment.
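The confidence split itself is simple; here is a stand-alone sketch of the rule above (thresholds taken from the documented defaults — this is not the tracker's actual code):

```python
track_thresh = 0.45  # gallery_hybrid default

detections = [
    {"bbox": [0.0, 0.0, 10.0, 10.0], "score": 0.91},
    {"bbox": [5.0, 5.0, 15.0, 15.0], "score": 0.30},
    {"bbox": [2.0, 2.0, 8.0, 8.0], "score": 0.05},
]

# High-confidence detections drive primary association and new track creation;
# low-confidence ones are only used in the secondary association step.
high = [d for d in detections if d["score"] > track_thresh]
low = [d for d in detections if 0.1 < d["score"] < track_thresh]
print(len(high), len(low))  # 1 1
```

Note that detections scoring at or below 0.1 fall into neither bucket and are ignored for association.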
#### `gallery_only`
No temporal association.
- For each frame, runs ReID on detections and assigns `gallery_id` by matching against the gallery.
- `track_id` becomes a per-frame detection identifier (stringified `det_ind`).
---
## CLI arguments
### Required
- `--dets-json <path>`: Path to the det-v1 detections JSON produced by detect-lib (must correspond to the same video passed via `--video`).
- `--video <path>`: Path to the source video used for detection. The tracker reads frames from this file for timing/visualization and expects frame order to match `--dets-json`.
### Tracker selection
- `--tracker <name>`: Tracking backend to use. `gallery_hybrid` (default) performs temporal association; `gallery_only` assigns identities per frame with no temporal linking.
### Class filtering
- `--classes <ids>`: Comma/semicolon-separated class IDs to *track*. If omitted, all classes are tracked. If provided, only these classes receive `track_id` / gallery matching; other classes are passed through unchanged.
- `--filter-gallery`: For tracked classes only, drop detections that did not receive a `gallery_id`. Detections from non-tracked classes are always kept.
### Optional labels
- `--class-names <file>`: Optional newline-delimited class-name file where line index = `class_id`. Used only for on-frame labels (does not affect tracking logic).
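As a sketch of the expected file format (hypothetical contents), the mapping is simply line number to `class_id`:

```python
# Contents of a hypothetical class-names file: one name per line,
# where the line index is the class_id.
text = "person\ncar\ndog\n"

class_names = {class_id: name for class_id, name in enumerate(text.splitlines())}
print(class_names)  # {0: 'person', 1: 'car', 2: 'dog'}
```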
### Artifact saving (opt-in)
- `--json`: Write the track-v1 payload to `<run>/tracked.json`.
- `--frames`: Save annotated frames as JPEGs under `<run>/frames/` (can be large).
- `--save-video <name.mp4>`: Save an annotated video as `<run>/<name.mp4>`.
- `--out-dir <dir>`: Output root used only when saving artifacts (default: `out`). No run folder is created unless a saving flag is enabled.
- `--run-name <name>`: Name of the run folder under `--out-dir`. Defaults to the input video stem.
- `--display`: Show a live annotated window while processing (press `q` to quit). Does not write files unless saving flags are set.
- `--save-fps <float>`: Override FPS for the saved video only (default: source FPS). Useful if the source FPS metadata is incorrect.
- `--fourcc <fourcc>`: FourCC codec for `--save-video` (default: `mp4v`). Try `avc1`/`H264` if supported by your OpenCV build.
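The run-folder resolution described above (run name defaults to the input video stem) amounts to the following sketch; the paths are illustrative, and none of this runs unless a saving flag is set:

```python
from pathlib import Path

out_dir = Path("out")        # --out-dir default
video = Path("in.mp4")       # --video
run_name = None              # --run-name not given

# Artifacts land under <out-dir>/<run-name>, defaulting to the video stem.
run_dir = out_dir / (run_name or video.stem)
print(run_dir)  # out/in (on POSIX)
```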
### Hybrid tracker knobs (`gallery_hybrid`)
- `--track-thresh <float>` (hybrid only): Confidence threshold for primary association and new track creation. Detections in `0.1 < score < track_thresh` may still be used in a secondary association step.
- Increase → fewer tracks, cleaner but may miss weak detections.
- Decrease → more tracks, more noise.
- `--match-thresh <float>` (hybrid only): IoU matching threshold for associating detections to existing tracks.
- Increase → stricter linking (fewer wrong matches) but more fragmentation.
- Decrease → more aggressive linking but more ID switches.
- `--track-buffer <int>` (hybrid only): Max number of frames to keep a lost track before it is removed.
- Increase → better occlusion recovery.
- Decrease → faster cleanup.
- `--frame-rate <int>` (hybrid only): Reference FPS used to scale time-based behavior in the tracker (default 30). Set to your video FPS if it differs significantly.
- `--per-class` (hybrid only): Maintain independent tracking state per `class_id` (reduces cross-class ID swaps at the cost of more state/compute).
- Helps avoid cross-class linking.
- Adds small overhead.
- `--max-obs <int>` (hybrid only): Max observation history stored per track for internal smoothing/state.
### ReID / gallery knobs
- `--reid-weights <path|name>`: ReID weights to use for embedding extraction.
- Provide either an explicit file path **or** a filename that exists under `--models-dir`.
- **Custom weights naming note:** If you pass a *name* (not a full path), make sure the weight filename starts with one of the **model names** printed by `--list-reid-models` (this improves architecture auto-detection / compatibility). Example: `osnet_*`, `lmbn_*`, etc.
- Required for `gallery_only`.
- Optional for `gallery_hybrid` (if omitted, hybrid runs temporal tracking with no gallery assignment).
- `--gallery <dir>`: Gallery root directory containing one subfolder per identity (images inside each).
```
galleries/
person_A/
*.jpg
person_B/
*.jpg
```
- `--reid-frequency <int>` (hybrid only): Run gallery matching every N frames (lower = more frequent updates, higher = faster).
- `--gallery-match-threshold <float>`: Cosine-distance threshold for assigning a `gallery_id` (lower = stricter, higher = more assignments but more risk of false IDs).
- `--device <str>`: Compute device for ReID: `auto`, `cpu`, `cuda`, `mps`, or a CUDA device index like `0`.
- `--half`: Enable FP16 for ReID when supported (typically GPU-only).
- `--models-dir <dir>`: Directory used for resolving weight *names* passed to `--reid-weights` (default: `models`).
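To build intuition for `--gallery-match-threshold`: gallery matching compares a detection's embedding against each identity's embeddings by cosine distance and keeps the closest identity only if it beats the threshold. A simplified stand-alone sketch with 2-D toy embeddings (not the real matching code):

```python
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

gallery = {"person_A": [1.0, 0.0], "person_B": [0.0, 1.0]}  # toy embeddings
query = [0.9, 0.1]  # embedding of one detection

best_id, best_dist = min(
    ((gid, cosine_distance(query, emb)) for gid, emb in gallery.items()),
    key=lambda item: item[1],
)

# Lower threshold = stricter: keep the match only if it is close enough.
gallery_id = best_id if best_dist <= 0.25 else None
print(gallery_id)  # person_A
```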
### UX
- `--no-progress`: Disable the tqdm progress bar (useful for clean logs).
---
## Python usage (import)
You can use `gallery-track-lib` as a library after installing it with pip.
### Quick sanity check
```bash
python -c "import gallery_track; print(gallery_track.available_trackers())"
```
### Python API reference (keywords)
#### `gallery_track.track_video(...)`
**Required**
- `dets_json`: Path to det-v1 detections JSON (must correspond to the same video passed via `video`).
- `video`: Path to the source video used for detection.
- `tracker`: Tracker backend (`gallery_hybrid` or `gallery_only`).
**Class filtering**
- `classes`: Class IDs to track. If provided, only these classes receive track IDs / gallery matching; other classes pass through unchanged.
- `filter_gallery`: For tracked classes only, drop detections that did not receive a `gallery_id`.
**Hybrid tracker knobs (`gallery_hybrid`)**
- `track_thresh` (hybrid only): Confidence threshold for primary association and new track creation.
- `match_thresh` (hybrid only): IoU matching threshold for associating detections to existing tracks.
- `track_buffer` (hybrid only): Max number of frames to keep a lost track before removal.
- `frame_rate` (hybrid only): Reference FPS used to scale time-based behavior (default 30).
- `per_class` (hybrid only): Maintain independent tracking state per class_id.
- `max_obs` (hybrid only): Max observation history stored per track.
**ReID / gallery knobs**
- `reid_weights`: Weights to use for ReID embedding extraction (path or name under `models_dir`).
- `gallery`: Gallery root directory (subfolder per identity, images inside).
- `reid_frequency` (hybrid only): Run gallery matching every N frames.
- `gallery_match_threshold`: Cosine-distance threshold for assigning a gallery_id.
- `device`: Compute device for ReID (`auto/cpu/cuda/mps/0`).
- `half`: Enable FP16 for ReID when supported.
- `models_dir`: Directory used to resolve weight names.
**Artifacts (all off by default)**
- `save_json_flag`: Write `<run>/tracked.json`.
- `save_frames`: Write annotated JPEG frames under `<run>/frames/`.
- `save_video`: Filename for annotated video under the run folder (e.g., `annotated.mp4`).
- `out_dir`: Output root used only when saving artifacts.
- `run_name`: Run folder name (defaults to video stem).
- `display`: Show a live annotated window during processing.
- `save_fps`: Override FPS for saved video only.
- `fourcc`: FourCC codec for saved video (default `mp4v`).
- `class_names`: Optional class-name mapping for visualization labels.
- `no_progress`: Disable tqdm progress bar.
Returns a `TrackResult` with `payload` (track-v1 JSON), `paths` (only populated when saving), and `stats`.
### Run tracking from a Python file
Create `run_track.py`:
```python
from gallery_track import track_video
res = track_video(
dets_json="detections.json",
video="in.mp4",
tracker="gallery_hybrid",
)
payload = res.payload
print(payload["schema_version"], len(payload["frames"]))
print(res.paths) # populated only if you enable saving artifacts
print(res.stats)
```
Run:
```bash
python run_track.py
```
### Run tracking with gallery matching (Python)
```python
from gallery_track import track_video
res = track_video(
dets_json="detections.json",
video="in.mp4",
tracker="gallery_hybrid",
reid_weights="models/osnet_x0_25_msmt17.pt",
gallery="galleries/",
reid_frequency=10,
gallery_match_threshold=0.25,
device="auto",
)
print(res.payload["tracker"]["name"], res.stats)
```
---
## Install from GitHub (uv)
Use this if you are developing locally or want reproducible project environments.
Install uv:
https://docs.astral.sh/uv/getting-started/installation/#standalone-installer
Verify:
```bash
uv --version
```
### Install dependencies
```bash
git clone https://github.com/Surya-Rayala/VisionPipeline-gallery-track.git
cd VisionPipeline-gallery-track
uv sync
```
---
## CLI usage (uv)
Global help:
```bash
uv run python -m gallery_track.cli.track_video -h
uv run python -m gallery_track.tools.reid_export -h
uv run python -m gallery_track.tools.build_gallery -h
```
List trackers:
```bash
uv run python -m gallery_track.cli.track_video --list-trackers
```
Basic command (hybrid tracking):
```bash
uv run python -m gallery_track.cli.track_video \
--dets-json detections.json \
--video in.mp4 \
--tracker gallery_hybrid
```
Basic command (gallery-only):
```bash
uv run python -m gallery_track.cli.track_video \
--dets-json detections.json \
--video in.mp4 \
--tracker gallery_only \
--reid-weights osnet_x0_25_msmt17.pt \
--gallery galleries/
```
---
# ReID export tool
`gallery-track-lib` includes a thin wrapper around BoxMOT’s ReID export pipeline.
This tool:
- exports your ReID weights into one or more formats
- collects artifacts under a run folder (`--out-dir` / `--run-name`)
- writes `export_meta.json` with settings and final output paths
## CLI arguments
All exports are collected under a run folder:
- `<out-dir>/<run-name>/...`
- `export_meta.json` is written alongside exported artifacts with settings + final output paths.
### Output organization
- `--out-dir <dir>`: Root folder where export runs are written (default: `models/exports`).
- `--run-name <name>`: Run subfolder name under `--out-dir`. Defaults to the weights file stem.
### Core export arguments
- `--weights <path>`: Path to the source ReID `.pt` weights to export. Required unless using a `--list-*` option.
- **Custom weights naming note:** If your weights file is a custom weight, prefer naming it so the filename starts with a model name from `--list-reid-models` (e.g., `osnet_custom.pt`).
- `--include <formats...>`: One or more export formats to generate (default: `torchscript`). Supported: `torchscript`, `onnx`, `openvino`, `engine`, `tflite`.
> **TensorRT (`engine`) export note:** If you include `engine`, you must install **both**:
> 1) a **TensorRT** build that is **compatible with your CUDA toolkit**, and
> 2) NVIDIA’s **`nvidia-tensorrt`** package.
>
> (CUDA/TensorRT mismatches are the most common cause of export/runtime errors.)
>
> - pip:
> - `pip install <compatible-tensorrt>`
> - `pip install nvidia-tensorrt`
> - uv: run `uv sync` first, then:
> - `uv add <compatible-tensorrt>`
> - `uv add nvidia-tensorrt`
- `--device <str>`: Device used for export and dummy inference (`cpu`, `mps`, or a CUDA index like `0`). Note: there is no `auto` option here; you must select the device manually, and the same device must be specified explicitly at inference time.
- `--half`: Enable FP16 where supported (typically GPU exporters).
- `--batch-size <int>`: Dummy batch size used during export (default: `1`). Some backends only support `1`.
### Per-format knobs
- `--optimize`: Optimize TorchScript for mobile (CPU-only).
- `--dynamic`: Enable dynamic shapes where supported (commonly affects ONNX/TensorRT).
- `--simplify`: Run ONNX graph simplification after export.
- `--opset <int>`: ONNX opset version (default: `18`).
- `--verbose`: Enable verbose logging for TensorRT export.
### Listing / discovery
- `--list-formats`: Print supported export formats and exit.
- `--list-reid-models`: Print available ReID architectures (from your BoxMOT install) and exit.
- `--list-reid-weights`: Print known pretrained weight names (from BoxMOT registries, if available) and exit.
## CLI usage (pip)
Global help:
```bash
python -m gallery_track.tools.reid_export -h
```
List supported formats:
```bash
python -m gallery_track.tools.reid_export --list-formats
```
List ReID architectures / downloadable weight names (from BoxMOT registries, if available):
```bash
python -m gallery_track.tools.reid_export --list-reid-models
python -m gallery_track.tools.reid_export --list-reid-weights
```
Export TorchScript:
```bash
python -m gallery_track.tools.reid_export \
--weights osnet_x0_25_msmt17.pt \
--include torchscript \
--out-dir models/exports --run-name osnet_ts
```
Export ONNX:
```bash
python -m gallery_track.tools.reid_export \
--weights osnet_x0_25_msmt17.pt \
--include onnx \
--opset 18 --simplify \
--out-dir models/exports --run-name osnet_onnx
```
## CLI usage (uv)
```bash
uv run python -m gallery_track.tools.reid_export -h
```
Example (uv + onnx):
```bash
uv run python -m gallery_track.tools.reid_export \
--weights osnet_x0_25_msmt17.pt \
--include onnx \
--out-dir models/exports --run-name osnet_onnx
```
---
# Gallery builder tool
`gallery-track-lib` includes an optional GUI tool to build a gallery directory by drawing crops on a video.
It creates a directory structure compatible with both trackers:
```
galleries/
identity_A/
identity_A_00000.jpg
identity_A_00001.jpg
identity_B/
identity_B_00000.jpg
```
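If you'd rather script it than use the UI, the same layout can be created programmatically; a minimal sketch with the standard library (paths are illustrative):

```python
from pathlib import Path
import tempfile

# Create the gallery skeleton shown above: one subfolder per identity.
# Crop JPEGs would then be written into each identity folder.
gallery_root = Path(tempfile.mkdtemp()) / "galleries"
for identity in ("identity_A", "identity_B"):
    (gallery_root / identity).mkdir(parents=True, exist_ok=True)

print(sorted(p.name for p in gallery_root.iterdir()))  # ['identity_A', 'identity_B']
```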
## What the UI does (in plain English)
The gallery builder is a small desktop window that lets you:
- open a video
- pause on frames where your target identity is visible
- draw a bounding box around the person/object you want to add to the gallery
- save that crop into a folder named after the identity
Over time you build a small set of example images per identity (a “gallery”). The trackers then match detections against this gallery to produce `gallery_id`.
### What you’ll see
- A video preview panel (current frame)
- A simple toolbar / buttons for selecting:
- the input **video**
- the output **gallery root folder**
- Identity controls:
- a text field to type an **identity name** (folder name, e.g., `person_A`)
- a selector/list of existing identities found under the gallery root
- a button to **add/create** the identity folder if it doesn’t exist
- A counter/indicator showing how many crops have been saved for the selected identity
(Exact layout may vary slightly by OS.)
### Typical workflow
1) Launch the tool.
2) Select the **gallery root** folder (where galleries will be written).
3) Select the **video** file.
4) Type an **identity name** (e.g., `person_A`).
5) Scrub / step through the video to find good frames.
6) Draw a box around the identity and save the crop.
7) Repeat for multiple frames and multiple identities.
### Adding and selecting identities
- **Identity = folder name.** Each identity you create becomes a subfolder under the gallery root.
- To add a new identity:
1) Type a new name (for example: `person_A`).
2) Click the **Add/Create** identity button.
3) The tool creates `<gallery_root>/person_A/` if it doesn’t already exist.
- To switch to an existing identity:
- Select it from the identity list/dropdown. New crops will be saved into that identity’s folder.
### Drawing boxes (click + drag)
- Pause or scrub to a frame where the identity is clearly visible.
- **Click and hold** on the video frame, **drag** to form a rectangle around the identity, then **release** to finish the box.
- After the box is drawn, click the **Save** / **Save crop** button to write the cropped JPEG into the selected identity folder.
If you draw the wrong box, simply draw a new one and save again (you can delete unwanted images from the identity folder later).
Tips:
- Use a variety of views (front/side), lighting, and distances.
- Avoid heavy blur/occlusion; clean crops work best.
The tool will create the identity subfolders if they don’t exist. Each saved crop is written as a JPEG into the selected identity folder under the gallery root. If you reopen the tool later and select the same gallery root, it will automatically pick up the existing identity folders and let you continue adding more crops to them.
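The two mechanics above (sequential file naming and click-drag cropping) can be sketched in a few lines, assuming frames arrive as NumPy arrays (as OpenCV returns them). Both helpers are hypothetical illustrations of the behavior described, not the tool's actual internals:

```python
from pathlib import Path

import numpy as np


def next_crop_path(gallery_root: str, identity: str) -> Path:
    """Return the next sequential crop path, e.g. person_A_00002.jpg."""
    folder = Path(gallery_root) / identity
    folder.mkdir(parents=True, exist_ok=True)  # identity folder is created on demand
    n = len(list(folder.glob(f"{identity}_*.jpg")))
    return folder / f"{identity}_{n:05d}.jpg"


def crop_box(frame: np.ndarray, x1: int, y1: int, x2: int, y2: int) -> np.ndarray:
    """Crop the dragged rectangle; the drag may go in any direction,
    so normalize the corner order before slicing."""
    x1, x2 = sorted((x1, x2))
    y1, y2 = sorted((y1, y2))
    return frame[y1:y2, x1:x2]
```

Note that `next_crop_path` counts existing files, which is why reopening the tool on the same gallery root continues numbering instead of overwriting earlier crops.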
## Troubleshooting: OpenCV crash (common on some macOS/Linux setups)
If the **gallery builder UI** crashes on launch (often due to an OpenCV / Qt / GUI backend conflict), try removing any existing OpenCV wheels and reinstalling the **headless** build.
### pip
```bash
pip uninstall -y opencv-python opencv-contrib-python
pip install opencv-python-headless
```
### uv
```bash
uv remove opencv-python opencv-contrib-python
uv add opencv-python-headless
```
> Note: The headless build disables OpenCV GUI backends. If you need native OpenCV windows elsewhere, reinstall `opencv-python` instead.
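If you are unsure which OpenCV wheels are present in your environment, a quick stdlib check can help before reinstalling anything. This is a hypothetical diagnostic helper, not part of the package; having more than one wheel installed (or a non-headless one) is the common trigger for the crash above:

```python
from importlib import metadata


def installed_opencv_wheels() -> list[str]:
    """List which OpenCV distributions are installed in this environment."""
    candidates = [
        "opencv-python",
        "opencv-contrib-python",
        "opencv-python-headless",
    ]
    found = []
    for name in candidates:
        try:
            metadata.version(name)  # raises if the distribution is absent
            found.append(name)
        except metadata.PackageNotFoundError:
            pass
    return found
```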
## Paths and defaults
You can start the UI in two ways:
- **Without arguments**: the UI will prompt you to pick the **video** and **gallery root** inside the window.
- **With arguments**: you can pre-fill the paths from the command line.
If you launch without `--gallery-root` and/or `--video`, you must choose them in the UI before you can save crops.
When you choose a gallery root that already contains identity subfolders, the UI will load and list them automatically so you can keep adding crops.
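The flag behavior above can be mirrored with a minimal `argparse` sketch (an illustration of the interface, assuming the flag names shown in the examples below; not the tool's actual parser). Both flags default to `None`, which is what signals the UI to prompt for them:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Both flags are optional; missing values are chosen later in the UI."""
    parser = argparse.ArgumentParser(prog="build_gallery")
    parser.add_argument("--gallery-root", default=None,
                        help="existing or new gallery root folder")
    parser.add_argument("--video", default=None,
                        help="video file to open on launch")
    return parser
```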
## CLI usage (pip)
Help:
```bash
python -m gallery_track.tools.build_gallery -h
```
Launch with no pre-selected paths (you will choose video + gallery root in the UI):
```bash
python -m gallery_track.tools.build_gallery
```
Launch with initial paths:
```bash
python -m gallery_track.tools.build_gallery \
--gallery-root galleries \
--video in.mp4
```
When you pass these flags, the UI starts with the fields pre-filled, but you can still change them inside the app.
## CLI usage (uv)
```bash
uv run python -m gallery_track.tools.build_gallery --gallery-root galleries --video in.mp4
```
---
# License
This project is licensed under the **AGPL-3.0** license. See `LICENSE`.
For example, if your program is a web application, its interface could display a "Source" link that leads users to an archive of the code. There are many ways you could offer source, and different solutions will be better for different programs; see section 13 for the specific requirements. You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU AGPL, see <https://www.gnu.org/licenses/>. | null | [] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"boxmot>=16.0.11",
"pyqt5>=5.15.11"
] | [] | [] | [] | [] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T00:54:49.630600 | gallery_track_lib-0.2.1-py3-none-any.whl | 74,547 | ad/9b/d9edad6458da426760bb9d19bf836ce81d93dfff637a2b1143cc8b1f9169/gallery_track_lib-0.2.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 780aeb0cd103ddddfb98eb6660e5fc70 | 5871cc5dda23d5c00301964d7e1f7fe3298634d1a027ba50ed1617cbf22d07b4 | ad9bd9edad6458da426760bb9d19bf836ce81d93dfff637a2b1143cc8b1f9169 | null | [
"LICENSE"
] | 253 |
2.4 | datasette-files-s3 | 0.1a0 | datasette-files S3 backend | # datasette-files-s3
[](https://pypi.org/project/datasette-files-s3/)
[](https://github.com/datasette/datasette-files-s3/releases)
[](https://github.com/datasette/datasette-files-s3/actions/workflows/test.yml)
[](https://github.com/datasette/datasette-files-s3/blob/main/LICENSE)
S3 storage backend for [datasette-files](https://github.com/datasette/datasette-files).
## Installation
Install this plugin in the same environment as Datasette.
```bash
datasette install datasette-files-s3
```
## Usage
Configure a datasette-files source to use S3 storage by setting `"storage": "s3"` and providing the required configuration options:
```yaml
plugins:
datasette-files:
sources:
my-s3-files:
storage: s3
config:
bucket: my-bucket-name
region: us-east-1
access_key_id: AKIAIOSFODNN7EXAMPLE
secret_access_key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```
Or using Datasette's `-s` flag:
```bash
datasette data.db \
-s plugins.datasette-files.sources.my-s3-files.storage s3 \
-s plugins.datasette-files.sources.my-s3-files.config.bucket my-bucket-name \
-s plugins.datasette-files.sources.my-s3-files.config.region us-east-1
```
### Configuration options
- **bucket** (required): The name of the S3 bucket.
- **region** (optional, default `us-east-1`): The AWS region.
- **prefix** (optional): A prefix to add to all S3 object keys. This allows you to store files under a specific path within the bucket. A trailing slash will be added automatically if not provided - `"uploads"` and `"uploads/"` are equivalent.
- **endpoint_url** (optional): A custom S3 endpoint URL, for use with S3-compatible services.
- **access_key_id** (optional): AWS access key ID.
- **secret_access_key** (optional): AWS secret access key.
### Authentication
The plugin resolves AWS credentials using the following priority:
1. **Direct configuration**: `access_key_id` and `secret_access_key` in the config block.
2. **datasette-secrets**: If [datasette-secrets](https://github.com/datasette/datasette-secrets) is installed, the plugin will look for `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` secrets.
3. **Default AWS credential chain**: If no credentials are provided through the above methods, the plugin falls back to the default AWS credential chain (environment variables, IAM roles, etc.).
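The resolution order above can be sketched as a simple chain. This is an illustrative sketch only — the function and argument names are hypothetical, not the plugin's actual internals:

```python
def resolve_credentials(config, secrets=None):
    """Illustrative sketch of the priority order above; not the plugin's real API."""
    # 1. Direct configuration wins.
    if config.get("access_key_id") and config.get("secret_access_key"):
        return config["access_key_id"], config["secret_access_key"]
    # 2. datasette-secrets, when installed and populated.
    if secrets:
        key = secrets.get("AWS_ACCESS_KEY_ID")
        secret = secrets.get("AWS_SECRET_ACCESS_KEY")
        if key and secret:
            return key, secret
    # 3. Returning no explicit credentials lets boto3 fall back to the
    #    default AWS chain (environment variables, IAM roles, etc.).
    return None, None
```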
### Prefix
The `prefix` option lets you scope all files to a specific path within the bucket. For example, with `prefix: "uploads/"`, a file uploaded as `photo.jpg` will be stored at the S3 key `uploads/photo.jpg`.
It does not matter whether you include a trailing slash or not - `"uploads"` and `"uploads/"` will both result in files stored under `uploads/`.
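That trailing-slash behavior amounts to a one-line normalization. A hedged sketch (the helper name is hypothetical, not part of the plugin):

```python
def normalize_prefix(prefix: str) -> str:
    """Ensure a non-empty prefix ends with exactly one trailing slash."""
    if not prefix:
        return ""
    return prefix.rstrip("/") + "/"
```

With this, `"uploads"` and `"uploads/"` both normalize to `uploads/`, and a file uploaded as `photo.jpg` gets the object key `uploads/photo.jpg`.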
## Development
To set up this plugin locally, first check out the code:
```bash
git clone https://github.com/datasette/datasette-files-s3
cd datasette-files-s3
```
Run tests like this:
```bash
uv run pytest
```
You can use [SeaweedFS](https://github.com/seaweedfs/seaweedfs) to run a local development server against a local imitation of the S3 API:
```bash
brew install seaweedfs
./dev-server.sh
```
To run a local development server against a real S3 bucket, create a `dev-s3.sh` script (this file is in `.gitignore`):
```bash
#!/bin/bash
set -e
BUCKET="your-bucket-name"
REGION="us-east-1"
ACCESS_KEY="your-access-key-id"
SECRET_KEY="your-secret-access-key"
uv run datasette data.db --create --internal internal.db --root --secret 1 --reload \
-s plugins.datasette-files.sources.s3-live.storage s3 \
-s plugins.datasette-files.sources.s3-live.config.bucket "$BUCKET" \
-s plugins.datasette-files.sources.s3-live.config.region "$REGION" \
-s plugins.datasette-files.sources.s3-live.config.access_key_id "$ACCESS_KEY" \
-s plugins.datasette-files.sources.s3-live.config.secret_access_key "$SECRET_KEY" \
-s plugins.datasette-files.sources.s3-live.config.prefix "demo-prefix/" \
-s permissions.files-browse true \
-s permissions.files-upload true \
-s permissions.files-edit true
```
Then run it with `bash dev-s3.sh` and follow the login token URL printed to the console.
| text/markdown | Simon Willison | null | null | null | Apache-2.0 | null | [
"Framework :: Datasette"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"datasette-files>=0.1a0",
"aioboto3"
] | [] | [] | [] | [
"Homepage, https://github.com/datasette/datasette-files-s3",
"Changelog, https://github.com/datasette/datasette-files-s3/releases",
"Issues, https://github.com/datasette/datasette-files-s3/issues",
"CI, https://github.com/datasette/datasette-files-s3/actions"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T00:53:54.774467 | datasette_files_s3-0.1a0.tar.gz | 10,598 | a5/8d/971cbc5c3df9e0be3debd420983a40032b681cefbaaa07c544bc3d413bbe/datasette_files_s3-0.1a0.tar.gz | source | sdist | null | false | d78cf6e1c8846421db8c05ac30982ef2 | 05e42c69076e49cd00d2e9d4e9a05d8ad99a66fb6aa0677940b75138f6593d75 | a58d971cbc5c3df9e0be3debd420983a40032b681cefbaaa07c544bc3d413bbe | null | [
"LICENSE"
] | 223 |
2.4 | odoo-boost | 0.4.1 | AI coding agents with deep introspection into running Odoo instances via MCP tools | # Odoo Boost
AI coding agents with deep introspection into running Odoo instances via MCP tools.
Inspired by [Laravel Boost](https://github.com/laravel/boost), Odoo Boost gives your AI coding assistant deep knowledge of your Odoo project — models, views, records, access rights, configuration, and more — plus Odoo-specific development guidelines and step-by-step skills.
## Features
- **15 MCP Tools** — Introspect models, views, records, access rights, config, routes, workflows, and more from a live Odoo instance
- **6 AI Agents** — Claude Code, Cursor, Copilot, Codex, Gemini CLI, Junie
- **Odoo Guidelines** — Version-aware development best practices injected into your agent's context
- **8 Skills** — Step-by-step guides for common Odoo development tasks (creating models, views, security, OWL components, etc.)
- **Multi-version** — Supports Odoo 17, 18, and 19
- **Zero config on Odoo side** — Connects via XML-RPC, no Odoo module installation needed
## Installation
```bash
pip install odoo-boost
```
Or with [uv](https://docs.astral.sh/uv/):
```bash
uv pip install odoo-boost
```
You can also install it as a global CLI tool:
```bash
uv tool install odoo-boost
```
## Quick Start
### 1. Run the install wizard
```bash
cd /path/to/your/odoo-project
odoo-boost install
```
The wizard will:
- Ask for your Odoo connection details (URL, database, username, password)
- Test the connection and detect the Odoo version
- Let you select which AI agents to configure
- Generate all necessary files (guidelines, MCP config, skills)
### 2. Verify the connection
```bash
odoo-boost check
```
Or with explicit credentials:
```bash
odoo-boost check --url http://localhost:8069 --database mydb --username admin --password admin
```
### 3. Start coding
Your AI agent is now configured. The MCP server starts automatically when your agent needs it. Try asking your agent:
> "What models are available in this Odoo instance?"
> "Show me the fields on the res.partner model"
> "Search for all installed modules related to accounting"
## Commands
| Command | Description |
|---------|-------------|
| `odoo-boost install` | Interactive setup wizard |
| `odoo-boost check` | Test connection to Odoo |
| `odoo-boost update` | Re-generate files from saved config |
| `odoo-boost mcp` | Start the MCP server (stdio) |
| `odoo-boost --version` | Show version |
You can also run any command via `python -m odoo_boost`, e.g. `python -m odoo_boost --version`.
## How It Works
```
┌─────────────────┐ stdio ┌─────────────────┐ XML-RPC ┌──────────────┐
│ AI Agent │◄──────────────►│ Odoo Boost │◄─────────────►│ Odoo │
│ (Claude, etc.) │ │ MCP Server │ │ Instance │
└─────────────────┘ └─────────────────┘ └──────────────┘
│ │
▼ │
Guidelines + 15 MCP Tools
Skills (md) (models, views, records,
config, access rights…)
```
Odoo Boost sits between your AI agent and your Odoo instance. It provides:
1. **MCP Tools** — Your agent calls tools like `list_models`, `search_records`, `database_schema` to understand your Odoo instance in real-time
2. **Guidelines** — Odoo development best practices are injected into your agent's context so it writes idiomatic code
3. **Skills** — Step-by-step guides for common tasks (creating models, views, security rules, etc.)
### Robust MCP Server Resolution
The generated MCP config files use the **full path to the Python interpreter** that has Odoo Boost installed, rather than relying on a bare `odoo-boost` command being available on `PATH`. This ensures the MCP server starts correctly regardless of how your AI agent spawns subprocesses.
For example, the generated `.mcp.json` for Claude Code looks like:
```json
{
"mcpServers": {
"odoo-boost": {
"command": "/path/to/your/venv/bin/python",
"args": ["-m", "odoo_boost", "mcp"]
}
}
}
```
This means:
- The MCP server always runs in the correct Python environment
- No dependency on `PATH` configuration or shell activation
- Works with virtualenvs, `uv tool`, and system installs alike
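A generator for such a config can simply record `sys.executable`, the absolute path of the interpreter it is running under. A minimal sketch (the function name is illustrative, not Odoo Boost's actual code):

```python
import json
import sys

def claude_mcp_config() -> dict:
    # sys.executable is the absolute path of the interpreter running this
    # script, so the generated config is independent of PATH or venv activation.
    return {
        "mcpServers": {
            "odoo-boost": {
                "command": sys.executable,
                "args": ["-m", "odoo_boost", "mcp"],
            }
        }
    }

print(json.dumps(claude_mcp_config(), indent=2))
```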
## .gitignore
Generated files contain environment-specific paths and should generally not be committed. Add the following to your `.gitignore`:
```gitignore
# Odoo Boost
odoo-boost.json
CLAUDE.md
AGENTS.md
GEMINI.md
.mcp.json
.ai/skills/
.cursor/rules/odoo-boost.mdc
.cursor/mcp.json
.cursor/skills/
.vscode/mcp.json
.github/copilot-instructions.md
.github/skills/
.codex/
.gemini/settings.json
.agents/skills/
.junie/
```
> **Note:** Only the files listed above are generated by Odoo Boost.
> Directories like `.github/` and `.vscode/` may contain other project files — do not ignore the entire directory.
## Documentation
- [Getting Started](https://github.com/havmedia/odoo-boost/blob/main/docs/getting-started.md) — Full setup walkthrough
- [MCP Tools Reference](https://github.com/havmedia/odoo-boost/blob/main/docs/mcp-tools.md) — All 15 tools with parameters and examples
- [Agent Configuration](https://github.com/havmedia/odoo-boost/blob/main/docs/agents.md) — Supported agents and their generated files
- [Configuration](https://github.com/havmedia/odoo-boost/blob/main/docs/configuration.md) — `odoo-boost.json` schema and CLI options
- [Guidelines](https://github.com/havmedia/odoo-boost/blob/main/docs/guidelines.md) — Bundled Odoo development guidelines
- [Skills](https://github.com/havmedia/odoo-boost/blob/main/docs/skills.md) — Step-by-step development skills
- [Contributing](https://github.com/havmedia/odoo-boost/blob/main/CONTRIBUTING.md) — How to add tools, agents, and skills
## Supported Agents
| Agent | Guidelines | MCP Config | Skills |
|-------|-----------|------------|--------|
| Claude Code | `CLAUDE.md` | `.mcp.json` | `.ai/skills/` |
| Cursor | `.cursor/rules/odoo-boost.mdc` | `.cursor/mcp.json` | `.cursor/skills/` |
| GitHub Copilot | `.github/copilot-instructions.md` | `.vscode/mcp.json` | `.github/skills/` |
| OpenAI Codex | `AGENTS.md` | `.codex/config.toml` | `.agents/skills/` |
| Gemini CLI | `GEMINI.md` | `.gemini/settings.json` | `.agents/skills/` |
| Junie | `.junie/guidelines.md` | `.junie/mcp/mcp.json` | `.junie/skills/` |
## License
MIT
| text/markdown | null | Jan-Phillip Oesterling <jappi2000@ewetel.net> | null | Jan-Phillip Oesterling <jappi2000@ewetel.net> | null | ai, boost, coding-assistant, mcp, odoo | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: P... | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.25.0",
"mcp>=1.0.0",
"pydantic>=2.0.0",
"rich>=13.0.0",
"typer>=0.9.0",
"mypy>=1.13; extra == \"dev\"",
"pytest-cov>=6.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.8; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/havmedia/odoo-boost",
"Repository, https://github.com/havmedia/odoo-boost"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T00:53:41.134971 | odoo_boost-0.4.1.tar.gz | 58,581 | e5/ae/283755f5fb32bd95529c79d4044d2c4c5dcabe3042bda35bb556e23ea078/odoo_boost-0.4.1.tar.gz | source | sdist | null | false | f22736c106c9431f85d71b0b0b47a6c0 | 70c7276e174ec271297f4b779a97c36887e587515e09c02117845b6fea7e96a0 | e5ae283755f5fb32bd95529c79d4044d2c4c5dcabe3042bda35bb556e23ea078 | MIT | [] | 250 |
2.4 | risk-distributions | 2.3.0 | Components for building distributions. Compatible for use with ``vivarium`` | Risk Distributions
======================
.. image:: https://badge.fury.io/py/risk_distributions.svg
:target: https://badge.fury.io/py/risk-distributions
.. image:: https://github.com/ihmeuw/risk_distributions/actions/workflows/build.yml/badge.svg?branch=main
:target: https://github.com/ihmeuw/risk_distributions
:alt: Latest Version
.. image:: https://readthedocs.org/projects/risk-distributions/badge/?version=latest
:target: https://risk-distributions.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
This library contains various probability distributions compatible with the ``Vivarium`` framework.
You can install ``risk_distributions`` from PyPI with pip:
``> pip install risk_distributions``
or build it from source with
``> git clone https://github.com/ihmeuw/risk_distributions.git``
``> cd risk_distributions``
``> python setup.py install``
`Check out the docs! <https://risk-distributions.readthedocs.io/en/latest/>`_
-----------------------------------------------------------------------------
| null | The risk_distributions developers | vivarium.dev@gmail.com | null | null | BSD-3-Clause | null | [
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Natural Language :: English",
"Operating System :: MacOS :: MacOS X",
"Operating System :: POSIX",
"Operating System :: POSIX :: BSD",
"Operating S... | [] | https://github.com/ihmeuw/risk_distributions | null | null | [] | [] | [] | [
"vivarium_dependencies[numpy,pandas,scipy]",
"vivarium_build_utils<3.0.0,>=2.0.1",
"vivarium_dependencies[pytest]; extra == \"test\"",
"vivarium_dependencies[sphinx]; extra == \"docs\"",
"vivarium_dependencies[interactive]; extra == \"interactive\"",
"vivarium_dependencies[pytest]; extra == \"dev\"",
"v... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T00:53:17.452664 | risk_distributions-2.3.0.tar.gz | 29,240 | de/6b/1155d7cca62e9ff39265efe5d558160747ecb98b512d3b6680f42930761e/risk_distributions-2.3.0.tar.gz | source | sdist | null | false | 806df3c64a3a3b76710bc79e62ad5eac | 37a9bdf2d787b9550bef4ce9497f8e8dfc5cf53e2d1e8e2380d53d612ef04ad1 | de6b1155d7cca62e9ff39265efe5d558160747ecb98b512d3b6680f42930761e | null | [
"LICENSE.txt",
"AUTHORS.rst"
] | 348 |
2.4 | budgetgate | 0.1.0 | Deterministic, pre-execution spend limiting for semantic actions in agent systems. | # BudgetGate
Deterministic, pre-execution spend limiting for semantic actions in agent systems.
## Source of Truth
The canonical source is [github.com/actiongate-oss/budgetgate](https://github.com/actiongate-oss/budgetgate); the PyPI distribution is a convenience mirror.
**Vendoring encouraged.** This is a small, stable primitive. Copy it, fork it, reimplement it.
---
## Quick Start
```python
from decimal import Decimal
from budgetgate import Engine, Ledger, Budget, BudgetExceeded
engine = Engine()
@engine.guard(
Ledger("openai", "gpt-4", "user:123"),
Budget(max_spend=Decimal("10.00"), window=3600), # $10/hour
cost=Decimal("0.03"), # fixed cost per call
)
def call_gpt4(prompt: str) -> str:
return openai.chat(prompt)
try:
response = call_gpt4("Hello")
except BudgetExceeded as e:
print(f"Budget exceeded: {e.decision.spent_in_window} spent")
```
---
## Two Cost Modes
### Fixed Cost (pre-execution)
When cost is known before execution:
```python
@engine.guard(
Ledger("openai", "embedding"),
Budget(max_spend=Decimal("5.00"), window=3600),
cost=Decimal("0.0001"), # fixed cost per call
)
def embed(text: str) -> list[float]:
return openai.embed(text)
```
### Bounded Dynamic Cost (pre-execution with estimate)
When cost depends on the result but has a known upper bound:
```python
@engine.guard_bounded(
Ledger("anthropic", "claude", "user:123"),
Budget(max_spend=Decimal("5.00"), window=3600),
estimate=Decimal("0.50"), # max possible cost (reserved before execution)
actual=lambda r: Decimal(str(r.usage.total_cost)), # actual cost (committed after)
)
def call_claude(prompt: str) -> Response:
return anthropic.messages.create(...)
```
The estimate is reserved before execution. If it doesn't fit the budget, the action is blocked. After execution, the actual cost is committed and unused budget is recovered.
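The reserve-then-commit flow can be illustrated with a minimal in-memory sketch (the class and method names here are illustrative, not BudgetGate's internal API):

```python
from decimal import Decimal

class WindowSpend:
    """Toy spend tracker for a single ledger window."""

    def __init__(self, max_spend: Decimal) -> None:
        self.max_spend = max_spend
        self.spent = Decimal("0")

    def reserve(self, estimate: Decimal) -> bool:
        # Block pre-execution if the upper-bound estimate does not fit.
        if self.spent + estimate > self.max_spend:
            return False
        self.spent += estimate
        return True

    def commit(self, estimate: Decimal, actual: Decimal) -> None:
        # After execution, recover the unused portion of the reservation.
        self.spent -= estimate - actual

window = WindowSpend(max_spend=Decimal("5.00"))
window.reserve(Decimal("0.50"))                  # upper bound reserved
window.commit(Decimal("0.50"), Decimal("0.12"))  # only actual cost remains
print(window.spent)  # → 0.12
```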
---
## Core Concepts
### Ledger
Identifies a spend-tracked stream:
```python
Ledger(namespace, resource, principal)
Ledger("openai", "gpt-4", "user:123") # per-user
Ledger("anthropic", "claude", "team:eng") # per-team
Ledger("infra", "compute", "global") # global
```
### Budget
```python
Budget(
max_spend=Decimal("10.00"), # max spend in window
window=3600, # rolling window (seconds)
mode=Mode.HARD, # HARD raises, SOFT returns result
on_store_error=StoreErrorMode.FAIL_CLOSED,
)
```
### Decision
Every check returns a Decision with:
```python
decision.allowed # bool
decision.spent_in_window # Decimal - current spend
decision.remaining # Decimal - budget remaining
decision.requested # Decimal - amount requested
```
---
## Decorator Styles
| Decorator | Cost Mode | Returns | On Block |
|-----------|-----------|---------|----------|
| `guard` | Fixed | `T` | Raises `BudgetExceeded` |
| `guard_bounded` | Dynamic | `T` | Raises `BudgetExceeded` |
| `guard_result` | Fixed | `Result[T]` | Returns blocked result |
| `guard_bounded_result` | Dynamic | `Result[T]` | Returns blocked result |
```python
# Raises on block
@engine.guard(ledger, budget, cost=Decimal("0.01"))
def fixed_action(): ...
@engine.guard_bounded(ledger, budget, estimate=Decimal("0.50"), actual=lambda r: r.cost)
def dynamic_action(): ...
# Never raises - returns Result[T]
@engine.guard_result(ledger, budget, cost=Decimal("0.01"))
def fixed_action(): ...
@engine.guard_bounded_result(ledger, budget, estimate=Decimal("0.50"), actual=lambda r: r.cost)
def dynamic_action(): ...
```
---
## Relation to ActionGate
BudgetGate complements [ActionGate](https://github.com/actiongate-oss/actiongate):
| Primitive | Limits | Use case |
|-----------|--------|----------|
| ActionGate | calls/time | Rate limiting |
| BudgetGate | cost/time | Spend limiting |
Both are:
- Deterministic
- Pre-execution
- Decorator-friendly
- Store-backed
Use together:
```python
from decimal import Decimal
@actiongate_engine.guard(Gate("api", "search"), Policy(max_calls=100))
@budgetgate_engine.guard(Ledger("api", "search"), Budget(max_spend=Decimal("1.00")), cost=Decimal("0.01"))
def search(query: str) -> list:
...
```
---
## API Reference
| Type | Purpose |
|------|---------|
| `Engine` | Core spend tracking |
| `Ledger` | Spend stream identity |
| `Budget` | Spend policy |
| `Decision` | Evaluation result |
| `Result[T]` | Wrapper for `guard_result` |
| `BudgetExceeded` | Exception from `guard` |
| Enum | Values |
|------|--------|
| `Mode` | `HARD`, `SOFT` |
| `StoreErrorMode` | `FAIL_CLOSED`, `FAIL_OPEN` |
| `Status` | `ALLOW`, `BLOCK` |
| `BlockReason` | `BUDGET_EXCEEDED`, `STORE_ERROR` |
---
## Numeric Precision
All spend amounts use `Decimal` to avoid floating-point drift. See [SEMANTICS.md](SEMANTICS.md) §9.
---
## Support
Published as-is under the MIT license. No support, SLA, or maintenance guaranteed.
## License
MIT
| text/markdown | actiongate-oss | null | null | null | null | ai-agents, budget, cost-control, llm, spend-limiting | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"mypy>=1.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.1; extra == \"dev\"",
"redis>=4.0; extra == \"redis\""
] | [] | [] | [] | [
"Homepage, https://github.com/actiongate-oss/budgetgate",
"Documentation, https://github.com/actiongate-oss/budgetgate#readme",
"Repository, https://github.com/actiongate-oss/budgetgate"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-20T00:52:32.169082 | budgetgate-0.1.0.tar.gz | 12,400 | ff/15/9e9322ce835a3c3ea3e595b999b9632ae9f8010cbd8a58c2505c90a42bcd/budgetgate-0.1.0.tar.gz | source | sdist | null | false | c2c0341f9deae8b86f752be28874e5e7 | 49295ceb67608e3c625b4c85ec37f5403d932e0057791afcfe690108166a888c | ff159e9322ce835a3c3ea3e595b999b9632ae9f8010cbd8a58c2505c90a42bcd | MIT | [
"LICENSE"
] | 252 |
2.1 | aws-cdk.asset-awscli-v2 | 2.0.163 | An Asset construct that contains the AWS CLI, for use in Lambda Layers | # Asset with AWS CLI v2
<!--BEGIN STABILITY BANNER-->---

---
> This library is currently under development. Do not use!
<!--END STABILITY BANNER-->
This module exports a single class called `AwsCliAsset` which is an `s3_assets.Asset` that bundles the AWS CLI v2.
Usage:
```python
# AwsCliLayer bundles the AWS CLI in a lambda layer
from aws_cdk.asset_awscli_v2 import AwsCliAsset
import aws_cdk.aws_lambda as lambda_
import aws_cdk.aws_s3_assets as s3_assets
from aws_cdk import FileSystem
# fn: lambda.Function
awscli = AwsCliAsset(self, "AwsCliCode")
fn.add_layers(lambda_.LayerVersion(self, "AwsCliLayer",
code=lambda_.Code.from_bucket(awscli.bucket, awscli.s3_object_key)
))
```
The CLI will be installed under `/opt/awscli/aws`.
| text/markdown | Amazon Web Services<aws-cdk-dev@amazon.com> | null | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: JavaScript",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Typing :: Typed",
... | [] | https://github.com/cdklabs/awscdk-asset-awscli#readme | null | ~=3.9 | [] | [] | [] | [
"aws-cdk-lib<3.0.0,>=2.0.0",
"constructs<11.0.0,>=10.0.5",
"jsii<2.0.0,>=1.126.0",
"publication>=0.0.3",
"typeguard==2.13.3"
] | [] | [] | [] | [
"Source, https://github.com/cdklabs/awscdk-asset-awscli.git"
] | twine/6.1.0 CPython/3.14.2 | 2026-02-20T00:51:34.606363 | aws_cdk_asset_awscli_v2-2.0.163.tar.gz | 62,267,405 | 31/32/74fa136fcab4f89732b020f9901a8c9c89b7cd1f21fc3f934fd479540890/aws_cdk_asset_awscli_v2-2.0.163.tar.gz | source | sdist | null | false | ffec9a1fb13d90144c3e4f7f82384e78 | 173c152f49f6fd7b9b0fb7e01e3b8a8e5e91a930434e06c2d2e8aea47ce89428 | 313274fa136fcab4f89732b020f9901a8c9c89b7cd1f21fc3f934fd479540890 | null | [] | 0 |
2.3 | zensols-zotsite | 1.3.0 | This program exports your local [Zotero] library to a usable HTML website with the following features. | # Zotsite: A Zotero Export Utility
[![PyPI][pypi-badge]][pypi-link]
[![Python 3.11][python311-badge]][python311-link]
[![Python 3.12][python312-badge]][python312-link]
[![Build Status][build-badge]][build-link]
This project exports your local [Zotero] library to a usable HTML website with
the following features:
* Easily access your papers, site snapshots, notes from a navigation tree.
* Provides metadata from collections and attachments (i.e. references, etc.).
* Display PDF papers and website snapshot (the latter as framed).
* Search function dynamically narrows down the papers you're looking for.
* Embed links to a specific collection, article, item, note etc.
* Export only a portion of your collection with regular expressions using the
collection name.
* [BetterBibtex] integration.
* Snazzy look and feel from the latest [Bootstrap] CSS/Javascript library.
<!-- markdown-toc start - Don't edit this section. Run M-x markdown-toc-refresh-toc -->
## Table of Contents
- [Documentation](#documentation)
- [Obtaining](#obtaining)
- [Process](#process)
- [Sample Site Demonstration](#sample-site-demonstration)
- [Requirements](#requirements)
- [Usage](#usage)
- [Command Line](#command-line)
- [API](#api)
- [Configuration File](#configuration-file)
- [Screenshot](#screenshot)
- [Ubuntu and Linux Systems with Python 3.5 or Previous Version](#ubuntu-and-linux-systems-with-python-35-or-previous-version)
- [Attribution](#attribution)
- [Todo](#todo)
- [Zotero Plugin Listing](#zotero-plugin-listing)
- [Changelog](#changelog)
- [Community](#community)
- [License](#license)
<!-- markdown-toc end -->
## Documentation
See the [full documentation](https://plandes.github.io/zotsite/index.html).
The [API reference](https://plandes.github.io/zotsite/api.html) is also
available.
## Obtaining
The library can be installed with pip from the [pypi] repository:
```bash
pip3 install zensols.zotsite
```
## Process
The tool does the following:
1. Exports the metadata (directory structure, references, notes, etc.) from
   your [Zotero] library. On macOS, this is done by querying the file-system
   SQLite database files.
2. Copies a static site that enables traversal of the exported data.
3. Copies your [Zotero] stored papers, snapshot (sites) etc.
4. Generates a navigation tree to easily find your papers/content.
## Sample Site Demonstration
See the [live demo], which provides a variety of resources found in my own
library. *Note:* To my knowledge, all of these resources are free to
distribute and violate no laws. If I've missed one,
please [create an issue](CONTRIBUTING.md).
## Requirements
[BetterBibtex] plugin for Zotero.
## Usage
The library is typically used from the command line to create websites, but it
can also be used as an API from Python.
### Command Line
The command line program has two modes: show configuration (a good first step)
and to create the web site. You can see what the program is parsing from your
[Zotero] library:
```bash
zotsite print
```
To create the stand-alone site, run the program:
```bash
zotsite export
```
If your library is not in the default `~/zotero` directory, change that path
by creating a `zotsite.conf` configuration file and passing it with the `-c`
option. This will create the html files in the directory `./zotsite`:
```bash
zotsite export -c zotsite.conf
```
A mapping of BetterBibtex citation keys to Zotero's database unique *item keys*
can be useful to scripts:
```bash
zotsite citekey -k all
```
The tool also provides a means of finding where papers are by *item key*:
```bash
zotsite docpath -k all
```
See [usage](doc/usage.md) for more information. Command line usage is also
available with the `--help` option.
### API
The API provides access to a Python object that creates the website, can
resolve BetterBibtex citation keys to Zotero unique identifier *item keys* and
provide paths of item attachments (such as papers).
The following example comes from [this working script](example/showpaper.py).
```python
>>> from typing import Dict, Any
>>> from pathlib import Path
>>> from zensols.zotsite import Resource, ApplicationFactory
# get the resource facade objects, which provides access to create the site,
# citation and path lookup methods
>>> resource: Resource = ApplicationFactory.get_resource()
# get a mapping from <library ID>_<item key> to entry dictionaries
>>> entries: Dict[str, Dict[str, Any]] = resource.cite_db.entries
# get a mapping from item key (sans library ID) to the attachment path
>>> paths: Dict[str, Path] = resource.zotero_db.item_paths
# create BetterBibtex citation key to item key mapping
>>> bib2item: Dict[str, str] = dict(map(
... lambda e: (e['citationKey'], e['itemKey']),
... entries.values()))
# get the item key from the citation key
>>> itemKey: str = bib2item['landesCALAMRComponentALignment2024']
# get the path using the Zotero DB item key
>>> paper_path: Path = paths[itemKey]
>>> print(paper_path)
# display the paper (needs 'pip install zensols.rend')
>>> from zensols.rend import ApplicationFactory as RendAppFactory
>>> RendAppFactory.get_browser_manager()(paper_path)
```
### Configuration File
Either an environment variable `ZOTSITERC` must be set or a `-c` configuration
option must be given and point to a file to customize how the program works.
See the test [configuration file] for an example and inline comments for more
detail on how and what can be configured.
## Screenshot
Also see the [live demo].
![Screenshot][screenshot]
## Ubuntu and Linux Systems with Python 3.5 or Earlier
Please [read this issue](https://github.com/plandes/zotsite/issues/4) if you
are installing on Ubuntu or any Linux system with Python 3.5 or an earlier
version.
## Attribution
This software uses:
* Python 3
* [jQuery] version 3
* [DataTables] version 1.12
* [Bootstrap] version 4
* [Tree View] for Bootstrap
* [Popper] for tooltips
* [Copy to Clipboard] function
## Todo
* Make the site portion a proper JavaScript site. Right now, all the `min`s
  are added in the distribution to the same directory as
  the [main navigation/content](resources/site/src/js/zotero.js) file.
* Use something like zotxt to make this work with a plugin rather than directly
against the SQLite DB.
## Zotero Plugin Listing
This is listed as a [plugin] on the Zotero site.
## Changelog
An extensive changelog is available [here](CHANGELOG.md).
## Community
Please star this repository and let me know how and where you use this API.
[Contributions](CONTRIBUTING.md) as pull requests, feedback, and any input is
welcome.
## License
[MIT License](LICENSE.md)
Copyright (c) 2019 - 2026 Paul Landes
<!-- links -->
[pypi]: https://pypi.org/project/zensols.zotsite/
[pypi-link]: https://pypi.python.org/pypi/zensols.zotsite
[pypi-badge]: https://img.shields.io/pypi/v/zensols.zotsite.svg
[python311-badge]: https://img.shields.io/badge/python-3.11-blue.svg
[python311-link]: https://www.python.org/downloads/release/python-3110
[python312-badge]: https://img.shields.io/badge/python-3.12-blue.svg
[python312-link]: https://www.python.org/downloads/release/python-3120
[build-badge]: https://github.com/plandes/zotsite/workflows/CI/badge.svg
[build-link]: https://github.com/plandes/zotsite/actions
[gitter-link]: https://gitter.im/zoterosite/zotsite
[gitter-badge]: https://badges.gitter.im/zoterosite/gitter.png
[live demo]: https://plandes.github.io/zotsite/demo/index.html
[screenshot]: https://raw.githubusercontent.com/plandes/zotsite/master/doc/snapshot.png
[Zotero]: https://www.zotero.org
[jQuery]: https://jquery.com
[DataTables]: https://datatables.net
[Bootstrap]: https://getbootstrap.com
[Tree View]: https://github.com/jonmiles/bootstrap-treeview
[Popper]: https://popper.js.org
[plugin]: https://www.zotero.org/support/plugins#website_integration
[Copy to Clipboard]: https://ourcodeworld.com/articles/read/143/how-to-copy-text-to-clipboard-with-javascript-easily
[BetterBibtex]: https://github.com/retorquere/zotero-better-bibtex
[configuration file]: test-resources/zotsite.conf
[Python regular expression]: https://docs.python.org/3/library/re.html
| text/markdown | null | Paul Landes <landes@mailc.net> | null | null | MIT | academia, papers, web, zotero | [] | [] | null | null | <3.15,>=3.11 | [] | [] | [] | [
"zensols-db~=1.5.0"
] | [] | [] | [] | [
"Homepage, https://github.com/plandes/zotsite",
"Documentation, https://plandes.github.io/zotsite",
"Repository, https://github.com/plandes/zotsite.git",
"Issues, https://github.com/plandes/zotsite/issues",
"Changelog, https://github.com/plandes/zotsite/blob/master/CHANGELOG.md"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T00:48:49.236626 | zensols_zotsite-1.3.0-py3-none-any.whl | 629,745 | ce/8a/c88e820083a639744d31c67d4088312ecdfc83c2fd1beb5335689f4f6071/zensols_zotsite-1.3.0-py3-none-any.whl | py3 | bdist_wheel | null | false | e775278c3436c085dfb10d5234d294b2 | fbada668d45de9f50adba4c0dbfdc35f86b5476a64b9cfa1e28f14864ce67bca | ce8ac88e820083a639744d31c67d4088312ecdfc83c2fd1beb5335689f4f6071 | null | [] | 95 |
2.4 | modernmetric | 1.6.0 | Calculate code metrics in various languages | # Modern Metric
Calculate code metrics and complexity in various languages
## Purpose
This tool tries to calculate the following metrics for many, many programming languages:
* Comment to Code percentage
* Cyclomatic complexity according to McCabe
* Difficulty according to Halstead
* Effort according to Halstead
* Fan-Out
* Lines of code
* Maintainability index
* Metric according to pylint
* Metric according to TIOBE
* Number of delivered bugs according to Halstead
* Time required to program according to Halstead
* Volume according to Halstead
This tool was heavily inspired by [metrics](https://github.com/markfink/metrics)
## Requirements
* python3
* [chardet](https://pypi.org/project/chardet/)
* [Pygments](http://pygments.org/)
## Installation
### PyPI
Simply run:
```sh
pip3 install modernmetric
```
### From source
* git clone this repository
* cd to \<clone folder\>
* Install the needed requirements by running ```pip3 install -r requirements.txt```
* run `python3 setup.py build`
## Usage
```shell
usage: modernmetric [-h] [--warn_compiler WARN_COMPILER]
[--warn_duplication WARN_DUPLICATION]
[--warn_functional WARN_FUNCTIONAL]
[--warn_standard WARN_STANDARD]
[--warn_security WARN_SECURITY] [--coverage COVERAGE]
[--bugpredict {old,new}]
[--maintindex {sei,classic,microsoft}]
[--file=path_to_filelist]
AND/OR
files [files ...]
Calculate code metrics in various languages
positional arguments:
files Files to parse
optional arguments:
-h, --help show this help message and exit
--file=path_to_filelist
Path to JSON filelist to scan. Format is:
[
{
"name": "test.c",
"path": "../testfiles/test.c"
}
]
--output_file=path to output, optional
--warn_compiler WARN_COMPILER
File(s) holding information about compiler warnings
--warn_duplication WARN_DUPLICATION
File(s) holding information about code duplications
--warn_functional WARN_FUNCTIONAL
File(s) holding information about static code analysis findings
--warn_standard WARN_STANDARD
File(s) holding information about language standard violations
--warn_security WARN_SECURITY
                        File(s) holding information about found security issues
  --coverage COVERAGE   File(s) holding information about testing coverage
--bugpredict {old,new}
Method how to calculate the bug prediction
--maintindex {sei,classic,microsoft}
Method how to calculate the maintainability index
Currently you can import files of the following types for --warn_* or --coverage
The following information can be read
<file> = full path to file
<content> = either a string or a number (see note)
<severity> = optional severity
Note: you can also add a single line, then <content>
has to be a number reflecting the total number of findings
File formats
csv: CSV file of following line format
<file>,<content>,<severity>
json: JSON file
<file>: {
"content": <content>,
"severity": <severity>
}
```
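The `--file` filelist is plain JSON, so it is easy to generate from a script. A minimal sketch using only the Python standard library (the file name and path come from the help text above and are for illustration only):

```python
import json
import os
import tempfile

# Build a filelist in the format expected by --file (see the help text above)
filelist = [
    {"name": "test.c", "path": "../testfiles/test.c"},
]
filelist_path = os.path.join(tempfile.gettempdir(), "filelist.json")
with open(filelist_path, "w", encoding="utf-8") as fh:
    json.dump(filelist, fh, indent=2)

# Then run: modernmetric --file=<path to filelist.json>
print(filelist_path)
```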
By default the tool guesses the content type from the filename; if that doesn't work for you, please see the file formats listed above.
## Output
Output will be written to stdout as json.
### Output structure
* `files` contains a list of each file passed by CLI
* `overall` contains the calculated values for all passed files
* `stats` contains the statistically calculated values over all files passed [see Statistical additions](#statistics)
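Since the report is written to stdout as JSON, it can be post-processed with any JSON tooling. A hedged sketch of reading such a report in Python (the structure follows the list above, but the metric values here are invented for illustration, not real tool output):

```python
import json

# Hypothetical captured stdout of a modernmetric run
raw = """
{
  "files": {"test.c": {"loc": 120, "comment_ratio": 31.0}},
  "overall": {"loc": 120, "comment_ratio": 31.0},
  "stats": {"min": {"loc": 120}, "max": {"loc": 120}}
}
"""
report = json.loads(raw)
print(report["overall"]["loc"])  # 120
```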
#### Item structure
| item | description | range | recommendation |
| --------------------- | ---------------------------------------------- | -------- | -------------- |
| comment_ratio | Comment to Code percentage | 0..100 | > 30.0 |
| cyclomatic_complexity | Cyclomatic complexity according to McCabe | 0..(inf) | < 10 |
| fanout_external | Number imports from out of tree modules | 0..(inf) | |
| fanout_internal | Number imports from same source tree modules | 0..(inf) | |
| halstead_bugprop | Number of delivered bugs according to Halstead | 0..(inf) | < 0.05 |
| halstead_difficulty | Difficulty according to Halstead | 0..(inf) | |
| halstead_effort | Effort according to Halstead | 0..(inf) | |
| halstead_timerequired | Time required to program according to Halstead | 0..(inf) | |
| halstead_volume | Volume according to Halstead | 0..(inf) | |
| lang | list of identified programming languages | list | |
| loc | Lines of code | 1..(inf) | |
| maintainability_index | Maintainability index | 0..100 | > 80.0 |
| operands_sum | Number of used operands | 1..(inf) | |
| operands_uniq | Number of unique used operands | 1..(inf) | |
| operators_sum | Number of used operators | 1..(inf) | |
| operators_uniq | Number of unique used operators | 1..(inf) | |
| pylint | General quality score according to pylint | 0..100 | > 80.0 |
| tiobe_compiler | Compiler warnings score according to TIOBE | 0..100 | > 90.0 |
| tiobe_complexity | Complexity according to TIOBE | 0..100 | > 80.0 |
| tiobe_coverage | Coverage according to TIOBE | 0..100 | > 80.0 |
| tiobe_duplication | Code duplications score according to TIOBE | 0..100 | > 80.0 |
| tiobe_fanout | Fan-Out score according to TIOBE | 0..100 | > 80.0 |
| tiobe_functional | Functional defect score according to TIOBE | 0..100 | > 90.0 |
| tiobe_security | Security score according to TIOBE | 0..100 | > 90.0 |
| tiobe_standard | Language standard score according to TIOBE | 0..100 | > 80.0 |
| tiobe | General quality score according to TIOBE | 0..100 | > 80.0 |
#### Statistics
In addition to the items mentioned above, the `stats` item contains the following, each of which itself contains all the items listed under [Item structure](#item-structure)
* `max` = the maximum value of all items of the metric
* `mean` = statistical mean over all items of the metric
* `median` = statistical median over all items of the metric
* `min` = the minimum value of all items of the metric
* `sd` = standard deviation over all items of the metric
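As an illustration of how these aggregates relate to the per-file values, the following sketch computes the same five statistics with the Python standard library (the `loc` values are hypothetical):

```python
import statistics

# Hypothetical loc values for three scanned files
loc_per_file = [80, 120, 200]
stats = {
    "min": min(loc_per_file),
    "max": max(loc_per_file),
    "mean": statistics.mean(loc_per_file),
    "median": statistics.median(loc_per_file),
    "sd": statistics.stdev(loc_per_file),
}
print(stats["median"])  # 120
```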
## Further reading
* [Pygments](http://pygments.org/)
## Bugs & Contribution
Feel free to create issues or pull requests
| text/markdown | null | Jason Nichols <github@verinfast.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: Free for non-commercial use",
"Natural Language :: English",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Program... | [] | null | null | >=3.11 | [] | [] | [] | [
"cachehash>=1.1.4",
"chardet>=5.2.0",
"httpx[http2]~=0.28.1",
"pygments-tsx>=1.0.4",
"pygments>=2.19.2",
"pygount>=3.1.1",
"black>=26.1.0; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"pytest>=9.0.2; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/verinfast/modernmetric",
"Bug Tracker, https://github.com/verinfast/modernmetric/issues",
"Source, https://github.com/verinfast/modernmetric"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T00:47:31.940655 | modernmetric-1.6.0.tar.gz | 18,011,094 | 80/a3/fc58e6d97a5a70aac83395f9d8df2811dd62b8846f2852718fc906c84eec/modernmetric-1.6.0.tar.gz | source | sdist | null | false | 98b3c864eb71bf432b3232ce118078bf | 4128582d418cf94ceee2a2726ab97ac340ef285ca1cac133b46e10b4c6fc81fb | 80a3fc58e6d97a5a70aac83395f9d8df2811dd62b8846f2852718fc906c84eec | CC-BY-NC-4.0 | [
"LICENSE"
] | 277 |
2.4 | rusticlone | 1.6.1 | 3-2-1 backups using Rustic and RClone | <!--
// ┌───────────────────────────────────────────────────────────────┐
// │ Contents of README.md │
// ├───────────────────────────────────────────────────────────────┘
// │
// ├──┐Rusticlone
// │ ├── Motivation
// │ ├── Installation
// │ └──┐Usage
// │ ├── Backup
// │ ├──┐Restore
// │ │ ├── From the local Rustic repo
// │ │ └── From the RClone remote
// │ ├── Individual commands
// │ ├── Push notifications
// │ ├── Parallel processing
// │ ├── Exclude profiles
// ├──┐[...]
// │ │ ├── Custom log file
// │ │ └── Automatic system backups
// │ ├── Testing
// │ ├── Known limitations
// │ ├── Contribute
// │ └── License
// │
// └───────────────────────────────────────────────────────────────
-->
# Rusticlone
<div style='text-align: center;'>
<img alt="PyPI Downloads" src="https://static.pepy.tech/badge/rusticlone">
<img alt="Test Coverage" src="https://github.com/AlphaJack/rusticlone/raw/master/images/coverage.svg">
</div>
<p style='text-align: center;'>
<strong>3-2-1 backups using Rustic and RClone</strong>
</p>
<div style='text-align: left;'>
<img alt="backup process divided in archive and upload" src="https://github.com/AlphaJack/rusticlone/raw/master/images/process-backup.png" style="width: 43%; vertical-align: top;"/>
<img alt="output of rusticlone backup parallel" src="https://github.com/AlphaJack/rusticlone/raw/master/images/parallel-backup.png" style="width: 40%; vertical-align: top;"/>
<img alt="output of rusticlone backup sequential" src="https://github.com/AlphaJack/rusticlone/raw/master/images/sequential-backup.png" style="width: 14%; vertical-align: top;"/>
<br/>
<img alt="restore process divided in download and extract" src="https://github.com/AlphaJack/rusticlone/raw/master/images/process-restore.png" style="width: 43%; vertical-align: top;"/>
<img alt="output of rusticlone restore parallel" src="https://github.com/AlphaJack/rusticlone/raw/master/images/parallel-restore.png" style="width: 40%; vertical-align: top;"/>
<img alt="output of rusticlone restore sequential" src="https://github.com/AlphaJack/rusticlone/raw/master/images/sequential-restore.png" style="width: 14%; vertical-align: top;"/>
</div>
## Motivation
[Rustic](https://rustic.cli.rs/) comes with [native support](https://rustic.cli.rs/docs/commands/init/rclone.html) for [RClone](https://rclone.org/)'s built-in [Restic server](https://rclone.org/commands/rclone_serve_restic/).
After trying this feature, I experienced an abysmally low backup speed, much lower than my upload bandwidth: the bottleneck was the synchronous RClone server, as Rustic was waiting for a response before sending other data.
Another side effect of this feature is that Rustic does not create a local repo, meaning I would have to restore directly from the cloud in case of a disaster.
Since I could not run Rustic once for all my profiles (Documents, Pictures, etc.), I came up with a tool to:
- run Rustic for all my profiles
- archive them to local Rustic repos
- upload local repos to a RClone remote
When restoring, this tool would first download a copy of the RClone remote, and then restore from local Rustic repos.
By decoupling these operations, I got:
- three copies of my data, two of which are local and one is remote (3-2-1 backup strategy)
- the bottlenecks are now the SSD speed (for archive and extract operations) and Internet bandwidth (for upload and download operations)
If it sounds interesting, keep reading!
## Installation
Install [RClone](https://rclone.org/install/) >= 1.67, [Rustic](https://rustic.cli.rs/docs/installation.html) >= 0.10, [Python](https://www.python.org/downloads/) >= 3.11 and then `rusticlone`:
```bash
pip install rusticlone
```
[Configure RClone](https://rclone.org/commands/rclone_config/) by adding a remote.
[Create your Rustic TOML profiles](https://github.com/rustic-rs/rustic/tree/main/config) under "/etc/rustic/" or "$HOME/.config/rustic/" on Linux and MacOS. On Windows, you can put them under "%PROGRAMDATA%/rustic/config" or "%APPDATA%/rustic/config".
Configure your profiles to have one or more sources.
They should also have a local repository destination, without specifying the RClone remote.
You can take inspiration from the profiles in the [example](example/rustic) folder.
Include variables for the location (and password) of the RClone configuration:
```toml
[global.env]
RCLONE_CONFIG = "/home/user/.config/rclone/rclone.conf"
RCLONE_CONFIG_PASS = "XXXXXX"
#escape double quotes inside TOML strings
#RCLONE_PASSWORD_COMMAND = "python -c \"print('YYYYYY')\""
```
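For orientation, a minimal `Documents.toml` for the scenario below might look roughly like this. The table and key names are a sketch based on Rustic's configuration format and may differ between Rustic versions; treat the profiles in the example folder mentioned above as authoritative:

```toml
# Sketch of /etc/rustic/Documents.toml (verify key names against your Rustic version)
[repository]
repository = "/mnt/backup/Documents"  # local repo; no RClone remote here
password = "XXXXXX"

[[backup.snapshots]]
sources = ["/home/user/Documents"]
```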
## Usage
### Backup
Let's assume you want to backup your **PC Documents** to both an **external hard drive** (HDD) and **Google Drive**.
With RClone, you have configured your Google Drive as the `gdrive:/` remote.
You have created the "/etc/rustic/Documents.toml" Rustic profile with:
- source "/home/user/Documents"
- destination "/mnt/backup/Documents" (assuming your external HDD is mounted on "/mnt")
Launch Rusticlone specifying the RClone remote and the `backup` command:
```bash
rusticlone -r "gdrive:/PC" backup
```
Great! You just backed up your documents to both "/mnt/backup/Documents" and `gdrive:/PC/Documents`!
Check the result with the following commands:
```bash
#size of all your documents
du -sh "/home/user/Documents"
#contents of local rustic repo
rustic -P "Documents" repoinfo
tree "/mnt/backup/Documents"
#contents of remote rustic repo
rclone ncdu "gdrive:/PC/Documents"
```
### Restore
#### From the local Rustic repo
In case you lose your PC, but still have your external HDD, on your new PC you need:
- `rusticlone` and dependencies installed
- your Rustic profiles in place
- your external HDD mounted
Then, run:
```bash
rusticlone extract
```
Great! You just restored your documents from "/mnt/backup/Documents" to "/home/user/Documents".
#### From the RClone remote
In case you lose both your PC files and your external HDD, don't worry! You still have your data on the RClone remote.
On your new PC you need:
- `rusticlone` and dependencies installed
- your RClone configuration
- your Rustic profiles in place
- a new external HDD mounted
Then, run:
```bash
rusticlone -r "gdrive:/PC" restore
```
Fantastic! You downloaded a copy of your Google Drive backup to the external HDD,
and you restored your documents from the HDD to their original location.
Check that everything went well:
```bash
#your remote backup files are still there
rclone ncdu "gdrive:/PC/Documents"
#your new external HDD contains a rustic repo
ls -lah "/mnt/backup/Documents"
rustic -P "Documents" repoinfo
#your documents have been restored
du -sh "/home/user/Documents"
ls -lah "/home/user/Documents"
```
You can now run `rusticlone -r "gdrive:/PC" backup` as always to keep your data safe.
### Individual commands
As an alternative to `backup` and `restore`, you can also run individual `rusticlone` commands:
```bash
#use rustic from source to local repo
rusticlone archive
#use rclone from local repo to remote
rusticlone -r "gdrive:/PC" upload
#use rclone from remote to local repo
rusticlone -r "gdrive:/PC" download
#use rustic from local repo to source
rusticlone extract
```
### Push notifications
Rusticlone can send a push notification with the operation results using Apprise:
<img alt="Push Notification" src="https://github.com/AlphaJack/rusticlone/raw/master/images/notification.png">
Just pass the [Apprise notification URL](https://github.com/caronc/apprise?tab=readme-ov-file#supported-notifications) via the `--apprise-url` argument or `APPRISE_URL` environment variable:
```bash
rusticlone --apprise-url "tgram:/XXXXXX/YYYYYY/" archive
#alternative
APPRISE_URL="tgram:/XXXXXX/YYYYYY/" rusticlone archive
```
### Parallel processing
You can specify the `--parallel` argument with any command to process all your profiles at the same time:
```bash
rusticlone --parallel -r "gdrive:/PC" backup
```
Beware that this may fill your RAM if you have many profiles or several GB of data to archive.
Parallel processing is also not (yet) compatible with push notifications.
### Exclude profiles
Rustic has a handy feature: you can create additional profiles to store options shared between profiles.
Let's assume this profile is called "common.toml" and contains the following:
```toml
[forget]
prune = true
keep-last = 1
keep-daily = 7
keep-weekly = 4
keep-monthly = 3
keep-quarter-yearly = 4
keep-yearly = 1
```
As it doesn't contain a "\[repository]" section, it will not be treated as a standalone profile by Rusticlone.
This "common.toml" profile can be referenced from our documents by adding to "Documents.toml" the following:
```toml
[global]
use-profile = ["common"]
# [...]
```
### Custom log file
The default behavior is that, if present, both Rustic and RClone use the log file specified in the Rustic profile configuration.
A custom log file for both Rustic and RClone can be specified with `--log-file`.
```bash
rusticlone --log-file "/var/log/rusticlone.log" archive
```
If no argument is passed and no log file can be found in Rustic configuration, "rusticlone.log" in the current folder is used.
### Automatic system backups
Place your profiles under "/etc/rustic".
If you are storing the RClone password inside the profiles, make sure the folder is only readable by root.
Create a Systemd timer unit "/etc/systemd/system/rusticlone.timer" and copy inside the following:
```ini
[Unit]
Description=Rusticlone timer
[Timer]
#every day at midnight
OnCalendar=*-*-* 00:00:00
[Install]
WantedBy=timers.target
```
Create a Systemd service unit "/etc/systemd/system/rusticlone.service" and copy inside the following:
```ini
[Unit]
Description=Rusticlone service
[Service]
Type=oneshot
ExecStart=rusticlone --remote "gdrive:/PC" backup
```
Adjust your `--remote` as needed.
Apply your changes and enable the timer:
```bash
sudo systemctl daemon-reload
sudo systemctl enable --now rusticlone.timer
```
## Testing
You can test Rusticlone with dummy files before using it for your precious data:
```bash
make install
make test
```
You will need `bash`, `coreutils`, `rclone`, and `rustic` installed to run the test.
Before running the test, make sure that you have no important files under "$HOME/.config/rustic".
At the end, you can read a test coverage report with your browser, to see which lines of the source code were run during the test.
## Known limitations
- Rustic does not **save ownership and permissions** for the source location itself, but **only for files and folders inside the source**. If you back up "/home/jack" with user "jack" and permission "0700", when you restore it, it will have user "root" and permission "0755" ([intended rustic behavior](https://github.com/rustic-rs/rustic/issues/1108#issuecomment-2016584568))
- Rustic does not recognize proper Windows paths ([bug](https://github.com/rustic-rs/rustic/issues/1104))
- Rustic will not archive empty folders ([bug](https://github.com/rustic-rs/rustic/issues/1157))
- Rustic may introduce [breaking changes](https://rustic.cli.rs/docs/breaking_changes.html)
## Contribute
Feel free to open an Issue to report bugs or request new features.
Pull Requests are welcome, too!
## License
Licensed under [GPL-3.0](LICENSE) terms.
Not affiliated with Rustic or RClone.
| text/markdown | AlphaJack | null | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: End Users/Desktop",
"Topic :: System :: Archiving :: Backup",
"Environment :: Console",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"configargparse",
"importlib-metadata"
] | [] | [] | [] | [
"Homepage, https://github.com/AlphaJack/rusticlone",
"Issues, https://github.com/AlphaJack/rusticlone/issues",
"Repository, https://github.com/AlphaJack/rusticlone",
"Changelog, https://github.com/AlphaJack/rusticlone/blob/master/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T00:46:44.368024 | rusticlone-1.6.1.tar.gz | 37,974 | a0/1b/aa85f9a6c329d9a35d3a8e2e26920325bca37d1b6da4e095cba65debb6f7/rusticlone-1.6.1.tar.gz | source | sdist | null | false | f155db66093b26bd2e52af9823a00bf8 | e0e096c0901080f84c0cb7e79338bd90d4f6bf9a62f4a0a666f048c8dc65929b | a01baa85f9a6c329d9a35d3a8e2e26920325bca37d1b6da4e095cba65debb6f7 | GPL-3.0-or-later | [
"LICENSE"
] | 253 |
2.4 | langchain-kredo | 0.3.3 | LangChain integration for the Kredo agent attestation protocol | # langchain-kredo
LangChain integration for the [Kredo](https://aikredo.com) agent attestation protocol.
One line of code. Signed attestation. Done.
## Install
```bash
pip install langchain-kredo
```
## One-Liner
```python
from langchain_kredo import attest
# That's it. Resolves name, looks up skill, signs, submits.
attest("jim", "incident-triage", "Triaged 3 incidents correctly in SOC exercise")
# With a URL — auto-detected as evidence artifact
attest("jim", "code-review", "https://github.com/org/repo/pull/47")
# With explicit proficiency (1-5, default 3)
attest("jim", "threat-hunting", "Found lateral movement in 4 minutes", proficiency=5)
```
Set `KREDO_PRIVATE_KEY` env var (hex seed) and go. Subject resolved by name or pubkey. Skill resolved by reverse taxonomy lookup — just say `"incident-triage"`, it finds the domain.
**Key handling:** Your signing key is a 32-byte Ed25519 seed. Store it as an environment variable, never hardcode it. Generate one with `kredo identity create` or any Ed25519 library.
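Since the seed is simply 32 random bytes, one way to generate a fresh key with only the Python standard library is sketched below (`kredo identity create` remains the supported path):

```python
import secrets

# An Ed25519 seed is 32 cryptographically random bytes; its hex form is
# what KREDO_PRIVATE_KEY expects. Treat it like a password.
seed_hex = secrets.token_bytes(32).hex()
print(len(seed_hex))  # 64 hex characters
```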
## Trust Gate
Policy enforcement for agent pipelines:
```python
from langchain_kredo import KredoSigningClient, KredoTrustGate
client = KredoSigningClient(signing_key="your-hex-seed")
gate = KredoTrustGate(client, min_score=0.3, block_warned=True)
# Check trust
result = gate.check("ed25519:agent-pubkey")
# result.passed, result.score, result.skills, result.attestor_count
# Select best agent for a task (ranks by reputation + diversity + domain proficiency)
best = gate.select_best(candidates, domain="security-operations", skill="incident-triage")
# Build-vs-buy: delegate or self-compute?
delegate = gate.should_delegate(candidates, domain="code-generation", self_proficiency=2)
# Decorator
@gate.require(min_score=0.7)
def sensitive_operation(pubkey: str):
...
```
## LangChain Tools
Four tools for agent toolboxes. Read-only tools are safe for autonomous LLM use. The submit tool requires human approval by default.
```python
from langchain_kredo import KredoCheckTrustTool, KredoSearchAttestationsTool
# Safe for LLM agents — read-only
tools = [
KredoCheckTrustTool(client=client),
KredoSearchAttestationsTool(client=client),
]
```
| Tool | Name | LLM-Safe | Purpose |
|------|------|----------|---------|
| `KredoCheckTrustTool` | `kredo_check_trust` | Yes | Check agent reputation + skills + warnings |
| `KredoSearchAttestationsTool` | `kredo_search_attestations` | Yes | Find agents by skill/domain/proficiency |
| `KredoSubmitAttestationTool` | `kredo_submit_attestation` | **No** | Sign and submit skill attestation |
| `KredoGetTaxonomyTool` | `kredo_get_taxonomy` | Yes | Browse valid domains/skills |
**Warning:** `KredoSubmitAttestationTool` signs and submits irreversible cryptographic claims. By default it returns a preview for human approval. Only set `require_human_approval=False` if your pipeline has an explicit confirmation mechanism.
## Callback Handler
Tracks chain execution, builds attestation evidence automatically:
```python
from langchain_kredo import KredoCallbackHandler
handler = KredoCallbackHandler()
chain.invoke(input, config={"callbacks": [handler]})
for record in handler.get_records():
if record.success_rate >= 0.9:
client.attest_skill(
subject_pubkey="ed25519:...",
domain="security-operations",
skill="incident-triage",
proficiency=3,
context=record.build_evidence_context(),
artifacts=record.build_artifacts(),
)
```
Collects evidence but never auto-submits. You decide when and what to attest.
## Client
Full signing-aware client for when you need more control:
```python
client = KredoSigningClient(
signing_key=sk, # SigningKey, bytes, hex string, or env var
name="my-agent",
agent_type="agent",
)
# Read
profile = client.get_profile("ed25519:...")
my_profile = client.my_profile() # your own profile
results = client.search(domain="security-operations")
# Write
client.register()
client.attest_skill(
subject_pubkey="ed25519:...",
domain="security-operations",
skill="incident-triage",
proficiency=4,
context="Demonstrated expert-level triage in SOC exercise",
)
```
## Development
```bash
cd langchain-kredo
pip install -e ".[dev]"
pytest tests/ -v # 86 tests
```
## License
MIT
| text/markdown | Jim Motes, Vanguard | null | null | null | null | agents, attestation, ed25519, kredo, langchain, trust | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Security",
"Topic :: Software Development :: Libraries ::... | [] | null | null | >=3.11 | [] | [] | [] | [
"kredo>=0.8.0",
"langchain-core<1.0.0,>=0.3.0",
"pydantic>=2.0.0",
"pynacl>=1.5.0",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://aikredo.com",
"Repository, https://github.com/jimmotes2024/kredo",
"Documentation, https://aikredo.com"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T00:46:18.455719 | langchain_kredo-0.3.3.tar.gz | 19,593 | ff/72/5224ae8cf3711f6e29a0ec175f836cd72ae3c3ae94bb8b59cc1ab1e4b509/langchain_kredo-0.3.3.tar.gz | source | sdist | null | false | 04bfa4638ffe79b625475bba5b09dcb9 | fe08e720dec42f3794bbd79fba14f719e130f3fdae767ddf96eacf8e5e71ef1c | ff725224ae8cf3711f6e29a0ec175f836cd72ae3c3ae94bb8b59cc1ab1e4b509 | MIT | [
"LICENSE"
] | 260 |
2.4 | mujoco-blueprints | 0.0.5 | Blueprints is a Mujoco interface providing many graph manipulation operations and easy runtime data access. | # Blueprints
Blueprints is a pythonic library that constructs Mujoco simulations and gives access to runtime data. It is designed to simplify access to Mujoco and provide helpful subroutines for procedural generation, aiding environment randomization, curriculum learning, hindsight experience replay, and other RL paradigms that manipulate the environment.
Once a certain structure of Mujoco elements has been built, it can be copied many times throughout the world. The same objects through which the simulation has been constructed also serve as access objects to the simulation's runtime data, so there is no need to keep track of indices for ``mj_data.obj_type(index)`` calls.
Blueprints offers many conveniences for manipulating the attributes of many objects at once, implements standard agent interfaces for RL, and bundles all sorts of data access in a fine-grained indexing scheme through Views.
## Installation
Directly from PyPI:
```
$ pip install mujoco_blueprints
```
Alternatively from Github:
```
$ git clone https://github.com/mortimervonchappuis/mujoco_blueprints.git
```
See [documentation](https://mujoco-blueprints.readthedocs.io/en/latest/index.html) for more details.
| text/markdown | Mortimer von Chappuis | null | null | null | null | null | [] | [] | https://mujoco-blueprints.readthedocs.io/en/latest/index.html | https://github.com/mortimervonchappuis/mujoco_blueprints | null | [] | [] | [] | [
"numpy>=2.0.0",
"imageio>=2.0.0",
"mujoco>=3.3.2",
"tqdm>=4.0.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.11 | 2026-02-20T00:45:09.459169 | mujoco_blueprints-0.0.5.tar.gz | 144,948 | 73/d9/08c3eee6d8684c690e6cf5f3cb5d8fb64096c95722e63db325b82e48c483/mujoco_blueprints-0.0.5.tar.gz | source | sdist | null | false | 5c413aeddff9fbd1b2b1ab4ec4fc02b0 | 5ed455f8292fd896e8ea767692b91bb2a9127bf693c9993e29079d941e80b8b2 | 73d908c3eee6d8684c690e6cf5f3cb5d8fb64096c95722e63db325b82e48c483 | null | [] | 250 |
2.4 | azure-ai-evaluation | 1.15.1 | Microsoft Azure Evaluation Library for Python | # Azure AI Evaluation client library for Python
Use the Azure AI Evaluation SDK to assess the performance of your generative AI applications. Application generations are quantitatively measured with mathematical metrics as well as AI-assisted quality and safety metrics. Metrics are defined as `evaluators`. Built-in or custom evaluators can provide comprehensive insights into the application's capabilities and limitations.
Use Azure AI Evaluation SDK to:
- Evaluate existing data from generative AI applications
- Evaluate generative AI applications
- Evaluate by generating mathematical, AI-assisted quality and safety metrics
The Azure AI Evaluation SDK provides the following to evaluate generative AI applications:
- [Evaluators][evaluators] - Generate scores individually or when used together with `evaluate` API.
- [Evaluate API][evaluate_api] - Python API to evaluate dataset or application using built-in or custom evaluators.
[Source code][source_code]
| [Package (PyPI)][evaluation_pypi]
| [API reference documentation][evaluation_ref_docs]
| [Product documentation][product_documentation]
| [Samples][evaluation_samples]
## Getting started
### Prerequisites
- Python 3.9 or later is required to use this package.
- [Optional] You must have an [Azure AI Foundry Project][ai_project] or [Azure OpenAI][azure_openai] resource to use AI-assisted evaluators
### Install the package
Install the Azure AI Evaluation SDK for Python with [pip][pip_link]:
```bash
pip install azure-ai-evaluation
```
## Key concepts
### Evaluators
Evaluators are custom or prebuilt classes or functions that are designed to measure the quality of the outputs from language models or generative AI applications.
#### Built-in evaluators
Built-in evaluators are out-of-the-box evaluators provided by Microsoft:
| Category | Evaluator class |
|-----------|------------------------------------------------------------------------------------------------------------------------------------|
| [Performance and quality][performance_and_quality_evaluators] (AI-assisted) | `GroundednessEvaluator`, `RelevanceEvaluator`, `CoherenceEvaluator`, `FluencyEvaluator`, `SimilarityEvaluator`, `RetrievalEvaluator` |
| [Performance and quality][performance_and_quality_evaluators] (NLP) | `F1ScoreEvaluator`, `RougeScoreEvaluator`, `GleuScoreEvaluator`, `BleuScoreEvaluator`, `MeteorScoreEvaluator`|
| [Risk and safety][risk_and_safety_evaluators] (AI-assisted) | `ViolenceEvaluator`, `SexualEvaluator`, `SelfHarmEvaluator`, `HateUnfairnessEvaluator`, `IndirectAttackEvaluator`, `ProtectedMaterialEvaluator` |
| [Composite][composite_evaluators] | `QAEvaluator`, `ContentSafetyEvaluator` |
For more in-depth information on each evaluator definition and how it's calculated, see [Evaluation and monitoring metrics for generative AI][evaluation_metrics].
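Conceptually, an NLP metric like `F1ScoreEvaluator` measures token overlap between the response and the ground truth. The following is a simplified sketch of that idea, not the SDK's implementation (the real evaluator handles tokenization and edge cases differently):

```python
def token_f1(response: str, ground_truth: str) -> float:
    """Simplified token-overlap F1: harmonic mean of precision and recall.
    Illustration only -- not the azure-ai-evaluation implementation."""
    resp_tokens = response.lower().split()
    truth_tokens = ground_truth.lower().split()
    # Multiset overlap: count tokens shared between response and ground truth
    common = 0
    remaining = truth_tokens.copy()
    for tok in resp_tokens:
        if tok in remaining:
            remaining.remove(tok)
            common += 1
    if common == 0:
        return 0.0
    precision = common / len(resp_tokens)
    recall = common / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the capital of japan is tokyo", "tokyo is the capital of japan"))
```

Because the sketch compares unordered token multisets, a reordered but otherwise identical response scores a perfect 1.0.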
```python
import os
from azure.ai.evaluation import evaluate, RelevanceEvaluator, ViolenceEvaluator, BleuScoreEvaluator

# NLP bleu score evaluator
bleu_score_evaluator = BleuScoreEvaluator()
result = bleu_score_evaluator(
    response="Tokyo is the capital of Japan.",
    ground_truth="The capital of Japan is Tokyo."
)

# AI assisted quality evaluator
model_config = {
    "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT"),
    "api_key": os.environ.get("AZURE_OPENAI_API_KEY"),
    "azure_deployment": os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
}

relevance_evaluator = RelevanceEvaluator(model_config)
result = relevance_evaluator(
    query="What is the capital of Japan?",
    response="The capital of Japan is Tokyo."
)

# There are two ways to provide the Azure AI project.
# Option 1: Using Azure AI project details
azure_ai_project = {
    "subscription_id": "<subscription_id>",
    "resource_group_name": "<resource_group_name>",
    "project_name": "<project_name>",
}
violence_evaluator = ViolenceEvaluator(azure_ai_project)
result = violence_evaluator(
    query="What is the capital of France?",
    response="Paris."
)

# Option 2: Using the Azure AI project URL
azure_ai_project = "https://{resource_name}.services.ai.azure.com/api/projects/{project_name}"
violence_evaluator = ViolenceEvaluator(azure_ai_project)
result = violence_evaluator(
    query="What is the capital of France?",
    response="Paris."
)
```
#### Custom evaluators
Built-in evaluators are great out of the box for starting to evaluate your application's generations. However, you can build your own code-based or prompt-based evaluator to cater to your specific evaluation needs.
```python
# Custom evaluator as a function to calculate response length
def response_length(response, **kwargs):
    return len(response)

# Custom class-based evaluator to check for blocked words
class BlocklistEvaluator:
    def __init__(self, blocklist):
        self._blocklist = blocklist

    def __call__(self, *, response: str, **kwargs):
        score = any(word in response for word in self._blocklist)
        return {"score": score}

blocklist_evaluator = BlocklistEvaluator(blocklist=["bad", "worst", "terrible"])
result = response_length("The capital of Japan is Tokyo.")
result = blocklist_evaluator(response="The capital of Japan is Tokyo.")
```
### Evaluate API
The package provides an `evaluate` API which can be used to run multiple evaluators together to evaluate generative AI application response.
#### Evaluate existing dataset
```python
from azure.ai.evaluation import evaluate

result = evaluate(
    data="data.jsonl", # provide your data here
    evaluators={
        "blocklist": blocklist_evaluator,
        "relevance": relevance_evaluator
    },
    # column mapping
    evaluator_config={
        "relevance": {
            "column_mapping": {
                "query": "${data.queries}",
                "ground_truth": "${data.ground_truth}",
                "response": "${outputs.response}"
            }
        }
    },
    # Optionally provide your AI Foundry project information to track your evaluation results in your Azure AI Foundry project
    azure_ai_project=azure_ai_project,
    # Optionally provide an output path to dump a JSON of the metric summary, row-level data, and the AI Foundry URL
    output_path="./evaluation_results.json"
)
```
For more details, refer to [Evaluate on test dataset using evaluate()][evaluate_dataset].
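The `${data.<column>}` / `${outputs.<column>}` mapping syntax can be illustrated with a toy resolver. This is an illustration of the syntax only, not the SDK's internal implementation:

```python
import re

def resolve_mapping(column_mapping: dict, data_row: dict, outputs_row: dict) -> dict:
    """Toy resolver for the ${data.col} / ${outputs.col} mapping syntax.
    Illustration only -- the SDK performs this resolution internally."""
    sources = {"data": data_row, "outputs": outputs_row}
    resolved = {}
    for evaluator_input, template in column_mapping.items():
        match = re.fullmatch(r"\$\{(data|outputs)\.(\w+)\}", template)
        if match:
            source, column = match.groups()
            resolved[evaluator_input] = sources[source][column]
        else:
            # Anything that is not a ${...} reference is passed through as a literal
            resolved[evaluator_input] = template
    return resolved

row = {"queries": "What is the capital of Japan?", "ground_truth": "Tokyo"}
outputs = {"response": "The capital of Japan is Tokyo."}
mapping = {"query": "${data.queries}", "response": "${outputs.response}"}
resolved = resolve_mapping(mapping, row, outputs)
print(resolved)
```

Each evaluator input name on the left is filled from either the input dataset (`data`) or the target application's outputs (`outputs`).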
#### Evaluate generative AI application
```python
from askwiki import askwiki
result = evaluate(
data="data.jsonl",
target=askwiki,
evaluators={
"relevance": relevance_eval
},
evaluator_config={
"default": {
"column_mapping": {
"query": "${data.queries}"
"context": "${outputs.context}"
"response": "${outputs.response}"
}
}
}
)
```
The above code snippet refers to the askwiki application in this [sample][evaluate_app].
For more details, refer to [Evaluate on a target][evaluate_target].
### Simulator
Simulators allow users to generate synthetic data using their application. The simulator expects the user to have a callback method that invokes their AI application; the integration between your AI application and the simulator happens at this callback method. Here's what a sample callback looks like:
```python
from typing import Any, Dict, List, Optional

async def callback(
    messages: Dict[str, List[Dict]],
    stream: bool = False,
    session_state: Any = None,
    context: Optional[Dict[str, Any]] = None,
) -> dict:
    messages_list = messages["messages"]
    # Get the last message from the user
    latest_message = messages_list[-1]
    query = latest_message["content"]
    # Call your endpoint or AI application here
    # response should be a string
    response = call_to_your_application(query, messages_list, context)
    formatted_response = {
        "content": response,
        "role": "assistant",
        "context": "",
    }
    messages["messages"].append(formatted_response)
    return {"messages": messages["messages"], "stream": stream, "session_state": session_state, "context": context}
```
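Before wiring the callback into a simulator, you can sanity-check its contract locally. The sketch below uses a hypothetical `fake_app` stand-in for your application (`call_to_your_application` in the snippet above):

```python
import asyncio
from typing import Any, Dict, List, Optional

def fake_app(query: str) -> str:
    # Hypothetical stand-in for your AI application
    return f"Echo: {query}"

async def callback(
    messages: Dict[str, List[Dict]],
    stream: bool = False,
    session_state: Any = None,
    context: Optional[Dict[str, Any]] = None,
) -> dict:
    messages_list = messages["messages"]
    query = messages_list[-1]["content"]
    response = fake_app(query)
    # Append the assistant turn in the shape the simulator expects
    messages_list.append({"content": response, "role": "assistant", "context": ""})
    return {"messages": messages_list, "stream": stream, "session_state": session_state, "context": context}

result = asyncio.run(callback({"messages": [{"role": "user", "content": "Hi"}]}))
print(result["messages"][-1]["content"])
```

A quick local run like this confirms the callback appends an assistant turn and returns the expected dictionary shape before any simulator calls are made.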
The simulator initialization and invocation looks like this:
```python
import asyncio
import os

from azure.ai.evaluation.simulator import Simulator

model_config = {
    "azure_endpoint": os.environ.get("AZURE_ENDPOINT"),
    "azure_deployment": os.environ.get("AZURE_DEPLOYMENT_NAME"),
    "api_version": os.environ.get("AZURE_API_VERSION"),
}

custom_simulator = Simulator(model_config=model_config)
outputs = asyncio.run(custom_simulator(
    target=callback,
    conversation_turns=[
        [
            "What should I know about the public gardens in the US?",
        ],
        [
            "How do I simulate data against LLMs",
        ],
    ],
    max_conversation_turns=2,
))
with open("simulator_output.jsonl", "w") as f:
    for output in outputs:
        f.write(output.to_eval_qr_json_lines())
```
#### Adversarial Simulator
```python
import asyncio

from azure.ai.evaluation.simulator import AdversarialSimulator, AdversarialScenario
from azure.identity import DefaultAzureCredential

# There are two ways to provide the Azure AI project.
# Option 1: Using Azure AI project details
azure_ai_project = {
    "subscription_id": "<subscription_id>",
    "resource_group_name": "<resource_group_name>",
    "project_name": "<project_name>",
}

# Option 2: Using the Azure AI project URL
azure_ai_project = "https://{resource_name}.services.ai.azure.com/api/projects/{project_name}"

scenario = AdversarialScenario.ADVERSARIAL_QA
simulator = AdversarialSimulator(azure_ai_project=azure_ai_project, credential=DefaultAzureCredential())

outputs = asyncio.run(
    simulator(
        scenario=scenario,
        max_conversation_turns=1,
        max_simulation_results=3,
        target=callback
    )
)
print(outputs.to_eval_qr_json_lines())
```
For more details about the simulator, visit the following links:
- [Adversarial Simulation docs][adversarial_simulation_docs]
- [Adversarial scenarios][adversarial_simulation_scenarios]
- [Simulating jailbreak attacks][adversarial_jailbreak]
## Examples
The following section contains examples of:
- [Evaluate an application][evaluate_app]
- [Evaluate different models][evaluate_models]
- [Custom Evaluators][custom_evaluators]
- [Adversarial Simulation][adversarial_simulation]
- [Simulate with conversation starter][simulate_with_conversation_starter]
More examples can be found [here][evaluate_samples].
## Troubleshooting
### General
Please refer to [troubleshooting][evaluation_tsg] for common issues.
### Logging
This library uses the standard
[logging][python_logging] library for logging.
Basic information about HTTP sessions (URLs, headers, etc.) is logged at INFO
level.
Detailed DEBUG level logging, including request/response bodies and unredacted
headers, can be enabled on a client with the `logging_enable` argument.
See full SDK logging documentation with examples [here][sdk_logging_docs].
## Next steps
- View our [samples][evaluation_samples].
- View our [documentation][product_documentation]
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit [cla.microsoft.com][cla].
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct][code_of_conduct]. For more information see the [Code of Conduct FAQ][coc_faq] or contact [opencode@microsoft.com][coc_contact] with any additional questions or comments.
<!-- LINKS -->
[source_code]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/evaluation/azure-ai-evaluation
[evaluation_pypi]: https://pypi.org/project/azure-ai-evaluation/
[evaluation_ref_docs]: https://learn.microsoft.com/python/api/azure-ai-evaluation/azure.ai.evaluation?view=azure-python-preview
[evaluation_samples]: https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios
[product_documentation]: https://learn.microsoft.com/azure/ai-studio/how-to/develop/evaluate-sdk
[python_logging]: https://docs.python.org/3/library/logging.html
[sdk_logging_docs]: https://docs.microsoft.com/azure/developer/python/azure-sdk-logging
[azure_core_readme]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/core/azure-core/README.md
[pip_link]: https://pypi.org/project/pip/
[azure_core_ref_docs]: https://aka.ms/azsdk-python-core-policies
[azure_core]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/core/azure-core/README.md
[azure_identity]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/identity/azure-identity
[cla]: https://cla.microsoft.com
[code_of_conduct]: https://opensource.microsoft.com/codeofconduct/
[coc_faq]: https://opensource.microsoft.com/codeofconduct/faq/
[coc_contact]: mailto:opencode@microsoft.com
[evaluate_target]: https://learn.microsoft.com/azure/ai-studio/how-to/develop/evaluate-sdk#evaluate-on-a-target
[evaluate_dataset]: https://learn.microsoft.com/azure/ai-studio/how-to/develop/evaluate-sdk#evaluate-on-test-dataset-using-evaluate
[evaluators]: https://learn.microsoft.com/python/api/azure-ai-evaluation/azure.ai.evaluation?view=azure-python-preview
[evaluate_api]: https://learn.microsoft.com/python/api/azure-ai-evaluation/azure.ai.evaluation?view=azure-python-preview#azure-ai-evaluation-evaluate
[evaluate_app]: https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/evaluate/Supported_Evaluation_Targets/Evaluate_App_Endpoint
[evaluation_tsg]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/evaluation/azure-ai-evaluation/TROUBLESHOOTING.md
[ai_studio]: https://learn.microsoft.com/azure/ai-studio/what-is-ai-studio
[ai_project]: https://learn.microsoft.com/azure/ai-studio/how-to/create-projects?tabs=ai-studio
[azure_openai]: https://learn.microsoft.com/azure/ai-services/openai/
[evaluate_models]: https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/evaluate/Supported_Evaluation_Targets/Evaluate_Base_Model_Endpoint
[custom_evaluators]: https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/evaluate/Supported_Evaluation_Metrics/Custom_Evaluators
[evaluate_samples]: https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/evaluate
[evaluation_metrics]: https://learn.microsoft.com/azure/ai-studio/concepts/evaluation-metrics-built-in
[performance_and_quality_evaluators]: https://learn.microsoft.com/azure/ai-studio/how-to/develop/evaluate-sdk#performance-and-quality-evaluators
[risk_and_safety_evaluators]: https://learn.microsoft.com/azure/ai-studio/how-to/develop/evaluate-sdk#risk-and-safety-evaluators
[composite_evaluators]: https://learn.microsoft.com/azure/ai-studio/how-to/develop/evaluate-sdk#composite-evaluators
[adversarial_simulation_docs]: https://learn.microsoft.com/azure/ai-studio/how-to/develop/simulator-interaction-data#generate-adversarial-simulations-for-safety-evaluation
[adversarial_simulation_scenarios]: https://learn.microsoft.com/azure/ai-studio/how-to/develop/simulator-interaction-data#supported-adversarial-simulation-scenarios
[adversarial_simulation]: https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/evaluate/Simulators/Simulate_Adversarial_Data
[simulate_with_conversation_starter]: https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/evaluate/Simulators/Simulate_Context-Relevant_Data/Simulate_From_Conversation_Starter
[adversarial_jailbreak]: https://learn.microsoft.com/azure/ai-studio/how-to/develop/simulator-interaction-data#simulating-jailbreak-attacks
# Release History
## 1.15.1 (2026-02-19)
### Bugs Fixed
- Red Team Agent Scenario Integration: integrated PyRIT's FoundryScenario for attack orchestration with Azure-specific scoring and result processing.
- Fixed total tokens calculation errors in evaluation results.
- Fixed red team SDK run status updates not firing consistently, preventing runs from being stuck as "Running" in the UI.
## 1.15.0 (2026-02-03)
### Bugs Fixed
- Prevent recursive stdout/stderr forwarding when NodeLogManager is nested, avoiding RecursionError in concurrent evaluation runs.
### Other Changes
- The `[redteam]` extra now requires `pyrit==0.11.0`, which depends on `pillow>=12.1.0`. This conflicts with `promptflow-devkit` (`pillow<=11.3.0`). Use separate virtual environments if you need both packages.
## 1.14.0 (2026-01-05)
### Bugs Fixed
- Updated CodeVulnerability and UngroundedAttributes evaluators for RedTeam to use the binary true/false scoring pattern so their results align with service responses.
- Fixed handling of nested fields for AOAI graders when using files as datasource
- Fixed `GroundednessEvaluator` with `query` not honoring `is_reasoning_model` (and `credential`) when reloading the query prompty, which could cause `max_tokens` to be sent to reasoning models. [#44385](https://github.com/Azure/azure-sdk-for-python/issues/44385)
## 1.13.7 (2025-11-14)
### Bugs Fixed
- Fixed NoneType error when generating usage summary in evaluation results.
- Fixed results for f1_score.
## 1.13.6 (2025-11-12)
### Bugs Fixed
- Added detection and retry handling for network errors wrapped in generic exceptions with "Error sending prompt with conversation ID" message
- Fixed results for ungrounded_attributes
- Improvements to the score_mode grader
- Fixed Red Team to ensure hate/unfairness evaluation rows populate when OneDP sync evaluators report results under the hate_unfairness metric name.
## 1.13.5 (2025-11-10)
### Bugs Fixed
- **TaskAdherenceEvaluator:** treat tool definitions as optional so evaluations with only query/response inputs no longer raise “Either 'conversation' or individual inputs must be provided.”
## 1.13.4 (2025-11-10)
### Bugs Fixed
- Fixed handling of input data in evaluation results.
## 1.13.3 (2025-11-08)
### Other Changes
- Added `scenario` property to red team evaluation request to align scores with red team concepts of attack success.
## 1.13.2 (2025-11-07)
### Bugs Fixed
- Added App Insights redaction for agent safety run telemetry so adversarial prompts are not stored in collected logs.
## 1.13.1 (2025-11-05)
### Features Added
- Improved RedTeam coverage across risk sub-categories to ensure comprehensive security testing
- Made RedTeam's `AttackStrategy.Tense` seed prompts dynamic to allow use of this strategy with additional risk categories
- Refactored error handling and result semantics in the RedTeam evaluation system to improve clarity and align with Attack Success Rate (ASR) conventions (passed=False means attack success)
### Bugs Fixed
- Fixed RedTeam evaluation error related to context handling for context-dependent risk categories
- Fixed RedTeam prompt application for model targets during Indirect Jailbreak XPIA (Cross-Platform Indirect Attack)
## 1.13.0 (2025-10-30)
### Features Added
- Updated `IndirectAttack` risk category for RedTeam to `IndirectJailbreak` to better reflect its purpose. This change allows users to apply cross-domain prompt injection (XPIA) attack strategies across all risk categories, enabling more comprehensive security testing of AI systems against indirect prompt injection attacks during red teaming.
- Added `TaskAdherence`, `SensitiveDataLeakage`, and `ProhibitedActions` as cloud-only agent safety risk categories for red teaming.
- Updated all evaluators' output to be of the following schema:
- `gpt_{evaluator_name}`, `{evaluator_name}`: float score,
- `{evaluator_name}_result`: pass/fail based on threshold,
- `{evaluator_name}_reason`, `{evaluator_name}_threshold`
- `{evaluator_name}_prompt_tokens`, `{evaluator_name}_completion_tokens`, `{evaluator_name}_total_tokens`, `{evaluator_name}_finish_reason`
- `{evaluator_name}_model`: model used for evaluation
- `{evaluator_name}_sample_input`, `{evaluator_name}_sample_output`: input and output used for evaluation
This change standardizes the output format across all evaluators and follows OTel convention.
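As an illustration only (not SDK code), the key names in the standardized output schema for a given evaluator can be generated like this:

```python
def output_keys(evaluator_name: str) -> list:
    """Generate the key names of the standardized evaluator output schema.
    Illustrative sketch based on the changelog description above."""
    suffixes = [
        "", "_result", "_reason", "_threshold",
        "_prompt_tokens", "_completion_tokens", "_total_tokens",
        "_finish_reason", "_model", "_sample_input", "_sample_output",
    ]
    keys = [f"gpt_{evaluator_name}"]  # float score (legacy-prefixed key)
    keys += [f"{evaluator_name}{suffix}" for suffix in suffixes]
    return keys

keys = output_keys("relevance")
print(keys)
```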
### Bugs Fixed
- `image_tag` parameter in `AzureOpenAIPythonGrader` is now optional.
## 1.11.2 (2025-10-09)
### Bugs Fixed
- `**kwargs` in an evaluator's signature now receives input columns that are not otherwise named in the evaluator's signature
## 1.12.0 (2025-10-02)
### Features Added
- AOAI graders now accept a `credential` parameter that can be used for authentication with an `AzureOpenAIModelConfiguration`
- Added `is_reasoning_model` parameter support to `CoherenceEvaluator`, `FluencyEvaluator`, `SimilarityEvaluator`, `GroundednessEvaluator`, `RetrievalEvaluator`, and `RelevanceEvaluator` to enable reasoning model configuration for o1/o3 models.
### Bugs Fixed
- Support for multi-level nesting in OpenAI grader (experimental)
## 1.11.1 (2025-09-19)
### Bugs Fixed
- Pinning duckdb version to 1.3.2 for redteam extra to fix error `TypeError: unhashable type: '_duckdb.typing.DuckDBPyType'`
## 1.11.0 (2025-09-03)
### Features Added
- Added support for user-supplied tags in the `evaluate` function. Tags are key-value pairs that can be used for experiment tracking, A/B testing, filtering, and organizing evaluation runs. The function accepts a `tags` parameter.
- Added support for user-supplied TokenCredentials with LLM based evaluators.
- Enhanced `GroundednessEvaluator` to support AI agent evaluation with tool calls. The evaluator now accepts agent response data containing tool calls and can extract context from `file_search` tool results for groundedness assessment. This enables evaluation of AI agents that use tools to retrieve information and generate responses. Note: Agent groundedness evaluation is currently supported only when the `file_search` tool is used.
- Added `language` parameter to `RedTeam` class for multilingual red team scanning support. The parameter accepts values from `SupportedLanguages` enum including English, Spanish, French, German, Italian, Portuguese, Japanese, Korean, and Simplified Chinese, enabling red team attacks to be generated and conducted in multiple languages.
- Added support for IndirectAttack and UngroundedAttributes risk categories in `RedTeam` scanning. These new risk categories expand red team capabilities to detect cross-platform indirect attacks and evaluate ungrounded inferences about human attributes including emotional state and protected class information.
### Bugs Fixed
- Fixed issue where evaluation results were not properly aligned with input data, leading to incorrect metrics being reported.
### Other Changes
- Deprecating `AdversarialSimulator` in favor of the [AI Red Teaming Agent](https://aka.ms/airedteamingagent-sample). `AdversarialSimulator` will be removed in the next minor release.
- Moved retry configuration constants (`MAX_RETRY_ATTEMPTS`, `MAX_RETRY_WAIT_SECONDS`, `MIN_RETRY_WAIT_SECONDS`) from `RedTeam` class to new `RetryManager` class for better code organization and configurability.
## 1.10.0 (2025-07-31)
### Breaking Changes
- Added `evaluate_query` parameter to all RAI service evaluators that can be passed as a keyword argument. This parameter controls whether queries are included in evaluation data when evaluating query-response pairs. Previously, queries were always included in evaluations. When set to `True`, both query and response will be evaluated; when set to `False` (default), only the response will be evaluated. This parameter is available across all RAI service evaluators including `ContentSafetyEvaluator`, `ViolenceEvaluator`, `SexualEvaluator`, `SelfHarmEvaluator`, `HateUnfairnessEvaluator`, `ProtectedMaterialEvaluator`, `IndirectAttackEvaluator`, `CodeVulnerabilityEvaluator`, `UngroundedAttributesEvaluator`, `GroundednessProEvaluator`, and `EciEvaluator`. Existing code that relies on queries being evaluated will need to explicitly set `evaluate_query=True` to maintain the previous behavior.
### Features Added
- Added support for Azure OpenAI Python grader via `AzureOpenAIPythonGrader` class, which serves as a wrapper around Azure Open AI Python grader configurations. This new grader object can be supplied to the main `evaluate` method as if it were a normal callable evaluator.
- Added `attack_success_thresholds` parameter to `RedTeam` class for configuring custom thresholds that determine attack success. This allows users to set specific threshold values for each risk category, with scores greater than the threshold considered successful attacks (i.e. a higher threshold means a higher tolerance for harmful responses).
- Enhanced threshold reporting in RedTeam results to include default threshold values when custom thresholds aren't specified, providing better transparency about the evaluation criteria used.
### Bugs Fixed
- Fixed red team scan `output_path` issue where individual evaluation results were overwriting each other instead of being preserved as separate files. Individual evaluations now create unique files while the user's `output_path` is reserved for final aggregated results.
- Significant improvements to TaskAdherence evaluator. New version has less variance, is much faster and consumes fewer tokens.
- Significant improvements to Relevance evaluator. New version has more concrete rubrics and has less variance, is much faster and consumes fewer tokens.
### Other Changes
- The default engine for evaluation was changed from `promptflow` (PFClient) to an in-SDK batch client (RunSubmitterClient)
- Note: We've temporarily kept an escape hatch to fall back to the legacy `promptflow` implementation by setting `_use_pf_client=True` when invoking `evaluate()`.
This is due to be removed in a future release.
## 1.9.0 (2025-07-02)
### Features Added
- Added support for Azure Open AI evaluation via `AzureOpenAIScoreModelGrader` class, which serves as a wrapper around Azure Open AI score model configurations. This new grader object can be supplied to the main `evaluate` method as if it were a normal callable evaluator.
- Added new experimental risk categories ProtectedMaterial and CodeVulnerability for redteam agent scan.
### Bugs Fixed
- Significant improvements to IntentResolution evaluator. New version has less variance, is nearly 2x faster and consumes fewer tokens.
- Fixes and improvements to ToolCallAccuracy evaluator. The new version has less variance and now works on all tool calls that happen in a turn at once. Previously, it worked on each tool call independently, without context on the other tool calls in the same turn, and then aggregated the results to a score in the range [0-1]. The score range is now [1-5].
- Fixed MeteorScoreEvaluator and other threshold-based evaluators returning incorrect binary results due to integer conversion of decimal scores. Previously, decimal scores like 0.9375 were incorrectly converted to integers (0) before threshold comparison, causing them to fail even when above the threshold. [#41415](https://github.com/Azure/azure-sdk-for-python/issues/41415)
- Added a new enum `ADVERSARIAL_QA_DOCUMENTS` which moves all the "file_content" type prompts away from `ADVERSARIAL_QA` to the new enum
- `AzureOpenAIScoreModelGrader` evaluator now supports `pass_threshold` parameter to set the minimum score required for a response to be considered passing. This allows users to define custom thresholds for evaluation results, enhancing flexibility in grading AI model responses.
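The integer-conversion threshold bug fixed in this release can be reproduced in isolation (a minimal sketch, not the SDK's code):

```python
score = 0.9375
threshold = 0.5

# Buggy: converting the decimal score to int truncates 0.9375 to 0,
# so the comparison fails even though the score exceeds the threshold.
buggy_result = "pass" if int(score) >= threshold else "fail"

# Fixed: compare the float score directly.
fixed_result = "pass" if score >= threshold else "fail"

print(buggy_result, fixed_result)
```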
## 1.8.0 (2025-05-29)
### Features Added
- Introduces `AttackStrategy.MultiTurn` and `AttackStrategy.Crescendo` to `RedTeam`. These strategies attack the target of a `RedTeam` scan over the course of multi-turn conversations.
### Bugs Fixed
- AdversarialSimulator in `ADVERSARIAL_CONVERSATION` mode was broken; it is now fixed.
## 1.7.0 (2025-05-12)
### Bugs Fixed
- azure-ai-evaluation failed with module not found [#40992](https://github.com/Azure/azure-sdk-for-python/issues/40992)
## 1.6.0 (2025-05-07)
### Features Added
- New `<evaluator>.binary_aggregate` field added to evaluation result metrics. This field contains the aggregated binary evaluation results for each evaluator, providing a summary of the evaluation outcomes.
- Added support for Azure Open AI evaluation via 4 new 'grader' classes, which serve as wrappers around Azure Open AI grader configurations. These new grader objects can be supplied to the main `evaluate` method as if they were normal callable evaluators. The new classes are:
- AzureOpenAIGrader (general class for experienced users)
- AzureOpenAILabelGrader
- AzureOpenAIStringCheckGrader
- AzureOpenAITextSimilarityGrader
### Breaking Changes
- In the experimental RedTeam's scan method, the `data_only` param has been replaced with `skip_evals` and if you do not want data to be uploaded, use the `skip_upload` flag.
### Bugs Fixed
- Fixed error in `evaluate` where data fields could not contain numeric characters. Previously, a data file with schema:
```
"query1": "some query", "response": "some response"
```
threw an error when passed into `evaluator_config` as `{"evaluator_name": {"column_mapping": {"query": "${data.query1}", "response": "${data.response}"}},}`.
Now, users may import data containing fields with numeric characters.
## 1.5.0 (2025-04-04)
### Features Added
- New `RedTeam` agent functionality to assess the safety and resilience of AI systems against adversarial prompt attacks
## 1.4.0 (2025-03-27)
### Features Added
- Enhanced binary evaluation results with customizable thresholds
- Added threshold support for QA and ContentSafety evaluators
- Evaluation results now include both the score and threshold values
- Configurable threshold parameter allows custom binary classification boundaries
- Default thresholds provided for backward compatibility
- Quality evaluators use "higher is better" scoring (score ≥ threshold is positive)
- Content safety evaluators use "lower is better" scoring (score ≤ threshold is positive)
- A new built-in evaluator called CodeVulnerabilityEvaluator has been added.
  - It provides capabilities to identify the following code vulnerabilities:
- path-injection
- sql-injection
- code-injection
- stack-trace-exposure
- incomplete-url-substring-sanitization
- flask-debug
- clear-text-logging-sensitive-data
- incomplete-hostname-regexp
- server-side-unvalidated-url-redirection
- weak-cryptographic-algorithm
- full-ssrf
- bind-socket-all-network-interfaces
- client-side-unvalidated-url-redirection
- likely-bugs
- reflected-xss
- clear-text-storage-sensitive-data
- tarslip
- hardcoded-credentials
- insecure-randomness
  - It also supports multiple programming languages, such as Python, Java, C++, C#, Go, JavaScript, and SQL.
- A new built-in evaluator called UngroundedAttributesEvaluator has been added.
  - It evaluates ungrounded inference of human attributes for a given query, response, and context, for single-turn evaluation only, where the query represents the user query and the response represents the AI system response given the provided context.
  - Ungrounded Attributes checks whether a response is ungrounded and whether it contains information about the protected class or emotional state of a person.
- It identifies the following attributes:
- emotional_state
- protected_class
- groundedness
- New Built-in evaluators for Agent Evaluation (Preview)
- IntentResolutionEvaluator - Evaluates the intent resolution of an agent's response to a user query.
- ResponseCompletenessEvaluator - Evaluates the response completeness of an agent's response to a user query.
- TaskAdherenceEvaluator - Evaluates the task adherence of an agent's response to a user query.
- ToolCallAccuracyEvaluator - Evaluates the accuracy of tool calls made by an agent in response to a user query.
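The threshold semantics described above (quality evaluators: higher is better; content safety evaluators: lower is better) can be sketched as follows. This is an illustration of the described behavior, not SDK code:

```python
def binary_result(score: float, threshold: float, higher_is_better: bool = True) -> str:
    """Sketch of threshold-based binary classification.
    Quality evaluators treat score >= threshold as a pass;
    content safety evaluators treat score <= threshold as a pass."""
    passed = score >= threshold if higher_is_better else score <= threshold
    return "pass" if passed else "fail"

print(binary_result(4.2, 3.0))                          # quality metric
print(binary_result(5.0, 3.0, higher_is_better=False))  # safety metric
```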
### Bugs Fixed
- Fixed error in `GroundednessProEvaluator` when handling non-numeric values like "n/a" returned from the service.
- Uploading local evaluation results from `evaluate` with the same run name will no longer result in each online run sharing (and overwriting) result files.
## 1.3.0 (2025-02-28)
### Breaking Changes
- Multimodal-specific evaluators `ContentSafetyMultimodalEvaluator`, `ViolenceMultimodalEvaluator`, `SexualMultimodalEvaluator`, `SelfHarmMultimodalEvaluator`, `HateUnfairnessMultimodalEvaluator` and `ProtectedMaterialMultimodalEvaluator` have been removed. Please use `ContentSafetyEvaluator`, `ViolenceEvaluator`, `SexualEvaluator`, `SelfHarmEvaluator`, `HateUnfairnessEvaluator` and `ProtectedMaterialEvaluator` instead.
- Metric name in ProtectedMaterialEvaluator's output is changed from `protected_material.fictional_characters_label` to `protected_material.fictional_characters_defect_rate`. It's now consistent with other evaluators' metric names (ending with `_defect_rate`).
## 1.2.0 (2025-01-27)
### Features Added
- CSV files are now supported as data file inputs with the `evaluate()` API. The CSV file should have a header row with column names that match the `data` and `target` fields in the `evaluate()` method, and the filename should be passed as the `data` parameter. The column name `Conversation` in CSV files is not yet fully supported.
### Breaking Changes
- `ViolenceMultimodalEvaluator`, `SexualMultimodalEvaluator`, `SelfHarmMultimodalEvaluator`, `HateUnfairnessMultimodalEvaluator` and `ProtectedMaterialMultimodalEvaluator` will be removed in next release.
### Bugs Fixed
- Removed `[remote]` extra. This is no longer needed when tracking results in Azure AI Studio.
- Fixed `AttributeError: 'NoneType' object has no attribute 'get'` when running the simulator with 1000+ results
- Fixed the non-adversarial simulator so that it runs in task-free mode
- Content safety evaluators (violence, self-harm, sexual, hate/unfairness) now return the maximum per-turn result as the main score when aggregating per-turn evaluations from a conversation into an overall evaluation score. Other conversation-capable evaluators still default to a mean for aggregation.
- Fixed a bug in the non-adversarial simulator sample where `tasks` was undefined
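The max-vs-mean aggregation change described above can be sketched as follows (an illustrative sketch, not the package's internal code; the per-turn severity scores are made up):

```python
# Hypothetical per-turn severity scores from a multi-turn conversation:
per_turn_scores = [1, 0, 5, 2]

# Content safety evaluators take the maximum turn score as the main result,
# so one severe turn is not diluted by several benign ones.
content_safety_score = max(per_turn_scores)

# Other conversation-capable evaluators still default to a mean.
mean_score = sum(per_turn_scores) / len(per_turn_scores)

print(content_safety_score, mean_score)  # 5 2.0
```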
### Other Changes
- Changed the minimum required Python version for this package from 3.8 to 3.9
- Removed the dependency on the local promptflow service; no promptflow service is started automatically when running evaluations.
- Evaluators internally allow for custom aggregation. However, this causes serialization failures if evaluated while the
environment variable `AI_EVALS_BATCH_USE_ASYNC` is set to false.
## 1.1.0 (2024-12-12)
### Features Added
- Added image support in `ContentSafetyEvaluator`, `ViolenceEvaluator`, `SexualEvaluator`, `SelfHarmEvaluator`, `HateUnfairnessEvaluator` and `ProtectedMaterialEvaluator`. Provide image URLs or base64 encoded images in `conversation` input for image evaluation. See below for an example:
```python
evaluator = ContentSafetyEvaluator(credential=azure_cred, azure_ai_project=project_scope)
conversation = {
"messages": [
{
"role": "system",
"content": [
{"type": "text", "text": "You are an AI assistant that understands images."}
],
},
{
"role": "user",
"content": [
{"type": "text", "text": "Can you describe this image?"},
{
"type": "image_url",
"image_url": {
"url": "https://cdn.britannica.com/68/178268-050-5B4E7FB6/Tom-Cruise-2013.jpg"
},
},
],
},
{
"role": "assistant",
"content": [
{
"type": "text",
"text": "The image shows a man with short brown hair smiling, wearing a dark-colored shirt.",
}
],
},
]
}
print("Calling Content Safety Evaluator for multi-modal")
score = evaluator(conversation=conversation)
```
- Please switch to generic evaluators for image evaluations as mentioned above. `ContentSafetyMultimodalEvaluator`, `ContentSafetyMultimodalEvaluatorBase`, `ViolenceMultimodalEvaluator`, `SexualMultimodalEvaluator`, `SelfHarmMultimodalEvaluator`, `HateUnfairnessMultimodalEvaluator` and `ProtectedMaterialMultimodalEvaluator` will be deprecated in the next release.
### Bugs Fixed
- Removed `[remote]` extra. This is no longer needed when tracking results in Azure AI Foundry portal.
- Fixed `AttributeError: 'NoneType' object has no attribute 'get'` when running the simulator with 1000+ results
## 1.0.1 (2024-11-15)
### Bugs Fixed
- Removed `azure-ai-inference` as a dependency.
- Fixed `AttributeError: 'NoneType' object has no attribute 'get'` when running the simulator with 1000+ results
## 1.0.0 (2024-11-13)
### Breaking Changes
- The `parallel` parameter has been removed from composite evaluators: `QAEvaluator`, `ContentSafetyChatEvaluator`, and `ContentSafetyMultimodalEvaluator`. To control evaluator parallelism, you can now use the `_parallel` keyword argument, though please note that this private parameter may change in the future.
- Parameters `query_response_generating_prompty_kwargs` and `user_simulator_prompty_kwargs` have been renamed to `query_response_generating_prompty_options` and `user_simulator_prompty_options` in the Simulator's `__call__` method.
### Bugs Fixed
- Fixed an issue where the `output_path` parameter in the `evaluate` API did not support relative paths.
- Outputs of adversarial simulators are of type `JsonLineList`, and the helper function `to_eval_qr_json_lines` now outputs context from both user and assistant turns, along with `category` if it exists in the conversation
- Fixed an issue where, during long-running simulations, the API token would expire and cause a "Forbidden" error. Users can now set the environment variable `AZURE_TOKEN_REFRESH_INTERVAL` to refresh the token more frequently, preventing expiration and ensuring continuous operation of the simulation.
- Fixed an issue with the `ContentSafetyEvaluator` that caused parallel execution of sub-evaluators to fail. Parallel execution is now enabled by default again, but can still be disabled via the `_parallel` boolean keyword argument during class initialization.
- Fixed the `evaluate` function not producing aggregated metrics if ANY values to be aggregated were None, NaN, or otherwise difficult to process. Such values are now ignored entirely, so the aggregated metric of `[1, 2, 3, NaN]` is 2, not 1.5.
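The fixed aggregation behavior can be illustrated with a small sketch (an illustration of the rule described above, not the package's implementation):

```python
import math

def aggregate(values):
    """Mean over the values, fully ignoring None/NaN entries
    (illustrative sketch of the aggregation behavior described above)."""
    clean = [v for v in values
             if v is not None and not (isinstance(v, float) and math.isnan(v))]
    return sum(clean) / len(clean) if clean else None

print(aggregate([1, 2, 3, float("nan")]))  # 2.0, not 1.5
```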
### Other Changes
- Refined error messages for service-based evaluators and simulators.
- Tracing has been disabled due to a Cosmos DB initialization issue.
- Introduced environment variable `AI_EVALS_DISABLE_EXPERIMENTAL_WARNING` to disable the warning message for experimental features.
- Changed the randomization pattern for `AdversarialSimulator` such that there is an almost equal number of Adversarial harm categories (e.g. Hate + Unfairness, Self-Harm, Violence, Sex) represented in the `AdversarialSimulator` outputs. Previously, for 200 `max_simulation_results` a user might see 140 results belonging to the 'Hate + Unfairness' category and 40 results belonging to the 'Self-Harm' category. Now, users will see 50 results for each of Hate + Unfairness, Self-Harm, Violence, and Sex.
- For the `DirectAttackSimulator`, the prompt templates used to generate simulated outputs for each Adversarial harm category will no longer be in a randomized order by default. To override this behavior, pass `randomize_order=True` when you call the `DirectAttackSimulator`, for example:
```python
adversarial_simulator = DirectAttackSimulator(azure_ai_project=azure_ai_pr | text/markdown | Microsoft Corporation | azuresdkengsysadmins@microsoft.com | null | null | MIT License | azure, azure sdk | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"License :... | [] | https://github.com/Azure/azure-sdk-for-python | null | >=3.9 | [] | [] | [] | [
"pyjwt>=2.8.0",
"azure-identity>=1.19.0",
"azure-core>=1.31.0",
"nltk>=3.9.1",
"azure-storage-blob>=12.19.0",
"httpx>=0.27.2",
"pandas<3.0.0,>=2.1.2; python_version < \"3.13\"",
"pandas<3.0.0,>=2.2.3; python_version == \"3.13\"",
"pandas<3.0.0,>=2.3.3; python_version >= \"3.14\"",
"openai>=1.108.0... | [] | [] | [] | [
"Bug Reports, https://github.com/Azure/azure-sdk-for-python/issues",
"Source, https://github.com/Azure/azure-sdk-for-python"
] | RestSharp/106.13.0.0 | 2026-02-20T00:43:03.683252 | azure_ai_evaluation-1.15.1-py3-none-any.whl | 1,182,282 | 19/80/c3e333010717f10af114cd72de01fe284d69a3f6e9d9041e57b5a08d4153/azure_ai_evaluation-1.15.1-py3-none-any.whl | py3 | bdist_wheel | null | false | a49adfda3793e3b3730c21339ed968a2 | 89b8e3bbedd520a11dc4eff5a2cc897e31e10f239c593f30d5cc3a4c096f5852 | 1980c3e333010717f10af114cd72de01fe284d69a3f6e9d9041e57b5a08d4153 | null | [] | 2,104 |
2.4 | keystroke-sender | 0.1.1 | Chrome Native Messaging host that simulates OS-level keystrokes and mouse clicks | # Keystroke Sender
A Chrome Native Messaging host that simulates OS-level keystrokes. Receives text strings from a Chrome extension and types them out as real key presses using `pynput`.
Works on **macOS**, **Linux**, and **Windows**.
Companion extension: [Chrome Form Filler](https://chromewebstore.google.com/detail/chrome-form-filler/dpdolkkncejkelemckjmjoaefmgdhepj) (`dpdolkkncejkelemckjmjoaefmgdhepj`)
## Prerequisites
- Python 3.7+
- pip
- Google Chrome
- [Chrome Form Filler](https://chromewebstore.google.com/detail/chrome-form-filler/dpdolkkncejkelemckjmjoaefmgdhepj) extension
## Installation
### pip install (recommended)
```bash
pip install keystroke-sender
```
Then register the Chrome native messaging host:
```bash
keystroke-sender-register YOUR_EXTENSION_ID
```
To unregister later:
```bash
keystroke-sender-register --unregister
```
### Manual install (macOS / Linux)
```bash
chmod +x install.sh
./install.sh
```
### Manual install (Windows)
```cmd
install.bat
```
The manual installer will:
1. Prompt you for your Chrome extension ID (find it at `chrome://extensions`)
2. Create the native messaging host manifest in the correct OS location
3. Install the `pynput` Python dependency
### macOS Accessibility Permission
On macOS, you must grant Accessibility permission to your terminal app (or Python) for keystroke simulation to work:
**System Preferences > Privacy & Security > Accessibility** — add your terminal app (Terminal, iTerm2, etc.).
## Usage
From a Chrome extension, send a message to the native host:
```javascript
chrome.runtime.sendNativeMessage(
"com.propdream.keystroke_sender",
{ text: "Hello, world!" },
(response) => {
console.log(response);
// { status: "ok", typed: 13 }
}
);
```
### Message Format
**Request:**
```json
{
"text": "string to type",
"delay": 0.05
}
```
- `text` (required): The string to type as OS-level keystrokes.
- `delay` (optional): Seconds between each keystroke. Default: `0.05`.
**Response:**
```json
{ "status": "ok", "typed": 13 }
```
or on error:
```json
{ "status": "error", "message": "description" }
```
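Under the hood, Chrome's native messaging protocol frames every JSON message on stdin/stdout with a 4-byte length prefix in native byte order. A minimal encode/decode sketch of that framing (illustrative only, not this package's actual host code):

```python
import json
import struct

def encode_message(obj) -> bytes:
    """Serialize a dict as a native-messaging frame: 4-byte length + UTF-8 JSON."""
    body = json.dumps(obj).encode("utf-8")
    return struct.pack("=I", len(body)) + body

def decode_message(frame: bytes):
    """Parse one frame back into a dict."""
    (length,) = struct.unpack("=I", frame[:4])
    return json.loads(frame[4:4 + length].decode("utf-8"))

frame = encode_message({"text": "Hello, world!", "delay": 0.05})
print(decode_message(frame))  # {'text': 'Hello, world!', 'delay': 0.05}
```

A real host reads frames from `sys.stdin.buffer` and writes responses to `sys.stdout.buffer` in a loop, flushing after each write.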
## Extension Setup
Add `"nativeMessaging"` to your extension's `manifest.json` permissions:
```json
{
"permissions": ["nativeMessaging"]
}
```
## Uninstallation
### macOS / Linux
```bash
chmod +x uninstall.sh
./uninstall.sh
```
### Windows
```cmd
uninstall.bat
```
## Security
This app simulates real keystrokes and mouse clicks at the OS level, so it includes several safeguards to prevent misuse:
**Chrome Native Messaging isolation** — The host process can *only* be launched by Chrome, and *only* the extension ID you registered in `allowed_origins` can send it messages. No network socket is opened; communication is purely via stdin/stdout. An unauthorized extension or external program cannot talk to it.
**Idle timeout** — The host automatically exits after **1 minute** of inactivity. It does not stay running indefinitely.
**Rate limiting** — A maximum of **100 actions per second** is enforced. Bursts beyond this are rejected with an error response.
**Input size limits** — Messages larger than **1 MB** are rejected before parsing. Text payloads longer than **10,000 characters** are rejected before typing.
| Limit | Default |
|---|---|
| Idle timeout | 60 s (1 min) |
| Rate limit | 100 actions / 1 s |
| Max message size | 1 MB |
| Max text length | 10,000 chars |
These constants are defined at the top of `src/keystroke_sender/host.py` and can be adjusted if needed.
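A sliding-window limiter like the rate limit described above can be sketched as follows (an illustration of the idea, not the code in `host.py`):

```python
import time
from collections import deque

class RateLimiter:
    """Reject actions beyond max_actions per window_s seconds (sliding window)."""
    def __init__(self, max_actions=100, window_s=1.0):
        self.max_actions = max_actions
        self.window_s = window_s
        self.timestamps = deque()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window_s:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_actions:
            return False  # burst beyond the limit -> reject
        self.timestamps.append(now)
        return True

limiter = RateLimiter(max_actions=3, window_s=1.0)
print([limiter.allow(now=t) for t in (0.0, 0.1, 0.2, 0.3, 1.2)])
# [True, True, True, False, True]
```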
## Debugging
- Launch Chrome from the terminal to see native host stderr output
- Use `chrome://extensions` > Service Worker > Console to see extension-side errors
- Check `chrome.runtime.lastError` in the `sendNativeMessage` callback
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.7 | [] | [] | [] | [
"pynput"
] | [] | [] | [] | [
"Repository, https://github.com/PropDream/keystroke-sender"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T00:42:43.559226 | keystroke_sender-0.1.1.tar.gz | 7,600 | 9a/be/0e87017706dba13416d9743875858134c6460b0938ad87b00c7cd5feb2f3/keystroke_sender-0.1.1.tar.gz | source | sdist | null | false | e7abb2a91ce01731d0c40f57ec8b7ec6 | c3622e1eb1166e67f11351b79383c475f19c49304d9dc0b61dbae48b51415818 | 9abe0e87017706dba13416d9743875858134c6460b0938ad87b00c7cd5feb2f3 | MIT | [] | 258 |
2.4 | kredo | 0.8.0 | Portable agent attestation protocol — Ed25519-signed skill certifications | # Kredo
Portable agent attestation protocol. Ed25519-signed skill certifications that work anywhere.
**Site:** [aikredo.com](https://aikredo.com) | **API:** [api.aikredo.com](https://api.aikredo.com/health) | **PyPI:** [kredo](https://pypi.org/project/kredo/)
## What is this?
Kredo lets AI agents and humans certify each other's skills with cryptographically signed attestations. Not karma. Not star ratings. Signed proof of demonstrated competence, linked to real evidence.
An attestation says: *"I worked with this agent on [specific task], they demonstrated [specific skill] at [proficiency level], here is the evidence, and I sign my name to it."*
Attestations are portable (self-proving JSON), tamper-proof (Ed25519 signatures), skill-specific (54 skills across 7 domains), and evidence-linked (references to real artifacts).
## Quick Start
```bash
pip install kredo
# Create an identity (Ed25519 keypair)
kredo identity create --name MyAgent --type agent
# Register on the Discovery API
kredo register
# Look up your profile
kredo lookup
# Search the network
kredo search --domain security-operations
```
## Attest a Skill
```bash
# Attest that another agent demonstrated a skill
kredo attest \
--subject ed25519:THEIR_PUBKEY \
--subject-name TheirName \
--domain code-generation \
--skill code-review \
--proficiency 4 \
--context "Reviewed 12 PRs during the auth refactor. Caught 3 critical issues." \
--artifacts "pr:auth-refactor-47" "pr:auth-refactor-52" \
--outcome successful_resolution
# Submit to the Discovery API
kredo submit ATTESTATION_ID
```
## CLI Commands
| Command | Description |
|---------|-------------|
| `kredo identity create` | Generate Ed25519 keypair |
| `kredo identity show` | Show your public key and name |
| `kredo attest` | Create and sign a skill attestation |
| `kredo warn` | Issue a behavioral warning (requires evidence) |
| `kredo verify` | Verify any signed Kredo document |
| `kredo revoke` | Revoke an attestation you issued |
| `kredo dispute` | Dispute a behavioral warning against you |
| `kredo register` | Register your key on the Discovery API |
| `kredo submit` | Submit a local attestation to the API |
| `kredo lookup [pubkey]` | View any agent's reputation profile |
| `kredo search` | Search attestations with filters |
| `kredo export` | Export attestations as portable JSON |
| `kredo import` | Import attestations from JSON |
| `kredo trust` | Query the trust graph |
| `kredo taxonomy` | Browse the skill taxonomy |
| `kredo ipfs pin` | Pin an attestation/revocation/dispute to IPFS |
| `kredo ipfs fetch` | Fetch and verify a document from IPFS by CID |
| `kredo ipfs status` | Check pin status or list all pins |
| `kredo submit --pin` | Submit to API and pin to IPFS in one step |
## Discovery API
Base URL: `https://api.aikredo.com`
All read endpoints are open. Write endpoints use Ed25519 signature verification — your signature IS your authentication.
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/health` | GET | Service status |
| `/register` | POST | Register a public key (unsigned; does not overwrite existing name/type) |
| `/register/update` | POST | Signed metadata update for an existing registration |
| `/agents` | GET | List registered agents |
| `/agents/{pubkey}` | GET | Agent details |
| `/agents/{pubkey}/profile` | GET | Full reputation profile |
| `/attestations` | POST | Submit signed attestation |
| `/attestations/{id}` | GET | Retrieve attestation |
| `/verify` | POST | Verify any signed document |
| `/search` | GET | Search with filters |
| `/trust/who-attested/{pubkey}` | GET | Attestors for a subject |
| `/trust/attested-by/{pubkey}` | GET | Subjects attested by someone |
| `/trust/analysis/{pubkey}` | GET | Full trust analysis (reputation, weights, rings) |
| `/trust/rings` | GET | Network-wide ring detection report |
| `/trust/network-health` | GET | Aggregate network statistics |
| `/ownership/claim` | POST | Agent-signed ownership claim (agent -> human) |
| `/ownership/confirm` | POST | Human-signed ownership confirmation |
| `/ownership/revoke` | POST | Signed ownership revocation |
| `/ownership/agent/{pubkey}` | GET | Ownership/accountability history for an agent |
| `/integrity/baseline/set` | POST | Active human owner sets and signs file-hash baseline for an agent |
| `/integrity/check` | POST | Agent-signed runtime integrity check against active baseline |
| `/integrity/status/{pubkey}` | GET | Traffic-light integrity state and latest diff |
| `/risk/source-anomalies` | GET | Source-cluster risk signals for anti-gaming review |
| `/taxonomy` | GET | Full skill taxonomy |
| `/taxonomy/{domain}` | GET | Skills in one domain |
| `/revoke` | POST | Revoke an attestation |
| `/dispute` | POST | Dispute a warning |
Full API documentation: [aikredo.com/_functions/skill](https://aikredo.com/_functions/skill)
Runtime note: trust-analysis responses are short-TTL cached in-process (`KREDO_TRUST_CACHE_TTL_SECONDS`, default `30`).
Accountability + integrity note: `/trust/analysis/{pubkey}` now includes:
- `accountability` tier (`unlinked` or `human-linked`) and multiplier
- `integrity` traffic-light context (`green`, `yellow`, `red`)
- `deployability_multiplier` and `deployability_score = reputation_score × accountability.multiplier × integrity.multiplier`
## Integrity Run-Gate (v0.8.0)
Simple operator workflow:
1. Human owner approves baseline once: `POST /integrity/baseline/set`
2. Agent runs measurement check: `POST /integrity/check`
3. Runtime reads gate state: `GET /integrity/status/{pubkey}`
Traffic-light behavior:
- `green` -> verified, safe to run
- `yellow` -> changed since baseline (or not yet checked), owner review required
- `red` -> unknown/unsigned integrity state, block run
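The traffic-light rules above can be sketched as a simple state mapping (an illustration only, not Kredo's actual implementation; the input fields are hypothetical):

```python
def gate_state(has_baseline: bool, last_check_matches) -> str:
    """Map integrity facts to the traffic-light state described above.

    last_check_matches: True (signed check matched the baseline),
    False (diff detected), or None (no signed check yet).
    """
    if not has_baseline:
        return "red"      # unknown/unsigned integrity state -> block run
    if last_check_matches is None:
        return "yellow"   # not yet checked -> owner review required
    return "green" if last_check_matches else "yellow"

print(gate_state(True, True), gate_state(True, False), gate_state(False, None))
# green yellow red
```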
## Skill Taxonomy
7 domains, 54 specific skills:
- **security-operations** — incident triage, threat hunting, malware analysis, forensics, ...
- **code-generation** — code review, debugging, refactoring, test generation, ...
- **data-analysis** — statistical analysis, data cleaning, visualization, ...
- **natural-language** — summarization, translation, content generation, ...
- **reasoning** — root cause analysis, planning, hypothesis generation, ...
- **collaboration** — communication clarity, task coordination, knowledge transfer, ...
- **domain-knowledge** — regulatory compliance, industry expertise, research synthesis, ...
## Programmatic Usage
```python
from kredo.identity import create_identity
from kredo.client import KredoClient
# Create and register
identity = create_identity("MyAgent", "agent")
client = KredoClient()
client.register(identity.pubkey_str, "MyAgent", "agent")
# Look up a profile
profile = client.get_profile("ed25519:abc123...")
print(profile["skills"])
print(profile["attestation_count"])
print(profile["trust_network"])
```
## LangChain Integration
For LangChain developers building multi-agent pipelines:
```bash
pip install langchain-kredo
```
```python
from langchain_kredo import KredoSigningClient, KredoTrustGate, KredoCheckTrustTool
# Connect with signing capability
client = KredoSigningClient(signing_key="YOUR_HEX_SEED")
# Trust gate — policy enforcement for agent pipelines
gate = KredoTrustGate(client, min_score=0.3, block_warned=True)
result = gate.check("ed25519:AGENT_PUBKEY")
# Select best agent for a task (ranks by reputation + diversity + domain proficiency)
best = gate.select_best(candidate_pubkeys, domain="security-operations", skill="incident-triage")
# Build-vs-buy: should I delegate or handle it myself?
delegate = gate.should_delegate(candidates, domain="code-generation", self_proficiency=2)
# LangChain tools — drop into any agent toolbox
tools = [KredoCheckTrustTool(client=client)]
```
Includes 4 LangChain tools, a callback handler for automatic evidence collection, and trust gate with composite ranking. See [langchain-kredo on PyPI](https://pypi.org/project/langchain-kredo/).
## IPFS Support (Optional)
Attestations can be pinned to IPFS for permanence and distribution. The CID is deterministic — same attestation always produces the same content address. The Discovery API becomes an index, not the source of truth.
```bash
# Configure (set env vars)
export KREDO_IPFS_PROVIDER=local # or "remote" for Pinata-compatible services
# Pin an attestation
kredo ipfs pin ATTESTATION_ID
# Fetch and verify from IPFS
kredo ipfs fetch QmCID...
# Submit to API + pin in one step
kredo submit ATTESTATION_ID --pin
```
Set `KREDO_IPFS_PROVIDER` to `local` (daemon at localhost:5001) or `remote` (with `KREDO_IPFS_REMOTE_URL` and `KREDO_IPFS_REMOTE_TOKEN`). If unset, IPFS features are silently unavailable — nothing changes.
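The determinism claim rests on canonical serialization: the same attestation content always yields the same bytes, and therefore the same content address. A rough illustration using a plain SHA-256 digest (real IPFS CIDs are multihash/CIDv1 values, which this sketch does not compute):

```python
import hashlib
import json

def content_address(attestation: dict) -> str:
    """Canonical JSON (sorted keys, fixed separators) -> stable digest."""
    canonical = json.dumps(attestation, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

a = {"skill": "code-review", "proficiency": 4}
b = {"proficiency": 4, "skill": "code-review"}  # same content, different key order
print(content_address(a) == content_address(b))  # True
```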
## Anti-Gaming (v0.4.0)
Attestations are scored by multiple factors to resist gaming:
- **Ring detection** — Mutual attestation pairs (A↔B) and larger cliques are automatically detected and downweighted (0.5× for pairs, 0.3× for cliques of 3+). Flagged, not blocked.
- **Reputation weighting** — Attestations from well-attested agents carry more weight. Recursive to depth 3, cycle-safe.
- **Time decay** — `2^(-days/180)` half-life. Recent attestations matter more.
- **Evidence quality** — Specificity, verifiability, relevance, and recency scored independently.
Effective weight = `proficiency × evidence × decay × attestor_reputation × ring_discount`
Every factor is visible via `GET /trust/analysis/{pubkey}`. No black boxes.
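The scoring factors combine multiplicatively; a sketch of the effective-weight formula with the stated 180-day half-life (illustrative only, not Kredo's internal code, and the factor values below are hypothetical):

```python
def effective_weight(proficiency, evidence, age_days,
                     attestor_reputation, ring_discount=1.0):
    """proficiency * evidence * decay * attestor_reputation * ring_discount,
    with the time decay 2^(-days/180) described above (180-day half-life)."""
    decay = 2 ** (-age_days / 180)
    return proficiency * evidence * decay * attestor_reputation * ring_discount

# A fresh attestation vs. the same one after one half-life (180 days):
print(round(effective_weight(4, 0.8, 0, 0.9), 4))    # 2.88
print(round(effective_weight(4, 0.8, 180, 0.9), 4))  # 1.44 (exactly half)
```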
Additional source-signal layer:
- **Source concentration signals** — write-path audit events include source metadata (IP/user-agent) and can be clustered with `GET /risk/source-anomalies` to flag potential sybil-style activity from shared origins. This is a risk signal, not standalone proof.
- **Integrity run-gate** — deployability now reflects accountability plus cryptographic integrity status (baseline + signed check). Unknown integrity is deliberately penalized by default.
## How It Works
1. **Generate a keypair** — Ed25519 via PyNaCl. Private key stays local.
2. **Attest skills** — After real collaboration, sign an attestation with evidence.
3. **Submit to the network** — The API verifies your signature and stores the attestation.
4. **Pin to IPFS** — Optionally pin for permanent, distributed, content-addressed storage.
5. **Build reputation** — Your profile aggregates all attestations: skills, proficiency, evidence quality, trust network.
6. **Anyone can verify** — Attestations are self-proving. No trust in the server required.
## Attestation Types
| Type | Purpose | Evidence |
|------|---------|----------|
| Skill Attestation | Certify demonstrated competence | Task artifacts, collaboration records |
| Intellectual Contribution | Credit ideas that led to outcomes | Discussion references, design docs |
| Community Contribution | Recognize teaching and resource sharing | Forum posts, guides, mentoring |
| Behavioral Warning | Flag harmful behavior with proof | Incident logs, communication records |
## Design Principles
- **Proof over popularity** — Evidence-linked attestations, not upvotes
- **Portable** — Self-proving JSON that works without any platform
- **No blockchain** — Ed25519 + SQLite + optional IPFS. Simple, fast, verifiable
- **Agents and humans are equal** — Same protocol, same rights
- **Transparency** — All attestations and evidence are inspectable
- **Revocable** — Attestors can retract with a signed revocation
## Authors
**Jim Motes** and **Vanguard** ([@Vanguard_actual](https://moltbook.com/u/Vanguard_actual))
## License
MIT
| text/markdown | Jim Motes, Vanguard | null | null | null | null | attestation, ed25519, trust, reputation, agents, ai-agents, cryptography | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Security :: Cryptography",
"Topic :: Software Development ... | [] | null | null | >=3.11 | [] | [] | [] | [
"pynacl>=1.5.0",
"pydantic>=2.0",
"typer>=0.9.0",
"rich>=13.0",
"fastapi>=0.115.0",
"uvicorn[standard]>=0.30.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://aikredo.com",
"Documentation, https://aikredo.com/_functions/skill",
"Repository, https://github.com/jimmotes2024/kredo",
"Discovery API, https://api.aikredo.com"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T00:42:30.385833 | kredo-0.8.0.tar.gz | 239,246 | 4f/c2/49de748d11cde291e3dc969b2f6eeacc3b5bf9b822fd319eb7828d9c0f52/kredo-0.8.0.tar.gz | source | sdist | null | false | e420786ce481d22c1a9230ed01819834 | 16da5d82977909ef0b93db01396f6490d1443d78f4219f7fd1a8fa2d91179f5e | 4fc249de748d11cde291e3dc969b2f6eeacc3b5bf9b822fd319eb7828d9c0f52 | MIT | [] | 263 |
2.4 | yutori-mcp | 0.2.6 | MCP server for Yutori - web monitoring, deep research, and browser automation | # Yutori MCP
MCP tools and skills for web monitoring, deep research, and browser automation — powered by [Yutori](https://yutori.com/api)'s web agentic tech.
You can use it with Claude Code, Codex, Cursor, VS Code, ChatGPT, OpenClaw, and other MCP hosts.
## Features
**Capabilities:**
- **Scouting** — Monitor the web continuously for anything you care about at a desired frequency
- **Research** — Run one-time deep web research tasks
- **Browsing** — Automate websites with an AI navigator
**Workflow skills** (for clients that support slash commands):
- [`/yutori-scout`](skills/01-scout/SKILL.md) — Set up continuous web monitoring
- [`/yutori-research`](skills/02-research/SKILL.md) — Deep web research (async, 5–10 min)
- [`/yutori-browse`](skills/03-browse/SKILL.md) — Browser automation
- [`/yutori-competitor-watch`](skills/04-competitor-watch/SKILL.md) — Competitor monitoring template
- [`/yutori-api-monitor`](skills/05-api-monitor/SKILL.md) — API/changelog monitoring template
## Installation
### Requirements
If you don't already have `uv` installed, install it (it includes `uvx`):
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
Or with Homebrew:
```bash
brew install uv
```
Python 3.10 or higher is required (`uv` manages this automatically for most installs).
For the quickstart below, Node.js is also required (for `npx`).
### Quick install (recommended)
1. Run in terminal:
```bash
uvx yutori-mcp login
```
This will open Yutori Platform in your browser and save your API key locally.
<details>
<summary>Or, manually add your API key</summary>
Go to https://platform.yutori.com and add your key to the config file:
```bash
mkdir -p ~/.yutori
cat > ~/.yutori/config.json << 'EOF'
{"api_key": "yt-your-api-key"}
EOF
```
</details>
2. Install MCP using [add-mcp](https://neon.com/blog/add-mcp) (requires Node.js):
```
npx add-mcp "uvx yutori-mcp"
```
Pick the clients you want to configure.
3. (Optional) Install workflow skills using [skills.sh](https://skills.sh) (requires Node.js):
```
npx skills add yutori-ai/yutori-mcp
```
Adds slash-command shortcuts like `/yutori-scout`, `/yutori-research`, and more. Skip if you only need the MCP tools.
4. Restart the tool you are using.
### Manual per-client setup
<details>
<summary>Claude Code</summary>
1. **Plugin (Recommended)** - Includes MCP tools + workflow skills
Type these commands in Claude Code's input (not in a terminal):
```
/plugin marketplace add yutori-ai/yutori-mcp
/plugin install yutori@yutori-plugins
```
This installs both the MCP tools and workflow skills:
| Skill | Description |
|-------|-------------|
| `/yutori-scout` | Set up continuous web monitoring with comprehensive queries |
| `/yutori-research` | Deep web research workflow (async, 5-10 min) |
| `/yutori-browse` | Browser automation tasks |
| `/yutori-competitor-watch` | Quick competitor monitoring template |
| `/yutori-api-monitor` | API/changelog monitoring template |
> **Already have the MCP server installed?** Remove it first to avoid duplicate configurations:
> ```bash
> claude mcp remove yutori -s user # if installed at user scope
> claude mcp remove yutori -s local # if installed at local/project scope
> ```
2. **MCP Only** (if you prefer not to use the plugin)
```bash
claude mcp add --scope user yutori -- uvx yutori-mcp
```
The server reads your API key from `~/.yutori/config.json` (set up via `uvx yutori-mcp login`).
</details>
<details>
<summary>Claude Desktop</summary>
Add to your `claude_desktop_config.json`:
```json
{
"mcpServers": {
"yutori": {
"command": "uvx",
"args": ["yutori-mcp"]
}
}
}
```
The server reads your API key from `~/.yutori/config.json`.
For setup details, see the [Claude Desktop MCP install guide](https://modelcontextprotocol.io/docs/develop/connect-local-servers).
</details>
<details>
<summary>Cursor</summary>
**Click the button to install:**
[<img src="https://cursor.com/deeplink/mcp-install-dark.svg" alt="Install in Cursor">](https://cursor.com/en/install-mcp?name=Yutori&config=eyJjb21tYW5kIjoidXZ4IHl1dG9yaS1tY3AifQ%3D%3D)
**Or install manually:**
Go to Cursor Settings → MCP → Add new MCP Server, then add:
```json
{
"mcpServers": {
"yutori": {
"command": "uvx",
"args": ["yutori-mcp"]
}
}
}
```
The server reads your API key from `~/.yutori/config.json`.
See the [Cursor MCP guide](https://cursor.com/docs/context/mcp) for setup details.
</details>
<details>
<summary>VS Code</summary>
**Click the button to install:**
[<img src="https://img.shields.io/badge/VS_Code-VS_Code?style=flat-square&label=Install%20Server&color=0098FF" alt="Install in VS Code">](https://insiders.vscode.dev/redirect?url=vscode%3Amcp%2Finstall%3F%257B%2522name%2522%253A%2522yutori%2522%252C%2522command%2522%253A%2522uvx%2522%252C%2522args%2522%253A%255B%2522yutori-mcp%2522%255D%257D) [<img alt="Install in VS Code Insiders" src="https://img.shields.io/badge/VS_Code_Insiders-VS_Code_Insiders?style=flat-square&label=Install%20Server&color=24bfa5">](https://insiders.vscode.dev/redirect?url=vscode-insiders%3Amcp%2Finstall%3F%257B%2522name%2522%253A%2522yutori%2522%252C%2522command%2522%253A%2522uvx%2522%252C%2522args%2522%253A%255B%2522yutori-mcp%2522%255D%257D)
**Or install manually:**
```bash
code --add-mcp '{"name":"yutori","command":"uvx","args":["yutori-mcp"]}'
```
The server reads your API key from `~/.yutori/config.json`.
</details>
<details>
<summary>ChatGPT</summary>
Open ChatGPT Desktop and go to Settings → Connectors → MCP Servers → Add server.
```json
{
"mcpServers": {
"yutori": {
"command": "uvx",
"args": ["yutori-mcp"]
}
}
}
```
The server reads your API key from `~/.yutori/config.json`.
For setup details, see the [OpenAI MCP guide](https://platform.openai.com/docs/mcp).
</details>
<details>
<summary>Codex</summary>
1. **MCP Server:**
```bash
codex mcp add yutori -- uvx yutori-mcp
```
Or add to `~/.codex/config.toml`:
```toml
[mcp_servers.yutori]
command = "uvx"
args = ["yutori-mcp"]
```
The server reads your API key from `~/.yutori/config.json`.
2. **Skills** (optional, for workflow guidance):
Install skills using `$skill-installer` inside Codex:
```
$skill-installer install https://github.com/yutori-ai/yutori-mcp/tree/main/.agents/skills/yutori-scout
$skill-installer install https://github.com/yutori-ai/yutori-mcp/tree/main/.agents/skills/yutori-research
$skill-installer install https://github.com/yutori-ai/yutori-mcp/tree/main/.agents/skills/yutori-browse
$skill-installer install https://github.com/yutori-ai/yutori-mcp/tree/main/.agents/skills/yutori-competitor-watch
$skill-installer install https://github.com/yutori-ai/yutori-mcp/tree/main/.agents/skills/yutori-api-monitor
```
Or manually copy skills to your user directory (use `-L` so symlinks are dereferenced and real files are copied):
```bash
git clone https://github.com/yutori-ai/yutori-mcp /tmp/yutori-mcp
cp -rL /tmp/yutori-mcp/.agents/skills/* ~/.agents/skills/
```
Restart Codex after installing skills.
| Skill | Command | Description |
|-------|---------|-------------|
| Scout | `$yutori-scout` | Set up continuous web monitoring |
| Research | `$yutori-research` | Deep web research (async, 5-10 min) |
| Browse | `$yutori-browse` | Browser automation with AI navigator |
| Competitor Watch | `$yutori-competitor-watch` | Quick competitor monitoring template |
| API Monitor | `$yutori-api-monitor` | API/changelog monitoring template |
See the [Codex Skills docs](https://developers.openai.com/codex/skills/) for more on skills.
</details>
<details>
<summary>OpenClaw</summary>
Follow the **Quickstart** above:
1. Install skills and MCP for OpenClaw (and optionally other tools) via [skills.sh](https://skills.sh):
```bash
npx skills add yutori-ai/yutori-mcp
```
When prompted, choose which Yutori skills to install and select **OpenClaw** as the tool.
</details>
<details>
<summary>Gemini CLI</summary>
Add to `~/.gemini/settings.json`. If you already have `mcp` or `mcpServers`, merge these keys into your existing config:
```json
{
"mcp": {
"allowed": ["yutori"]
},
"mcpServers": {
"yutori": {
"command": "uvx",
"args": ["yutori-mcp"]
}
}
}
```
The server reads your API key from `~/.yutori/config.json`.
Add `"yutori"` to `mcp.allowed` if you already list other MCPs there. For more details, see the [Gemini CLI MCP settings guide](https://github.com/google-gemini/gemini-cli/blob/main/docs/tools/mcp-server.md#configure-the-mcp-server-in-settingsjson).
</details>
<details>
<summary>Run with pip</summary>
Install the package to run the MCP server (e.g. for custom or self-hosted setups):
```bash
pip install yutori-mcp
```
</details>
## Tools
See [TOOLS.md](TOOLS.md) for the full tool reference — Scout, Research, and Browsing tools with parameters, examples, and response formats.
## Development
### Setup
```bash
git clone https://github.com/yutori-ai/yutori-mcp
cd yutori-mcp
pip install -e ".[dev]"
```
### Testing
```bash
pytest
```
### Running locally
```bash
yutori-mcp login # authenticate (one-time)
yutori-mcp # run the server (or: python -m yutori_mcp.server)
```
### Debugging with MCP Inspector
```bash
npx @modelcontextprotocol/inspector yutori-mcp
```
## API Documentation
For full API documentation, visit [docs.yutori.com](https://docs.yutori.com).
## License
Apache 2.0
| text/markdown | null | Yutori <support@yutori.com> | null | null | null | browsing, mcp, monitoring, web-automation, yutori | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"mcp>=1.0.0",
"pydantic>=2.0.0",
"yutori<0.4.0,>=0.3.0",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://yutori.com",
"Documentation, https://docs.yutori.com",
"Repository, https://github.com/yutori-ai/yutori-mcp"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T00:42:18.963542 | yutori_mcp-0.2.6.tar.gz | 611,370 | e6/d7/d015dca64fc77004c3595b3cd11b97feb981a926f0ca012cef1dcebebda5/yutori_mcp-0.2.6.tar.gz | source | sdist | null | false | a47e40dbeb775cc0b134245f686e6001 | 5928040fc7dbfd3fcb80424a210801f8210cebc3731f306b01f91ec7bd49eddb | e6d7d015dca64fc77004c3595b3cd11b97feb981a926f0ca012cef1dcebebda5 | Apache-2.0 | [
"LICENSE"
] | 280 |
2.4 | release-pilot | 1.0.5 | Deterministic orchestration of white-label app builds | # 🚀 ReleasePilot — Deterministic orchestration of white-label app builds
[](https://github.com/argolo/release-pilot/actions/workflows/ci.yml)
[](https://www.python.org/)
[](LICENSE)
[](https://github.com/argolo/release-pilot/commits/main)
[](https://github.com/argolo/release-pilot/issues)
[](https://pypi.org/project/release-pilot/)
[](https://pypi.org/project/release-pilot/)
[](https://pypi.org/project/release-pilot/)
**ReleasePilot** is an **assisted release orchestrator** that executes `yarn` commands in an **organized, deterministic, and controlled** manner, based on variables provided by the operator (platform, contractor, environment, and command).
Its primary goal is to **standardize and automate the build, packaging, and delivery process of white-label applications**, while respecting the specific differences between contractors, environments, and platforms — without sacrificing **human control at critical steps**.
ReleasePilot is intentionally designed to **orchestrate** commands, not to encapsulate low-level logic or highly specific operational flows. For this reason, granular commands, deep customizations, or platform-specific behaviors **must live in dedicated build flows**, which are then invoked by `yarn`.
The orchestrator’s responsibility is to **order, coordinate, and operate** these commands in a consistent, predictable, and auditable way. To enable this, the `package.json` must define **script aliases** that follow the ReleasePilot convention:
```
{platform}:{contractor}:{environment}:{command}
```
This allows `yarn` to act as the execution layer, while ReleasePilot acts as the orchestration layer.
---
## 🎯 Purpose
ReleasePilot was created to solve a recurring problem in white-label ecosystems:
> **How can we execute multiple build commands in a consistent, predictable, and auditable way when each application varies by contractor, environment, and platform?**
The answer is not blind automation — it is **conscious orchestration**.
---
## ✨ Key Features
* 🎛️ Orchestration of `yarn` commands based on operational variables
* 📱 Multi-platform support (`android`, `ios`)
* 🏢 Automatic discovery of **contractors** via directory structure
* 🧪 Automatic discovery of **environments** per contractor
* ⚙️ Supported commands: `add`, `build`, `deploy`
* 🔁 **“All”** option available in every selection step
* ⏸️ **Assisted execution** with human checkpoints between:
* Environments
* Contractors
* 📌 Execution planning **identical to the real execution order**
* 📦 Final, traceable release summary
* 🧩 Simple, pythonic code with **no external dependencies**
---
## 🧠 Operational Philosophy
ReleasePilot **does not execute commands randomly**.
It:
* Organizes
* Orders
* Operates
Each `yarn` command is executed within a **well-defined context**, ensuring that:
* Builds are not mixed across contractors
* Environments are strictly respected
* Artifacts can be safely retrieved between steps
* The operator has full visibility into what is being executed
---
## 📂 Expected Project Structure
```text
project-root/
├─ contractor/
│ ├─ quickup/
│ │ ├─ sandbox/
│ │ ├─ alfa/
│ │ └─ beta/
│  └─ kompa/
│     ├─ sandbox/
│     ├─ beta/
│     └─ prod/
```
> The project name is automatically inferred from the **root directory name**.
---
## 🧾 Command Pattern
ReleasePilot executes commands following this convention:
```bash
yarn {platform}:{contractor}:{environment}:{command}
```
### Example
```bash
yarn android:quickup:beta:build
```
---
## 🚀 Installation
### Requirements
* Python **3.9+**
* Node.js + Yarn
* Git (optional, but recommended for traceability)
---
## 🍎 macOS Installation (recommended)
On macOS, the **recommended and reliable** way to install ReleasePilot as a global CLI is using **pipx**.
This avoids permission issues, conflicts with the system Python, and ensures proper isolation.
### 1️⃣ Install `pipx`
```bash
python3 -m pip install --user pipx
python3 -m pipx ensurepath
```
> ⚠️ After this step, **close and reopen your terminal**.
---
### 2️⃣ Navigate to the project directory
```bash
cd release-pilot # directory containing pyproject.toml
```
---
### 3️⃣ Install ReleasePilot globally
```bash
pipx install .
```
The command will now be available globally as:
```bash
release-pilot
```
---
### ▶️ Quick Test
```bash
release-pilot
```
If the interactive menu appears, the installation was successful ✅
---
### 🔍 Optional Checks
```bash
which release-pilot
pipx list
```
Expected output (example):
```text
~/.local/bin/release-pilot
```
---
### 🧹 Updating ReleasePilot
After updating the code or version:
```bash
pipx reinstall release-pilot
```
---
### ❌ Uninstalling
```bash
pipx uninstall release-pilot
```
---
### ⚠️ Important Notes for macOS
* **Do not use `sudo pip install`**
* **Do not use the system Python to install CLIs**
* **Do not manually copy binaries**
* For Python CLI tools, **pipx is always the correct choice**
---
### 🧠 Rule of Thumb
> **Python library → `pip install`**
> **Python CLI tool → `pipx install`**
---
## 📌 Execution Planning
Before executing any command, ReleasePilot displays the **complete execution plan**, in the **exact order in which commands will run**.
This eliminates ambiguity and ensures full predictability.
---
## ✅ Final Release Summary
At the end of execution, ReleasePilot presents a consolidated summary including:
* 📁 Project
* 📦 Contractors
* 🌿 Git branch / version
* 🧪 Environments
* 📱 Platforms
* ⚙️ Total executed commands
This summary improves auditability, communication, and release traceability.
---
## 🛡️ Ideal Use Cases
* White-label app builds
* Sandbox / alfa / beta / production environments
* Teams supporting multiple clients
* Sensitive or regulated releases
* Teams that require **control + automation**
---
## 🔮 Future Enhancements
* `--dry-run` mode
* Non-interactive execution (`--ci`)
* Summary export (`.txt` / `.md`)
* Commit hash and SemVer tag support
* Slack / Jira / Discord / Telegram integrations
* Persistent execution logs
---
## 📜 License
MIT License.
---
## 👤 Author
**André Argôlo**
CTO • Software Architect • DevOps
* 🌐 Website: [https://argolo.dev](https://argolo.dev)
* 🐙 GitHub: [@argolo](https://github.com/argolo)
---
### 🧭 About
André Argôlo is a software architect and technology leader with extensive experience in designing and operating mission-critical systems. His work focuses on building scalable platforms, improving developer experience, and creating pragmatic tooling that balances automation with human control — especially in regulated and high-responsibility environments.
| text/markdown | null | Andre Argolo <mail@argolo.dev> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T00:42:06.249683 | release_pilot-1.0.5.tar.gz | 7,291 | d3/4c/bb16e81ca6021812c0132de0b0e7674adde77064ce060b0bc4dfb9d1041c/release_pilot-1.0.5.tar.gz | source | sdist | null | false | c3966903f401a410bb2ca1b3c8166428 | b94ae00cc1b1eb2744f4ffe1c979f16731bfe313539f497d84696ce60df10438 | d34cbb16e81ca6021812c0132de0b0e7674adde77064ce060b0bc4dfb9d1041c | MIT | [
"LICENSE.md"
] | 256 |
2.1 | async-substrate-interface | 1.6.2 | Asyncio library for interacting with substrate. Mostly API-compatible with py-substrate-interface | # Async Substrate Interface
This project provides an asynchronous interface for interacting with [Substrate](https://substrate.io/)-based blockchains. It is based on the [py-substrate-interface](https://github.com/polkascan/py-substrate-interface) project.
Additionally, this project uses [bt-decode](https://github.com/opentensor/bt-decode) instead of [py-scale-codec](https://github.com/polkascan/py-scale-codec) for faster [SCALE](https://docs.substrate.io/reference/scale-codec/) decoding.
## Installation
To install the package, use the following command:
```bash
pip install async-substrate-interface
```
## Usage
Here are examples of how to use the sync and async interfaces:
```python
from async_substrate_interface import SubstrateInterface
def main():
substrate = SubstrateInterface(
url="wss://rpc.polkadot.io"
)
with substrate:
result = substrate.query(
module='System',
storage_function='Account',
params=['5CZs3T15Ky4jch1sUpSFwkUbYEnsCfe1WCY51fH3SPV6NFnf']
)
print(result)
main()
```
```python
import asyncio
from async_substrate_interface import AsyncSubstrateInterface
async def main():
substrate = AsyncSubstrateInterface(
url="wss://rpc.polkadot.io"
)
async with substrate:
result = await substrate.query(
module='System',
storage_function='Account',
params=['5CZs3T15Ky4jch1sUpSFwkUbYEnsCfe1WCY51fH3SPV6NFnf']
)
print(result)
asyncio.run(main())
```
### Caching
There are a few different cache types used in this library to improve the performance overall. The one with which
you are probably familiar is the typical `functools.lru_cache` used in `sync_substrate.SubstrateInterface`.
By default, it uses a max cache size of 512 for smaller returns, and 16 for larger ones. These cache sizes are
user-configurable using the respective env vars, `SUBSTRATE_CACHE_METHOD_SIZE` and `SUBSTRATE_RUNTIME_CACHE_SIZE`.
They are applied only on methods whose results cannot change — such as the block hash for a given block number
(small, 512 default), or the runtime for a given runtime version (large, 16 default).
Additionally, in `AsyncSubstrateInterface`, because of its asynchronous nature, we developed our own asyncio-friendly
LRU caches. The primary one is the `CachedFetcher` which wraps the same methods as `functools.lru_cache` does in
`SubstrateInterface`, but the key difference here is that each request is assigned a future that is returned when the
initial request completes. So, if you were to do:
```python
bn = 5000
bh1, bh2 = await asyncio.gather(
asi.get_block_hash(bn),
asi.get_block_hash(bn)
)
```
it would actually only make one single network call, and return the result to both requests. Like `SubstrateInterface`,
it also takes the `SUBSTRATE_CACHE_METHOD_SIZE` and `SUBSTRATE_RUNTIME_CACHE_SIZE` vars to set cache size.
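The request-coalescing idea behind `CachedFetcher` can be sketched in a few lines. This is an illustration of the pattern, not the library's actual implementation:

```python
import asyncio

class CoalescingCache:
    """Concurrent requests for the same key share one in-flight future,
    so the underlying fetch runs only once per key."""

    def __init__(self, fetch):
        self._fetch = fetch
        self._futures = {}

    async def get(self, key):
        if key not in self._futures:
            # First caller creates the future and performs the real fetch.
            fut = asyncio.get_running_loop().create_future()
            self._futures[key] = fut
            try:
                fut.set_result(await self._fetch(key))
            except Exception as exc:
                fut.set_exception(exc)
        # Later callers (and the first) await the shared future.
        return await self._futures[key]

calls = 0

async def fake_rpc(block_number):
    global calls
    calls += 1
    await asyncio.sleep(0.01)  # simulate network latency
    return f"hash-{block_number}"

async def main():
    cache = CoalescingCache(fake_rpc)
    bh1, bh2 = await asyncio.gather(cache.get(5000), cache.get(5000))
    assert bh1 == bh2 == "hash-5000"
    assert calls == 1  # only one "network" call was made

asyncio.run(main())
```

A real implementation would also bound the cache size (LRU eviction), which is what the env vars above configure.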
The third and final caching mechanism we use is `async_substrate_interface.async_substrate.DiskCachedAsyncSubstrateInterface`,
which functions the same as the normal `AsyncSubstrateInterface`, but that also saves this cache to the disk, so the cache
is preserved between runs. This is useful for tools that are invoked repeatedly (such as `btcli`). As you may call different networks
with entirely different results, this cache is keyed by the uri supplied at instantiation of the `DiskCachedAsyncSubstrateInterface`
object, so `DiskCachedAsyncSubstrateInterface(network_1)` and `DiskCachedAsyncSubstrateInterface(network_2)` will not share
the same on-disk cache.
As with the other two caches, this also takes `SUBSTRATE_CACHE_METHOD_SIZE` and `SUBSTRATE_RUNTIME_CACHE_SIZE` env vars.
### ENV VARS
The following environment variables are used within async-substrate-interface
- NO_CACHE (default 0): if set to 1, when using the DiskCachedAsyncSubstrateInterface class, no persistent on-disk cache will be stored, instead using only in-memory cache.
- CACHE_LOCATION (default `~/.cache/async-substrate-interface`): this determines the location for the cache file, if using DiskCachedAsyncSubstrateInterface
- SUBSTRATE_CACHE_METHOD_SIZE (default 512): the cache size (either in-memory or on-disk) of the smaller return-size methods (see the Caching section for more info)
- SUBSTRATE_RUNTIME_CACHE_SIZE (default 16): the cache size (either in-memory or on-disk) of the larger return-size methods (see the Caching section for more info)
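For example, to run with a larger method cache and a custom on-disk cache location (values here are illustrative):

```bash
# Enlarge the small-return method cache and relocate the disk cache.
export SUBSTRATE_CACHE_METHOD_SIZE=1024
export CACHE_LOCATION="$HOME/.cache/my-tool-substrate"
```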
## Contributing
Contributions are welcome! Please open an issue or submit a pull request to the `staging` branch.
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
## Contact
For any questions or inquiries, please join the Bittensor Development Discord server: [Church of Rao](https://discord.gg/XC7ucQmq2Q).
| text/markdown | Opentensor Foundation | BD Himes <b@latent.to> | Latent Holdings | BD Himes <b@latent.to> | MIT License
Copyright (c) 2025 Opentensor
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| substrate, development, bittensor | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programm... | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"wheel",
"aiosqlite<1.0.0,>=0.21.0",
"bt-decode==v0.8.0",
"scalecodec~=1.2.11",
"websockets>=14.1",
"xxhash",
"bittensor; extra == \"dev\"",
"pytest==8.3.5; extra == \"dev\"",
"pytest-asyncio==0.26.0; extra == \"dev\"",
"pytest-mock==3.14.0; extra == \"dev\"",
"pytest-split==0.10.0; extra == \"d... | [] | [] | [] | [
"Repository, https://github.com/opentensor/async-substrate-interface/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T00:42:05.213296 | async_substrate_interface-1.6.2.tar.gz | 89,348 | cd/94/defb4f4dbfde2f90f11ef0142c2690b06ffd9707e7038f6d8601aeb4371d/async_substrate_interface-1.6.2.tar.gz | source | sdist | null | false | a769f6adeb2243078a7c014d5ae1d0d1 | c2af5496d162062ff02e0a2221ff4ffe0012f29ee530355be1fea663e0c0b229 | cd94defb4f4dbfde2f90f11ef0142c2690b06ffd9707e7038f6d8601aeb4371d | null | [] | 22,300 |
2.4 | starelements | 0.1.3 | Python-native web components for starHTML/Datastar | # starelements
Custom web components, defined in Python, powered by Datastar signals.
You decorate a Python function, and it becomes a custom element. Each instance gets its own scoped signals — no JavaScript class boilerplate, no state collision between multiple instances on the same page.
## Why?
`starelements` gives you encapsulated custom elements where signals are scoped per instance and JS dependencies are declared with the component, not in your app headers. Define it once, `app.register()` it, and use it like any HTML element.
## Features
- **Decorator = component** — one `@element("tag-name")` and your function is a custom element. No class inheritance, no `connectedCallback`.
- **Scoped signals** — each component instance gets its own signal namespace. Two `<my-counter>` on the same page won't step on each other.
- **ESM imports built in** — pull in third-party JS via `imports={"chart": "https://esm.sh/chart.js@4"}`. No bundler config needed.
- **Light DOM by default** — your component's markup lives in the real DOM, so `<form>` submission, CSS selectors, and accessibility tools all just work. Shadow DOM is opt-in.
- **Skeleton loading** — set `height="400px", skeleton=True` and users see a shimmer placeholder until the component initializes. Prevents layout shift.
## Installation
Requires Python 3.12+ and [StarHTML](https://github.com/banditburai/starhtml).
```bash
pip install starelements
```
## Quick Start
A complete counter app — two instances with different initial values, each tracking its own state:
```python
from starhtml import Div, Button, Span, star_app, serve
from starelements import element, Local
@element("my-counter")
def Counter():
return Div(
(count := Local("count", 0)),
(step := Local("step", 1)),
Button("-", data_on_click=count.set(count - step)),
Span(data_text=count),
Button("+", data_on_click=count.set(count + step)),
)
app, rt = star_app()
app.register(Counter)
@rt("/")
def home():
return Div(Counter(count=10, step=5), Counter(count=0))
if __name__ == "__main__":
serve()
```
Attributes you pass (`count=10, step=5`) become signal values inside that instance.
`Local` objects are signal references — `count + step` isn't evaluated in Python. It builds a JS expression, so `count.set(count + step)` produces `$$count = ($$count + $$step)` for the browser.
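The mechanism is ordinary Python operator overloading — a simplified sketch (not the actual `starelements` implementation) of how a signal reference can build a JS expression instead of evaluating in Python:

```python
class JSExpr:
    """Wraps a JavaScript expression string."""
    def __init__(self, code):
        self.code = code

class LocalSketch:
    """Simplified signal reference: arithmetic builds JS, never runs in Python."""
    def __init__(self, name):
        self.code = f"$${name}"

    def __add__(self, other):
        rhs = other.code if hasattr(other, "code") else repr(other)
        return JSExpr(f"({self.code} + {rhs})")

    def set(self, value):
        rhs = value.code if hasattr(value, "code") else repr(value)
        return f"{self.code} = {rhs}"

count = LocalSketch("count")
step = LocalSketch("step")
print(count.set(count + step))  # $$count = ($$count + $$step)
print(count.set(5))             # $$count = 5
```

The real `Local` presumably implements more operators (`__sub__`, comparisons, etc.), but the principle is the same: Python syntax is captured and serialized as a browser-side expression.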
## Examples
### Skeleton loading
The `skeleton` option shows a shimmer placeholder while the component initializes, preventing layout shift:
```python
@element("heavy-chart", height="400px", skeleton=True,
imports={"chart": "https://esm.sh/chart.js@4"})
def HeavyChart():
return Div(
Script('''
new chart.Chart(refs('canvas'), {type: 'bar', data: {...}});
'''),
Canvas(data_ref="canvas", style="width:100%;height:100%;"),
)
```
### Setup and cleanup with Script()
`Script()` inside your render tree runs once when the component connects. Use `onCleanup()` to tear down resources when the element is removed:
```python
@element("video-player")
def VideoPlayer():
return Div(
(playing := Local("playing", False)),
Video(data_ref="video", src="/video.mp4"),
Button("Play/Pause", data_on_click="$$playing = !$$playing"),
Script('''
const video = refs('video');
effect(() => $$playing ? video.play() : video.pause());
onCleanup(() => video.pause());
'''),
)
```
Inside `Script()`, imported modules are available by alias, signals are accessible as `$$name`, and `refs('name')` returns elements marked with `data_ref`.
For more complex examples, see:
- [`examples/counter.py`](examples/counter.py) — counter with step controls
- [`examples/waveform_editor.py`](examples/waveform_editor.py) — audio waveform editor using Peaks.js
- [`examples/codemirror/editor.py`](examples/codemirror/editor.py) — CodeMirror 6 with theme/language switching and complex import maps
## API Reference
### @element decorator
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `name` | `str` | required | Custom element tag (must contain a hyphen, e.g. `my-counter`) |
| `shadow` | `bool` | `False` | Use Shadow DOM instead of Light DOM |
| `form_associated` | `bool` | `False` | Reserved for future form association support |
| `height` | `str \| None` | `None` | Shorthand for min-height; skeleton defaults to True when set |
| `width` | `str` | `"100%"` | Width dimension |
| `dimensions` | `dict \| None` | `None` | Full dimension dict (overrides height/width) |
| `skeleton` | `bool \| None` | `None` | Show shimmer placeholder while loading |
| `imports` | `dict \| None` | `None` | ESM imports — `{alias: specifier}` |
| `import_map` | `dict \| None` | `None` | Additional import map entries |
| `scripts` | `dict \| None` | `None` | UMD scripts — `{globalName: url}` |
| `events` | `list \| None` | `None` | Custom events the component emits |
### Registration
`app.register()` mounts static file routes and adds the component's CSS, import map, JS runtime, and templates to the app-wide headers (included on every page):
```python
app, rt = star_app()
app.register(Counter) # single component
app.register(Counter, DatePicker) # multiple at once
```
## CLI
`starelements` includes a CLI for bundling npm packages into ESM bundles using esbuild:
```bash
starelements bundle # bundles packages listed in pyproject.toml [tool.starelements]
```
Configure packages in your `pyproject.toml`:
```toml
[tool.starelements]
bundle = ["chart.js@4", "@codemirror/state@6.4.1"]
```
## Development
```bash
uv sync --all-extras # install dev + test dependencies
uv run scripts/build.py # build JS runtime from TypeScript
uv run ruff check src/ tests/ # lint
uv run pytest tests/ -v # run tests
```
The TypeScript runtime source lives in `typescript/`. The build script compiles it to `src/starelements/static/starelements.min.js`.
## License
[Apache 2.0](LICENSE)
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"platformdirs>=4.0",
"starhtml>=0.5.3",
"pip-tools; extra == \"dev\"",
"pyright; extra == \"dev\"",
"ruff; extra == \"dev\"",
"pytest-cov>=6.1.1; extra == \"test\"",
"pytest>=8.0; extra == \"test\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T00:42:00.166126 | starelements-0.1.3.tar.gz | 111,348 | 3c/cd/d904e9b7172f5b66541e799565c64cd51137693a646a84e23344be55c447/starelements-0.1.3.tar.gz | source | sdist | null | false | e8a83dd1317a6739a8db5343e358c78c | b2ebe30c369411eab15c83af31bf03267d822bf4e9e1e70bbf4a6105e3ddd7db | 3ccdd904e9b7172f5b66541e799565c64cd51137693a646a84e23344be55c447 | Apache-2.0 | [
"LICENSE"
] | 276 |
2.4 | nequix | 0.4.1 | Nequix source code | <h1 align='center'>Nequix</h1>
Source code and model weights for the [Nequix foundation model](https://arxiv.org/abs/2508.16067), and [Phonon fine-tuning (PFT)](https://arxiv.org/abs/2601.07742).
Model | Dataset | Theory | Reference
--- | --- | --- | ---
`nequix-mp-1`| MPtrj | DFT (PBE+U) | [Nequix](https://arxiv.org/abs/2508.16067)
`nequix-mp-1-pft`| MPtrj, MDR Phonon | DFT (PBE+U) |[PFT](https://arxiv.org/abs/2601.07742)
`nequix-omat-1`| OMat24 | DFT (PBE+U, VASP 54) | [PFT](https://arxiv.org/abs/2601.07742)
`nequix-oam-1`| OMat24, sAlex, MPtrj | DFT (PBE+U) | [PFT](https://arxiv.org/abs/2601.07742)
`nequix-oam-1-pft`| OMat24, sAlex, MPtrj, MDR Phonon | DFT (PBE+U) | [PFT](https://arxiv.org/abs/2601.07742)
## Usage
### Installation
```bash
pip install nequix
```
to use [OpenEquivariance](https://github.com/PASSIONLab/OpenEquivariance) kernels,
```bash
pip install nequix[oeq]
# needs to be run after installation:
uv pip install openequivariance_extjax --no-build-isolation
```
or for torch (also with kernels):
```bash
pip install nequix[torch]
```
### ASE calculator
Using `nequix.calculator.NequixCalculator`, you can perform calculations in
ASE with a pre-trained Nequix model.
```python
from nequix.calculator import NequixCalculator
atoms = ...
atoms.calc = NequixCalculator("nequix-mp-1", backend="jax")
```
or if you want to use the torch backend:
```python
...
atoms.calc = NequixCalculator("nequix-mp-1", backend="torch")
...
```
With kernels enabled, the JAX and Torch backends are typically comparable in speed.
#### NequixCalculator
Arguments
- `model_name` (str, default "nequix-mp-1"): Pretrained model alias to load or download.
- `model_path` (str | Path, optional): Path to local checkpoint; overrides `model_name`.
- `backend` ({"jax", "torch"}, default "jax"): Compute backend.
- `capacity_multiplier` (float, default 1.1): JAX-only; padding factor to limit recompiles.
- `use_compile` (bool, default True): Torch-only; on GPU, uses `torch.compile()`.
- `use_kernel` (bool, default True): on GPU, use [OpenEquivariance](https://github.com/PASSIONLab/OpenEquivariance) kernels.
### Training
Models are trained with the `nequix_train` command using a single `.yml`
configuration file:
```bash
nequix_train <config>.yml
```
or for Torch
```bash
# Single GPU
uv sync --extra torch
uv run nequix/torch_impl/train.py <config>.yml
# Multi-GPU
uv run torchrun --nproc_per_node=<gpus> nequix/torch_impl/train.py <config>.yml
```
To reproduce the training of Nequix-MP-1, first clone the repo and sync the environment:
```bash
git clone https://github.com/atomicarchitects/nequix.git
cd nequix
uv sync
```
Then download the MPtrj data from
https://figshare.com/files/43302033 into `data/` then run the following to extract the data:
```bash
bash data/download_mptrj.sh
```
Preprocess the data into `.aselmdb` files:
```bash
uv run scripts/preprocess_data.py data/mptrj-gga-ggapu data/mptrj-aselmdb
```
Then start the training run:
```bash
nequix_train configs/nequix-mp-1.yml
```
This will take less than 125 hours on a single 4 x A100 node (<25 hours with kernels). The `batch_size` in the
config is per-device, so you should be able to run this on any number of GPUs
(although hyperparameters like learning rate are often sensitive to global batch
size, so keep this in mind).
## Phonon fine-tuning (PFT)
First sync extra dependencies with
```bash
uv sync --extra pft
```
### Phonon calculations
We provide pretrained model weights for the co-trained (better alignment with
MPtrj) and non co-trained models in `models/nequix-mp-1-pft.nqx` and
`nequix-mp-1-pft-nocotrain.nqx` respectively. See [nequix-examples/phonon](https://github.com/teddykoker/nequix-examples/blob/main/phonon) for
examples on how to use these models for phonon calculations with both finite
displacement, and analytical Hessians.
### Training
Data for the PBE MDR phonon database was originally downloaded and preprocessed with:
```bash
bash data/download_pbe_mdr.sh
uv run data/split_pbe_mdr.py
uv run scripts/preprocess_data_phonopy.py data/pbe-mdr/train data/pbe-mdr/train-aselmdb
uv run scripts/preprocess_data_phonopy.py data/pbe-mdr/val data/pbe-mdr/val-aselmdb
```
However we provide preprocessed data which can be downloaded with
```bash
bash data/download_pbe_mdr_preprocessed.sh
```
To run PFT without co-training run:
```bash
uv run nequix/pft/train.py configs/nequix-mp-1-pft-no-cotrain.yml
```
To run PFT *with* co-training run (note this requires `mptrj-aselmdb` preprocessed):
```bash
uv run nequix/pft/train.py configs/nequix-mp-1-pft.yml
```
To run PFT on the OAM base model, follow the data download instructions below and then run:
```bash
uv run nequix/pft/train.py configs/nequix-oam-1-pft.yml
```
Both PFT training runs take about 140 hours on a single A100. Note that PFT training is currently only supported with the JAX backend, which is both significantly faster and supported by the kernels. See [nequix-examples/pft](https://github.com/teddykoker/nequix-examples/blob/main/pft), which contains a small demo for PFT in PyTorch that can be adapted to other models. Feel free to reach out with questions.
## Training OMat/OAM base models
To reproduce our training runs for the OMat and OAM base models run the following. First download OMat and sAlex data:
```bash
./data/download_omat.sh <path to storage location>
```
Then symlink to `./data`
```bash
ln -s <path to storage location>/omat ./data/omat
ln -s <path to storage location>/salex ./data/salex
ln -s <path to storage location>/mptrj-aselmdb ./data/mptrj-aselmdb
```
To train the OMat model, run:
```bash
uv run torchrun --nproc_per_node=4 nequix/torch_impl/train.py configs/nequix-omat-1.yml
```
This takes roughly 60 hours on a 4 x A100 node. To fine-tune the OAM model, copy
the OMat model to `models/nequix-omat-1.pt` and run
```bash
uv run torchrun --nproc_per_node=4 nequix/torch_impl/train.py configs/nequix-oam-1.yml
```
This takes roughly 10 hours on a 4 x A100 node.
## Citation
```bibtex
@article{koker2026pft,
title={{PFT}: Phonon Fine-tuning for Machine Learned Interatomic Potentials},
author={Koker, Teddy and Gangan, Abhijeet and Kotak, Mit and Marian, Jaime and Smidt, Tess},
journal={arXiv preprint arXiv:2601.07742},
year={2026}
}
@article{koker2025training,
title={Training a foundation model for materials on a budget},
author={Koker, Teddy and Kotak, Mit and Smidt, Tess},
journal={arXiv preprint arXiv:2508.16067},
year={2025}
}
```
| text/markdown | null | Teddy Koker <teddy.koker@gmail.com> | null | null | MIT License Copyright (c) 2025 Teddy Koker Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"ase-db-backends",
"ase>=3.24.0",
"cloudpickle>=3.1.1",
"e3nn-jax>=0.20.8",
"equinox>=0.11.11",
"h5py>=3.14.0",
"jax>=0.4.34; sys_platform == \"darwin\"",
"jax[cuda12]>=0.4.34; sys_platform == \"linux\"",
"jraph>=0.0.6.dev0",
"matscipy>=1.1.1",
"optax>=0.2.5",
"pyyaml>=6.0.2",
"tqdm>=4.67.1"... | [] | [] | [] | [
"Homepage, https://pypi.org/project/nequix/",
"Repository, https://github.com/atomicarchitects/nequix"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T00:41:44.771973 | nequix-0.4.1.tar.gz | 40,801 | 5f/32/52a36c43b0939fb0c3506f692c01117fe47079d33f3a7ea91c8960bad42d/nequix-0.4.1.tar.gz | source | sdist | null | false | bebe4972864c860e2db22cb50503f6d1 | 55c28a6e14311c5363fb571c5b2997355e5d08158bacb58573b4adcfe7ed19a9 | 5f3252a36c43b0939fb0c3506f692c01117fe47079d33f3a7ea91c8960bad42d | null | [
"LICENSE"
] | 264 |
2.1 | switch-api | 0.6.13 | A complete package for data ingestion into the Switch Automation Platform. | # Switch Automation library for Python
This is a package for data ingestion into the Switch Automation software platform.
You can find out more about the platform on [Switch Automation](https://www.switchautomation.com)
## Getting started
### Prerequisites
* Python 3.8 or later is required to use this package.
* You must have a [Switch Automation user account](https://www.switchautomation.com/our-solution/) to use this package.
### Install the package
Install the Switch Automation library for Python with [pip](https://pypi.org/project/pip/):
```bash
pip install switch_api
```
# History
## 0.6.13
### Added
In the `controls` module
- Add `submit_realtime_updates` method to support realtime updates for a single dataset
- Subscribes to a single MQTT topic and retrieves retained message (if any)
- Merges retained dataset with provided DataFrame using Id-based override and append logic
- Publishes merged dataset back to the corresponding Realtime topic
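The Id-based override and append merge can be sketched with pandas. This is a hypothetical illustration of the behaviour described above (assuming an `Id` column), not the package's internal code:

```python
import pandas as pd

def merge_realtime(retained: pd.DataFrame, update: pd.DataFrame) -> pd.DataFrame:
    """Override retained rows whose Id appears in the update, append the rest."""
    # Drop retained rows superseded by the update, then append the update rows.
    kept = retained[~retained["Id"].isin(update["Id"])]
    return pd.concat([kept, update], ignore_index=True)

retained = pd.DataFrame({"Id": [1, 2, 3], "Value": [10, 20, 30]})
update = pd.DataFrame({"Id": [2, 4], "Value": [25, 40]})
merged = merge_realtime(retained, update)
# Ids 1 and 3 are kept, Id 2 is overridden, Id 4 is appended
```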
## 0.6.12
### Modified
In the `controls` module
- Implement auto-reconnect to MQTT Broker
- Enable Clean Session in MQTT Client
- Resubscribes to Topics on reconnect
- Remove manual disconnection to MQTT Broker
## 0.6.11
### Modified
In the `controls` module
- Implement Persistent Connection & Single Subscription for efficiency against ClientIds and Site State Topics
## 0.6.7
### Modified
In the `integration` module
- Computing of carbon calculation expressions is refactored for efficiency, utilizing `numexpr` library for faster evaluation of Carbon Calculation Expressions
## 0.6.6
### Modified
In the `integration` module:
- `send_reading_summaries()` method now rounds up float values for Value, Cost, and Carbon to 4 decimal places before sending to Reading Summaries API Endpoint
## 0.6.5
### Modified
In the `integration` module:
- Refactored the `send_reading_summaries()` method to compute daily totals for Carbon, Cost, and Value directly within the method.
- This update is internal and does **not** require any changes to the input dataframe used by `upsert_timeseries()`.
- This update still requires the `generate_summaries` parameter to be `True` to generate reading summaries.
- If the `generate_summaries` parameter is set to `True`, the dataframe passed to the `upsert_timeseries()` method must contain the complete interval records for a given calendar day. It cannot be a partial day's worth of data.
- Added a new error type: `ReadingSummariesTimeout` to improve categorization and handling of timeouts specifically related to reading summaries.
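The daily-total computation can be sketched with pandas. A hypothetical illustration (column names `ObjectPropertyId` and `Timestamp` are assumed), not the library's implementation; it also applies the 4-decimal rounding introduced in 0.6.6:

```python
import pandas as pd

def daily_totals(df: pd.DataFrame) -> pd.DataFrame:
    """Sum interval readings into per-day totals for Value, Cost, and Carbon,
    rounded to 4 decimal places."""
    return (
        df.assign(Date=pd.to_datetime(df["Timestamp"]).dt.date)
          .groupby(["ObjectPropertyId", "Date"], as_index=False)[["Value", "Cost", "Carbon"]]
          .sum()
          .round(4)
    )

readings = pd.DataFrame({
    "ObjectPropertyId": ["s1", "s1", "s1"],
    "Timestamp": ["2024-01-01 00:00", "2024-01-01 00:15", "2024-01-02 00:00"],
    "Value": [1.0, 2.0, 3.0],
    "Cost": [0.1, 0.2, 0.3],
    "Carbon": [0.01, 0.02, 0.03],
})
totals = daily_totals(readings)  # one row per sensor per calendar day
```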
In the `_utils` module:
- Set max retries to 10 if more than 10 was passed in the `requests_retry_session2()` method
## 0.6.2
### Modified
In the `integration` module:
- Added new functionality to the helper method `update_last_record()` and renamed it to `update_last_record_property_value()`
- Requires an additional column named `Value` on the `pandas.DataFrame` passed via `df`,
which contains the value at the UTC datetime of `LastRecord`
- This helper method is called internally by the `upsert_timeseries()` method
## 0.6.0
### Added
In the `integration` module:
- Added new helper method `update_last_record()` to update the last record datetime for sensors.
- This method is called internally within the `upsert_timeseries()` method and sets the last record for each sensor to their maximum datetime present in the data_frame passed.
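Setting the last record to each sensor's maximum datetime can be sketched as follows (a hypothetical illustration with assumed column names; `upsert_timeseries()` performs this internally):

```python
import pandas as pd

def last_record_per_sensor(df: pd.DataFrame) -> pd.DataFrame:
    """Maximum Timestamp present per sensor (ObjectPropertyId)."""
    return (
        df.assign(Timestamp=pd.to_datetime(df["Timestamp"]))
          .groupby("ObjectPropertyId", as_index=False)["Timestamp"].max()
    )

frame = pd.DataFrame({
    "ObjectPropertyId": ["a", "a", "b"],
    "Timestamp": ["2024-01-01 00:00", "2024-01-01 00:15", "2024-01-01 00:05"],
})
last = last_record_per_sensor(frame)  # one max datetime per sensor
```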
### Modified
In the `integration` module:
- Added optional columns `ImportStatus`, `PointClassName` and `EntityClassName` to the defined column list for the dataframe passed to the `upsert_discovered_records()` method.
- Allowed Values for the `ImportStatus` column are:
- The `PointClassName` and `EntityClassName` fields allow the brick class for a given point and piece of equipment to be defined as part of the upsert. The `EquipmentLabel` required field is used to set the EntityName.
- Added optional columns `PointClassName` and `EntityClassName` to the defined column list for the dataframe passed to `upsert_device_sensors()` method.
- The `PointClassName` and `EntityClassName` fields allow the brick class for a given point and piece of equipment to be defined as part of the upsert. The `EquipmentLabel` required field is used to set the EntityName.
In the `automation` module:
- Added optional parameter `run_in_minutes` to delay running of datafeed in minutes.
- This parameter is set to 0 (zero) by default.
- Updated the `register_task()` method to check whether the `TaskID` and/or the `TaskName` already exist before registering.
In the `utils` module:
- When uploading CSV files for ingestion, they are now partitioned into files of at most 3 MB each.
- This primarily affects the `upsert_timeseries()` method, which had started to hit size limits for streaming ingestion to ADX.
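The 3 MB partitioning can be sketched as a greedy grouping of CSV lines by encoded size (a hypothetical illustration, not the library's implementation):

```python
MAX_BYTES = 3 * 1024 * 1024  # 3 MB per uploaded file

def partition_csv_lines(lines, max_bytes=MAX_BYTES):
    """Greedily group CSV lines into chunks whose encoded size stays under max_bytes."""
    chunks, current, size = [], [], 0
    for line in lines:
        n = len(line.encode("utf-8")) + 1  # +1 for the trailing newline
        if current and size + n > max_bytes:
            chunks.append(current)
            current, size = [], 0
        current.append(line)
        size += n
    if current:
        chunks.append(current)
    return chunks
```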
## 0.5.14
### Added
In the `integration` module:
- Added a `generate_summaries` flag, defaulting to `True`, which controls whether a request is sent to
generate summaries for the given datetimes from the passed dataframe readings.
## 0.5.12
### Added
In the `compute` module:
- Added `CostCarbonCalculation` class that handles the calculation of Carbon
- Has a method called `compute_carbon` to calculate Carbon for sensors
In the `ApiInputs` object from `initialize`:
- Added `iot_url` as config to use across python modules
## 0.5.11
### Added
In the `controls` module:
- Added `submit_control_continue` method to handle control requests with continuation logic
- Added `add_control_component` method in conjunction to `submit_control_continue` method to set control components
that details the column mapping for the required data frame to send control requests.
## 0.5.10
### Added
In the `controls` module:
- Added handling of the `DefaultControlValue` column in the `submit_control_request` method
- For sensors without a Priority Array, if the control command has a timeout, this field (when present) is used on revert; otherwise the present value before the write is used.
- `DefaultControlValue` will be ignored for sensors with a priority array.
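The revert rule can be sketched as follows (a hypothetical illustration of the logic described above, not the package's code):

```python
def revert_value(has_priority_array, has_timeout, default_control_value, present_value):
    """Pick the value used to revert a control write after its timeout expires."""
    if has_priority_array:
        return None  # DefaultControlValue is ignored; revert is handled via the priority array
    if has_timeout and default_control_value is not None:
        return default_control_value
    return present_value  # fall back to the value read before the write

# Sensor without a priority array, timed-out write: revert to the default control value
chosen = revert_value(False, True, 21.5, 19.0)
```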
## 0.5.8
### Fixed
In the `analytics` module:
- Fixed bug on AU datacentre API endpoint construction.
## 0.5.7
### Modified
In the `controls` module:
- Modified `submit_control` method
- Returns sensor control values upon control request acknowledgement
## 0.5.5
### Added
In the `pipeline` module:
- Added a new task `IQTask`
- Additional abstract property `module_type` that accepts strings. This should define the type of IQ module.
- Abstract method `process` must be instantiated for the task to be registered.
### Modified
In the `pipeline` module:
- Updated the `Automation.register_task()` method to accept tasks that subclass the new `IQTask`.
## 0.5.4
### Added
In the `integration` module:
- Added new function `upsert_reservations()`
- Upserts data to the ReservationHistory table
- Two attributes added to assist with creation of the input dataframe:
- `upsert_reservations.df_required_columns` - returns list of required columns for the input `df`
- `upsert_reservations.df_optional_columns` - returns list of optional columns for the input `df`
- The following datetime fields are required and must use the `local_date_time_cols` and `utc_date_time_cols`
parameters to define whether their values are in site-local timezone or UTC timezone:
- `CreatedDate`
- `LastModifiedDate`
- `ReservationStart`
- `ReservationEnd`
- Added new function `upsert_device_sensors_iq`
- Same functionality as `upsert_device_sensors`, but modified/simplified to work with Switch IQ
- Tags are included in each row of the dataframe passed for upsert, instead of as a separate list as in the original
In the `authentication` module:
- Customized `get_switch_credentials` with custom port instead of a fixed one
- `initialize` function now has `custom_port` parameter for custom port settings when authenticating
In the `controls` module:
- Modified the `submit_control` function to return a single consolidated dataframe with added `status` and `writeStatus` columns
that flag whether the control request was successful, instead of the previous two separate dataframes
- Modified the `submit_control` function to process the dataframe in pages when submitting control requests, instead of
sending them all in one go
- Modified `_mqtt` class to add Gateway Connected check before sending/submitting control request to the MQTT Broker.
## 0.5.3
### Added
- In the `integration` module:
- Added `override_existing` parameter in `upsert_discovered_records`
- Flags whether the values passed to `df` will override existing integration records. Only valid when running locally,
not on a deployed task triggered via the UI.
- Defaults to False
## 0.5
### Added
- In the `pipeline` module:
- Added a new task type called `Guide`.
- this task type should be sub-classed in concert with one of the Task sub-classes when deploying a guide to the
marketplace.
- Added a new method to the `Automation` class called `register_guide_task()`
- this method is used to register tasks that sub-class the `Guide` task and also posts form files to blob and
registers the guide to the Marketplace.
- New `_guide` module - only to be referenced when doing initial development of a Guide
- `guide`'s `local_start` method
- Allows running a mock guides engine locally to debug `Guide` task types with the Form Kit playground.
### Fixed
- In `controls` module:
- modify `submit_control` method parameters - typings
- remove extra columns from payload to IoT API requests
## 0.4.9
### Added
- New method added in `automation` module:
- `run_data_feed()` - Run a python job based on a data feed id. This will be sent to the queue for processing and will
undergo the same procedure as the rest of the data feeds.
- Required parameters are `api_inputs` and `data_feed_id`
- This has a restriction of only allowing an AnalyticsTask type datafeed to be run and deployed as a Timer
- New method added in `analytics` module:
- `upsert_performance_statistics` - this method should only be used by tasks used to populate the Portfolio
Benchmarking feature in the Switch Automation platform
- New `controls` module added and new method added to this module:
- `submit_control()` - method to submit control of sensors
- this method returns a tuple: `(control_response, missing_response)`:
- `control_response` - the list of sensors that were acknowledged and processed by the MQTT message broker
- `missing_response` - the list of sensors caught by the connection `time_out` (default 30 seconds),
meaning their responses were no longer waited on by the python package.
Increasing the timeout can potentially help with this.
### Fixed
- In the `integration` module, minor fixes to:
- An unhandled exception when using `pandas==2.1.1` on the following functions:
- `upsert_sites()`
- `upsert_device_sensors()`
- `upsert_device_sensors_ext()`
- `upsert_workorders()`
- `upsert_timeseries_ds()`
- `upsert_timeseries()`
- Handle deprecation of `pandas.DataFrame.append()` on the following functions:
- `upsert_device_sensors()`
- `upsert_device_sensors_ext()`
- An unhandled exception for `connect_to_sql()` function when the internal API call within
`_get_sql_connection_string()` fails.
## 0.4.8
### Added
- New class added to the `pipeline` module:
- `BlobTask` - This class is used to create integrations that post data to the Switch Automation Platform using a
blob container & Event Hub Queue as the source.
- Please Note: This task type requires external setup in Azure by Switch Automation Developers before a task can be
registered or deployed.
- requires `process_file()` abstract method to be created when sub-classing
- New method, `deploy_as_on_demand_data_feed()` added to the `Automation` class of the `pipeline` module
- this new method is only applicable for tasks that subclass the `BlobTask` base class.
- In the `integration` module, new helper methods have been added:
- `connect_to_sql()` method creates a pyodbc connection object to enable easier querying of the SQL database via the
`pyodbc` library
- `amortise_across_days()` method enables easier amortisation of data across days in a period, either inclusive or
exclusive of end date.
- `get_metadata_where_clause()` method enables creation of the `sql_where_clause` for the `get_device_sensors()` method,
where for each metadata key the SQL checks it is not null.
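The day-amortisation described above can be sketched as follows (a hypothetical signature illustrating the inclusive/exclusive behaviour; the actual `amortise_across_days()` API may differ):

```python
from datetime import date, timedelta

def amortise_across_days(value, start, end, inclusive=True):
    """Spread value evenly across each day from start to end."""
    n_days = (end - start).days + (1 if inclusive else 0)
    if n_days <= 0:
        raise ValueError("period must contain at least one day")
    per_day = value / n_days
    return {start + timedelta(days=i): per_day for i in range(n_days)}

# 30 units over 1-3 January, inclusive of the end date: three days of 10 each
split = amortise_across_days(30.0, date(2024, 1, 1), date(2024, 1, 3))
```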
- In the `error_handlers` module:
- `check_duplicates()` method added to check for duplicates & post appropriate errors to Task Insights UI in the
Switch Automation platform.
- In the `_utils._utils` module:
- `requests_retry_session2` helper function added to enable automatic retries of API calls
### Updated
- In the `integration` module:
- New parameter `include_removed_sites` added to the `get_sites()` function.
- Determines whether or not to include sites marked as "IsRemoved" in the returned dataframe.
- Defaults to False, indicating removed sites will not be included.
- Updated the `get_device_sensors()` method to check whether the requested metadata keys or requested
tag groups exist for the portfolio, raising an exception if they don't.
- New parameter `send_notification` added to the `upsert_timeseries()` function.
- This enables Iq Notification messages to be sent when set to `True`
- Defaults to `False`
- For the `get_sites()`, `get_device_sensors()` and `get_data()` functions, additional parameters have
been added to allow customisation of the newly implemented retry logic:
- `retries : int`
- Number of retries performed before returning the last retry instance's response status. Max retries = 10.
Defaults to 0 currently for backwards compatibility.
- `backoff_factor`
- A backoff factor to apply between attempts after the second try (most errors are resolved immediately by a
second try without a delay). The delay applied between attempts is
`backoff_factor * (2 ** (retry_count - 1))` seconds
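The backoff formula above works out to the following delays (plain Python sketch):

```python
def retry_delay(backoff_factor: float, retry_count: int) -> float:
    """Seconds to wait before a given retry attempt."""
    return backoff_factor * (2 ** (retry_count - 1))

# With backoff_factor=0.5, successive retries wait 0.5s, 1s, 2s, 4s, ...
delays = [retry_delay(0.5, n) for n in range(1, 5)]
```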
- In the `error_handlers` module:
- For the `validate_datetime` function, added two new parameters to enable automatic
posting of errors to the Switch Platform:
- `errors` : boolean, defaults to False. To enable posting of errors, set to True.
- `api_inputs`: defaults to None. Needs to be set to the object returned from switch_api.initialize() if `errors=True`.
### Fixed
- In the `integration` module:
- Resolved outlier scenario resulting in unhandled exception on the `upsert_sites()` function.
- Minor fix to the `upsert_discovered_records()` method to handle the case when unexpected columns
are present in the dataframe passed to `df` input parameter
## 0.4.6
### Added
- Task Priority and Task Framework data feed deployment settings
- Task Priority and Task Framework are now available to set when deploying data feeds
- Task Priority
- Determines the priority of the datafeed tasks when processing.
- This equates to how much resource would be allotted to run the task
- Available options are: `default`, `standard`, or `advanced`.
- set to `advanced` for higher resource when processing data feed task
- Defaults to 'default'.
- Task Framework
- Determines the framework of the datafeed tasks when processing.
- 'PythonScriptFramework' for the old task runner engine.
- 'TaskInsightsEngine' for the new task running in container apps.
- Defaults to 'PythonScriptFramework'
## 0.4.5
### Added
- Email Sender Module
- Send emails to active users within a Portfolio in Switch Automation Platform
- Limitations:
- Emails cannot be sent to users outside of the Portfolio including other users within the platform
- Maximum of five attachments per email
- Each attachment has a maximum size of 5 MB
- See function code documentation and usage example below
- New `generate_filepath` method to provide a filepath where files can be stored
- Works well with the attachment feature of the Email Sender Module. Store files in the generated filepath of this method and pass into email attachments
- See function code documentation and usage example below
### Email Sender Usage
```python
import switch_api as sw
sw.email.send_email(
    api_inputs=api_inputs,
    subject='',
    body='',
    to_recipients=[],
    cc_recipients=[],  # Optional
    bcc_recipients=[],  # Optional
    attachments=['/file/path/to/attachment.csv'],  # Optional
    conversation_id=''  # Optional
)
```
### generate_filepath Usage
```python
import switch_api as sw
generated_attachment_filepath = sw.generate_filepath(api_inputs=api_inputs, filename='generated_attachment.txt')
# Example of where it could be used
sw.email.send_email(
    ...
    attachments=[generated_attachment_filepath]
    ...
)
```
### Fixed
- Issue where `upsert_device_sensors_ext` method was not posting metadata and tag_columns to API
## 0.3.3
### Added
- New `upsert_device_sensors_ext` method to the `integration` module.
- Compared to existing `upsert_device_sensors` following are supported:
- Installation Code or Installation Id may be provided
- BUT cannot provide a mix of the two; all must have either the code or the id, not both.
- DriverClassName
- DriverDeviceType
- PropertyName
### Added Feature - Switch Python Extensions
- Extensions may be used in Task Insights and Switch Guides for code reuse
- Extensions may be located in any directory structure within the repo where the usage scripts are located
- May need to adjust your environment to detect the files if you're not running a project environment
- Tested on VSCode and PyCharm - contact Switch Support for issues.
#### Extensions Usage
```python
import switch_api as sw
# Single import line per extension
from extensions.my_extension import MyExtension
@sw.extensions.provide(field="some_extension")
class MyTask:
    some_extension: MyExtension

if __name__ == "__main__":
    task = MyTask()
    task.some_extension.do_something()
```
#### Extensions Registration
```python
import uuid
import switch_api as sw

class SimpleExtension(sw.extensions.ExtensionTask):
    @property
    def id(self) -> uuid.UUID:
        # Unique ID for the extension.
        # Generate in CLI using:
        # python -c 'import uuid; print(uuid.uuid4())'
        return uuid.UUID('46759cfe-68fa-440c-baa9-c859264368db')

    @property
    def description(self) -> str:
        return 'Extension with a simple get_name function.'

    @property
    def author(self) -> str:
        return 'Amruth Akoju'

    @property
    def version(self) -> str:
        return '1.0.1'

    def get_name(self):
        return "Simple Extension"

# Scaffold code for registration. This will not be persisted in the extension.
if __name__ == '__main__':
    task = SimpleExtension()
    api_inputs = sw.initialize(api_project_id='<portfolio-id>')

    # Usage test
    print(task.get_name())

    # =================================================================
    # REGISTER TASK & DATAFEED ========================================
    # =================================================================
    register = sw.pipeline.Automation.register_task(api_inputs, task)
    print(register)
```
### Updated
- get_data now has an optional parameter to return a pandas.DataFrame or JSON
## 0.2.27
### Fix
- Issue where Timezone DST Offsets API response of `upsert_timeseries` in `integration` module was handled incorrectly
## 0.2.26
### Updated
- Optional `table_def` parameter on `upsert_data`, `append_data`, and `replace_data` in `integration` module
- Enable clients to specify the table structure. It will be merged to the inferred table structure.
- `list_deployments` in Automation module now provides `Settings` and `DriverId` associated with the deployments
## 0.2.25
### Updated
- Update handling of empty Timezone DST Offsets of `upsert_timeseries` in `integration` module
## 0.2.24
### Updated
- Fix default `ingestion_mode` parameter value to 'Queue' instead of 'Queued' on `upsert_timeseries` in `integration` module
## 0.2.23
### Updated
- Optional `ingestion_mode` parameter on `upsert_timeseries` in `integration` module
- Include `ingestionMode` in json payload passed to backend API
- `IngestionMode` type must be `Queue` or `Stream`
- Default `ingestion_mode` parameter value in `upsert_timeseries` is `Queue`
- To enable table streaming ingestion, please contact **helpdesk@switchautomation.com** for assistance.
## 0.2.22
### Updated
- Optional `ingestion_mode` parameter on `upsert_data` in `integration` module
- Include `ingestionMode` in json payload passed to backend API
- `IngestionMode` type must be `Queue` or `Stream`
- Default `ingestion_mode` parameter value in `upsert_data` is `Queue`
- To enable table streaming ingestion, please contact **helpdesk@switchautomation.com** for assistance.
### Fix
- sw.pipeline.logger handlers stacking
## 0.2.21
### Updated
- Fix on `get_data` method in `dataset` module
- Sync parameter structure to backend API for `get_data`
- List of dict containing properties of `name`, `value`, and `type` items
- `type` property must be one of subset of the new Literal `DATA_SET_QUERY_PARAMETER_TYPES`
## 0.2.20
### Added
- Newly supported Azure Storage Account: GatewayMqttStorage
- An optional property on QueueTask to specific QueueType
- Default: DataIngestion
## 0.2.19
### Fixed
- Fix on `upsert_timeseries` method in `integration` module
- Normalized TimestampId and TimestampLocalId seconds
- Minor fix on `upsert_entities_affected` method in `integration` utils module
- Prevent upsert entities affected count when data feed file status Id is not valid
- Minor fix on `get_metadata_keys` method in `integration` helper module
- Fix for issue when a portfolio does not contain any values in the ApiMetadata table
## 0.2.18
### Added
- Added new `is_specific_timezone` parameter in `upsert_timeseries` method of `integration` module
- Accepts a timezone name as the specific timezone used by the source data.
- Can either be of type str or bool and defaults to the value of False.
- Cannot have value if 'is_local_time' is set to True.
- Retrieve list of available timezones using 'get_timezones' method in `integration` module
| is_specific_timezone | is_local_time | Description |
| -------------------- | ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| False | False | Datetimes in provided data is already in UTC and should remain as the value of Timestamp. The TimestampLocal (conversion to site-local Timezone) is calculated. |
| False | True | Datetimes in provided data is already in the site-local Timezone & should be used to set the value of the TimestampLocal field. The UTC Timestamp is calculated |
| Has Value | True | NOT ALLOWED |
| Has Value | False | Both Timestamp and TimestampLocal fields are calculated. Datetime is converted to UTC, then to local. |
| True | | NOT ALLOWED |
| '' (empty string) | | NOT ALLOWED |
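The allowed cases in the table can be illustrated with Python's `zoneinfo` (a hypothetical sketch of the conversion logic, not the library's implementation):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

UTC = ZoneInfo("UTC")

def to_timestamp_fields(naive, site_tz, is_local_time=False, is_specific_timezone=False):
    """Derive (Timestamp in UTC, TimestampLocal) per the table's allowed cases."""
    site = ZoneInfo(site_tz)
    if is_specific_timezone and is_local_time:
        raise ValueError("is_specific_timezone cannot be combined with is_local_time=True")
    if is_specific_timezone:
        # Source data is in the named timezone: convert to UTC, then to site-local.
        utc = naive.replace(tzinfo=ZoneInfo(is_specific_timezone)).astimezone(UTC)
    elif is_local_time:
        # Source data is already site-local; the UTC Timestamp is calculated.
        utc = naive.replace(tzinfo=site).astimezone(UTC)
    else:
        # Source data is already UTC; TimestampLocal is calculated.
        utc = naive.replace(tzinfo=UTC)
    return utc, utc.astimezone(site)

reading_time = datetime(2024, 1, 1, 12, 0)
# January in Sydney is AEDT (UTC+11), so 12:00 UTC is 23:00 local
utc_ts, local_ts = to_timestamp_fields(reading_time, "Australia/Sydney")
```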
### Fixed
- Minor fix on `upsert_tags` and `upsert_device_metadata` methods in `integration` module
- List of required_columns was incorrectly being updated when these functions were called
- Minor fix on `upsert_event_work_order_id` method in `integration` module when attempting to update status of an Event
### Updated
- Update on `DiscoveryIntegrationInput` namedtuple - added `job_id`
- Update `upsert_discovered_records` method required columns in `integration` module
- add required `JobId` column for Data Frame parameter
## 0.2.17
### Fixed
- Fix on `upsert_timeseries()` method in `integration` module for duplicate records in ingestion files
- records whose Timestamp falls in the exact DST start created 2 records with identical values but different TimestampLocal
- one has the TimestampLocal of a DST and the other does not
### Updated
- Update on `get_sites()` method in `integration` module for `InstallationCode` column
- when the `InstallationCode` value is null in the database it returns an empty string
- the `InstallationCode` column is explicitly cast to dtype 'str'
## 0.2.16
### Added
- Added new 5 minute interval for `EXPECTED_DELIVERY` Literal in `automation` module
- support for data feed deployments Email, FTP, Upload, and Timer
- usage: expected_delivery='5min'
### Fixed
- Minor fix on `upsert_timeseries()` method using `data_feed_file_status_id` parameter in `integration` module.
- `data_feed_file_status_id` parameter value now synced between process records and ingestion files when supplied
### Updated
- Reduced ingestion files records chunking by half in `upsert_timeseries()` method in `integration` module.
- from 100k records chunk down to 50k records chunk
## 0.2.15
### Updated
- Optimized `upsert_timeseries()` method memory upkeep in `integration` module.
## 0.2.14
### Fixed
- Minor fix on `invalid_file_format()` method creating structured logs in `error_handlers` module.
## 0.2.13
### Updated
- Freeze Pandera[io] version to 0.7.1
- PandasDtype has been deprecated since 0.8.0
### Compatibility
- Ensure local environment is running Pandera==0.7.1 to match cloud container state
- Downgrade/Upgrade otherwise by running:
- pip uninstall pandera
- pip install switch_api
## 0.2.12
### Added
- Added `upsert_tags()` method to the `integration` module.
- Upsert tags to existing sites, devices, and sensors
- Upserting of tags are categorised by the tagging level which are Site, Device, and Sensor level
- Input dataframe requires an `Identifier` column whose value depends on the tagging level specified
- For Site tag level, InstallationIds are expected to be in the `Identifier` column
- For Device tag level, DeviceIds are expected to be in the `Identifier` column
- For Sensor tag level, ObjectPropertyIds are expected to be in the `Identifier` column
- Added `upsert_device_metadata()` method to the `integration` module.
- Upsert metadata to existing devices
### Usage
- `upsert_tags()`
- sw.integration.upsert_tags(api_inputs=api_inputs, df=raw_df, tag_level='Device')
- sw.integration.upsert_tags(api_inputs=api_inputs, df=raw_df, tag_level='Sensor')
- sw.integration.upsert_tags(api_inputs=api_inputs, df=raw_df, tag_level='Site')
- `upsert_device_metadata()`
- sw.integration.upsert_device_metadata(api_inputs=api_inputs, df=raw_df)
## 0.2.11
### Added
- New `cache` module that handles cache data related transactions
- `set_cache` method that stores data to cache
- `get_cache` method that gets stored data from cache
- Stored data can be scoped / retrieved into three categories namely Task, Portfolio, and DataFeed scopes
- For Task scope,
- Data cache can be retrieved by any Portfolio or Datafeed that runs in same Task
- provide the TaskId (`self.id` when calling from the driver)
- For DataFeed scope,
- Data cache can be retrieved (or set) within the Datafeed deployed in portfolio
- Provide UUID4 for local testing. api_inputs.data_feed_id will be used when running in the cloud.
- For Portfolio scope:
- Data cache can be retrieved (or set) by any Datafeed deployed in portfolio
- scope_id will be ignored and api_inputs.api_project_id will be used.
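The scope resolution rules can be sketched as follows (a hypothetical illustration; the real cache key derivation is internal to the `cache` module):

```python
def resolve_cache_key(scope, scope_id=None, api_project_id=None, data_feed_id=None):
    """Map a cache scope to its partition key per the rules above."""
    if scope == "Task":
        # Shared by any portfolio or data feed running the same task
        return ("Task", scope_id)
    if scope == "DataFeed":
        # scope_id (a UUID4) is used locally; the cloud uses api_inputs.data_feed_id
        return ("DataFeed", data_feed_id or scope_id)
    if scope == "Portfolio":
        # scope_id is ignored; api_inputs.api_project_id is used
        return ("Portfolio", api_project_id)
    raise ValueError(f"unknown cache scope: {scope}")

key = resolve_cache_key("Portfolio", scope_id="ignored", api_project_id="proj-123")
```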
## 0.2.10
### Fixed
- Fixed issue with `upsert_timeseries_ds()` method in the `integration` module where required fields such as
`Timestamp`, `ObjectPropertyId`, `Value` were being removed.
## 0.2.9
### Added
- Added `upsert_timeseries()` method to the `integration` module.
- Data ingested into table storage in addition to ADX Timeseries table
- Carbon calculation performed where appropriate
- Please note: If carbon or cost are included as fields in the `Meta` column then no carbon / cost calculation will be performed
### Changed
- Added `DriverClassName` to required columns for `upsert_discovered_records()` method in the `integration` module
### Fixed
- A minor fix to 15-minute interval in `upsert_timeseries_ds()` method in the `integration` module.
## 0.2.8
### Changed
- For the `EventWorkOrderTask` class in the `pipeline` module, the `check_work_order_input_valid()` and the
`generate_work_order()` methods expect an additional 3 keys to be included by default in the dictionary passed to
the `work_order_input` parameter:
- `InstallationId`
- `EventLink`
- `EventSummary`
### Fixed
- Issue with the header/payload passed to the API within the `upsert_event_work_order_id()`
function of the `integration` module.
## 0.2.7
### Added
- New method, `deploy_as_on_demand_data_feed()` added to the `Automation` class of the `pipeline` module
- this new method is only applicable for tasks that subclass the `EventWorkOrderTask` base class.
### Changed
- The `data_feed_id` is now a required parameter, not optional, for the following methods on the `Automation` class of
the `pipeline` module:
- `deploy_on_timer()`
- `deploy_as_email_data_feed()`
- `deploy_as_ftp_data_feed()`
- `deploy_as_upload_data_feed()`
- The `email_address_domain` is now a required parameter, not optional, for the `deploy_as_email_data_feed()` method
on the `Automation` class of the `pipeline` module.
### Fixed
- issue with payload on `switch_api.pipeline.Automation.register_task()` method for `AnalyticsTask` and
`EventWorkOrderTask` base classes.
## 0.2.6
### Fixed
- Fixed issues on 2 methods in the `Automation` class of the `pipeline` module:
- `delete_data_feed()`
- `cancel_deployed_data_feed()`
### Added
In the `pipeline` module:
- Added new class `EventWorkOrderTask`
- This task type is for generation of work orders in 3rd party systems via the Switch Automation Platform's Events UI.
### Changed
In the `pipeline` module:
- `AnalyticsTask` - added a new method & a new abstract property:
- `analytics_settings_definition` abstract property - defines the required inputs (& how these are displayed in the
Switch Automation Platform UI) for the task to successfully run
- added `check_analytics_settings_valid()` method that should be used to validate the
`analytics_settings` dictionary passed to the `start()` method contains the required keys for the task to
successfully run (as defined by the `analytics_settings_definition`)
In the `error_handlers` module:
- In the `post_errors()` function, the parameter `errors_df` is renamed to `errors` and now accepts strings in
addition to pandas.DataFrame
### Removed
Due to cutover to a new backend, the following have been removed:
- `run_clone_modules()` function from the `analytics` module
- the entire `platform_insights` module including the :
- `get_current_insights_by_equipment()` function
## 0.2.5
### Added
- The `Automation` class of the `pipeline` module has 2 new methods added:
- `delete_data_feed()`
- Used to delete an existing data feed and all related deployment settings
- `cancel_deployed_data_feed()`
- used to cancel the specified `deployment_type` for a given `data_feed_id`
- replaces and expands the functionality previously provided in the `cancel_deployed_timer()` method which has been
removed.
### Removed
- Removed the `cancel_deployed_timer()` method from the `Automation` class of the `pipeline` module
- this functionality is available through the new `cancel_deployed_data_feed()` method when `deployment_type`
parameter set to `['Timer']`
## 0.2.4
### Changed
- New parameter `data_feed_name` added to the 4 deployment methods in the `pipeline` module's `Automation` class
- `deploy_as_email_data_feed()`
- `deploy_as_ftp_data_feed()`
- `deploy_as_upload_data_feed()`
- `deploy_on_timer()`
## 0.2.3
### Fixed
- Resolved minor issue on `register_task()` method for the `Automation` class in the `pipeline` module.
## 0.2.2
### Fixed
- Resolved minor issue on `upsert_discovered_records()` function in `integration` module related to device-level
and sensor-level tags.
## 0.2.1
### Added
- New class added to the `pipeline` module
- `DiscoverableIntegrationTask` - for API integrations that are discoverable.
- requires `process()` & `run_discovery()` abstract methods to be created when sub-classing
- additional abstract property, `integration_device_type_definition`, required compared to base `Task`
- New function `upsert_discovered_records()` added to the `integration` module
- Required for the `DiscoverableIntegrationTask.run_discovery()` method to upsert discovery records to Build -
Discovery & Selection UI
### Fixed
- Set minimum msal version required for the switch_api package to be installed.
## 0.2.0
Major overhaul of the switch_api package: the API used by the package was completely replaced.
### Changed
- The `user_id` parameter has been removed from the `switch_api.initialise()` function.
- Authentication of the user is now done via Switch Platform SSO. The call to initialise will trigger a web browser
window to open to the platform login screen.
- Note: each call to initialise for a portfolio in a different datacentre will open a browser window and require the
  user to input their username & password.
- For initialise on a different portfolio within the same datacentre, the authentication is cached, so the user will
  not be asked to log in again.
- `api_inputs` is now a required parameter for `switch_api.pipeline.Automation.register_task()`
- The `deploy_on_timer()`, `deploy_as_email_data_feed()`, `deploy_as_upload_data_feed()`, and
`deploy_as_ftp_data_feed()` methods on the `switch_api.pipeline.Automation` class have an added parameter:
`data_feed_id`
- This new parameter allows the user to update an existing deployment for the portfolio specified in the `api_inputs`.
- If `data_feed_id` is not supplied, a new data feed instance will be created (even if portfolio already has that
task deployed to it)
## 0.1.18
### Changed
- Removed rebuild of the ObjectProperties table in ADX on call to `upsert_device_sensors()`
- Removed rebuild of the Installation table in ADX on call to `upsert_sites()`
## 0.1.17
### Fixed
- Fixed issue with `deploy_on_timer()` method of the `Automation` class in the `pipeline` module.
- Fixed column header issue with the `get_tag_groups()` function of the `integration` module.
- Fixed missing Meta column on table generated via `upsert_workorders()` function of the `integration` module.
### Added
- New method for uploading custom data to blob `Blob.custom_upload()`
### Updated
- Updated the `upsert_device_sensors()` to improve performance and aid release of future functionality.
## 0.1.16
### Added
To the `pipeline` module:
- New method `data_feed_history_process_errors()`, to the `Automation` class.
- This method returns a dataframe containing the distinct set of error types encountered for a specific
`data_feed_file_status_id`
- New method `data_feed_history_errors_by_type()`, to the `Automation` class.
- This method returns a dataframe containing the actual errors identified for the specified `error_type` and
`data_feed_file_status_id`
Additional logging was also incorporated in the backend to support the Switch Platform UI.
### Fixed
- Fixed issue with `register()` method of the `Automation` class in the `pipeline` module.
### Changed
For the `pipeline` module:
- Standardised the following methods of the `Automation` class to return pandas.DataFrame objects.
- Added additional error checks to ensure only allowed values are passed to the various `Automation` class methods
for the parameters:
- `expected_delivery`
- `deploy_type`
- `queue_name`
- `error_type`
For the `integration` module:
- Added additional error checks to ensure only allowed values are passed to `post_errors` function for the parameters:
- `error_type`
- `process_status`
For the `dataset` module:
- Added additional error check to ensure only allowed values are provided for the `query_language` parameter of the
`get_data` function.
For the `_platform` module:
- Added additional error checks to ensure only allowed values are provided for the `account` parameter.
## 0.1.14
### Changed
- Updated `get_device_sensors()` so it no longer auto-detects the data type, to prevent issues such as stripping
  leading zeroes from metadata values.
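The leading-zero problem described above is easy to reproduce with plain Python: treating a metadata value as numeric destroys information that a string column preserves. This is a generic illustration, not switch_api code:

```python
import csv
import io

# A metadata value such as a device code can carry a meaningful leading zero.
raw = "device,code\npump-1,00742\n"

rows = list(csv.DictReader(io.StringIO(raw)))
as_text = rows[0]["code"]      # kept as a string: "00742"
as_number = str(int(as_text))  # auto-detected as an integer: "742" — zeroes lost

# This is why reading metadata without data-type auto-detection is safer.
```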
## 0.1.13
### Added
To the `pipeline` module:
- Added a new method, `data_feed_history_process_output`, to the `Automation` class
## 0.1.11
### Changed
- Updated access to `logger`: now available as `switch_api.pipeline.logger()`
- Updated function documentation
## 0.1.10
### Changed
- Updated the calculation of min/max date (for timezone conversions) inside the `upsert_device_sensors` function as
the previous calculation method will not be supported in a future release of numpy.
### Fixed
- Fixed issue with retrieval of tag groups and tags via the functions:
- `get_sites`
- `get_device_sensors`
## 0.1.9
### Added
- New module `platform_insights`
In the `integration` module:
- New function `get_sites` added to lookup site information (optionally with site-level tags)
- New function `get_device_sensors` added to assist with lookup of device/sensor information, optionally including
either metadata or tags.
- New function `get_tag_groups` added to lookup list of sensor-level tag groups
- New function `get_metadata_keys` added to lookup list of device-level metadata keys
### Changed
- Modifications to connections to storage accounts.
- Additional parameter `queue_name` added to the following methods of the `Automation` class of the `pipeline`
module:
- `deploy_on_timer`
- `deploy_as_email_data_feed`
- `deploy_as_upload_data_feed`
- `deploy_as_ftp_data_feed`
### Fixed
In the `pipeline` module:
- Addressed issue with the schema validation for the `upsert_workorders` function
## 0.1.8
### Changed
In the `integrations` module:
- Updated to batch upserts by DeviceCode to improve reliability & performance of the `upsert_device_sensors` function.
### Fixed
In the `analytics` module:
- typing issue that caused error in the import of the switch_api package for python 3.8
## 0.1.7
### Added
In the `integrations` module:
- Added new function `upsert_workorders`
- Provides ability to ingest work order data into the Switch Automation Platform.
- Documentation provides details on required & optional fields in the input dataframe and also provides information
on allowed values for some fields.
- Two attributes are available on the function, added to assist with creation of scripts by providing the list of
    required & optional fields:
- `upsert_workorders.df_required_columns`
- `upsert_workorders.df_optional_columns`
- Added new function `get_states_by_country`:
| text/markdown | Switch Automation Pty Ltd. | null | null | null | MIT License | null | [
"Development Status :: 2 - Pre-Alpha",
"License :: OSI Approved :: MIT License",
"Intended Audience :: Other Audience",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Natural Language :: English"
] | [] | null | null | >=3.8.0 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.12 | 2026-02-20T00:41:30.399489 | switch_api-0.6.13.tar.gz | 147,210 | 97/23/10573015a010f7685e49dfee47b770b506b29500d8a33bd0ebe704544390/switch_api-0.6.13.tar.gz | source | sdist | null | false | 3d37332f00993b7d94e410ad93d4ce89 | ca14d07f85bf73fba437706c6f69777a79d7e47af7a7cce1914187b21a26290f | 972310573015a010f7685e49dfee47b770b506b29500d8a33bd0ebe704544390 | null | [] | 250 |
2.4 | AWS-CloudFormation-Diagrams | 0.2.0 | A simple CLI script to generate AWS infrastructure diagrams from AWS CloudFormation templates | # AWS CloudFormation Diagrams
[](https://github.com/philippemerle/AWS-CloudFormation-Diagrams/blob/main/LICENSE)


A simple CLI script to generate AWS infrastructure diagrams from AWS CloudFormation templates.
## Features
* Parses both YAML and JSON AWS CloudFormation templates
* Supports [140 AWS resource types and any custom resource types](https://github.com/philippemerle/AWS-CloudFormation-Diagrams/blob/main/docs/supported_resource_types.md)
* Supports `Rain::Module` resource type
* Supports `DependsOn`, `Ref`, and `Fn::GetAtt` relationships
* Generates DOT, GIF, JPEG, PDF, PNG, SVG, and TIFF diagrams
* Provides [126 generated diagram examples](https://github.com/philippemerle/AWS-CloudFormation-Diagrams/blob/main/diagrams/)
Have ideas? [Open an issue](https://github.com/philippemerle/AWS-CloudFormation-Diagrams/issues/new) or [start a discussion](https://github.com/philippemerle/AWS-CloudFormation-Diagrams/discussions/new).
## Prerequisites
The following software must be installed:
- [Python](https://www.python.org) 3.9 or higher
- `dot` command ([Graphviz](https://www.graphviz.org/))
## Installation
The following command installs the required Python dependencies, i.e., [PyYAML](https://pyyaml.org) and [Diagrams](https://diagrams.mingrammer.com/).
```sh
# using pip (pip3)
pip install PyYAML diagrams
```
## Usage
```bash
usage: aws-cfn-diagrams [-h] [-o OUTPUT] [-f FORMAT] [--embed-all-icons] filename
Generate AWS infrastructure diagrams from AWS CloudFormation templates
positional arguments:
filename the AWS CloudFormation template to process
options:
-h, --help show this help message and exit
-o, --output OUTPUT output diagram filename
-f, --format FORMAT output format, allowed formats are dot, dot_json, gif, jp2, jpe, jpeg, jpg, pdf, png, svg, tif, tiff, set to png by default
--embed-all-icons embed all icons into svg or dot_json output diagrams
```
## Examples
The folder [diagrams](https://github.com/philippemerle/AWS-CloudFormation-Diagrams/blob/main/diagrams) contains generated diagrams for most of the [AWS CloudFormation templates](https://github.com/aws-cloudformation/aws-cloudformation-templates).
The following diagram is for the WebApp template:

The following diagram is for the Gitea template with `Rain::Module`:

The following diagram is for the Gitea template without `Rain::Module`:

The following diagram is for the AutoScaling template:

The following diagram is for the EKS template:

The following diagram is for the VPC template:

## License
This project is licensed under the [Apache 2.0 License](https://github.com/philippemerle/AWS-CloudFormation-Diagrams/blob/main/LICENSE).
## Contributing
[PRs](https://github.com/philippemerle/AWS-CloudFormation-Diagrams/pulls) and [ideas](https://github.com/philippemerle/AWS-CloudFormation-Diagrams/discussions/categories/ideas) are welcome!
## Star History
[](https://www.star-history.com/#philippemerle/AWS-CloudFormation-Diagrams&type=date&legend=top-left)
| text/markdown | Philippe Merle | philippe.merle@inria.fr | Philippe Merle | philippe.merle@inria.fr | Apache-2.0 | aws cloudformation diagrams python graphviz | [
"Topic :: Software Development :: Documentation",
"Topic :: Utilities",
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Lan... | [] | https://github.com/philippemerle/AWS-CloudFormation-Diagrams | null | >=3.9 | [] | [] | [] | [
"PyYAML",
"diagrams"
] | [] | [] | [] | [
"Issues, https://github.com/philippemerle/AWS-CloudFormation-Diagrams/issues",
"Discussions, https://github.com/philippemerle/AWS-CloudFormation-Diagrams/discussions",
"Wiki, https://github.com/philippemerle/AWS-CloudFormation-Diagrams/wiki"
] | twine/6.1.0 CPython/3.13.2 | 2026-02-20T00:40:10.174214 | aws_cloudformation_diagrams-0.2.0.tar.gz | 41,382 | 79/28/8561f928285e52bd4526017262b3ec6d80d7dbfa9bf2d9f92c2cb30380c2/aws_cloudformation_diagrams-0.2.0.tar.gz | source | sdist | null | false | bb354f9bb830fd4ba867076a83f39138 | 6983d836f72396a6872020cb1307586071e852a9517d2c03064bc5e864178e77 | 79288561f928285e52bd4526017262b3ec6d80d7dbfa9bf2d9f92c2cb30380c2 | null | [
"LICENSE"
] | 0 |
2.4 | duvo-sandstorm | 0.7.1 | Run Claude agents in secure cloud sandboxes — via API, CLI, or Slack. One call. Full agent. Zero infrastructure. | # Sandstorm
Run AI agents in secure cloud sandboxes. One command. Zero infrastructure.
[](https://platform.claude.com/docs/en/agent-sdk/overview)
[](https://e2b.dev)
[](https://openrouter.ai)
[](https://pypi.org/project/duvo-sandstorm/)
[](https://www.python.org/downloads/)
[](LICENSE)
**Hundreds of AI agents running in parallel. Hours-long tasks. Tool use, file access, structured output — each in its own secure sandbox. Sounds hard. It's not.**
```bash
ds "Fetch all our webpages from git, analyze each for SEO and GEO, optimize them, and push the changes back"
```
That's it. Sandstorm wraps the [Claude Agent SDK](https://platform.claude.com/docs/en/agent-sdk/overview) in isolated [E2B](https://e2b.dev) cloud sandboxes — the agent installs packages, fetches live data, generates files, and streams every step back via SSE. When it's done, the sandbox is destroyed. Nothing persists. Nothing escapes.
### Why Sandstorm?
Most companies want to use AI agents but hit the same wall: infrastructure, security concerns, and complexity. Sandstorm removes all three. It's a simplified, open-source version of the agent runtime we built at [duvo.ai](https://duvo.ai) — battle-tested in production.
- **Any model via OpenRouter** -- swap in DeepSeek R1, Qwen 3, Kimi K2, or any of 300+ models through [OpenRouter](https://openrouter.ai)
- **Full agent power** -- Bash, Read, Write, Edit, Glob, Grep, WebSearch, WebFetch -- all enabled by default
- **Document skills built-in** -- PDF, DOCX, and PPTX processing pre-installed in every sandbox
- **Safe by design** -- every request gets a fresh VM that's destroyed after, with zero state leakage
- **Real-time streaming** -- watch the agent work step-by-step via SSE, not just the final answer
- **Configure once, query forever** -- drop a `sandstorm.json` for structured output, subagents, MCP servers, and system prompts
- **File uploads** -- send code, data, or configs for the agent to work with
- **Slack bot** -- [@mention in channels](docs/slack.md), DM threads, file uploads, streaming responses, multi-turn conversations with sandbox reuse
### Get Started
```bash
pip install duvo-sandstorm
export ANTHROPIC_API_KEY=sk-ant-...
export E2B_API_KEY=e2b_...
ds "Find the top 10 trending Python repos on GitHub and summarize each in one sentence"
```
If Sandstorm is useful, consider giving it a [star](https://github.com/tomascupr/sandstorm) — it helps others find it.
[](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Ftomascupr%2Fsandstorm&env=ANTHROPIC_API_KEY,E2B_API_KEY)
## Quickstart
### Prerequisites
- Python 3.11+
- [E2B](https://e2b.dev) API key
- [Anthropic](https://console.anthropic.com) API key or [OpenRouter](https://openrouter.ai) API key
- [uv](https://docs.astral.sh/uv/) (only for source installs)
### Install
```bash
# From PyPI
pip install duvo-sandstorm
# Or from source
git clone https://github.com/tomascupr/sandstorm.git
cd sandstorm
uv sync
```
### E2B Sandbox Template
Sandstorm ships with a public pre-built template (`work-43ca/sandstorm`) that's used automatically — no build step needed. The template includes Node.js 24, `@anthropic-ai/claude-agent-sdk`, Python 3, git, ripgrep, curl, and document processing skills (pdf, docx, pptx).
To customize the template (e.g. add system packages or pre-install other dependencies), edit `build_template.py` and rebuild:
```bash
uv run python build_template.py
```
## CLI
After installing, the `duvo-sandstorm` (or `ds`) command is available:
### Run an agent
The `query` command is the default — just pass a prompt directly:
```bash
ds "Create hello.py and run it"
ds "Analyze this repo" --model opus
ds "Build a chart" --max-turns 30 --timeout 600
ds "Fetch data" --json-output | jq '.type'
```
The explicit `query` subcommand also works: `ds query "Create hello.py"`.
### Upload files
Use `-f` / `--file` to send local files into the sandbox (repeatable):
```bash
ds "Analyze this data and find outliers" -f data.csv
ds "Compare these configs" -f prod.json -f staging.json
ds "Review this code for bugs" -f src/main.py -f src/utils.py
```
Files are uploaded to `/home/user/{filename}` before the agent starts. Only text files are supported; binary files must be sent via the [API](#file-uploads) instead.
### Start the server
```bash
ds serve # default: 0.0.0.0:8000
ds serve --port 3000 # custom port
ds serve --reload # auto-reload for development
```
### API keys
Keys are resolved in order: CLI flags > environment variables > `.env` file in current directory.
```bash
# Environment variables (most common)
export ANTHROPIC_API_KEY=sk-ant-...
export E2B_API_KEY=e2b_...
# Or CLI flags
ds "hello" --anthropic-api-key sk-ant-... --e2b-api-key e2b_...
```
## How It Works
```
Client --POST /query--> FastAPI --> E2B Sandbox (isolated VM)
<---- SSE stream <---- stdout <-- runner.mjs --> query() from Agent SDK
|-- Bash, Read, Write, Edit
|-- Glob, Grep, WebSearch, WebFetch
'-- subagents, MCP servers, structured output
```
1. Your app sends a prompt to `POST /query`
2. Sandstorm creates a fresh E2B sandbox with the Claude Agent SDK pre-installed
3. The agent runs your prompt with full tool access inside the sandbox
4. Every agent message (thoughts, tool calls, results) streams back as SSE events
5. The sandbox is destroyed when done -- nothing persists
## Features
### Structured Output
Configure in `sandstorm.json` to get validated JSON instead of free-form text:
```json
{
"output_format": {
"type": "json_schema",
"schema": {
"type": "object",
"properties": {
"summary": { "type": "string" },
"items": { "type": "array", "items": { "type": "object" } }
},
"required": ["summary", "items"]
}
}
}
```
The agent works normally (scrapes data, installs packages, writes files), then returns validated JSON in `result.structured_output`.
### Subagents
Define specialized agents in `sandstorm.json` that the main agent can delegate to:
```json
{
"agents": {
"scraper": {
"description": "Crawls websites and saves structured data to disk.",
"prompt": "Scrape the target, extract data, and save as JSON to /home/user/output/.",
"tools": ["Bash", "WebFetch", "Write", "Read"],
"model": "sonnet"
},
"report-writer": {
"description": "Reads collected data and produces formatted reports.",
"prompt": "Read all data files, synthesize findings, and generate a PDF report with charts.",
"tools": ["Bash", "Read", "Write", "Glob"]
}
}
}
```
The main agent spawns subagents via the `Task` tool when it decides they're needed.
### File Uploads
Send files in the request for the agent to work with:
```bash
curl -N -X POST https://your-sandstorm-host/query \
-d '{
"prompt": "Parse these server logs, find error spikes, and write an incident report",
"files": {
"logs/app.log": "2024-01-15T10:23:01Z ERROR [auth] connection pool exhausted\n...",
"logs/deploys.json": "[{\"sha\": \"a1b2c3\", \"ts\": \"2024-01-15T10:20:00Z\"}]"
}
}'
```
Files are written to `/home/user/{path}` in the sandbox before the agent starts. From the CLI, use `-f` / `--file` instead (see [Upload files](#upload-files)).
### Skills
Skills give the agent reusable domain knowledge via [Claude Code Skills](https://docs.anthropic.com/en/docs/claude-code/skills). Each skill is a folder with a `SKILL.md` file (plus optional scripts and references) — Sandstorm uploads them into the sandbox before the agent starts, where they become available as `/slash-commands`.
**Built-in skills:** The default sandbox template comes with three document processing skills pre-installed — **pdf**, **docx**, and **pptx**. The agent can create, edit, merge, split, and analyze documents out of the box. No configuration needed.
To add your own skills, create a skills directory with one subfolder per skill, each containing a `SKILL.md`:
```
.claude/skills/
code-review/
SKILL.md
data-analyst/
SKILL.md
```
Then point `skills_dir` in `sandstorm.json` to it:
```json
{
"skills_dir": ".claude/skills"
}
```
Each skill becomes a slash command the agent can use — a folder named `data-analyst` registers as `/data-analyst`. Names must contain only letters, numbers, hyphens, and underscores.
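That naming rule can be checked with a one-line regular expression. This is a sketch of the stated constraint (letters, numbers, hyphens, underscores), not Sandstorm's actual validator:

```python
import re

# Letters, numbers, hyphens, and underscores only, per the rule above.
SKILL_NAME = re.compile(r"^[A-Za-z0-9_-]+$")

def is_valid_skill_name(name):
    return bool(SKILL_NAME.match(name))

# "data-analyst" is valid; "data analyst" (contains a space) and "" are not.
```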
### MCP Servers
Attach external tools via [MCP](https://modelcontextprotocol.io) in `sandstorm.json`:
```json
{
"mcp_servers": {
"sqlite": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-sqlite", "/home/user/data.db"]
},
"remote-api": {
"type": "sse",
"url": "https://api.example.com/mcp/sse",
"headers": { "Authorization": "Bearer your-token" }
}
}
}
```
| Field | Type | Description |
|-------|------|-------------|
| `type` | `string` | `"stdio"`, `"http"`, or `"sse"` |
| `command` | `string` | Command for stdio servers |
| `args` | `string[]` | Command arguments |
| `url` | `string` | URL for HTTP/SSE servers |
| `headers` | `object` | Auth headers for remote servers |
| `env` | `object` | Environment variables |
### Webhooks
Track sandbox lifecycle events (created, updated, killed) via E2B webhooks. Add `webhook_url` to `sandstorm.json` for zero-config setup — the server auto-registers on startup and deregisters on shutdown:
```json
{
"webhook_url": "https://your-server.com"
}
```
Or manage manually via CLI:
```bash
ds webhook register https://your-server.com # auto-appends /webhooks/e2b, saves secret to .env
ds webhook test https://your-server.com/webhooks/e2b # verify endpoint
ds webhook list # list registered webhooks
ds webhook delete <id> # remove a webhook
```
Set `SANDSTORM_WEBHOOK_SECRET` to enable HMAC-SHA256 signature verification. The `register` command auto-generates and saves the secret to `.env` if not already set.
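HMAC-SHA256 verification on the receiving side can be sketched in a few lines. The hex encoding here is an assumption about the wire format — check the E2B webhook documentation for the exact header name and encoding:

```python
import hashlib
import hmac

def verify_signature(secret, body, signature_hex):
    """Recompute the HMAC-SHA256 of the raw request body and compare in constant time."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side-channels on the comparison.
    return hmac.compare_digest(expected, signature_hex)

body = b'{"event": "sandbox.killed"}'
sig = hmac.new(b"my-secret", body, hashlib.sha256).hexdigest()
# verify_signature("my-secret", body, sig) accepts; any other secret rejects.
```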
## Examples
Ready-to-use configs for common use cases — `cd` into any example and run:
| Example | What it does | Key features |
|---------|-------------|--------------|
| [Code Reviewer](examples/code-reviewer/) | Structured code review with severity ratings | `output_format`, `allowed_tools` |
| [Competitive Analysis](examples/competitive-analysis/) | Research and compare competitors | `output_format`, WebFetch, WebSearch |
| [Content Brief](examples/content-brief/) | Generate content briefs with SEO research | `output_format`, WebSearch |
| [Security Auditor](examples/security-auditor/) | Multi-agent security audit with OWASP skill | `agents`, `skills_dir`, `output_format` |
See [examples/](examples/) for the full feature matrix and usage guide.
## OpenRouter
Use any of 300+ models (GPT-4o, Qwen, DeepSeek, Gemini, Llama) via [OpenRouter](https://openrouter.ai). Three env vars to set up:
```bash
ANTHROPIC_BASE_URL=https://openrouter.ai/api
OPENROUTER_API_KEY=sk-or-...
ANTHROPIC_DEFAULT_SONNET_MODEL=anthropic/claude-sonnet-4 # or any model ID
```
For model remapping, per-request keys, and compatibility details, see the [full OpenRouter guide](docs/openrouter.md).
## Configuration
Sandstorm uses a two-layer config system:
| Layer | What it controls | How to set |
|-------|-----------------|------------|
| **`sandstorm.json`** | Agent behavior -- system prompt, structured output, subagents, MCP servers, skills | Config file in project root |
| **API request** | Per-call -- prompt, model, files, timeout, output format, tool/agent/skill whitelisting | JSON body on `POST /query` |
### `sandstorm.json`
Drop a `sandstorm.json` in your project root. See [Structured Output](#structured-output), [Subagents](#subagents), and [MCP Servers](#mcp-servers) for feature-specific examples.
| Field | Type | Description |
|-------|------|-------------|
| `system_prompt` | `string` | Custom instructions for the agent |
| `model` | `string` | Default model (`"sonnet"`, `"opus"`, `"haiku"`, or full ID) |
| `max_turns` | `integer` | Maximum conversation turns |
| `output_format` | `object` | JSON schema for [structured output](#structured-output) |
| `agents` | `object` | [Subagent](#subagents) definitions |
| `mcp_servers` | `object` | [MCP server](#mcp-servers) configurations |
| `skills_dir` | `string` | Path to directory containing [skills](#skills) subdirectories |
| `allowed_tools` | `list` | Restrict agent to specific tools (e.g. `["Bash", "Read"]`). `"Skill"` is auto-added when skills are present |
| `template_skills` | `boolean` | Set `true` when skills are baked into the E2B template (skips runtime upload) |
| `webhook_url` | `string` | Public URL for E2B lifecycle webhooks. Server auto-registers on startup, deregisters on shutdown |
### API Keys
Keys can live in `.env` (set once) or be passed per-request (multi-tenant). Request body overrides `.env`.
```bash
# .env -- set once, forget about it
ANTHROPIC_API_KEY=sk-ant-...
E2B_API_KEY=e2b_...
# Then just send prompts:
curl -N -X POST https://your-sandstorm-host/query \
-d '{"prompt": "Crawl docs.stripe.com/api and generate an OpenAPI spec as YAML"}'
# Or override per-request:
curl -N -X POST https://your-sandstorm-host/query \
-d '{"prompt": "...", "anthropic_api_key": "sk-ant-other", "e2b_api_key": "e2b_other"}'
```
### Providers
Sandstorm supports Anthropic (default), Google Vertex AI, Amazon Bedrock, Microsoft Azure, [OpenRouter](#openrouter), and custom API proxies. Add the env vars to `.env` and restart -- the SDK detects them automatically.
| Provider | Key env vars |
|----------|-------------|
| **Anthropic** (default) | `ANTHROPIC_API_KEY` |
| **[OpenRouter](#openrouter)** | `ANTHROPIC_BASE_URL`, `OPENROUTER_API_KEY` (see [OpenRouter](#openrouter)) |
| **Vertex AI** | `CLAUDE_CODE_USE_VERTEX=1`, `CLOUD_ML_REGION`, `ANTHROPIC_VERTEX_PROJECT_ID` |
| **Bedrock** | `CLAUDE_CODE_USE_BEDROCK=1`, `AWS_REGION`, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` |
| **Azure** | `CLAUDE_CODE_USE_FOUNDRY=1`, `AZURE_FOUNDRY_RESOURCE`, `AZURE_API_KEY` |
| **Custom proxy** | `ANTHROPIC_BASE_URL`, `ANTHROPIC_AUTH_TOKEN` (optional) |
## Dashboard
Start the server and open `http://localhost:8000/` to see a live dashboard of all agent runs:
```bash
ds serve
open http://localhost:8000
```
The dashboard auto-refreshes every 3 seconds, showing status, model, cost, turns, and duration for each run. Run history is persisted to `.sandstorm/runs.jsonl` and survives server restarts.
The `GET /runs` endpoint returns the same data as JSON for programmatic access.
> **Note:** On Vercel, run history is limited to the current function invocation (ephemeral filesystem). For persistent history, use a long-running server.
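An append-only JSONL history like `runs.jsonl` is trivial to read back: one JSON object per line, with the newest entries appended last. A generic sketch of that read path, not Sandstorm's implementation:

```python
import json

# Each run is appended as a single JSON line; reading reverses to newest-first.
lines = [
    '{"id": "a1b2c3d4", "status": "completed"}',
    '{"id": "e5f6a7b8", "status": "error"}',
]

runs = [json.loads(line) for line in lines]
newest_first = list(reversed(runs))
# newest_first[0] is the most recently appended run.
```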
## Slack Bot
Run Sandstorm agents directly in Slack — @mention in channels for quick tasks, or DM for 1:1 conversations. Responses stream in real-time, files uploaded in threads are available to the agent, and follow-up messages reuse the same sandbox.
```bash
pip install "duvo-sandstorm[slack]"
ds slack setup # interactive wizard — creates app, saves tokens to .env
ds slack start # Socket Mode (dev, no public URL needed)
```
Then `@Sandstorm <task>` in any channel. For the full guide (HTTP mode, configuration, features), see [docs/slack.md](docs/slack.md).
## API Reference
### `GET /runs`
Returns recent agent runs as a JSON array, newest first.
```json
[
{
"id": "a1b2c3d4",
"prompt": "Create hello.py and run it",
"model": "claude-sonnet-4-5-20250929",
"status": "completed",
"cost_usd": 0.069,
"num_turns": 6,
"duration_secs": 28.5,
"started_at": "2025-02-18T22:10:30Z",
"error": null,
"files_count": 0
}
]
```
### `POST /query`
| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `prompt` | `string` | Yes | -- | The task for the agent (min 1 char) |
| `anthropic_api_key` | `string` | No | `$ANTHROPIC_API_KEY` | Anthropic key (falls back to env) |
| `openrouter_api_key` | `string` | No | `$OPENROUTER_API_KEY` | OpenRouter key (falls back to env) |
| `e2b_api_key` | `string` | No | `$E2B_API_KEY` | E2B key (falls back to env) |
| `model` | `string` | No | from config | Overrides `sandstorm.json` model |
| `max_turns` | `integer` | No | from config | Overrides `sandstorm.json` max_turns |
| `timeout` | `integer` | No | `300` | Sandbox lifetime in seconds |
| `files` | `object` | No | `null` | Files to upload (`{path: content}`) |
| `output_format` | `object` | No | from config | Overrides `sandstorm.json` output_format |
| `allowed_mcp_servers` | `string[]` | No | `null` (all) | Whitelist MCP servers by name from config |
| `allowed_skills` | `string[]` | No | `null` (all) | Whitelist skills by name. Template skills are always available |
| `allowed_tools` | `string[]` | No | from config | Override allowed tools from `sandstorm.json` |
| `allowed_agents` | `string[]` | No | `null` (all) | Whitelist agents by name from config |
| `extra_agents` | `object` | No | `null` | Inline agent definitions merged with config (`{name: config}`) |
| `extra_skills` | `object` | No | `null` | Inline skill definitions merged with disk skills (`{name: markdown}`) |
**Response:** `text/event-stream`
### `POST /webhooks/e2b`
Receives E2B sandbox lifecycle events (created, updated, killed). Verifies HMAC-SHA256 signature when `SANDSTORM_WEBHOOK_SECRET` is set. Used automatically when `webhook_url` is configured in `sandstorm.json`.
### `GET /health`
Returns `{"status": "ok"}`
### SSE Event Types
| Type | Description |
|------|-------------|
| `system` | Session init -- tools, model, session ID |
| `assistant` | Agent text + tool calls |
| `user` | Tool execution results |
| `result` | Final result with `total_cost_usd`, `num_turns`, and optional `structured_output` |
| `error` | Error details (only on failure) |
## Client Examples
### Python
```python
import httpx
from httpx_sse import connect_sse
with httpx.Client() as client:
with connect_sse(
client, "POST",
"https://your-sandstorm-host/query",
json={
"prompt": "Scrape the top 50 HN stories, cluster by topic, save to output/hn.csv"
},
) as events:
for sse in events.iter_sse():
print(sse.data)
```
### TypeScript
```typescript
const res = await fetch("https://your-sandstorm-host/query", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
prompt: "Fetch recent arxiv papers on LLM agents, extract findings, write a lit review",
}),
});
const reader = res.body!.getReader();
const decoder = new TextDecoder();
while (true) {
const { done, value } = await reader.read();
if (done) break;
console.log(decoder.decode(value));
}
```
## Deployment
Sandstorm is stateless -- each request creates an independent sandbox. No shared state, no sticky sessions. For production deployment with Gunicorn, concurrent agent execution, and scaling guidance, see the [deployment guide](docs/deployment.md).
### Docker
```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir . # or: pip install duvo-sandstorm
EXPOSE 8000
CMD ["ds", "serve", "--host", "0.0.0.0", "--port", "8000"]
```
```bash
docker build -t sandstorm .
docker run -p 8000:8000 --env-file .env sandstorm
```
Deploy this container to any platform -- Railway, Fly.io, Cloud Run, ECS, Kubernetes. Since there's no state to persist, scaling up or down is just changing the replica count.
### Vercel
One-click deploy:
[](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Ftomascupr%2Fsandstorm&env=ANTHROPIC_API_KEY,E2B_API_KEY)
The repo includes `vercel.json` and `api/index.py` pre-configured. Set `ANTHROPIC_API_KEY` and `E2B_API_KEY` as environment variables in your Vercel project settings.
> **Note:** Vercel serverless functions have a maximum duration of 300s on Pro plans (10s on Hobby). For long-running agent tasks, use the Docker deployment or a dedicated server instead.
## Security
- **Isolated execution** -- every request gets a fresh VM sandbox, destroyed after
- **No server secrets** -- keys via `.env` or per-request, never stored server-side
- **No shell injection** -- prompts and config written as files, never interpolated into commands
- **Path traversal prevention** -- file upload paths are normalized and validated
- **Structured errors** -- failures stream as SSE error events, not silent drops
- **No persistence** -- nothing survives between requests
> **Note:** The Anthropic API key is passed into the sandbox as an environment variable (the SDK requires it). The agent runs with `bypassPermissions` mode, so it has full access to the sandbox environment. Use per-request keys with spending limits for untrusted callers.
## License
[MIT](LICENSE)
| text/markdown | null | null | null | null | null | null | [
"Development Status :: 4 - Beta",
"Framework :: FastAPI",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
... | [] | null | null | >=3.11 | [] | [] | [] | [
"click>=8.0.0",
"e2b>=1.4.0",
"fastapi>=0.115.0",
"pydantic>=2.10.0",
"python-dotenv>=1.0.0",
"sse-starlette>=2.2.0",
"uvicorn[standard]>=0.34.0",
"httpx>=0.27; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"aiohttp>=3.9.0; extra == \"slack\"",
"slack-bolt>=1.21.0; extra == \"slack\"",
"... | [] | [] | [] | [
"Homepage, https://github.com/tomascupr/sandstorm",
"Repository, https://github.com/tomascupr/sandstorm",
"Issues, https://github.com/tomascupr/sandstorm/issues",
"Changelog, https://github.com/tomascupr/sandstorm/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T00:40:07.153013 | duvo_sandstorm-0.7.1.tar.gz | 428,827 | 71/0c/41dee7ce413267c102faf1714769d7d1024fc065f7119888cbaa6f706529/duvo_sandstorm-0.7.1.tar.gz | source | sdist | null | false | 2caa21971181457d0939f0ff8ea89221 | a8c2b1b5bdb107d1b43d839a734468b3ccc6446c68cb03bb076ae3e0d2ec9dcd | 710c41dee7ce413267c102faf1714769d7d1024fc065f7119888cbaa6f706529 | MIT | [
"LICENSE"
] | 294 |
2.4 | lumiserver | 0.2.0 | Backend service for LumiDesktop, an AI + Live2D virtual desktop pet | # LumiServer
Backend API service for LumiDesktop; it can also be used as a standalone CLI tool.
## Installation
```bash
pipx install lumiserver
```
## Usage
### Initialize configuration
```bash
lumi init
```
### Start the server
```bash
lumi serve
```
### Show help
```bash
lumi --help
```
## API documentation
Once the server is running, visit `http://localhost:52341/docs` for the full API documentation.
| text/markdown | null | candy-xt <candy-xt@users.noreply.github.com> | null | null | MIT | ai, live2d, chatbot, desktop-pet, fastapi | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Framework :: FastAPI"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"fastapi>=0.109.0",
"uvicorn[standard]>=0.27.0",
"openai>=1.12.0",
"pyyaml>=6.0.1",
"python-dotenv>=1.0.0",
"pydantic>=2.6.0",
"pydantic-settings>=2.1.0",
"click>=8.1.7",
"pystray>=0.19.5",
"pillow>=10.2.0",
"aiofiles>=23.2.1",
"python-multipart>=0.0.9",
"websockets>=12.0",
"pytest>=8.0.0;... | [] | [] | [] | [
"Homepage, https://github.com/candy-xt/LumiDesktop",
"Documentation, https://github.com/candy-xt/LumiDesktop#readme",
"Repository, https://github.com/candy-xt/LumiDesktop",
"Issues, https://github.com/candy-xt/LumiDesktop/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T00:38:41.952724 | lumiserver-0.2.0.tar.gz | 14,671 | fa/d6/bad68162793db73aa7cff52558d13ff6a34f4903b61d2817a08c5c882a55/lumiserver-0.2.0.tar.gz | source | sdist | null | false | dc7ffa1d10410b5d005dabcf7b272f1d | 67c8c354ec409701ad46b098b3f1e87ee651c306e1c852191720e632437c2c04 | fad6bad68162793db73aa7cff52558d13ff6a34f4903b61d2817a08c5c882a55 | null | [] | 230 |
2.4 | raysurfer | 1.0.0 | AI maintained skills for vertical agents | # RaySurfer Python SDK
[Website](https://www.raysurfer.com) · [Docs](https://docs.raysurfer.com) · [Dashboard](https://www.raysurfer.com/dashboard/api-keys)
AI-maintained skills for vertical agents: re-use verified code from prior runs instead of making long chains of serial tool calls or regenerating code on every execution.
## Installation
```bash
pip install raysurfer
```
## Setup
Set your API key:
```bash
export RAYSURFER_API_KEY=your_api_key_here
```
Get your key from the [dashboard](https://www.raysurfer.com/dashboard/api-keys).
## Low-Level API
For custom integrations, use the `RaySurfer` client directly with any LLM provider.
### Complete Example
```python
from raysurfer import RaySurfer
from raysurfer.types import FileWritten, LogFile
client = RaySurfer(api_key="your_raysurfer_api_key")
task = "Fetch GitHub trending repos"
# 1. Search for cached code matching a task
result = client.search(
task=task,
top_k=5,
min_verdict_score=0.3,
)
for match in result.matches:
print(f"{match.code_block.name}: {match.combined_score}")
print(f" Source: {match.code_block.source[:80]}...")
# 2. Upload a new code file after execution
file = FileWritten(path="fetch_repos.py", content="def fetch(): ...")
client.upload_new_code_snip(
task=task,
file_written=file,
succeeded=True,
execution_logs="Fetched 10 trending repos successfully",
dependencies={"httpx": "0.27.0", "pydantic": "2.5.0"},
)
# 2b. Bulk upload prompts/logs/code for sandboxed grading
logs = [LogFile(path="logs/run.log", content="Task completed", encoding="utf-8")]
client.upload_bulk_code_snips(
prompts=["Build a CLI tool", "Add CSV support"],
files_written=[FileWritten(path="cli.py", content="def main(): ...")],
log_files=logs,
)
# 3. Vote on whether a cached snippet was useful
client.vote_code_snip(
task=task,
code_block_id=result.matches[0].code_block.id,
code_block_name=result.matches[0].code_block.name,
code_block_description=result.matches[0].code_block.description,
succeeded=True,
)
```
### Async Version
```python
from raysurfer import AsyncRaySurfer
from raysurfer.types import FileWritten
async with AsyncRaySurfer(api_key="your_api_key") as client:
# 1. Search for cached code
result = await client.search(task="Fetch GitHub trending repos")
for match in result.matches:
print(f"{match.code_block.name}: {match.combined_score}")
# 2. Upload a new code file after execution
file = FileWritten(path="fetch_repos.py", content="def fetch(): ...")
await client.upload_new_code_snip(
task="Fetch GitHub trending repos",
file_written=file,
succeeded=True,
execution_logs="Fetched 10 trending repos successfully",
)
# 3. Vote on snippet manually
await client.vote_code_snip(
task="Fetch GitHub trending repos",
code_block_id=result.matches[0].code_block.id,
code_block_name=result.matches[0].code_block.name,
code_block_description=result.matches[0].code_block.description,
succeeded=True,
)
```
### Client Options
```python
client = RaySurfer(
api_key="your_api_key",
base_url="https://api.raysurfer.com", # optional
timeout=30, # optional, in seconds
organization_id="org_xxx", # optional, for team namespacing
workspace_id="ws_xxx", # optional, for enterprise namespacing
snips_desired="company", # optional, snippet scope
public_snips=True, # optional, include community snippets
)
```
### Response Fields
The `search()` response includes:
| Field | Type | Description |
|-------|------|-------------|
| `matches` | `list[SearchMatch]` | Matching code blocks with scoring |
| `total_found` | `int` | Total matches found |
| `cache_hit` | `bool` | Whether results were from cache |
Each `SearchMatch` contains `code_block` (with `id`, `name`,
`source`, `description`, `entrypoint`, `language`, `dependencies`),
`combined_score`, `vector_score`, `verdict_score`, `thumbs_up`,
`thumbs_down`, `filename`, and `entrypoint`.
### Store a Code Block with Full Metadata
```python
result = client.store_code_block(
name="GitHub User Fetcher",
source="def fetch_user(username): ...",
entrypoint="fetch_user",
language="python",
description="Fetches user data from GitHub API",
tags=["github", "api", "user"],
dependencies={"httpx": "0.27.0", "pydantic": "2.5.0"},
)
```
### Retrieve Few-Shot Examples
```python
examples = client.get_few_shot_examples(task="Parse CSV files", k=3)
for ex in examples:
print(f"Task: {ex.task}")
print(f"Code: {ex.code_snippet}")
```
### Retrieve Task Patterns
```python
patterns = client.get_task_patterns(
task="API integration",
min_thumbs_up=5,
top_k=20,
)
for p in patterns:
print(f"{p.task_pattern} -> {p.code_block_name}")
```
### User-Provided Votes
Instead of relying on AI voting, provide your own votes:
```python
# Single upload with your own vote (AI voting is skipped)
client.upload_new_code_snip(
task="Fetch GitHub trending repos",
file_written=file,
succeeded=True,
user_vote=1, # 1 = thumbs up, -1 = thumbs down
)
# Bulk upload with per-file votes (AI grading is skipped)
client.upload_bulk_code_snips(
prompts=["Build a CLI tool", "Add CSV support"],
files_written=files,
log_files=logs,
user_votes={
"app.py": 1, # thumbs up
"utils.py": -1, # thumbs down
},
)
```
### Method Reference
| Method | Description |
|--------|-------------|
| `search(task, top_k, min_verdict_score, prefer_complete, input_schema)` | Search for cached code snippets |
| `get_code_snips(task, top_k, min_verdict_score)` | Retrieve cached code snippets by semantic search |
| `retrieve_best(task, top_k, min_verdict_score)` | Retrieve the single best match |
| `get_few_shot_examples(task, k)` | Retrieve few-shot examples for code generation prompting |
| `get_task_patterns(task, min_thumbs_up, top_k)` | Retrieve proven task-to-code mappings |
| `store_code_block(name, source, entrypoint, language, description, tags, dependencies, ...)` | Store a code block with full metadata |
| `upload_new_code_snip(task, file_written, succeeded, use_raysurfer_ai_voting, user_vote, execution_logs, dependencies)` | Store a single code file with optional dependency versions |
| `upload_bulk_code_snips(prompts, files_written, log_files, use_raysurfer_ai_voting, user_votes)` | Bulk upload for grading (AI votes by default, or provide per-file votes) |
| `vote_code_snip(task, code_block_id, code_block_name, code_block_description, succeeded)` | Vote on snippet usefulness |
### Exceptions
Both sync and async clients include built-in retry logic with exponential backoff for transient failures (429, 5xx, network errors).
| Exception | Description |
|-----------|-------------|
| `RaySurferError` | Base exception for all RaySurfer errors |
| `APIError` | API returned an error response (includes `status_code`) |
| `AuthenticationError` | API key is invalid or missing |
| `CacheUnavailableError` | Cache backend is unreachable |
| `RateLimitError` | Rate limit exceeded after retries (includes `retry_after`) |
| `ValidationError` | Request validation failed (includes `field`) |
```python
from raysurfer import RaySurfer
from raysurfer.exceptions import RateLimitError
client = RaySurfer(api_key="your_api_key")
try:
result = client.get_code_snips(task="Fetch GitHub repos")
except RateLimitError as e:
print(f"Rate limited after retries: {e}")
if e.retry_after:
print(f"Try again in {e.retry_after}s")
```
---
## Claude Agent SDK Drop-in
Swap your client class and method names. Options come directly from `claude_agent_sdk`:
```python
# Before
from claude_agent_sdk import ClaudeSDKClient, ClaudeAgentOptions
# After
from raysurfer import RaysurferClient
from claude_agent_sdk import ClaudeAgentOptions
options = ClaudeAgentOptions(
allowed_tools=["Read", "Write", "Bash"],
system_prompt="You are a helpful assistant.",
)
async with RaysurferClient(options) as client:
await client.query("Generate quarterly report")
async for msg in client.response():
print(msg)
```
### Method Mapping
| Claude SDK | Raysurfer |
|------------|-----------|
| `ClaudeSDKClient(options)` | `RaysurferClient(options)` |
| `await client.query(prompt)` | `await client.query(prompt)` |
| `client.receive_response()` | `client.response()` |
### Full Example
```python
import asyncio
import os
from raysurfer import RaysurferClient
from claude_agent_sdk import ClaudeAgentOptions
os.environ["RAYSURFER_API_KEY"] = "your_api_key"
async def main():
options = ClaudeAgentOptions(
allowed_tools=["Read", "Write", "Bash"],
system_prompt="You are a helpful assistant.",
)
async with RaysurferClient(options) as client:
# First run: generates and caches code
await client.query("Fetch GitHub trending repos")
async for msg in client.response():
print(msg)
# Second run: retrieves from cache (instant)
await client.query("Fetch GitHub trending repos")
async for msg in client.response():
print(msg)
asyncio.run(main())
```
### Without Caching
If `RAYSURFER_API_KEY` is not set, `RaysurferClient` behaves exactly like `ClaudeSDKClient` — no caching, just a pass-through wrapper.
## Snippet Retrieval Scope
Control which cached snippets are retrieved using `snips_desired`:
```python
from raysurfer import RaysurferClient
from claude_agent_sdk import ClaudeAgentOptions
options = ClaudeAgentOptions(
allowed_tools=["Read", "Write", "Bash"],
)
# Include company-level snippets
client = RaysurferClient(
options,
snips_desired="company", # Company-level snippets (Team/Enterprise)
)
# Enterprise: Retrieve client-specific snippets only
client = RaysurferClient(
options,
snips_desired="client", # Client workspace snippets (Enterprise only)
)
```
| Configuration | Required Tier |
|--------------|---------------|
| `snips_desired="company"` | TEAM or ENTERPRISE |
| `snips_desired="client"` | ENTERPRISE only |
## Public Snippets
Include community public snippets (crawled from GitHub) in
retrieval results alongside your private snippets:
```python
# High-level
client = RaysurferClient(options, public_snips=True)
# Low-level
client = RaySurfer(api_key="...", public_snips=True)
```
## Programmatic Tool Calling
Register local tools, then either:
1) pass in `user_code` (primary mode), or
2) generate code inside the sandbox with your own provider key + prompt (optional mode).
```python
import asyncio
from raysurfer import AsyncRaySurfer
async def main():
rs = AsyncRaySurfer(api_key="your_api_key")
@rs.tool
def add(a: int, b: int) -> int:
"""Add two numbers together."""
return a + b
@rs.tool
def multiply(a: int, b: int) -> int:
"""Multiply two numbers together."""
return a * b
user_code = """
intermediate = add(5, 3)
final = multiply(intermediate, 2)
print(final)
"""
result = await rs.execute(
"Add 5 and 3, then multiply the result by 2",
user_code=user_code,
)
print(result.result) # "16"
print(result.tool_calls) # [ToolCallRecord(tool_name='add', ...), ToolCallRecord(tool_name='multiply', ...)]
print(result.cache_hit) # False (reserved field for execute)
asyncio.run(main())
```
The `@rs.tool` decorator introspects your function signature to build a JSON schema. Both sync and async callbacks are supported.
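The mechanism can be pictured with a short sketch. This is not RaySurfer's actual implementation, only an illustration of how a function signature can be mapped to a JSON-schema-style description with the standard `inspect` module:

```python
import inspect

# Illustrative only: a rough signature-to-schema mapping,
# not RaySurfer's actual implementation.
TYPE_MAP = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool_schema(fn) -> dict:
    """Derive a minimal JSON-schema-style description from a function signature."""
    sig = inspect.signature(fn)
    props = {
        name: {"type": TYPE_MAP.get(param.annotation, "string")}
        for name, param in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": props,
            "required": list(props),
        },
    }

def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b
```

Calling `tool_schema(add)` on the sample function would describe `a` and `b` as integer parameters and pick up the docstring as the tool description.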
### How It Works
1. SDK connects a WebSocket to the server for tool call routing
2. Your app sends either `user_code` (primary mode) or `codegen_*` inputs (optional mode) to `/api/execute/run`
3. Code runs in a sandboxed environment — tool calls are routed back to your local functions via WebSocket
4. Results are returned with full tool call history
### Execute Options
```python
result = await rs.execute(
"Your task description",
user_code="print(add(1, 2))", # Primary mode
timeout=300, # Max execution time in seconds (default 300)
)
# Optional mode: generate code in sandbox using your own key + prompt
result = await rs.execute(
"Your task description",
codegen_api_key="your_anthropic_key",
codegen_prompt="Write Python code that uses add(a, b) and prints the result for 2 + 3.",
codegen_model="claude-opus-4-6",
)
```
### ExecuteResult Fields
| Field | Type | Description |
|-------|------|-------------|
| `execution_id` | `str` | Unique execution identifier |
| `result` | `str \| None` | Stdout output from the script |
| `exit_code` | `int` | Process exit code (0 = success) |
| `duration_ms` | `int` | Total execution time |
| `cache_hit` | `bool` | Reserved field (currently always `False` for execute) |
| `error` | `str \| None` | Error message if exit_code != 0 |
| `tool_calls` | `list[ToolCallRecord]` | All tool calls made during execution |
## License
MIT
| text/markdown | Raymond Xu | null | null | null | null | agents, ai, anthropic, claude, code-blocks, embeddings, retrieval | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Develo... | [] | null | null | >=3.10 | [] | [] | [] | [
"claude-agent-sdk>=0.1.0",
"httpx>=0.25.0",
"pydantic>=2.0.0",
"websockets>=12.0",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-httpx>=0.30.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://www.raysurfer.com",
"Repository, https://github.com/raymondxu/raysurfer-python"
] | uv/0.9.7 | 2026-02-20T00:38:22.787659 | raysurfer-1.0.0.tar.gz | 4,706,626 | ee/4c/38f434037a690a8377c0ce5d55174ba801e08ad301425444d67580bcbb4d/raysurfer-1.0.0.tar.gz | source | sdist | null | false | 6f8255bb6b24744c44cc619e72a9354b | a15b322db03e52eb3a8271c1c620e9c9ac02806c4be989c3a8dbce127366d320 | ee4c38f434037a690a8377c0ce5d55174ba801e08ad301425444d67580bcbb4d | MIT | [] | 256 |
2.4 | datasette-files | 0.1a1 | File management for Datasette | # datasette-files
[](https://pypi.org/project/datasette-files/)
[](https://github.com/datasette/datasette-files/releases)
[](https://github.com/datasette/datasette-files/actions/workflows/test.yml)
[](https://github.com/datasette/datasette-files/blob/main/LICENSE)
File management for Datasette. Upload, serve, search and manage files through a pluggable storage backend system. Ships with built-in filesystem storage and a plugin hook for adding custom backends (S3, Google Cloud Storage, etc.).
## Installation
Install this plugin in the same environment as Datasette.
```bash
datasette install datasette-files
```
## Usage
datasette-files manages files through **sources** — named connections to storage backends. Each source has a slug, a storage type, and backend-specific configuration.
### Configuring sources
Define sources in your `datasette.yaml` (or `metadata.yaml`) under the `datasette-files` plugin config:
```yaml
plugins:
datasette-files:
sources:
my-files:
storage: filesystem
config:
root: /data/uploads
```
This creates a source called `my-files` backed by a local directory at `/data/uploads`. The directory will be created if it doesn't exist.
You can configure multiple sources:
```yaml
plugins:
datasette-files:
sources:
photos:
storage: filesystem
config:
root: /data/photos
documents:
storage: filesystem
config:
root: /data/documents
```
### Permissions
All access is **denied by default**. You must explicitly grant permissions in the `permissions:` block of your `datasette.yaml`.
There are four permission actions, each scoped to a source:
| Action | Description |
|--------|-------------|
| `files-browse` | Browse, search, view, and download files |
| `files-upload` | Upload files to a source |
| `files-edit` | Edit file metadata (e.g. search text) |
| `files-delete` | Delete files from a source |
**Grant access to everyone (all sources):**
```yaml
permissions:
files-browse: true
files-upload: true
```
**Grant access to a specific user:**
```yaml
permissions:
files-browse:
id: alice
files-upload:
id: alice
```
**Per-source permissions:**
```yaml
permissions:
files-browse:
public-files:
allow: true
private-files:
allow:
id: alice
files-upload:
public-files:
allow:
id: alice
```
### Uploading files
Upload a file by sending a `POST` request with multipart form data to `/-/files/upload/{source_slug}`:
```bash
curl -X POST "http://localhost:8001/-/files/upload/my-files" \
-F "file=@photo.jpg"
```
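The same upload can be made from Python with no extra dependencies by building the multipart body by hand. A sketch, where the `file` field name and endpoint path come from the curl example above and the helper name is illustrative:

```python
import uuid

def multipart_body(field: str, filename: str, content: bytes, content_type: str):
    """Build a multipart/form-data body and its matching Content-Type header."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n\r\n"
    ).encode() + content + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

# Then POST it with urllib.request, e.g.:
# body, ctype = multipart_body("file", "photo.jpg", photo_bytes, "image/jpeg")
# req = urllib.request.Request(
#     "http://localhost:8001/-/files/upload/my-files",
#     data=body, headers={"Content-Type": ctype}, method="POST")
```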
The response includes the file's unique ID and metadata:
```json
{
"file_id": "df-01j5a3b4c5d6e7f8g9h0jkmnpq",
"filename": "photo.jpg",
"content_type": "image/jpeg",
"size": 48210,
"url": "/-/files/df-01j5a3b4c5d6e7f8g9h0jkmnpq"
}
```
File IDs use the format `df-{ULID}` — the `df-` prefix makes them instantly recognizable when stored in database columns.
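Detecting such IDs in application code can be sketched with a small regex check. ULIDs are 26 characters of Crockford base32 (no I, L, O, or U); this validator is illustrative and may not match the plugin's exact detection logic:

```python
import re

# "df-" prefix followed by a 26-character Crockford base32 ULID.
# Case-insensitive, since Crockford base32 decodes either case.
FILE_ID_RE = re.compile(r"^df-[0-9A-HJKMNP-TV-Z]{26}$", re.IGNORECASE)

def is_file_id(value: str) -> bool:
    """Return True if value looks like a datasette-files file ID."""
    return bool(FILE_ID_RE.match(value))
```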
### Viewing files
Each file has an HTML info page at `/-/files/{file_id}` showing its metadata, a preview (for images), and a download link.
Download the file content directly at `/-/files/{file_id}/download`.
Get file metadata as JSON at `/-/files/{file_id}.json`.
### Searching files
Visit `/-/files/search` to search across all files you have permission to browse. The search page supports full-text search over filenames, content types, and custom search text.
The search endpoint is also available as JSON at `/-/files/search.json?q=query&source=source-slug`.
Each file has an editable `search_text` field (requires `files-edit` permission) that is included in the full-text search index. This can be used to add descriptions, tags, or transcriptions to make files more discoverable.
### Batch metadata
Fetch metadata for multiple files in a single request:
```
GET /-/files/batch.json?id=df-abc123&id=df-def456
```
This returns metadata for all requested files that the current user has permission to browse. This endpoint is used internally by the `render_cell` web component to efficiently load file information for table views.
### Listing sources
View all configured sources and their capabilities:
```
GET /-/files/sources.json
```
### Table cell integration
Any database column containing a `df-...` file ID will automatically render as a rich file reference in Datasette's table views. The `render_cell` hook detects file IDs and replaces them with a `<datasette-file>` web component that displays the filename, content type, and a thumbnail for images.
This works for any text column — store a `df-...` ID returned from the upload endpoint in a column and it will render as a file link automatically.
## API reference
| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/-/files/search` | Search files (HTML) |
| `GET` | `/-/files/search.json?q=&source=` | Search files (JSON) |
| `GET` | `/-/files/sources.json` | List configured sources |
| `GET` | `/-/files/batch.json?id=df-...&id=df-...` | Bulk file metadata |
| `POST` | `/-/files/upload/{source_slug}` | Upload a file (multipart) |
| `GET` | `/-/files/{file_id}` | File info page (HTML) |
| `GET` | `/-/files/{file_id}.json` | File metadata (JSON) |
| `GET` | `/-/files/{file_id}/download` | Download file content |
## Plugin hook: `register_files_storage_types`
datasette-files uses a plugin hook to allow other Datasette plugins to provide custom storage backends. This is how you would build plugins like `datasette-files-s3` or `datasette-files-gcs`.
### How it works
The hook is called at startup. Your plugin returns a list of `Storage` subclasses (not instances). datasette-files handles instantiation, configuration, and lifecycle management.
```python
from datasette import hookimpl
@hookimpl
def register_files_storage_types(datasette):
from my_plugin.storage import S3Storage
return [S3Storage]
```
When a source in `datasette.yaml` references your storage type, datasette-files will:
1. Instantiate your class (calling `S3Storage()`)
2. Call `await storage.configure(config, get_secret)` with the source's config dict
3. Use your storage instance for all file operations on that source
### The `Storage` base class
Import the base class and supporting dataclasses from `datasette_files.base`:
```python
from datasette_files.base import Storage, StorageCapabilities, FileMetadata
```
#### `StorageCapabilities`
A dataclass declaring what your storage backend supports:
```python
@dataclass
class StorageCapabilities:
can_upload: bool = False
can_delete: bool = False
can_list: bool = False
can_generate_signed_urls: bool = False
can_generate_thumbnails: bool = False
requires_proxy_download: bool = False
max_file_size: Optional[int] = None
```
- `can_upload`: The backend can receive file uploads via `receive_upload()`
- `can_delete`: The backend can delete files via `delete_file()`
- `can_list`: The backend can list files via `list_files()`
- `can_generate_signed_urls`: The backend can produce expiring download URLs via `download_url()` — if `True`, file downloads will use a 302 redirect to the signed URL instead of proxying content through Datasette
- `can_generate_thumbnails`: The backend can produce thumbnail URLs via `thumbnail_url()`
- `requires_proxy_download`: File content must be proxied through Datasette (e.g. filesystem storage) rather than redirecting to an external URL
- `max_file_size`: Optional maximum file size in bytes
#### `FileMetadata`
Returned by several storage methods to describe a file:
```python
@dataclass
class FileMetadata:
path: str # Path within the storage backend
filename: str # Human-readable filename
content_type: Optional[str] = None # MIME type
content_hash: Optional[str] = None # e.g. "sha256:abcdef..."
size: Optional[int] = None # Size in bytes
width: Optional[int] = None # Image width in pixels
height: Optional[int] = None # Image height in pixels
created_at: Optional[str] = None
metadata: dict = field(default_factory=dict)
```
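The `content_hash` convention shown above (`sha256:` prefix plus a hex digest) can be produced with `hashlib`; the helper name here is illustrative:

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Hash file content in the 'sha256:<hexdigest>' format used by FileMetadata."""
    return "sha256:" + hashlib.sha256(data).hexdigest()
```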
#### Required methods
Every `Storage` subclass must implement these:
**`storage_type`** (property) — A unique string identifier for this storage type, used in source configuration. This is how datasette-files matches a source's `storage: s3` to your class.
```python
@property
def storage_type(self) -> str:
return "s3"
```
**`capabilities`** (property) — Return a `StorageCapabilities` instance declaring what this backend supports.
```python
@property
def capabilities(self) -> StorageCapabilities:
return StorageCapabilities(
can_upload=True,
can_delete=True,
can_generate_signed_urls=True,
)
```
**`configure(config, get_secret)`** — Called once at startup with the source's `config` dict from `datasette.yaml` and a `get_secret` callable for retrieving secrets from `datasette-secrets`.
```python
async def configure(self, config: dict, get_secret) -> None:
self.bucket = config["bucket"]
self.prefix = config.get("prefix", "")
self.region = config.get("region", "us-east-1")
```
**`get_file_metadata(path)`** — Return a `FileMetadata` for the given path, or `None` if the file doesn't exist.
```python
async def get_file_metadata(self, path: str) -> Optional[FileMetadata]:
# Check if the file exists in your backend and return its metadata
...
```
**`read_file(path)`** — Return the full content of a file as bytes. Raise `FileNotFoundError` if missing.
```python
async def read_file(self, path: str) -> bytes:
# Read and return the file content
...
```
#### Optional methods
Override these based on the capabilities you declared:
**`receive_upload(path, content, content_type)`** — Store file content. Return a `FileMetadata` with at least the `content_hash` and `size` populated. Required if `can_upload` is `True`.
```python
async def receive_upload(self, path: str, content: bytes, content_type: str) -> FileMetadata:
# Store the file and return metadata
...
```
**`delete_file(path)`** — Delete a file. Required if `can_delete` is `True`.
**`list_files(prefix, cursor, limit)`** — List files, returning `(files, next_cursor)`. Required if `can_list` is `True`.
**`download_url(path, expires_in)`** — Return a signed/expiring download URL. Required if `can_generate_signed_urls` is `True`.
**`stream_file(path)`** — Yield file content in chunks as an async iterator. The default implementation reads the entire file with `read_file()` and yields it as a single chunk.
**`thumbnail_url(path, width, height)`** — Return a URL for a thumbnail of the file, or `None`.
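For large files, a backend will usually want to override `stream_file` so downloads yield bounded chunks instead of one big read. A generic sketch of the chunking pattern (the chunk size is an arbitrary choice, and a real `Storage` subclass would read from its backend rather than an in-memory bytes object):

```python
import asyncio
from typing import AsyncIterator

CHUNK_SIZE = 64 * 1024  # 64 KiB: arbitrary, keeps memory bounded for large files

async def stream_in_chunks(
    content: bytes, chunk_size: int = CHUNK_SIZE
) -> AsyncIterator[bytes]:
    """Yield content in fixed-size chunks, as a stream_file override might."""
    for start in range(0, len(content), chunk_size):
        yield content[start:start + chunk_size]
        await asyncio.sleep(0)  # yield control to the event loop between chunks
```

In an S3-style backend the chunks would come from the object body's streaming read; for filesystem storage, from repeated reads of an open file handle.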
### Full example: S3 storage plugin
Here's a complete example of what a `datasette-files-s3` plugin would look like:
```python
# datasette_files_s3/__init__.py
from datasette import hookimpl
from datasette_files.base import Storage, StorageCapabilities, FileMetadata
import boto3
import hashlib
from typing import Optional
class S3Storage(Storage):
storage_type = "s3"
capabilities = StorageCapabilities(
can_upload=True,
can_delete=True,
can_list=True,
can_generate_signed_urls=True,
requires_proxy_download=False,
)
async def configure(self, config: dict, get_secret) -> None:
self.bucket = config["bucket"]
self.prefix = config.get("prefix", "")
self.region = config.get("region", "us-east-1")
self.client = boto3.client("s3", region_name=self.region)
def _key(self, path: str) -> str:
return f"{self.prefix}{path}" if self.prefix else path
async def get_file_metadata(self, path: str) -> Optional[FileMetadata]:
try:
resp = self.client.head_object(
Bucket=self.bucket, Key=self._key(path)
)
return FileMetadata(
path=path,
filename=path.split("/")[-1],
content_type=resp.get("ContentType"),
size=resp.get("ContentLength"),
)
except self.client.exceptions.ClientError:
return None
async def read_file(self, path: str) -> bytes:
resp = self.client.get_object(
Bucket=self.bucket, Key=self._key(path)
)
return resp["Body"].read()
async def receive_upload(
self, path: str, content: bytes, content_type: str
) -> FileMetadata:
self.client.put_object(
Bucket=self.bucket,
Key=self._key(path),
Body=content,
ContentType=content_type,
)
content_hash = "sha256:" + hashlib.sha256(content).hexdigest()
return FileMetadata(
path=path,
filename=path.split("/")[-1],
content_type=content_type,
content_hash=content_hash,
size=len(content),
)
async def download_url(self, path: str, expires_in: int = 300) -> str:
return self.client.generate_presigned_url(
"get_object",
Params={"Bucket": self.bucket, "Key": self._key(path)},
ExpiresIn=expires_in,
)
async def delete_file(self, path: str) -> None:
self.client.delete_object(
Bucket=self.bucket, Key=self._key(path)
)
async def list_files(
self, prefix: str = "", cursor: Optional[str] = None, limit: int = 100
) -> tuple[list[FileMetadata], Optional[str]]:
kwargs = {
"Bucket": self.bucket,
"Prefix": self._key(prefix),
"MaxKeys": limit,
}
if cursor:
kwargs["ContinuationToken"] = cursor
resp = self.client.list_objects_v2(**kwargs)
files = [
FileMetadata(
path=obj["Key"].removeprefix(self.prefix),
filename=obj["Key"].split("/")[-1],
size=obj["Size"],
)
for obj in resp.get("Contents", [])
]
next_cursor = resp.get("NextContinuationToken")
return files, next_cursor
@hookimpl
def register_files_storage_types(datasette):
return [S3Storage]
```
The plugin's `pyproject.toml` would register itself as a Datasette plugin:
```toml
[project.entry-points.datasette]
files_s3 = "datasette_files_s3"
```
Then configure it in `datasette.yaml`:
```yaml
plugins:
datasette-files:
sources:
product-images:
storage: s3
config:
bucket: my-photos-bucket
prefix: "uploads/"
region: us-west-2
```
### Built-in filesystem storage reference
The built-in `FilesystemStorage` stores files on the local filesystem. It supports upload, delete, and listing but does not support signed URLs — file downloads are proxied through Datasette.
**Configuration options:**
| Key | Required | Description |
|-----|----------|-------------|
| `root` | Yes | Absolute path to the directory where files are stored |
| `max_file_size` | No | Maximum upload size in bytes |
**Capabilities:**
| Capability | Value |
|-----------|-------|
| `can_upload` | `True` |
| `can_delete` | `True` |
| `can_list` | `True` |
| `can_generate_signed_urls` | `False` |
| `requires_proxy_download` | `True` |
## Development
To set up this plugin locally, first checkout the code. Run the tests with `uv`:
```bash
cd datasette-files
uv run pytest
```
To run a local development server:
```bash
./dev-server.sh
```
| text/markdown | Datasette | null | null | null | null | null | [
"Framework :: Datasette"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"datasette>=1.0a24",
"python-ulid",
"typing_extensions"
] | [] | [] | [] | [
"Homepage, https://github.com/datasette/datasette-files",
"Changelog, https://github.com/datasette/datasette-files/releases",
"Issues, https://github.com/datasette/datasette-files/issues",
"CI, https://github.com/datasette/datasette-files/actions"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T00:37:41.746285 | datasette_files-0.1a1.tar.gz | 38,941 | 36/25/59dd70e0788263bad29f09127034d3a6dc442d7a9fac9dae384f291a0728/datasette_files-0.1a1.tar.gz | source | sdist | null | false | 29a131c997f4ff6aed3c7ffbd764e81a | c99de4974a86889297418c937368b9cd633d659f592dfd76830d8b17312b7bb3 | 362559dd70e0788263bad29f09127034d3a6dc442d7a9fac9dae384f291a0728 | Apache-2.0 | [
"LICENSE"
] | 270 |
2.4 | unstructured-ingest | 1.4.5 | Local ETL data pipeline to get data RAG ready | # Unstructured Ingest
For details, see the [Unstructured Ingest overview](https://docs.unstructured.io/ingestion/overview) in the Unstructured documentation.
| text/markdown | null | Unstructured Technologies <devops@unstructuredai.io> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Py... | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"certifi>=2026.1.4",
"click",
"opentelemetry-sdk",
"pydantic>=2.7",
"python-dateutil",
"tqdm",
"pandas; extra == \"airtable\"",
"pyairtable; extra == \"airtable\"",
"astrapy>2.0.0; extra == \"astradb\"",
"adlfs; extra == \"azure\"",
"fsspec; extra == \"azure\"",
"azure-search-documents; extra ... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T00:37:23.366339 | unstructured_ingest-1.4.5.tar.gz | 212,729 | 5f/3e/354b5c7716327a4601df7f792c57447c9d93e02786fa3e97426ac9cce2bf/unstructured_ingest-1.4.5.tar.gz | source | sdist | null | false | 22c18f8dce31a1ec10bd6a10ce61cfbc | 1d824c694e76f1a60f9c8d018fe98d18e20527866cdb74aa571626551f815dd2 | 5f3e354b5c7716327a4601df7f792c57447c9d93e02786fa3e97426ac9cce2bf | Apache-2.0 | [
"LICENSE.md"
] | 694 |
2.4 | dorsalhub-adapters | 0.1.0 | Export validated JSON to standard formats. | <p align="center">
<img src="https://dorsalhub.com/static/img/dorsal-adapters-logo.png" alt="Dorsal" width="520">
</p>
<p align="center">
<strong>Export validated JSON to standard formats.</strong>
</p>
<p align="center">
<a href="https://pypi.org/project/dorsalhub-adapters/">
<img src="https://img.shields.io/pypi/v/dorsalhub-adapters?color=0ea5e9" alt="PyPI version">
</a>
<a href="https://codecov.io/gh/dorsalhub/dorsal-adapters">
<img src="https://codecov.io/gh/dorsalhub/dorsal-adapters/graph/badge.svg" alt="codecov">
</a>
<a href="https://opensource.org/licenses/Apache-2.0">
<img src="https://img.shields.io/badge/license-Apache_2.0-0ea5e9" alt="License">
</a>
</p>
**Dorsal Adapters** translates [validated](https://github.com/dorsalhub/open-validation-schemas) JSON records into various industry-standard formats.
## Installation
Dorsal Adapters is available on PyPI as `dorsalhub-adapters`:
```bash
pip install dorsalhub-adapters
```
## Usage
Adapters are strictly typed, pure Python classes with four exposed methods:
- `export(record)` / `export_file(record, fp)`: Converts a JSON record into a standard format, e.g. `audio-transcription` -> **.srt**
- `parse(content)` / `parse_file(fp)`: Best-effort conversion from a standard format back to a JSON record, e.g. **.srt** -> `audio-transcription`
In both cases, the JSON is [Open Validation Schemas](https://github.com/dorsalhub/open-validation-schemas)-compliant.
### Example: Two-Way Audio Conversion
In this example, a valid [`open/audio-transcription`](https://docs.dorsalhub.com/reference/schemas/open/audio-transcription/) record is converted to SubRip Text (.srt) format.
```python
from dorsal_adapters.registry import get_adapter
# 1. The record we want to convert.
dorsal_record = {
"track_id": 1,
"language": "eng",
"segments": [
{
"start_time": 0.5,
"end_time": 4.75,
"text": "Welcome back! Today, my guest is the renowned chef, Jean-Pierre."
}
]
}
# 2. Retrieve the adapter for the schema and target format
adapter = get_adapter("audio-transcription", "srt")
# 3. Export to the target format (.srt)
srt_string = adapter.export(dorsal_record)
print(srt_string)
# 4. Parse the formatted string back into a Dorsal record
parsed_record = adapter.parse(srt_string)
```
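The SRT side of this round trip is simple enough to sketch with the standard library alone. This is not the library's actual implementation, just an illustration of the segment-to-SubRip mapping the adapter performs:

```python
def to_srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    millis = round(seconds * 1000)
    hours, rem = divmod(millis, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, millis = divmod(rem, 1000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{millis:03d}"


def segments_to_srt(segments: list[dict]) -> str:
    """Render audio-transcription segments as numbered SubRip (.srt) blocks."""
    blocks = []
    for index, seg in enumerate(segments, start=1):
        blocks.append(
            f"{index}\n"
            f"{to_srt_timestamp(seg['start_time'])} --> {to_srt_timestamp(seg['end_time'])}\n"
            f"{seg['text']}\n"
        )
    return "\n".join(blocks)


print(segments_to_srt([{
    "start_time": 0.5,
    "end_time": 4.75,
    "text": "Welcome back! Today, my guest is the renowned chef, Jean-Pierre.",
}]))
# First cue renders as:
# 1
# 00:00:00,500 --> 00:00:04,750
# Welcome back! Today, my guest is the renowned chef, Jean-Pierre.
```

Parsing is the inverse walk over those blocks, which is why the adapters can offer a best-effort round trip.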
**Tip:** You can check what formats are supported for a given schema using `list_formats`:
```python
from dorsal_adapters.registry import list_formats
print(list_formats("audio-transcription"))
```
## Supported Formats
Dorsal Adapters supports two-way conversion (exporting and parsing) between schema-validated JSON records and the following formats:
### Audio Transcription (via `open/audio-transcription`)
* `srt`: SubRip Subtitle format
* `vtt`: WebVTT format - W3C standard web subtitle format
* `md`: Markdown format - A markdown-formatted audio transcription optimized for RAG
* `txt`: Plain Text format
* `tsv`: Tab-Separated Values format
## Contributing
We welcome contributions! If you have written a translation script for an **Open Validation Schema** that maps to a widely used industry standard, please open a PR.
See `CONTRIBUTING.md` for our development setup using `uv` and our strict typing guidelines.
## License
Dorsal Adapters is open source and provided under the Apache 2.0 license.
| text/markdown | null | Rio Achuzia <rio@dorsalhub.com> | null | null | Apache-2.0 | null | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programmi... | [] | null | null | >=3.11 | [] | [] | [] | [
"srt>=3.5.3",
"webvtt-py>=0.5.1"
] | [] | [] | [] | [
"Homepage, https://dorsalhub.com",
"Repository, https://github.com/dorsalhub/dorsal-adapters",
"Documentation, https://docs.dorsalhub.com"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T00:36:17.692175 | dorsalhub_adapters-0.1.0-py3-none-any.whl | 18,818 | c8/fd/a8ed9fd36d002da2687494e1b689347f0c15234bc70366d2a1489c48781c/dorsalhub_adapters-0.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | a737f427025adf4a6e42bb42916c364e | 02d51e30f742986928376415e6661d50b315e873a5f9ea48887da2bf62bffe23 | c8fda8ed9fd36d002da2687494e1b689347f0c15234bc70366d2a1489c48781c | null | [
"LICENSE"
] | 307 |
2.4 | unstructured | 0.20.8 | A library that prepares raw documents for downstream ML tasks. | <h3 align="center">
<img
src="https://raw.githubusercontent.com/Unstructured-IO/unstructured/main/img/unstructured_logo.png"
height="200"
>
</h3>
<div align="center">
<a
href="https://www.phorm.ai/query?projectId=34efc517-2201-4376-af43-40c4b9da3dc5">
<img src="https://img.shields.io/badge/Phorm-Ask_AI-%23F2777A.svg?&logo=data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNSIgaGVpZ2h0PSI0IiBmaWxsPSJub25lIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPgogIDxwYXRoIGQ9Ik00LjQzIDEuODgyYTEuNDQgMS40NCAwIDAgMS0uMDk4LjQyNmMtLjA1LjEyMy0uMTE1LjIzLS4xOTIuMzIyLS4wNzUuMDktLjE2LjE2NS0uMjU1LjIyNmExLjM1MyAxLjM1MyAwIDAgMS0uNTk1LjIxMmMtLjA5OS4wMTItLjE5Mi4wMTQtLjI3OS4wMDZsLTEuNTkzLS4xNHYtLjQwNmgxLjY1OGMuMDkuMDAxLjE3LS4xNjkuMjQ2LS4xOTFhLjYwMy42MDMgMCAwIDAgLjItLjEwNi41MjkuNTI5IDAgMCAwIC4xMzgtLjE3LjY1NC42NTQgMCAwIDAgLjA2NS0uMjRsLjAyOC0uMzJhLjkzLjkzIDAgMCAwLS4wMzYtLjI0OS41NjcuNTY3IDAgMCAwLS4xMDMtLjIuNTAyLjUwMiAwIDAgMC0uMTY4LS4xMzguNjA4LjYwOCAwIDAgMC0uMjQtLjA2N0wyLjQzNy43MjkgMS42MjUuNjcxYS4zMjIuMzIyIDAgMCAwLS4yMzIuMDU4LjM3NS4zNzUgMCAwIDAtLjExNi4yMzJsLS4xMTYgMS40NS0uMDU4LjY5Ny0uMDU4Ljc1NEwuNzA1IDRsLS4zNTctLjA3OUwuNjAyLjkwNkMuNjE3LjcyNi42NjMuNTc0LjczOS40NTRhLjk1OC45NTggMCAwIDEgLjI3NC0uMjg1Ljk3MS45NzEgMCAwIDEgLjMzNy0uMTRjLjExOS0uMDI2LjIyNy0uMDM0LjMyNS0uMDI2TDMuMjMyLjE2Yy4xNTkuMDE0LjMzNi4wMy40NTkuMDgyYTEuMTczIDEuMTczIDAgMCAxIC41NDUuNDQ3Yy4wNi4wOTQuMTA5LjE5Mi4xNDQuMjkzYTEuMzkyIDEuMzkyIDAgMCAxIC4wNzguNThsLS4wMjkuMzJaIiBmaWxsPSIjRjI3NzdBIi8+CiAgPHBhdGggZD0iTTQuMDgyIDIuMDA3YTEuNDU1IDEuNDU1IDAgMCAxLS4wOTguNDI3Yy0uMDUuMTI0LS4xMTQuMjMyLS4xOTIuMzI0YTEuMTMgMS4xMyAwIDAgMS0uMjU0LjIyNyAxLjM1MyAxLjM1MyAwIDAgMS0uNTk1LjIxNGMtLjEuMDEyLS4xOTMuMDE0LS4yOC4wMDZsLTEuNTYtLjEwOC4wMzQtLjQwNi4wMy0uMzQ4IDEuNTU5LjE1NGMuMDkgMCAuMTczLS4wMS4yNDgtLjAzM2EuNjAzLjYwMyAwIDAgMCAuMi0uMTA2LjUzMi41MzIgMCAwIDAgLjEzOS0uMTcyLjY2LjY2IDAgMCAwIC4wNjQtLjI0MWwuMDI5LS4zMjFhLjk0Ljk0IDAgMCAwLS4wMzYtLjI1LjU3LjU3IDAgMCAwLS4xMDMtLjIwMi41MDIuNTAyIDAgMCAwLS4xNjgtLjEzOC42MDUuNjA1IDAgMCAwLS4yNC0uMDY3TDEuMjczLjgyN2MtLjA5NC0uMDA4LS4xNjguMDEtLjIyMS4wNTUtLjA1My4wNDUtLjA4NC4xMTQtLjA5Mi4yMDZMLjcwNSA0IDAgMy45MzhsLjI1NS0yLjkxMUExLjAxIDEuMDEgMCAwIDEgLjM5My41NzIuOTYyLjk2MiAwIDAgMSAuNjY2LjI4NmEuOTcuOTcgMCAwIDEgLjMzOC0uMTRDMS4xMjIuMTIgMS4yMy4xMSAxLjMyOC4xMTlsMS41OTMuMTRjLjE2LjAxNC4zLjA0Ny40MjMuMWExLjE3IDEuMTcgMCAwID
EgLjU0NS40NDhjLjA2MS4wOTUuMTA5LjE5My4xNDQuMjk1YTEuNDA2IDEuNDA2IDAgMCAxIC4wNzcuNTgzbC0uMDI4LjMyMloiIGZpbGw9IndoaXRlIi8+CiAgPHBhdGggZD0iTTQuMDgyIDIuMDA3YTEuNDU1IDEuNDU1IDAgMCAxLS4wOTguNDI3Yy0uMDUuMTI0LS4xMTQuMjMyLS4xOTIuMzI0YTEuMTMgMS4xMyAwIDAgMS0uMjU0LjIyNyAxLjM1MyAxLjM1MyAwIDAgMS0uNTk1LjIxNGMtLjEuMDEyLS4xOTMuMDE0LS4yOC4wMDZsLTEuNTYtLjEwOC4wMzQtLjQwNi4wMy0uMzQ4IDEuNTU5LjE1NGMuMDkgMCAuMTczLS4wMS4yNDgtLjAzM2EuNjAzLjYwMyAwIDAgMCAuMi0uMTA2LjUzMi41MzIgMCAwIDAgLjEzOS0uMTcyLjY2LjY2IDAgMCAwIC4wNjQtLjI0MWwuMDI5LS4zMjFhLjk0Ljk0IDAgMCAwLS4wMzYtLjI1LjU3LjU3IDAgMCAwLS4xMDMtLjIwMi41MDIuNTAyIDAgMCAwLS4xNjgtLjEzOC42MDUuNjA1IDAgMCAwLS4yNC0uMDY3TDEuMjczLjgyN2MtLjA5NC0uMDA4LS4xNjguMDEtLjIyMS4wNTUtLjA1My4wNDUtLjA4NC4xMTQtLjA5Mi4yMDZMLjcwNSA0IDAgMy45MzhsLjI1NS0yLjkxMUExLjAxIDEuMDEgMCAwIDEgLjM5My41NzIuOTYyLjk2MiAwIDAgMSAuNjY2LjI4NmEuOTcuOTcgMCAwIDEgLjMzOC0uMTRDMS4xMjIuMTIgMS4yMy4xMSAxLjMyOC4xMTlsMS41OTMuMTRjLjE2LjAxNC4zLjA0Ny40MjMuMWExLjE3IDEuMTcgMCAwIDEgLjU0NS40NDhjLjA2MS4wOTUuMTA5LjE5My4xNDQuMjk1YTEuNDA2IDEuNDA2IDAgMCAxIC4wNzcuNTgzbC0uMDI4LjMyMloiIGZpbGw9IndoaXRlIi8+Cjwvc3ZnPgo=" />
</a>
</div>
<div>
<p align="center">
<a
href="https://short.unstructured.io/pzw05l7">
<img src="https://img.shields.io/badge/JOIN US ON SLACK-4A154B?style=for-the-badge&logo=slack&logoColor=white" />
</a>
<a href="https://www.linkedin.com/company/unstructuredio/">
<img src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white" />
</a>
</div>
<h2 align="center">
<p>Open-Source Pre-Processing Tools for Unstructured Data</p>
</h2>
The `unstructured` library provides open-source components for ingesting and pre-processing images and text documents, such as PDFs, HTML, Word docs, and [many more](https://docs.unstructured.io/open-source/core-functionality/partitioning). The use cases of `unstructured` revolve around streamlining and optimizing the data processing workflow for LLMs. `unstructured`'s modular functions and connectors form a cohesive system that simplifies data ingestion and pre-processing, making it adaptable to different platforms and efficient in transforming unstructured data into structured outputs.
## Try the Unstructured Platform Product
Ready to move your data processing pipeline to production, and take advantage of advanced features? Check out [Unstructured Platform](https://unstructured.io/enterprise). In addition to better processing performance, take advantage of chunking, embedding, and image and table enrichment generation, all from a low code UI or an API. [Request a demo](https://unstructured.io/contact) from our sales team to learn more about how to get started.
## :eight_pointed_black_star: Quick Start
There are several ways to use the `unstructured` library:
* [Run the library in a container](https://github.com/Unstructured-IO/unstructured#run-the-library-in-a-container) or
* Install the library
1. [Install from PyPI](https://github.com/Unstructured-IO/unstructured#installing-the-library)
2. [Install for local development](https://github.com/Unstructured-IO/unstructured#installation-instructions-for-local-development)
* For installation with `conda` on Windows systems, please refer to the [documentation](https://unstructured-io.github.io/unstructured/installing.html#installation-with-conda-on-windows)
### Run the library in a container
The following instructions are intended to help you get up and running using Docker to interact with `unstructured`.
See [here](https://docs.docker.com/get-docker/) if you don't already have docker installed on your machine.
NOTE: we build multi-platform images to support both x86_64 and Apple silicon hardware. `docker pull` should download the corresponding image for your architecture, but you can specify with `--platform` (e.g. `--platform linux/amd64`) if needed.
We build Docker images for all pushes to `main`. We tag each image with the corresponding short commit hash (e.g. `fbc7a69`) and the application version (e.g. `0.5.5-dev1`). We also tag the most recent image with `latest`. To leverage this, `docker pull` from our image repository.
```bash
docker pull downloads.unstructured.io/unstructured-io/unstructured:latest
```
Once pulled, you can create a container from this image and shell to it.
```bash
# create the container
docker run -dt --name unstructured downloads.unstructured.io/unstructured-io/unstructured:latest
# this will drop you into a bash shell where the Docker image is running
docker exec -it unstructured bash
```
You can also build your own Docker image. Note that the base image is `wolfi-base`, which is
updated regularly. If you are building the image locally, it is possible `docker-build` could
fail due to upstream changes in `wolfi-base`.
If you only plan on parsing one type of data, you can speed up building the image by commenting out the
packages/requirements needed for other data types. See the Dockerfile to determine which lines are necessary
for your use case.
```bash
make docker-build
# this will drop you into a bash shell where the Docker image is running
make docker-start-bash
```
Once in the running container, you can try things directly in the Python interpreter's interactive mode.
```bash
# this will drop you into a python console so you can run the below partition functions
python3
>>> from unstructured.partition.pdf import partition_pdf
>>> elements = partition_pdf(filename="example-docs/layout-parser-paper-fast.pdf")
>>> from unstructured.partition.text import partition_text
>>> elements = partition_text(filename="example-docs/fake-text.txt")
```
### Installing the library
Use the following instructions to get up and running with `unstructured` and test your
installation.
- Install the Python SDK to support all document types with `pip install "unstructured[all-docs]"`
- For plain text files, HTML, XML, JSON and Emails that do not require any extra dependencies, you can run `pip install unstructured`
- To process other doc types, you can install the extras required for those documents, such as `pip install "unstructured[docx,pptx]"`
- Install the following system dependencies if they are not already available on your system.
Depending on what document types you're parsing, you may not need all of these.
- `libmagic-dev` (filetype detection)
- `poppler-utils` (images and PDFs)
- `tesseract-ocr` (images and PDFs, install `tesseract-lang` for additional language support)
- `libreoffice` (MS Office docs)
- `pandoc` is bundled automatically via the `pypandoc-binary` Python package (no system install needed)
- For suggestions on how to install on the Windows and to learn about dependencies for other features, see the
installation documentation [here](https://unstructured-io.github.io/unstructured/installing.html).
At this point, you should be able to run the following code:
```python
from unstructured.partition.auto import partition
elements = partition(filename="example-docs/eml/fake-email.eml")
print("\n\n".join([str(el) for el in elements]))
```
### Installation Instructions for Local Development
The following instructions are intended to help you get up and running with `unstructured`
locally if you are planning to contribute to the project.
This project uses [uv](https://docs.astral.sh/uv/) for dependency management. Install it first:
```bash
# macOS / Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
```
Then install all dependencies (base, extras, dev, test, and lint groups):
```bash
make install
```
This runs `uv sync --locked --all-extras --all-groups`, which creates a virtual environment
and installs everything in one step. No need to manually create or activate a virtualenv.
To install only specific document-type extras:
```bash
uv sync --extra pdf
uv sync --extra csv --extra docx
```
To update the lock file after changing dependencies in `pyproject.toml`:
```bash
make lock
```
* Optional:
* To install extras for processing images and PDFs locally, run `uv sync --extra pdf --extra image`.
* For processing image files, `tesseract` is required. See [here](https://tesseract-ocr.github.io/tessdoc/Installation.html) for installation instructions.
* For processing PDF files, `tesseract` and `poppler` are required. The [pdf2image docs](https://pdf2image.readthedocs.io/en/latest/installation.html) have instructions on installing `poppler` across various platforms.
Additionally, if you're planning to contribute to `unstructured`, we provide you an optional `pre-commit` configuration
file to ensure your code matches the formatting and linting standards used in `unstructured`.
If you'd prefer not to have code changes auto-tidied before every commit, you can use `make check` to see
whether any linting or formatting changes should be applied, and `make tidy` to apply them.
If you opt in to `pre-commit`, you'll just need to install the hooks with `pre-commit install`, since the
`pre-commit` package itself is installed as part of `make install` mentioned above. You can later
uninstall the hooks with `pre-commit uninstall`.
In addition to developing in your local OS, we also provide a Docker-based development environment:
```bash
make docker-start-dev
```
This starts a docker container with your local repo mounted to `/mnt/local_unstructured`. This docker image allows you to develop without worrying about your OS's compatibility with the repo and its dependencies.
## :clap: Quick Tour
### Documentation
For more comprehensive documentation, visit https://docs.unstructured.io. You can also learn
more about our other products on the documentation page, including our SaaS API.
Here are a few pages from the [Open Source documentation page](https://docs.unstructured.io/open-source/introduction/overview)
that are helpful for new users to review:
- [Quick Start](https://docs.unstructured.io/open-source/introduction/quick-start)
- [Using the `unstructured` open source package](https://docs.unstructured.io/open-source/core-functionality/overview)
- [Connectors](https://docs.unstructured.io/open-source/ingest/overview)
- [Concepts](https://docs.unstructured.io/open-source/concepts/document-elements)
- [Integrations](https://docs.unstructured.io/open-source/integrations)
### PDF Document Parsing Example
The following examples show how to get started with the `unstructured` library. The easiest way to parse a document is with the `partition` function, which detects the file type and routes the document to the appropriate file-specific partitioning function. When using `partition`, you may need to install additional dependencies per document type.
For example, to install docx dependencies you need to run `pip install "unstructured[docx]"`.
See our [installation guide](https://docs.unstructured.io/open-source/installation/full-installation) for more details.
```python
from unstructured.partition.auto import partition
elements = partition("example-docs/layout-parser-paper.pdf")
```
Run `print("\n\n".join([str(el) for el in elements]))` to get a string representation of the
output, which looks like:
```
LayoutParser : A Unified Toolkit for Deep Learning Based Document Image Analysis
Zejiang Shen 1 ( (cid:0) ), Ruochen Zhang 2 , Melissa Dell 3 , Benjamin Charles Germain Lee 4 , Jacob Carlson 3 , and
Weining Li 5
Abstract. Recent advances in document image analysis (DIA) have been primarily driven by the application of neural
networks. Ideally, research outcomes could be easily deployed in production and extended for further investigation.
However, various factors like loosely organized codebases and sophisticated model configurations complicate the easy
reuse of important innovations by a wide audience. Though there have been ongoing efforts to improve reusability and
simplify deep learning (DL) model development in disciplines like natural language processing and computer vision, none
of them are optimized for challenges in the domain of DIA. This represents a major gap in the existing toolkit, as DIA
is central to academic research across a wide range of disciplines in the social sciences and humanities. This paper
introduces LayoutParser, an open-source library for streamlining the usage of DL in DIA research and applications.
The core LayoutParser library comes with a set of simple and intuitive interfaces for applying and customizing DL models
for layout detection, character recognition, and many other document processing tasks. To promote extensibility,
LayoutParser also incorporates a community platform for sharing both pre-trained models and full document digitization
pipelines. We demonstrate that LayoutParser is helpful for both lightweight and large-scale digitization pipelines in
real-word use cases. The library is publicly available at https://layout-parser.github.io
Keywords: Document Image Analysis · Deep Learning · Layout Analysis · Character Recognition · Open Source library ·
Toolkit.
Introduction
Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of document image analysis (DIA) tasks
including document image classification [11,
```
See the [partitioning](https://docs.unstructured.io/open-source/core-functionality/partitioning)
section in our documentation for a full list of options and instructions on how to use
file-specific partitioning functions.
## :guardsman: Security Policy
See our [security policy](https://github.com/Unstructured-IO/unstructured/security/policy) for
information on how to report security vulnerabilities.
## :bug: Reporting Bugs
Encountered a bug? Please create a new [GitHub issue](https://github.com/Unstructured-IO/unstructured/issues/new/choose) and use our bug report template to describe the problem. To help us diagnose the issue, use the `python scripts/collect_env.py` command to gather your system's environment information and include it in your report. Your assistance helps us continuously improve our software - thank you!
## :books: Learn more
| Section | Description |
|-|-|
| [Company Website](https://unstructured.io) | Unstructured.io product and company info |
| [Documentation](https://docs.unstructured.io/) | Full API documentation |
| [Batch Processing](https://github.com/Unstructured-IO/unstructured-ingest) | Ingesting batches of documents through Unstructured |
## :chart_with_upwards_trend: Analytics
This library includes a very lightweight analytics "ping" when the library is loaded, however you can opt out of this data collection by setting the environment variable `DO_NOT_TRACK=true` before executing any `unstructured` code. To learn more about how we collect and use this data, please read our [Privacy Policy](https://unstructured.io/privacy-policy).
| text/markdown | null | Unstructured Technologies <devops@unstructuredai.io> | null | null | null | CV, HTML, NLP, PDF, XML, parsing, preprocessing | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Py... | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"beautifulsoup4<5.0.0,>=4.14.3",
"charset-normalizer<4.0.0,>=3.4.4",
"emoji<3.0.0,>=2.15.0",
"filetype<2.0.0,>=1.2.0",
"html5lib<2.0.0,>=1.1",
"langdetect<2.0.0,>=1.0.9",
"lxml<7.0.0,>=5.0.0",
"nltk<4.0.0,>=3.9.2",
"numba<1.0.0,>=0.60.0",
"numpy<3.0.0,>=1.26.0",
"psutil<8.0.0,>=7.2.2",
"python... | [] | [] | [] | [
"Homepage, https://github.com/Unstructured-IO/unstructured"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T00:34:58.236415 | unstructured-0.20.8.tar.gz | 1,499,644 | 63/4c/e85e2c6311fe94ec962c7c0192b3c644bf6a319117acdc8169db2dfec803/unstructured-0.20.8.tar.gz | source | sdist | null | false | bc89ed5ff7d0e22bada6485d8fee5f94 | 520b8aa035d5e3600b0e9c3462884de543350433678adf0ef0284fbcc1d275df | 634ce85e2c6311fe94ec962c7c0192b3c644bf6a319117acdc8169db2dfec803 | Apache-2.0 | [
"LICENSE.md"
] | 26,783 |
2.4 | sop-mcp | 0.7.1 | An MCP server for guiding users through Standard Operating Procedures | # sop-mcp
[](https://pypi.org/project/sop-mcp/)
[](https://pypi.org/project/sop-mcp/)
[](https://github.com/ValueArchitectsAI/sop-mcp/blob/main/LICENSE)
An MCP server that guides AI agents through Standard Operating Procedures (SOPs) step by step, using RFC 2119 requirement levels. Instead of dumping an entire procedure on the agent (which it will summarize or skip), sop-mcp feeds one step at a time and forces actual execution.
## Quick Install
| Kiro | Cursor | VS Code |
|:---:|:---:|:---:|
| [](https://kiro.dev/launch/mcp/add?name=sop-mcp&config=%7B%22command%22%3A%20%22uvx%22%2C%20%22args%22%3A%20%5B%22sop-mcp%22%5D%7D) | [](https://cursor.com/en/install-mcp?name=sop-mcp&config=eyJjb21tYW5kIjogInV2eCIsICJhcmdzIjogWyJzb3AtbWNwIl19) | [](https://vscode.dev/redirect/mcp/install?name=sop-mcp&config=%7B%22type%22%3A%20%22stdio%22%2C%20%22command%22%3A%20%22uvx%22%2C%20%22args%22%3A%20%5B%22sop-mcp%22%5D%7D) |
Or add manually to any MCP client:
```json
{
"mcpServers": {
"sop-mcp": {
"command": "uvx",
"args": ["sop-mcp"]
}
}
}
```
## Why?
Agents tend to summarize or skip steps when given a full procedure. Feeding steps one at a time forces actual execution. Each SOP becomes a dedicated MCP tool (`run_sop`) that the agent discovers naturally in its tool list.
## How It Works
```
Agent calls run_sop(sop_name="sop_creation_guide") → gets step 1 + instruction to execute
Agent executes step 1 actions
Agent calls run_sop(sop_name="sop_creation_guide", current_step=1, step_output="...") → gets step 2
... repeats ...
Agent calls run_sop(sop_name="sop_creation_guide", current_step=8, step_output="...") → completion signal
```
Every response includes an `instruction` field that tells the agent to *act*, not just read.
## Tools
| Tool | Description |
|------|-------------|
| `publish_sop` | Publish a new or updated SOP with automatic semver bumping |
| `submit_sop_feedback` | Submit improvement feedback for a specific SOP |
| `run_sop` | Step-by-step execution of any SOP, with `sop_name` parameter |
## Discovering SOPs
SOPs are exposed as MCP resources, so agents can list and read them before starting execution.
| Method | URI | Description |
|--------|-----|-------------|
| `list_resources` | — | Returns all available SOPs with name, version, step count, and overview |
| `read_resource` | `sop://{sop_name}` | Read the full latest SOP markdown |
| `read_resource` | `sop://{sop_name}?version=1.0` | Read a specific version |
For clients that don't support the MCP resource protocol, resources are also exposed as tools automatically via `ResourcesAsTools`.
This lets agents load the full SOP content upfront if needed — for example, to understand scope before committing to a multi-step run.
## Creating SOPs
The built-in `sop_creation_guide` SOP walks agents through the full authoring process (call `run_sop` with `sop_name="sop_creation_guide"`):
1. **Prepare** — gather process info, identify stakeholders, collect existing docs
2. **Structure** — define metadata, scope, parameters, and document skeleton
3. **Document** — write detailed step-by-step instructions with decision points
4. **Apply RFC 2119** — classify each action as MUST, SHOULD, or MAY
5. **Enrich** — add troubleshooting, best practices, examples, and references
6. **Review** — validate with SMEs and end users, run through the checklist
7. **Finalize** — incorporate feedback, publish via `publish_sop`, notify stakeholders
8. **Maintain** — schedule reviews, collect feedback, keep the SOP current
After publishing, restart the server to register the new SOP.
## The `step_output` Field
The `run_sop` tool accepts an optional `step_output` string parameter (required when `current_step >= 1`). This is where the LLM submits its concrete work product for the completed step — specific values, names, dates, and details rather than summaries.
The server accepts `step_output` but does not store or process it. The field exists purely to force the LLM to produce detailed output that lands in the conversation's tool-call history. When all steps are complete, the LLM can reference its own `step_output` submissions to compile a comprehensive final document. State lives entirely in the LLM's conversation context, keeping the server stateless.
### Request/response flow
```
# Step 1: Initial call — no step_output needed
Agent calls run_sop(sop_name="my_sop")
→ Response: Step 1 instruction
# Step 2: Agent submits step 1 output
Agent calls run_sop(
sop_name="my_sop",
current_step=1,
step_output="Registration: VALID, Number: BRN-2024-0738291"
)
→ Response: Step 2 instruction
# Step 3: Agent submits step 2 output
Agent calls run_sop(
sop_name="my_sop",
current_step=2,
step_output="Insurance: Hartford Financial, Policy: HFS-GL-4829173"
)
→ Response: Step 3 instruction
# Completion: Agent submits final step output
Agent calls run_sop(
sop_name="my_sop",
current_step=3,
step_output="Compliance: All checks passed, Certificate: CC-2024-9182"
)
→ Response: Completion signal
```
At completion, the LLM uses its conversation history of `step_output` submissions to compile the final document with all concrete values.
## Storage Configuration
By default, SOPs are stored in the bundled `src/sops/` directory (ephemeral — data may be lost if the package cache refreshes).
To persist SOPs, set `SOP_STORAGE_DIR`:
```json
{
"mcpServers": {
"sop-mcp": {
"command": "uvx",
"args": ["sop-mcp"],
"env": {
"SOP_STORAGE_DIR": "/path/to/my/sops"
}
}
}
}
```
Bundled SOPs are automatically seeded into the custom directory on first run.
## Writing an SOP
Every SOP markdown file must include:
- A level-1 heading (`# Title`)
- A `**Document ID**:` field (lowercase, underscores, min 3 words)
- A `**Version:**` field (semver)
- An `## Overview` section
- One or more `### Step N:` sections
Use RFC 2119 keywords (MUST, SHOULD, MAY) to define requirement levels.
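Putting those requirements together, a minimal valid SOP file might look like this (the content is illustrative):

```markdown
# Vendor Onboarding Check

**Document ID**: vendor_onboarding_check
**Version:** 1.0.0

## Overview
Verifies a new vendor's registration and insurance before onboarding.

### Step 1: Verify registration
You MUST look up the vendor's business registration number and record it.

### Step 2: Confirm insurance
You SHOULD confirm an active general-liability policy and MAY request a certificate.
```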
## Publishing
Call `publish_sop` with the full markdown content and a `change_type`:
| Type | Effect | Example |
|------|--------|---------|
| `major` | Breaking change | 1.2.0 → 2.0.0 |
| `minor` | New feature | 1.2.0 → 1.3.0 |
| `patch` | Bugfix | 1.2.0 → 1.2.1 |
New SOPs always start at v1.0.0.
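The bumping rule is plain semver and can be sketched directly (a simplification of what `publish_sop` does; the real server also handles first-time publishes at v1.0.0):

```python
def bump_version(version: str, change_type: str) -> str:
    """Apply a semver bump of the given change_type to a MAJOR.MINOR.PATCH version."""
    major, minor, patch = (int(part) for part in version.split("."))
    if change_type == "major":
        return f"{major + 1}.0.0"
    if change_type == "minor":
        return f"{major}.{minor + 1}.0"
    if change_type == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change_type: {change_type}")


print(bump_version("1.2.0", "major"))  # → 2.0.0
```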
## SOP Naming Convention
| Element | Format | Example |
|---------|--------|---------|
| Folder name | lowercase, underscores | `sop_creation_guide` |
| Document ID | same as folder name | `sop_creation_guide` |
| Tool name | `run_sop` with `sop_name=` folder name | `run_sop(sop_name="sop_creation_guide")` |
| Version file | `v` + semver | `v1.0.0.md` |
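Note that version filenames do not sort correctly as plain strings (`v1.10.0.md` comes before `v1.2.1.md` numerically but after it alphabetically), so resolving the latest version means comparing parsed tuples. A hypothetical sketch, not the server's actual resolution logic:

```python
# Hypothetical sketch: pick the latest version file from an SOP folder.
import re

def latest_version(filenames: list[str]) -> str:
    pattern = re.compile(r"^v(\d+)\.(\d+)\.(\d+)\.md$")
    # Parse each "vX.Y.Z.md" into an integer tuple so comparison is numeric.
    versions = [
        tuple(int(g) for g in m.groups())
        for name in filenames
        if (m := pattern.match(name))
    ]
    major, minor, patch = max(versions)
    return f"v{major}.{minor}.{patch}.md"
```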
## Development
Requires Python 3.10+ and [uv](https://docs.astral.sh/uv/).
```bash
uv sync # install dependencies
uv run pytest # run tests
uv run sop-mcp # start server locally
```
## Architecture
```mermaid
sequenceDiagram
participant Agent as AI Agent<br/>(Claude/Kiro)
participant Server as sop-mcp<br/>Server
participant Storage as Storage Backend<br/>(configurable)
Note over Agent,Storage: Initialize
Agent->>Server: run_sop(sop_name="sop_creation_guide")
Server->>Storage: Load latest version
Storage-->>Server: SOP content
Server-->>Agent: Step 1 + overview + instruction
Note over Agent,Storage: Execute Steps
loop For each step
Agent->>Agent: Execute step actions
Agent->>Server: run_sop(sop_name="sop_creation_guide", current_step=N, step_output="...")
Server-->>Agent: Step N+1 + instruction
end
Note over Agent,Storage: Complete
Agent->>Server: run_sop(sop_name="sop_creation_guide", current_step=last, step_output="...")
Server-->>Agent: completion signal
```
## License
MIT
| text/markdown | null | ValueArchitectsAI <info@valuearchitects.ai> | null | null | MIT | ai-agent, mcp, sop, standard-operating-procedure | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"fastmcp>=3.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/ValueArchitectsAI/sop-mcp",
"Repository, https://github.com/ValueArchitectsAI/sop-mcp",
"Issues, https://github.com/ValueArchitectsAI/sop-mcp/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T00:34:03.904864 | sop_mcp-0.7.1.tar.gz | 139,795 | ab/56/791a8096bd7105c1a2909c5c175ad9ad6e4445d5294a0948ed341652c4a4/sop_mcp-0.7.1.tar.gz | source | sdist | null | false | e59b42c46ad89a011884299a611f9d27 | 950de29ac4dc4779734d3ecbf2011dc614452978c6f98bc997ad9f1f94634ffe | ab56791a8096bd7105c1a2909c5c175ad9ad6e4445d5294a0948ed341652c4a4 | null | [
"LICENSE"
] | 246 |
2.4 | definable | 0.3.0 | Production-grade AI agent framework with RAG, memory, tools, and multi-model support | <div align="center">
<h1>Definable</h1>
<p><strong>Build LLM agents that work in production.</strong></p>
<p>
<a href="https://pypi.org/project/definable/"><img src="https://img.shields.io/pypi/v/definable?color=%2334D058&label=pypi" alt="PyPI"></a>
<a href="https://pypi.org/project/definable/"><img src="https://img.shields.io/pypi/pyversions/definable?color=%2334D058" alt="Python"></a>
<a href="https://github.com/definableai/definable.ai/blob/main/LICENSE"><img src="https://img.shields.io/github/license/definableai/definable.ai?color=%2334D058" alt="License"></a>
<a href="https://pypi.org/project/definable/"><img src="https://img.shields.io/pypi/dm/definable?color=%2334D058&label=downloads" alt="Downloads"></a>
<a href="https://github.com/definableai/definable.ai/actions/workflows/ci.yml"><img src="https://img.shields.io/github/actions/workflow/status/definableai/definable.ai/ci.yml?label=CI" alt="CI"></a>
</p>
<p>
<a href="https://docs.definable.ai">Documentation</a> ·
<a href="https://github.com/definableai/definable.ai/tree/main/definable/examples">Examples</a> ·
<a href="https://pypi.org/project/definable/">PyPI</a>
</p>
</div>
<br>
A Python framework for building agent applications with tools, RAG, persistent memory, guardrails, skills, file readers, browser automation, messaging platform integrations, and the Model Context Protocol. Switch providers without rewriting agent code.
---
## Install
```bash
pip install definable
```
Or with [uv](https://github.com/astral-sh/uv):
```bash
uv pip install definable
```
## Quick Start
```python
from definable.agent import Agent
from definable.model.openai import OpenAIChat
agent = Agent(
model=OpenAIChat(id="gpt-4o-mini"),
instructions="You are a helpful assistant.",
)
output = agent.run("What is the capital of Japan?")
print(output.content) # The capital of Japan is Tokyo.
```
Or use **string model shorthand** — no explicit import needed:
```python
from definable.agent import Agent
agent = Agent(model="gpt-4o-mini", instructions="You are a helpful assistant.")
output = agent.run("What is the capital of Japan?")
```
## Add Tools
```python
from definable.tool.decorator import tool
@tool
def get_weather(city: str) -> str:
"""Get current weather for a city."""
return f"Sunny, 72°F in {city}"
agent = Agent(
model="gpt-4o-mini",
tools=[get_weather],
instructions="Help users check the weather.",
)
output = agent.run("What's the weather in Tokyo?")
```
The agent calls tools automatically. No manual function routing.
## Structured Output
```python
from pydantic import BaseModel
class WeatherReport(BaseModel):
city: str
temperature: float
conditions: str
agent = Agent(model="gpt-4o-mini", tools=[get_weather])
output = agent.run("Weather in Tokyo?", output_schema=WeatherReport)
print(output.content) # JSON string matching WeatherReport schema
```
Pass any Pydantic model to `output_schema` and get validated, typed results back.
## Streaming
```python
agent = Agent(model="gpt-4o-mini", instructions="You are a helpful assistant.")
for event in agent.run_stream("Write a haiku about Python."):
if event.content:
print(event.content, end="", flush=True)
```
`run_stream()` yields events as they arrive — content chunks, tool calls, and completion signals.
## Multi-Turn Conversations
```python
output1 = agent.run("My name is Alice.")
output2 = agent.run("What's my name?", messages=output1.messages)
print(output2.content) # "Your name is Alice."
```
Pass `messages` from a previous run to continue the conversation.
## Persistent Memory
```python
from definable.memory import Memory, SQLiteStore
agent = Agent(
model="gpt-4o-mini",
memory=Memory(store=SQLiteStore("memory.db")),
instructions="You are a personal assistant.",
)
await agent.arun("My name is Alice and I prefer dark mode.", user_id="alice")
# Later, even in a new session...
await agent.arun("What's my name?", user_id="alice") # Recalls "Alice"
```
Memory is LLM-driven: the model decides what to remember via tool calls (add/update/delete). For quick testing, use `memory=True` for an in-memory store. Three backends available: SQLite, PostgreSQL, and in-memory.
## Knowledge Base (RAG)
```python
from definable.knowledge import Knowledge, Document
from definable.embedder import OpenAIEmbedder
from definable.vectordb import InMemoryVectorDB
kb = Knowledge(
vector_db=InMemoryVectorDB(),
embedder=OpenAIEmbedder(),
top_k=3,
)
kb.add(Document(content="Company vacation policy: 20 days PTO per year."))
agent = Agent(
model="gpt-4o-mini",
instructions="You are an HR assistant.",
knowledge=kb,
)
output = agent.run("How many vacation days do I get?")
```
The agent retrieves relevant documents before responding. Supports embedders (OpenAI, Voyage), vector DBs (in-memory, PostgreSQL, Qdrant, ChromaDB, MongoDB, Redis, Pinecone), rerankers (Cohere), and chunkers.
> **Note:** `Agent(knowledge=True)` raises `ValueError` — unlike `memory=True`, knowledge requires explicit configuration with a vector DB.
## Guardrails
```python
from definable.agent.guardrail import Guardrails, max_tokens, pii_filter, tool_blocklist
agent = Agent(
model="gpt-4o-mini",
instructions="You are a support agent.",
tools=[get_weather],
guardrails=Guardrails(
input=[max_tokens(500)],
output=[pii_filter()],
tool=[tool_blocklist({"dangerous_tool"})],
),
)
output = agent.run("What's the weather?")
```
Guardrails check, modify, or block content at input, output, and tool-call checkpoints. Built-ins include token limits, PII redaction, topic blocking, and regex filters. Compose rules with `ALL`, `ANY`, `NOT`, and `when()`.
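The combinators behave like boolean predicates over content. A conceptual sketch of the pattern in generic Python (not definable's actual classes or signatures):

```python
# Conceptual sketch of ALL/ANY/NOT-style guardrail combinators.
# Generic illustration of the pattern, not definable's real API.
from typing import Callable

Rule = Callable[[str], bool]  # returns True when the content passes

def ALL(*rules: Rule) -> Rule:
    return lambda text: all(rule(text) for rule in rules)

def ANY(*rules: Rule) -> Rule:
    return lambda text: any(rule(text) for rule in rules)

def NOT(rule: Rule) -> Rule:
    return lambda text: not rule(text)

def max_len(n: int) -> Rule:
    return lambda text: len(text) <= n

def contains(word: str) -> Rule:
    return lambda text: word in text

# Pass content only if it is short AND does not mention "password".
policy = ALL(max_len(100), NOT(contains("password")))
```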
## Skills
```python
from definable.skill import Calculator, WebSearch, DateTime
agent = Agent(
model="gpt-4o-mini",
skills=[Calculator(), WebSearch(), DateTime()],
instructions="You are a helpful assistant.",
)
output = agent.run("What is 15% of 230?")
```
Skills bundle domain expertise (instructions) with tools. Built-in skills include Calculator, WebSearch, DateTime, HTTPRequests, JSONOperations, TextProcessing, Shell, FileOperations, and MacOS. Create custom skills by subclassing `Skill`.
## MCP
```python
from definable.mcp import MCPConfig, MCPServerConfig, MCPToolkit
config = MCPConfig(
servers=[
MCPServerConfig(
name="filesystem",
command="npx",
args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
)
]
)
async with MCPToolkit(config=config) as toolkit:
agent = Agent(model="gpt-4o-mini", toolkits=[toolkit])
await agent.arun("List files in /tmp")
```
Connect to any MCP server. Use the same tools as Claude Desktop.
## File Readers
```python
from definable.media import File
agent = Agent(
model="gpt-4o-mini",
readers=True,
instructions="Summarize the uploaded document.",
)
output = agent.run("Summarize this.", files=[File(filepath="report.pdf")])
```
Pass `readers=True` to enable automatic parsing. Supports PDF, DOCX, PPTX, XLSX, ODS, RTF, HTML, images, and audio. AI-powered OCR available via Mistral, OpenAI, Anthropic, and Google providers.
## Deploy It
```python
from definable.agent.trigger import Webhook, Cron
from definable.agent.auth import APIKeyAuth
agent = Agent(model="gpt-4o-mini", instructions="You are a support agent.")
agent.on(Webhook(path="/support", method="POST"))
agent.on(Cron(schedule="0 9 * * *"))
agent.auth = APIKeyAuth(keys={"sk-my-secret-key"})
agent.serve(host="0.0.0.0", port=8000, dev=True)
```
`agent.serve()` starts an HTTP server with registered webhooks, cron triggers, and interfaces in a single process. Add `dev=True` for hot-reload during development.
## Connect to Platforms
```python
from definable.agent.interface.telegram import TelegramInterface, TelegramConfig
telegram = TelegramInterface(
config=TelegramConfig(bot_token="BOT_TOKEN"),
)
agent = Agent(model="gpt-4o-mini", instructions="You are a Telegram bot.")
agent.serve(telegram)
```
One agent, multiple platforms. Discord and Signal interfaces also available.
## Thinking (Reasoning Layer)
```python
from definable.agent.reasoning import Thinking
agent = Agent(
model="gpt-4o-mini",
thinking=Thinking(), # or thinking=True for defaults
instructions="Think step by step.",
)
output = await agent.arun("What is 127 * 43?")
```
The thinking layer adds chain-of-thought reasoning before the final response.
## Tracing
```python
from definable.agent.tracing import Tracing, JSONLExporter
agent = Agent(
model="gpt-4o-mini",
tracing=Tracing(exporters=[JSONLExporter("./traces")]),
instructions="You are a helpful assistant.",
)
output = agent.run("Hello!")
# Traces saved to ./traces/{session_id}.jsonl
```
Or use `tracing=True` for default console tracing.
## Replay & Compare
```python
from definable.model.openai import OpenAIChat
# Inspect a past run
output = agent.run("Explain quantum computing.")
replay = agent.replay(run_output=output)
print(replay.steps) # Each model call and tool invocation
print(replay.tokens) # Token usage breakdown
# Re-run with a different model and compare
new_output = agent.replay(run_output=output, model=OpenAIChat(id="gpt-4o"))
comparison = agent.compare(output, new_output)
print(comparison.cost_diff) # Cost difference between runs
print(comparison.token_diff) # Token usage difference
```
Replay lets you inspect past runs, re-execute them with different models or instructions, and compare results side by side.
## Testing
```python
from definable.agent import Agent
from definable.agent.testing import MockModel
agent = Agent(
model=MockModel(responses=["The capital of France is Paris."]),
instructions="You are a geography expert.",
)
output = agent.run("What is the capital of France?")
assert "Paris" in output.content
```
`MockModel` returns canned responses — no API keys needed. Use it in unit tests to verify agent behavior deterministically.
---
## Features
| Category | Details |
|---|---|
| **Models** | OpenAI, DeepSeek, Moonshot, xAI, any OpenAI-compatible provider. String shorthand: `Agent(model="gpt-4o")` resolves automatically |
| **Agents** | Multi-turn conversations, structured output, configurable retries, max iterations |
| **Agentic Loop** | Parallel tool calls via `asyncio.gather`, HITL pause/resume, cooperative cancellation, EventBus |
| **Tools** | `@tool` decorator with automatic parameter extraction from type hints and docstrings |
| **Toolkits** | Composable tool groups, `KnowledgeToolkit` for explicit RAG search |
| **Skills** | Domain expertise + tools in one package; 9 built-in skills (incl. MacOS), custom `Skill` subclass |
| **Knowledge / RAG** | Embedders, vector DBs, rerankers (Cohere), chunkers, automatic retrieval |
| **Memory** | LLM-driven memory with tool-based extraction (add/update/delete) |
| **Memory Stores** | SQLite, PostgreSQL, in-memory |
| **Readers** | PDF, DOCX, PPTX, XLSX, ODS, RTF, HTML, images, audio |
| **Reader Providers** | Mistral OCR, OpenAI, Anthropic, Google (AI-powered document parsing) |
| **Guardrails** | Input/output/tool checkpoints, PII redaction, token limits, topic blocking, regex filters |
| **Guardrails Composition** | `ALL`, `ANY`, `NOT`, `when()` combinators for complex policy rules |
| **Interfaces** | Telegram, Discord, Signal, Desktop, session management, identity resolution |
| **Browser Toolkit** | 50 browser automation tools via SeleniumBase CDP — CSS selectors, screenshots, cookie/storage management |
| **Claude Code Agent** | Zero-dep subprocess wrapper for Claude Code CLI with full Definable ecosystem integration |
| **Runtime** | `agent.serve()`, webhooks, cron triggers, event triggers, `dev=True` hot-reload |
| **Auth** | `APIKeyAuth`, `JWTAuth`, `AllowlistAuth`, `CompositeAuth`, pluggable `AuthProvider` protocol |
| **Streaming** | Real-time response and tool call streaming |
| **Replay** | Inspect past runs, re-execute with overrides, `agent.compare()` for side-by-side diffs |
| **Middleware** | Request/response transforms via `agent.use()`, logging, retry, metrics |
| **Tracing** | JSONL trace export for debugging and analysis |
| **Thinking** | Chain-of-thought reasoning layer with configurable triggers |
| **Compression** | Automatic context window management for long conversations |
| **Testing** | `MockModel`, `AgentTestCase`, `create_test_agent` utilities |
| **MCP** | Model Context Protocol client for external tool servers |
| **Types** | Full Pydantic models, `py.typed` marker, mypy verified |
## Supported Models
```python
from definable.model.openai import OpenAIChat # GPT-4o, GPT-4o-mini, o1, o3, ...
from definable.model.deepseek import DeepSeekChat # deepseek-chat, deepseek-reasoner
from definable.model.moonshot import MoonshotChat # moonshot-v1-8k, moonshot-v1-128k
from definable.model.xai import xAI # grok-3, grok-2-latest
# Or use string shorthand — no model import needed:
agent = Agent(model="gpt-4o-mini")
```
Any OpenAI-compatible API works with `OpenAIChat(base_url=..., api_key=...)`.
## Optional Extras
Install only what you need:
```bash
pip install definable[readers] # PDF, DOCX, PPTX, XLSX, ODS, RTF parsers
pip install definable[serve] # FastAPI + Uvicorn for agent.serve()
pip install definable[cron] # Cron trigger support
pip install definable[jwt] # JWT authentication
pip install definable[runtime] # serve + cron combined
pip install definable[discord] # Discord interface
pip install definable[browser] # Browser automation (SeleniumBase CDP)
pip install definable[desktop] # macOS Desktop Bridge
pip install definable[postgres-memory] # PostgreSQL memory store
pip install definable[research] # Deep research (DuckDuckGo + curl-cffi)
pip install definable[mistral-ocr] # Mistral AI document parsing
pip install definable[mem0-memory] # Mem0 hosted memory store
```
**Vector DB backends:**
```bash
pip install definable[pgvector] # PostgreSQL + pgvector
pip install definable[qdrant] # Qdrant
pip install definable[chroma] # ChromaDB
pip install definable[mongodb] # MongoDB
pip install definable[redis] # Redis
pip install definable[pinecone] # Pinecone
```
## Documentation
Full documentation: [docs.definable.ai](https://docs.definable.ai)
## Project Structure
```
definable/definable/
├── agent/ # Agent orchestration, config, middleware, loop
│ ├── auth/ # APIKeyAuth, JWTAuth, AllowlistAuth, CompositeAuth
│ ├── compression/ # Context window compression
│ ├── guardrail/ # Input/output/tool policy, PII, token limits, composable rules
│ ├── interface/ # Telegram, Discord, Signal, Desktop integrations
│ ├── reasoning/ # Thinking layer (chain-of-thought)
│ ├── replay/ # Run inspection, re-execution, comparison
│ ├── research/ # Deep research: multi-wave web search, CKU, gap analysis
│ ├── run/ # RunOutput, RunEvent types
│ ├── runtime/ # AgentRuntime, AgentServer, dev mode
│ ├── tracing/ # JSONL trace export
│ └── trigger/ # Webhook, Cron, EventTrigger
├── browser/ # BrowserToolkit — 50 tools via SeleniumBase CDP
├── claude_code/ # ClaudeCodeAgent — subprocess wrapper for Claude Code CLI
├── knowledge/ # RAG: embedders, vector DBs, rerankers, chunkers
├── mcp/ # Model Context Protocol client
├── media.py # Image, Audio, Video, File types
├── memory/ # LLM-driven memory + 3 store backends
├── model/ # OpenAI, DeepSeek, Moonshot, xAI providers
├── reader/ # File parsers + AI reader providers
├── skill/ # Built-in + custom skills, skill registry
├── tool/ # @tool decorator, Function wrappers
├── toolkit/ # Toolkit base class
├── vectordb/ # Vector database interfaces (7 backends)
└── utils/ # Logging, supervisor, shared utilities
```
## Contributing
Contributions welcome! To get started:
1. Fork the repo and clone it locally
2. Install for development: `pip install -e .`
3. Make your changes — follow existing code patterns (2-space indentation, 150 char lines)
4. Add tests in `definable/tests/` for new features
5. Run `ruff check` and `ruff format` for linting
6. Run `mypy` for type checking
7. Open a pull request
See `definable/examples/` for usage patterns.
## License
Apache 2.0 — see [LICENSE](LICENSE) for details.
| text/markdown | null | null | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Pyt... | [] | null | null | >=3.12 | [] | [] | [] | [
"aiosqlite>=0.20.0",
"docstring-parser>=0.17.0",
"httpx>=0.28.1",
"mypy>=1.19.1",
"openai>=2.15.0",
"packaging>=23.0",
"pydantic>=2.12.5",
"rich>=14.2.0",
"ruff>=0.15.0",
"tiktoken>=0.12.0",
"voyageai>=0.3.7",
"discord.py>=2.3.0; extra == \"discord\"",
"discord.py>=2.3.0; extra == \"interfac... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T00:33:51.150064 | definable-0.3.0.tar.gz | 620,562 | b0/3a/5855bcd5d498a7225aaefc85f14770ef268e4e5d682caa564b44cf749bed/definable-0.3.0.tar.gz | source | sdist | null | false | 1d4f40606a2cb345537d9938bb9a31eb | d0de3ce13d78e9d80ee9ce1685a30193ee52fe4dbbadd96441e225599880ad1e | b03a5855bcd5d498a7225aaefc85f14770ef268e4e5d682caa564b44cf749bed | Apache-2.0 | [
"LICENSE"
] | 266 |
2.4 | office-janitor | 0.0.1 | Utility for detecting, planning, and scrubbing Microsoft Office installations. | # Office Janitor
[](https://github.com/supermarsx/office-janitor/actions/workflows/ci.yml)
[](https://github.com/supermarsx/office-janitor/stargazers)
[](https://github.com/supermarsx/office-janitor/network/members)
[](https://github.com/supermarsx/office-janitor/watchers)
[](https://github.com/supermarsx/office-janitor/releases)
[](https://pepy.tech/project/office-janitor)
[](https://github.com/supermarsx/office-janitor/issues)
[](license.md)
<img width="1017" height="902" alt="image" src="https://github.com/user-attachments/assets/37748fbb-f3a6-446b-81ec-1c2780e7137b" />
<br>
<br>
**Office Janitor** is a comprehensive, stdlib-only Python utility for managing Microsoft Office installations on Windows. It provides three core capabilities:
- **🔧 Install** – Deploy Office using ODT with presets, custom configurations, and live progress monitoring
- **🔄 Repair** – Quick and full repair for Click-to-Run installations with bundled OEM configurations
- **🧹 Scrub** – Deep uninstall and cleanup of MSI and Click-to-Run Office across all versions (2003-2024, Microsoft 365)
The tool follows the architecture defined in [`spec.md`](spec.md) and can be packaged into a single-file Windows executable with PyInstaller.
---
## Table of Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [ODT Installation](#odt-installation)
- [Installation Presets](#installation-presets)
- [Quick Install Aliases](#quick-install-aliases)
- [Custom Installations](#custom-installations)
- [Progress Monitoring](#progress-monitoring)
- [Office Repair](#office-repair)
- [Quick Repair](#quick-repair)
- [Full Online Repair](#full-online-repair)
- [OEM Configurations](#oem-configurations)
- [Office Removal & Scrubbing](#office-removal--scrubbing)
- [Automatic Removal](#automatic-removal)
- [Targeted Removal](#targeted-removal)
- [Scrub Levels](#scrub-levels)
- [License Management](#license-management)
- [CLI Reference](#cli-reference)
- [Configuration Files](#configuration-files)
- [Safety Guidance](#safety-guidance)
- [Logging & Diagnostics](#logging--diagnostics)
- [Contributing & Testing](#contributing--testing)
---
## Installation
### Prerequisites
- Windows 7 or later with administrator privileges
- Python 3.9+ (for running from source)
- Optional: PyInstaller for building standalone executables
### Running from Source
```bash
# Clone and enter the repository
git clone https://github.com/supermarsx/office-janitor.git
cd office-janitor
# Create virtual environment (recommended)
python -m venv .venv
.venv\Scripts\activate
# Install in editable mode
python -m pip install -e .
# Run the tool
python office_janitor.py --help
```
### Building Standalone Executable
```bash
pyinstaller --onefile --uac-admin --name office-janitor office_janitor.py --paths src
```
The resulting `dist/office-janitor.exe` is a single-file admin-elevated executable that includes the embedded ODT setup.exe.
---
## Quick Start
```bash
# Install Office LTSC 2024 with Visio and Project (no bloatware)
office-janitor install --goobler
# Repair Office Click-to-Run (quick repair)
office-janitor repair --quick
# Remove all Office installations (preview first!)
office-janitor remove --dry-run
office-janitor remove --backup C:\Backups
# Diagnose Office installations without making changes
office-janitor diagnose --plan report.json
# Interactive mode - launches menu
office-janitor
```
---
## ODT Installation
Office Janitor includes an embedded Office Deployment Tool (setup.exe) and can install any Office product directly with live progress monitoring.
### Installation Presets
Use presets for one-command installations:
```bash
# Install Office LTSC 2024 Professional Plus (64-bit)
office-janitor install --preset office2024-x64
# Install full suite: Office 2024 + Visio + Project
office-janitor install --preset ltsc2024-full-x64
# Install clean version without OneDrive/Skype bloatware
office-janitor install --preset ltsc2024-full-x64-clean
# Add multiple languages
office-janitor install --preset ltsc2024-full-x64 \
--language en-us --language de-de --language es-mx
# Preview without installing
office-janitor install --preset office2024-x64 --dry-run
```
#### Microsoft 365 Presets
| Preset | Products | Description |
|--------|----------|-------------|
| `365-proplus-x64` | O365ProPlusRetail | Microsoft 365 Apps for enterprise (64-bit) |
| `365-proplus-x86` | O365ProPlusRetail | Microsoft 365 Apps for enterprise (32-bit) |
| `365-business-x64` | O365BusinessRetail | Microsoft 365 Apps for business |
| `365-proplus-visio-project` | O365ProPlusRetail + Visio + Project | Full M365 suite |
| `365-shared-computer` | O365ProPlusRetail | Shared Computer Licensing enabled |
| `365-proplus-x64-clean` | O365ProPlusRetail | **No OneDrive/Skype** |
| `365-proplus-visio-project-clean` | Full suite | **No OneDrive/Skype** |
#### Office LTSC 2024 Presets
| Preset | Products | Description |
|--------|----------|-------------|
| `office2024-x64` | ProPlus2024Volume | Office LTSC 2024 Professional Plus |
| `office2024-x86` | ProPlus2024Volume | 32-bit version |
| `office2024-standard-x64` | Standard2024Volume | Standard edition |
| `ltsc2024-full-x64` | ProPlus + Visio + Project | **Complete 2024 suite** |
| `ltsc2024-full-x86` | ProPlus + Visio + Project | 32-bit complete suite |
| `ltsc2024-x64-clean` | ProPlus2024Volume | **No OneDrive/Skype** |
| `ltsc2024-full-x64-clean` | Full suite | **No OneDrive/Skype** ⭐ |
| `ltsc2024-full-x86-clean` | Full suite (32-bit) | **No bloatware** |
#### Office LTSC 2021 & 2019 Presets
| Preset | Products | Description |
|--------|----------|-------------|
| `office2021-x64` | ProPlus2021Volume | Office LTSC 2021 Professional Plus |
| `office2021-standard-x64` | Standard2021Volume | Standard edition |
| `ltsc2021-full-x64` | ProPlus + Visio + Project | Complete 2021 suite |
| `office2019-x64` | ProPlus2019Volume | Office 2019 Professional Plus |
| `office2019-x86` | ProPlus2019Volume | 32-bit version |
#### Standalone Products
| Preset | Product | Description |
|--------|---------|-------------|
| `visio-pro-x64` | VisioPro2024Volume | Visio Professional 2024 |
| `project-pro-x64` | ProjectPro2024Volume | Project Professional 2024 |
### Quick Install Aliases
Author-defined shortcuts for common installations:
```bash
# Goobler: Full Office 2024 suite, no bloatware, Portuguese + English
office-janitor install --goobler
# Pupa: ProPlus only, no bloatware, Portuguese + English
office-janitor install --pupa
# Both support dry-run
office-janitor install --goobler --dry-run
```
| Alias | Preset | Products | Languages |
|-------|--------|----------|-----------|
| `--goobler` | `ltsc2024-full-x64-clean` | ProPlus 2024 + Visio + Project | pt-pt, en-us |
| `--pupa` | `ltsc2024-x64-clean` | ProPlus 2024 only | pt-pt, en-us |
### Custom Installations
Build custom configurations when presets don't fit:
```bash
# Custom product selection
office-janitor install \
--product ProPlus2024Volume \
--product VisioPro2024Volume \
--channel PerpetualVL2024 \
--language en-us \
--exclude-app OneDrive \
--exclude-app Lync
# Generate XML without installing
office-janitor odt --build --preset office2024-x64 --output install.xml
# Download for offline installation
office-janitor odt --download "D:\OfficeSource" \
--preset 365-proplus-x64 \
--language en-us --language es-es
# Generate removal XML
office-janitor odt --removal --remove-msi --output remove.xml
```
### Progress Monitoring
During installation, Office Janitor provides real-time progress:
```
⠋ ODT: ProPlus2024Volume, VisioPro2024Volume 45% Installing Office... [1.2GB, 3421 files, 892 keys, CPU 12%, RAM 245MB] (5m 34s)
```
The spinner shows:
- **Products** being installed
- **Progress percentage** from ODT logs
- **Current phase** (downloading, installing, configuring)
- **Disk usage** (Office installation size)
- **File count** in Office directories
- **Registry keys** created
- **CPU/RAM** usage of installer processes
- **Elapsed time**
If setup.exe exits but ClickToRun processes continue (common behavior), monitoring automatically switches to track those processes until installation completes.
**Ctrl+C** during installation will:
1. Terminate the ODT setup process
2. Kill all ClickToRun-related processes (OfficeClickToRun.exe, OfficeC2RClient.exe, etc.)
3. Display what was terminated
---
## Office Repair
Repair Click-to-Run Office installations without reinstalling.
### Quick Repair
Fast local repair using cached installation files:
```bash
# Quick repair (runs silently)
office-janitor repair --quick
# Show repair UI
office-janitor repair --quick --visible
# Preview without executing
office-janitor repair --quick --dry-run
```
Quick repair:
- Uses locally cached files (fast, no download)
- Fixes corrupted files and settings
- Preserves user data and customizations
- Completes in 5-15 minutes
### Full Online Repair
Complete repair that re-downloads Office from CDN:
```bash
# Full online repair
office-janitor repair --full
# With visible progress UI
office-janitor repair --full --visible
# Specify architecture
office-janitor repair --full --platform x64
```
Full repair:
- Downloads fresh files from Microsoft CDN
- Repairs more severe corruption
- Takes 30-60+ minutes depending on connection
- Requires internet connectivity
### OEM Configurations
Use bundled configuration presets for repair/reconfiguration:
```bash
# List available OEM presets
office-janitor repair --help
# Quick repair preset
office-janitor repair --config quick-repair
# Full repair preset
office-janitor repair --config full-repair
# Repair specific products
office-janitor repair --config proplus-x64
office-janitor repair --config business-x64
office-janitor repair --config office2024-x64
# Remove all C2R products
office-janitor c2r --remove
# Quick aliases
office-janitor repair --quick
office-janitor repair --full
```
Available OEM presets:
- `full-removal` - Remove all C2R Office products
- `quick-repair` - Quick local repair
- `full-repair` - Full online repair
- `proplus-x64` / `proplus-x86` - Repair Office 365 ProPlus
- `proplus-visio-project` - Repair full suite
- `business-x64` - Repair Microsoft 365 Business
- `office2019-x64` - Repair Office 2019
- `office2021-x64` - Repair Office 2021
- `office2024-x64` - Repair Office 2024
- `multilang` - Multi-language configuration
- `shared-computer` - Shared Computer Licensing
- `interactive` - Show repair UI
### Custom Repair Configuration
Use your own XML configuration:
```bash
office-janitor repair --config-file "C:\Configs\custom_repair.xml"
```
---
## Office Removal & Scrubbing
Deep uninstall and cleanup for all Office versions (2003-2024, Microsoft 365).
### Automatic Removal
Remove all detected Office installations:
```bash
# ALWAYS preview first!
office-janitor remove --dry-run
# Execute with backup
office-janitor remove --backup "C:\Backups\Office"
# Silent unattended removal
office-janitor remove --yes --quiet
# Keep user data during removal
office-janitor remove --keep-templates --keep-user-settings --keep-license
```
### Targeted Removal
Remove specific Office versions:
```bash
# Remove only Office 2016
office-janitor remove --target 2016
# Remove Microsoft 365 only
office-janitor remove --target 365
# Remove Office 2019 including Visio/Project
office-janitor remove --target 2019 --include visio,project
# Remove only MSI-based Office
office-janitor remove --msi-only
# Remove only Click-to-Run Office
office-janitor remove --c2r-only
# Remove specific MSI product by GUID
office-janitor remove --product-code "{90160000-0011-0000-0000-0000000FF1CE}"
# Remove specific C2R release
office-janitor remove --release-id O365ProPlusRetail
```
### Scrub Levels
Control cleanup intensity:
```bash
# Minimal - uninstall only
office-janitor remove --scrub-level minimal
# Standard - uninstall + residue cleanup (default)
office-janitor remove --scrub-level standard
# Aggressive - deep registry/filesystem cleanup
office-janitor remove --scrub-level aggressive
# Nuclear - remove everything possible
office-janitor remove --scrub-level nuclear
```
| Level | Uninstall | Files | Registry | Services | Tasks | Licenses |
|-------|-----------|-------|----------|----------|-------|----------|
| minimal | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| standard | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
| aggressive | ✅ | ✅ | ✅+ | ✅ | ✅ | ✅ |
| nuclear | ✅ | ✅+ | ✅++ | ✅ | ✅ | ✅+ |
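Read row-wise, each level is roughly a superset of the one above it (the `+` markers indicate deeper passes over the same target). The table can be expressed as a simple capability map (hypothetical sketch, not office-janitor's internals):

```python
# Hypothetical capability map mirroring the scrub-level table above.
# "aggressive" and "nuclear" share targets; they differ in cleanup depth.
CAPABILITIES = {
    "minimal":    {"uninstall"},
    "standard":   {"uninstall", "files", "registry", "services", "tasks"},
    "aggressive": {"uninstall", "files", "registry", "services", "tasks", "licenses"},
    "nuclear":    {"uninstall", "files", "registry", "services", "tasks", "licenses"},
}

def allows(level: str, action: str) -> bool:
    return action in CAPABILITIES[level]
```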
### Cleanup-Only Mode
Skip uninstall, clean residue only:
```bash
# Clean leftover files/registry after manual uninstall
office-janitor remove --cleanup-only
# Aggressive residue cleanup
office-janitor remove --cleanup-only --scrub-level aggressive
# Registry cleanup only
office-janitor remove --registry-only
```
### License Management
```bash
# Clean all Office licenses
office-janitor license --clean-all
# Clean SPP tokens only (KMS/MAK)
office-janitor license --clean-spp
# Clean OSPP tokens
office-janitor license --clean-ospp
# Clean vNext/device-based licensing
office-janitor license --clean-vnext
# Preserve licenses during removal
office-janitor remove --keep-license
```
### Additional Cleanup Options
```bash
# Clean MSOCache installation files
office-janitor remove --clean-msocache
# Remove Office AppX/MSIX packages
office-janitor remove --clean-appx
# Clean Windows Installer metadata
office-janitor remove --clean-wi-metadata
# Clean Office shortcuts
office-janitor remove --clean-shortcuts
# Clean registry add-ins, COM, shell extensions
office-janitor remove --cleanup-only --clean-addin-registry --clean-com-registry --clean-shell-extensions
```
### Multiple Passes
For stubborn installations:
```bash
# Run 3 uninstall passes
office-janitor remove --passes 3
# Or use max-passes
office-janitor remove --max-passes 5
```
---
## CLI Reference
Office Janitor uses a subcommand-based interface. Run `office-janitor <command> --help` for command-specific options.
### Commands
| Command | Description |
|---------|-------------|
| `install` | Install Office via ODT with presets or custom configurations |
| `repair` | Repair Click-to-Run Office installations |
| `remove` | Remove and scrub Office installations |
| `diagnose` | Detection and planning only, no changes |
| `odt` | Generate ODT XML configurations |
| `offscrub` | Legacy OffScrub compatibility mode |
| `c2r` | Click-to-Run management operations |
| `license` | Office license management |
| `config` | Manage configuration files |
| (none) | Launch interactive menu |
### Global Options
Available with all commands:
| Flag | Description |
|------|-------------|
| `-n, --dry-run` | Simulate without changes |
| `-y, --yes` | Skip confirmations |
| `--config JSON` | Load options from file |
| `--logdir DIR` | Custom log directory |
| `--timeout SEC` | Per-step timeout |
| `-v, -vv, -vvv` | Increase verbosity |
| `--quiet` | Reduce output |
| `--no-color` | Disable colors |
### Install Command
Install Office using Office Deployment Tool:
```bash
office-janitor install [OPTIONS]
```
| Flag | Description |
|------|-------------|
| `--preset NAME` | Use installation preset |
| `--product ID` | Add product (repeatable) |
| `--language CODE` | Add language (repeatable) |
| `--arch 32/64` | Architecture (default: 64) |
| `--channel CHANNEL` | Update channel |
| `--exclude-app APP` | Exclude app (repeatable) |
| `--shared-computer` | Enable shared licensing |
| `--goobler` | Full LTSC 2024 suite, no bloatware (pt-pt, en-us) |
| `--pupa` | ProPlus 2024 only, no bloatware (pt-pt, en-us) |
| `--list-presets` | List available presets |
| `--list-products` | List product IDs |
| `--list-channels` | List update channels |
| `--list-languages` | List language codes |
### Repair Command
Repair Click-to-Run Office installations:
```bash
office-janitor repair [OPTIONS]
```
| Flag | Description |
|------|-------------|
| `--quick` | Quick local repair |
| `--full` | Full online repair from CDN |
| `--config NAME` | Use OEM configuration preset |
| `--config-file XML` | Custom XML configuration |
| `--culture LANG` | Language for repair (default: en-us) |
| `--platform ARCH` | Architecture (x86/x64) |
| `--visible` | Show repair UI |
| `--timeout SEC` | Timeout (default: 3600) |
### Remove Command
Remove and scrub Office installations:
```bash
office-janitor remove [OPTIONS]
```
| Flag | Description |
|------|-------------|
| `--target VER` | Target specific version (2003-2024, 365) |
| `--msi-only` | Remove only MSI-based Office |
| `--c2r-only` | Remove only Click-to-Run Office |
| `--product-code GUID` | Remove specific MSI product |
| `--release-id ID` | Remove specific C2R release |
| `--scrub-level LEVEL` | minimal/standard/aggressive/nuclear |
| `--passes N` | Uninstall passes |
| `--backup DIR` | Backup registry/files |
| `--cleanup-only` | Skip uninstall, clean residue only |
| `--registry-only` | Only registry cleanup |
| `--skip-uninstall` | Skip uninstall phase |
| `--skip-processes` | Don't terminate Office processes |
| `--skip-services` | Don't stop Office services |
| `--skip-tasks` | Don't remove scheduled tasks |
| `--skip-registry` | Don't clean registry |
| `--skip-filesystem` | Don't clean files |
### Diagnose Command
Detection and planning without changes:
```bash
office-janitor diagnose [OPTIONS]
```
| Flag | Description |
|------|-------------|
| `--plan FILE` | Export plan to JSON |
| `--json` | JSON output to stdout |
### ODT Command
Generate ODT XML configurations:
```bash
office-janitor odt [OPTIONS]
```
| Flag | Description |
|------|-------------|
| `--build` | Generate configuration XML |
| `--download DIR` | Download Office source files |
| `--removal` | Generate removal XML |
| `--remove-msi` | Include RemoveMSI element |
| `--output FILE` | Output XML path |
| `--preset NAME` | Use preset configuration |
| `--product ID` | Add product (repeatable) |
| `--language CODE` | Add language (repeatable) |
### License Command
Office license management:
```bash
office-janitor license [OPTIONS]
```
| Flag | Description |
|------|-------------|
| `--clean-all` | Clean all license types |
| `--clean-spp` | Clean SPP tokens (KMS/MAK) |
| `--clean-ospp` | Clean OSPP tokens |
| `--clean-vnext` | Clean vNext/device licensing |
| `--keep-license` | Preserve licenses during removal |
### C2R Command
Click-to-Run management:
```bash
office-janitor c2r [OPTIONS]
```
| Flag | Description |
|------|-------------|
| `--remove` | Remove all C2R products |
| `--repair` | Repair C2R installation |
### User Data Options
Available with `remove` command:
| Flag | Description |
|------|-------------|
| `--keep-templates` | Preserve templates |
| `--keep-user-settings` | Preserve settings |
| `--keep-outlook-data` | Preserve Outlook data |
| `--delete-user-settings` | Remove settings |
| `--clean-shortcuts` | Remove shortcuts |
### Cleanup Options
Available with `remove` command:
| Flag | Description |
|------|-------------|
| `--clean-msocache` | Clean MSOCache files |
| `--clean-appx` | Remove AppX/MSIX packages |
| `--clean-wi-metadata` | Clean Windows Installer metadata |
| `--clean-addin-registry` | Clean add-in registry |
| `--clean-com-registry` | Clean COM registry |
| `--clean-shell-extensions` | Clean shell extensions |
### Legacy Flags
For backward compatibility, some legacy flags are still supported:
| Legacy Flag | New Syntax |
|-------------|------------|
| `--auto-all` | `remove` |
| `--repair quick` | `repair --quick` |
| `--odt-install` | `install` |
| `--diagnose` | `diagnose` |
---
## Configuration Files
Save common options in JSON:
```json
{
"dry_run": false,
"backup": "C:\\Backups\\Office",
"scrub_level": "standard",
"keep_license": true,
"keep_templates": true,
"passes": 2,
"timeout": 600
}
```
Use with:
```bash
office-janitor remove --config settings.json
```
CLI flags override config file values.
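A minimal sketch of that precedence rule, assuming a hypothetical `merge_options` helper (this is not the tool's actual code):

```python
import json

def merge_options(config_json: str, cli_flags: dict) -> dict:
    """Config file supplies defaults; any explicitly passed CLI flag wins."""
    options = json.loads(config_json)
    # Only flags the user actually set (non-None) override the file values.
    options.update({k: v for k, v in cli_flags.items() if v is not None})
    return options

merged = merge_options(
    '{"scrub_level": "standard", "passes": 2}',
    {"scrub_level": "aggressive", "passes": None},
)
# scrub_level comes from the flag, passes from the config file
```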
---
## Safety Guidance
### Always Preview First
```bash
# Preview what will happen
office-janitor remove --dry-run --plan preview.json
# Review the plan file, then execute
office-janitor remove --backup "C:\Backups"
```
### Create Backups
```bash
# Automatic backup to specified directory
office-janitor remove --backup "C:\Backups\Office"
# System restore points are created by default
# Disable with --no-restore-point if needed
```
### Preserve User Data
```bash
# Keep everything the user might want
office-janitor remove \
--keep-templates \
--keep-user-settings \
--keep-outlook-data \
--keep-license
```
### Enterprise Deployment
```bash
# Silent unattended for SCCM/Intune
office-janitor remove --yes --quiet --no-restore-point
# Log to network share
office-janitor remove --logdir "\\server\logs\%COMPUTERNAME%"
```
---
## Logging & Diagnostics
### Log Locations
Default: `%ProgramData%\OfficeJanitor\logs`
- `human.log` – Human-readable log (rotated)
- `events.jsonl` – Machine-readable telemetry
Override with `--logdir` or `OFFICE_JANITOR_LOGDIR` environment variable.
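The resolution order (flag, then environment variable, then the default path) could look like this sketch; `resolve_logdir` is a hypothetical helper, not office-janitor's code:

```python
import os
from typing import Optional

DEFAULT_LOGDIR = r"%ProgramData%\OfficeJanitor\logs"

def resolve_logdir(cli_value: Optional[str] = None) -> str:
    """Resolution order: --logdir flag, then OFFICE_JANITOR_LOGDIR, then the default."""
    if cli_value:
        return cli_value
    return os.environ.get("OFFICE_JANITOR_LOGDIR", DEFAULT_LOGDIR)
```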
### Diagnostics Mode
```bash
# Full diagnostic without changes
office-janitor diagnose --plan report.json -vvv
# JSON output to stdout
office-janitor diagnose --json
# Maximum verbosity
office-janitor diagnose -vvv
```
### Troubleshooting
```bash
# Skip phases to isolate issues
office-janitor remove --skip-processes --skip-services --dry-run
# Force through guardrails (use with caution)
office-janitor remove --force --skip-preflight
# Registry-only cleanup
office-janitor remove --registry-only
```
---
## TUI (Text User Interface)
Launch the interactive terminal UI:
```bash
office-janitor
```
The TUI provides:
- Live progress display with spinner
- Real-time event log
- Detection/planning/execution phases
- Key bindings for control
Auto-selects TUI when terminal supports ANSI sequences. Disable colors with `--no-color`.
---
## OffScrub Compatibility
Legacy OffScrub VBS switches are mapped to native behaviors. See `docs/CLI_COMPATIBILITY.md` for the full matrix.
```bash
# OffScrub-style flags
office-janitor offscrub --all # Remove all
office-janitor offscrub --quiet # Reduce output
office-janitor offscrub --test-rerun # Double-pass
```
---
## Contributing & Testing
### Development Setup
```bash
git clone https://github.com/supermarsx/office-janitor.git
cd office-janitor
python -m venv .venv
.venv\Scripts\activate
pip install -e ".[dev]"
```
### Running Tests
```bash
# Run all tests
pytest
# With coverage
pytest --cov=src/office_janitor
# Specific test file
pytest tests/test_odt_build.py -v
```
### Code Quality
```bash
# Format
black .
# Lint
ruff check .
# Type check
mypy src tests
# All checks (PowerShell helper)
.\scripts\lint_format.ps1 -Fix
.\scripts\type_check.ps1
.\scripts\test.ps1
```
### Building
```bash
# PyInstaller executable
.\scripts\build_pyinstaller.ps1
# Distribution packages
.\scripts\build_dist.ps1
```
### Documentation Style
Use Doxygen-style docstrings:
```python
def my_function(arg: str) -> bool:
"""!
@brief Short description.
@details Extended description if needed.
@param arg Description of parameter.
@returns Description of return value.
"""
```
---
## License
See [license.md](license.md) for licensing information.
| text/markdown | Office Janitor Contributors | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"black>=23.9; extra == \"dev\"",
"mypy>=1.6; extra == \"dev\"",
"pyinstaller>=6.0; extra == \"dev\"",
"pytest>=7.4; extra == \"dev\"",
"ruff>=0.1.8; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T00:32:53.955444 | office_janitor-0.0.1.tar.gz | 370,265 | 24/e2/5fec1ee799500ad48ce468faae9c3d46f680b22686b9b1252a25e7d4a131/office_janitor-0.0.1.tar.gz | source | sdist | null | false | 000851a1edd7d2debe0a3d7408dd410a | 0034943b7057049ce2a5d379e99c7572ae17bdafb489ff55739e1f83ddcc8b51 | 24e25fec1ee799500ad48ce468faae9c3d46f680b22686b9b1252a25e7d4a131 | null | [
"license.md"
] | 252 |
2.4 | seismic-web3 | 0.1.1 | Seismic Python SDK — web3.py extensions for the Seismic privacy-enabled EVM | # seismic-web3
Python SDK for [Seismic](https://seismic.systems), built on [web3.py](https://github.com/ethereum/web3.py). Requires **Python 3.10+**.
```bash
pip install seismic-web3
```
## Client types
The SDK provides two client types:
- **Wallet client** — you provide a private key. Gives you full capabilities: shielded reads/writes, signed calls, deposits.
- **Public client** — no private key needed. Read-only access via transparent `eth_call`.
## Quick start
```python
from seismic_web3 import SEISMIC_TESTNET, PrivateKey
pk = PrivateKey(bytes.fromhex("YOUR_PRIVATE_KEY_HEX"))
# Wallet client — full capabilities (requires private key)
w3 = SEISMIC_TESTNET.wallet_client(pk)
contract = w3.seismic.contract(address="0x...", abi=ABI)
# Shielded write — calldata is encrypted (TxSeismic type 0x4a)
tx_hash = contract.write.setNumber(42)
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
# Shielded read — signed, encrypted eth_call
result = contract.read.getNumber()
```
```python
# Public client — read-only (no private key needed)
public = SEISMIC_TESTNET.public_client()
contract = public.seismic.contract(address="0x...", abi=ABI)
result = contract.tread.getNumber()
```
`ShieldedContract` (from the wallet client) exposes five namespaces:
| Namespace | What it does | On-chain visibility |
|-----------|-------------|-------------------|
| `.write` | Encrypted transaction (`TxSeismic` type `0x4a`) | Calldata hidden |
| `.read` | Encrypted signed `eth_call` | Calldata + result hidden |
| `.twrite` | Standard `eth_sendTransaction` | Calldata visible |
| `.tread` | Standard `eth_call` | Calldata visible |
| `.dwrite` | Debug write — returns plaintext + encrypted views | Calldata hidden |
Both sync and async clients are supported. See the full documentation for details.
## Documentation
Full docs are hosted on GitBook: **[docs.seismic.systems/clients/python](https://docs.seismic.systems/clients/python)**
## Contributing
See [DEVELOPMENT.md](DEVELOPMENT.md) for local setup, running tests, and publishing.
---
> This SDK was entirely vibecoded.
| text/markdown | Seismic Systems | null | null | null | null | blockchain, ethereum, evm, privacy, seismic, web3 | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"coincurve>=20.0",
"cryptography>=43.0",
"web3<8,>=7.0"
] | [] | [] | [] | [
"Documentation, https://docs.seismic.systems/clients/python",
"Repository, https://github.com/SeismicSystems/seismic",
"Source, https://github.com/SeismicSystems/seismic/tree/main/clients/py"
] | uv/0.9.10 {"installer":{"name":"uv","version":"0.9.10"},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T00:32:35.992876 | seismic_web3-0.1.1.tar.gz | 249,915 | 26/f3/cf53d534d557b5560837e60542382db82b275585fe50677d6293fd758698/seismic_web3-0.1.1.tar.gz | source | sdist | null | false | d86eec848116a3fd2b9363e707bee42d | a2706398d60c7906b3d835c1f5847aebd1cc50ac819a199caabd29e4a843be79 | 26f3cf53d534d557b5560837e60542382db82b275585fe50677d6293fd758698 | MIT | [] | 247 |
2.4 | vibego | 1.5.36 | vibego CLI: a tool for initializing and managing the Telegram Master Bot | # vibego - vibe coding anywhere via Telegram
**Drive your terminal AI CLI (Codex / ClaudeCode supported) from Telegram, anywhere, anytime**
For the English version, see [README-en](README-en.md).
## Features
1. Drive your terminal AI CLI from Telegram, anytime, anywhere.
2. Lightweight task management and defect reporting: record and track items directly in Telegram.
3. Switch between the Codex / ClaudeCode terminal CLIs with one tap from Telegram.
4. Commands travel to the CLI over the Telegram Bot API's HTTPS channel, protected end to end by TLS.
5. Runtime logs and state files are written to the local ~/.config/vibego/; sensitive data never leaves your machine.
## Requirements
**A terminal environment with codex/claudeCode installed and signed in**
```bash
brew install python@3.11 tmux
brew install pipx
python3 -m venv ~/.config/vibego/runtime/venv
source ~/.config/vibego/runtime/venv/bin/activate
```
- Python 3.9 is the minimum; 3.11+ is recommended. Per the [datetime docs](https://docs.python.org/3/library/datetime.html#datetime.UTC), `datetime.UTC` is only available on 3.11+; this repository falls back to `timezone.utc` for earlier versions.
- `scripts/run_bot.sh` prefers an available `python3.11` at startup (override with `VIBEGO_PYTHON_BIN`) and falls back to the system `python3` when 3.11 is missing, while ensuring the version is ≥3.9.
## Quick Start
### Create a Telegram bot and obtain its token
We recommend completing initialization and startup with the `vibego` command installed from PyPI. Example:
- To create a token for the first time, follow Telegram's official BotFather guide (<https://core.telegram.org/bots#botfather>):
  1) Search for `@BotFather` in the Telegram client and start a conversation;
  2) Send `/start`, then `/newbot`, and enter the bot's name and username as prompted;
  3) BotFather returns an HTTP API token of the form `123456789:ABC...`; store it securely;
  4) To fetch or reset a token later, send `/token` in the same conversation and pick the target bot to receive a new one.
### Install & start vibego
Before this step, make sure your terminal has codex / claudeCode / gemini installed and signed in (as needed), and that your Telegram bot token is ready.
- Before running, the `demo` launch script automatically writes the repository root's [AGENTS.md](AGENTS.md) into the `<!-- vibego-agents:start -->...<!-- vibego-agents:end -->` block of `$HOME/.codex/AGENTS.md` / `$HOME/.claude/CLAUDE.md`; the file is created if missing, and existing content is preserved with a `.vibego.bak` backup so you can customize further
```bash
pipx install vibego   # or: pip install --user vibego
vibego init           # initialize the config directory and store the Master Bot token
vibego start          # start the master service
```
Then tap `/start` in the Telegram bot you created, and enjoy!
## Directory Structure
- `bot.py`: aiogram 3 worker with multi-model session parsing (Codex / ClaudeCode / Gemini).
- `scripts/run_bot.sh`: one-shot launch script (creates the venv, starts tmux + the model CLI + the bot).
- `scripts/stop_bot.sh`: stops the current project's worker (tmux + bot process).
- `scripts/start_tmux_codex.sh`: low-level tmux/CLI launcher invoked by `run_bot.sh`; runs `tmux -u` by default to force UTF-8.
- `scripts/models/`: model configuration modules (`common.sh`/`codex.sh`/`claudecode.sh`/`gemini.sh`).
- `logs/<model>/<project>/`: runtime logs (`run_bot.log`, `model.log`, `bot.pid`, `current_session.txt`), under `~/.config/vibego/logs/` by default.
- `model.log` is managed by `scripts/log_writer.py`: 20 MB per file, with archives kept for the last 24 hours only (override via `MODEL_LOG_MAX_BYTES` and `MODEL_LOG_RETENTION_SECONDS`).
- `.env.example`: environment configuration template (copy to `.env` and adjust as needed).
## Spec-Driven Development (speckit) Workflow (experimental)
The vibego repository ships `.specify/` scripts and templates for producing reviewable specs/plans/tasks at the cadence of Spec-Driven Development before moving into implementation, reducing the uncertainty of pure vibe coding.
Reference entry points (samples in this repository, absolute paths for verification):
- Assessment report: `/Users/david/hypha/tools/vibego/specs/001-speckit-feasibility/assessment-report.md`
- Quick reproduction and demo: `/Users/david/hypha/tools/vibego/specs/001-speckit-feasibility/quickstart.md`
Upstream references (official):
- Spec Kit: https://github.com/github/spec-kit
- SDD process notes: https://raw.githubusercontent.com/github/spec-kit/main/spec-driven.md
Security boundaries (required reading):
- Never paste real tokens, chat_ids, or user identifiers into documents/logs/error messages; use placeholders in examples.
- Runtime logs/state files must be written to `~/.config/vibego/` (or the `VIBEGO_CONFIG_DIR`/`MASTER_CONFIG_ROOT` override), never into the repository.
## Logs & Layout
```
~/.config/vibego/logs/
└─ codex/
   └─ mall-backend/
      ├─ run_bot.log          # run_bot.sh output
      ├─ model.log            # model CLI output captured via tmux pipe-pane
      ├─ bot.pid              # current bot process PID (used by stop_bot.sh)
      └─ current_session.txt  # pointer to the most recent JSONL session
```
> As of 2025, all logs, databases, and state files are written to `~/.config/vibego/` by default; `scripts/migrate_runtime.sh`
> migrates runtime files that older versions generated inside the repository to that directory in one pass.
## Model Switching
- Supported values: `codex`, `claudecode`, `gemini`.
- Switch flow: `stop_bot.sh --model <old>` → `run_bot.sh --model <new>`.
- Each model keeps an independent configuration in `scripts/models/<model>.sh`; shared logic lives in `scripts/models/common.sh`.
- `ACTIVE_MODEL` is shown in the `/start` reply and in logs, and is exported as an environment variable for `bot.py`.
### Codex
| Variable | Description |
|----------------------|-----------------------------------------------|
| `CODEX_WORKDIR` | Codex CLI working directory (customize in `.env`, falls back to ROOT) |
| `CODEX_CMD` | Launch command, default `codex --dangerously-bypass-...` |
| `CODEX_SESSION_ROOT` | JSONL root directory (default `~/.codex/sessions`) |
| `CODEX_SESSION_GLOB` | JSONL file pattern (default `rollout-*.jsonl`) |
### ClaudeCode
| Variable | Description |
|-----------------------|---------------------------------------|
| `CLAUDE_WORKDIR` | Project directory (defaults to the same as Codex) |
| `CLAUDE_CMD` | CLI launch command, e.g. `claude --project <path>` |
| `CLAUDE_PROJECT_ROOT` | JSONL root directory (default `~/.claude/projects`) |
| `CLAUDE_SESSION_GLOB` | JSONL file pattern (default `*.jsonl`) |
| `CLAUDE_PROJECT_KEY` | Optional: explicit `~/.claude/projects/<key>` path |
### Gemini
Gemini uses the official `gemini-cli` (Homebrew package `gemini-cli`, command `gemini`).
Default on-disk session path (verifiable):
```
~/.gemini/tmp/<sha256(absolute workdir path string)>/chats/session-*.json
```
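That `<sha256(...)>` path component is a plain hex digest of the absolute working-directory string; the sketch below (an illustrative helper, not part of vibego) locates the session directory:

```python
import hashlib
from pathlib import Path

def gemini_session_dir(workdir: str) -> Path:
    """Hash the absolute workdir path string, per the layout described above."""
    digest = hashlib.sha256(str(Path(workdir).resolve()).encode()).hexdigest()
    return Path.home() / ".gemini" / "tmp" / digest / "chats"
```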
| Variable | Description |
|--------------------|------|
| `GEMINI_WORKDIR` | Project directory (defaults to `MODEL_WORKDIR`) |
| `GEMINI_CMD` | CLI launch command, default `gemini --approval-mode yolo --sandbox` (high risk; evaluate for yourself) |
| `GEMINI_SESSION_ROOT` | Session root directory, default `~/.gemini/tmp` |
| `GEMINI_SESSION_GLOB` | Session file pattern, default `session-*.json` |
At startup, the repository root's `AGENTS.md` is synced into the `<!-- vibego-agents:start -->...<!-- vibego-agents:end -->` block of `~/.gemini/GEMINI.md`, so the Gemini CLI automatically inherits vibego's mandatory conventions.
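The marker-block sync described above can be sketched as a plain text replacement between the two HTML comments (an illustrative sketch, not vibego's actual implementation):

```python
START = "<!-- vibego-agents:start -->"
END = "<!-- vibego-agents:end -->"

def sync_block(existing: str, payload: str) -> str:
    """Replace (or append) the managed block, leaving user content untouched."""
    block = f"{START}\n{payload}\n{END}"
    if START in existing and END in existing:
        head, _, rest = existing.partition(START)
        _, _, tail = rest.partition(END)
        return head + block + tail
    # No markers yet: append a fresh block at the end of the file.
    sep = "\n" if existing and not existing.endswith("\n") else ""
    return existing + sep + block + "\n"
```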
## aiogram Worker Behavior
- `/start`: returns `chat_id`, `MODE`, and `ACTIVE_MODEL`; logs print `chat_id` and `user_id`.
- Text messages:
  1. Resolve the session file according to `ACTIVE_MODEL`: JSONL for Codex/ClaudeCode, `session-*.json` for Gemini.
     By default the session path recorded in `current_session.txt` is read, searching `MODEL_SESSION_ROOT` to backfill when necessary.
  2. Inject the prompt into tmux (send `Esc` to clear the mode, `C-j` for newline, `Enter` to submit).
  3. On first read, initialize offsets from `SESSION_OFFSETS`; then `_deliver_pending_messages()` re-sends the current tail and keeps polling the JSONL.
  4. During the watcher phase, a notice shows that `ACTIVE_MODEL` is processing; the result is pushed automatically on completion (Markdown preserved).
- Under MODE=A, `AGENT_CMD` can still execute the CLI directly.
## Additional Scripts
- `run_bot.sh`
  - `--model <name>`: codex / claudecode / gemini.
  - `--project <slug>`: log/session directory name; derived from the working directory when omitted.
  - `--foreground`: run in the foreground (default: background + `nohup`).
  - `--no-stop`: skip the stop step before launch (by default `stop_bot.sh` runs first to guarantee idempotency).
- `stop_bot.sh`
  - Idempotent shutdown: `tmux kill-session`, kill the process recorded in `bot.pid`, remove caches.
  - Example: `./scripts/stop_bot.sh --model codex --project mall-backend`.
## Configuration Essentials
### `.env` (global Master configuration)
- Location: `~/.config/vibego/.env` (customizable via the `VIBEGO_CONFIG_DIR` environment variable).
- `MASTER_BOT_TOKEN`: the master bot's token, prompted for by `vibego init`; must exist at startup.
- `MASTER_CHAT_ID` / `MASTER_USER_ID`: written automatically on the first Telegram interaction with the master, identifying the authorized admin account.
- `MASTER_WHITELIST`: comma-separated list of chat_ids; empty means unrestricted. On conflict with the auto-written values, the most recent value wins.
- Other optional variables (proxy, log level, default model, etc.) may be added as needed; scripts fall back to defaults when unset.
- Project configuration persists in `~/.config/vibego/config/master.db` (SQLite), mirrored as JSON at
  `~/.config/vibego/config/projects.json`. For offline editing, see the `config/projects.json.example` template in the repository.
- The Master's "⚙️ Project Management" menu can add/edit/delete projects; you can still edit the JSON offline, and it is imported and synced to the database at startup.
  - Required fields: `bot_name`, `bot_token`, `project_slug`, `default_model`.
  - Optional fields: `workdir` (project path), `allowed_chat_id` (pre-authorization). When empty, the worker records the chat_id of its first message and writes it back to
    `~/.config/vibego/state/master_state.json`.
  - Other custom fields are ignored for now.
### WeChat DevTools Port Configuration (wx-dev-preview / wx-dev-upload)
- Both `wx-dev-preview` and `wx-dev-upload` pass `--port` to the WeChat DevTools CLI; this is the IDE's HTTP service port, and the commands fail immediately if it is not configured.
- Config file: `~/.config/vibego/config/wx_devtools_ports.json` (the path follows `VIBEGO_CONFIG_DIR`/`MASTER_CONFIG_ROOT` if set)
- Template: `config/wx_devtools_ports.json.example`
- Finding the port: WeChat DevTools → Settings → Security → Service port (official docs: https://developers.weixin.qq.com/miniprogram/dev/devtools/cli.html)
### Auto-authorization & State
- If `allowed_chat_id` is empty when the worker starts, the first legitimate message is written to `state/state.json` and takes effect immediately.
- On master restart: `stop_bot.sh` is called to clean up first, then running projects are restored from state.
## Roadmap
- The Master bot will poll multiple project bots centrally and manage workers via the run/stop scripts;
  the current version provides the worker-side structure and logging conventions first.
- Gemini is integrated; finer-grained tool-call relaying and session management can be added as needed.
## Notes
- `~/.config/vibego/.env` contains sensitive tokens and admin information; never commit it to version control.
- To keep log volume down, clean `logs/<model>/<project>/` as needed or tune the scripts' output thresholds.
- If you previously ran an older version from a source checkout, run `./scripts/migrate_runtime.sh` and confirm only `.example`
  template files remain in the repository, to avoid committing databases or logs to Git.
- The Master caches version-check results and reminds once per version; run `/projects` or restart the master to retry immediately.
## Master Control
- The admin bot starts with `MASTER_BOT_TOKEN` (run `python master.py`).
- The project list is maintained by the Master store (`~/.config/vibego/config/master.db`) and can be updated via the project-management buttons or the
  `~/.config/vibego/config/projects.json` sync file. Example fields:
  - `bot_name`: the Telegram bot's username (with or without `@`; `@` is added automatically for display and interaction)
  - `bot_token`: the worker's Telegram token
  - `default_model`: default model (codex / claudecode / gemini)
  - `project_slug`: log/directory name
  - `workdir`: project working directory (optional)
  - `allowed_chat_id`: the worker's authorized chat (injected into the environment by run_bot)
- State persistence: `~/.config/vibego/state/master_state.json` automatically records each project's current model and run state; on restart, the master runs `stop_bot.sh`
  to clean up, then restores from state.
### Admin Commands
| Command | Description |
|-----------------------------|---------------------------------------------------------------|
| `/projects` | List every project's current state and model |
| `/run <project> [model]` | Start the given project's worker; model optional (defaults to the current/configured value) |
| `/stop <project>` | Stop the project's worker |
| `/switch <project> <model>` | Stop, then restart with the new model |
| `/start` | Show help and the project count |
| `/upgrade` | Run `pipx upgrade vibego && vibego stop && vibego start` to self-upgrade |
- `<project>` accepts either the `project_slug` or the matching `@bot_name`; command replies show a clickable `@` link.
> The master only talks to the admin bot; project bots are still served by the worker (`bot.py`, started by run_bot.sh).
| text/markdown | Hypha | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Operating System :: MacOS",
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools",
"Environment :: Console"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"aiogram<4.0.0,>=3.0.0",
"aiohttp-socks>=0.10.0",
"aiosqlite>=0.19.0",
"markdown-it-py<4.0.0,>=3.0.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T00:32:32.967321 | vibego-1.5.36.tar.gz | 1,402,908 | fc/24/6ab0b1956f6e6c9d7dbaa12412678026de93f05bfade4abb51c55b1332c1/vibego-1.5.36.tar.gz | source | sdist | null | false | 2bb7a6b69814e5725522badd05f0a5c0 | af3dbb17b7ee89b30057d8d00eb49e0c36f817fcd82c402a3aa855d70bdb3869 | fc246ab0b1956f6e6c9d7dbaa12412678026de93f05bfade4abb51c55b1332c1 | LicenseRef-Proprietary | [
"LICENSE"
] | 258 |
2.3 | msgraph-sdk | 1.55.0 | The Microsoft Graph Python SDK | # Microsoft Graph SDK for Python
[](https://badge.fury.io/py/msgraph-sdk)
[](https://pepy.tech/project/msgraph-sdk)
[](https://pypi.org/project/msgraph-sdk)
[](https://github.com/microsoftgraph/msgraph-sdk-python/graphs/contributors)
Get started with the Microsoft Graph SDK for Python by integrating the [Microsoft Graph API](https://docs.microsoft.com/graph/overview) into your Python application.
> **Note:**
>
> * This SDK allows you to build applications using the [v1.0](https://docs.microsoft.com/graph/use-the-api#version) of Microsoft Graph. If you want to try the latest Microsoft Graph APIs, try the [beta](https://github.com/microsoftgraph/msgraph-beta-sdk-python) SDK.
## 1. Installation
```bash
pip install msgraph-sdk
```
> **Note:**
>
> * The Microsoft Graph SDK for Python is a fairly large package. It may take a few minutes for the initial installation to complete.
> * Enable long paths in your environment if you receive a `Could not install packages due to an OSError`. For details, see [Enable Long Paths in Windows 10, Version 1607, and Later](https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=powershell#enable-long-paths-in-windows-10-version-1607-and-later).
## 2. Getting started with Microsoft Graph
### 2.1 Register your application
Register your application by following the steps at [Register your app with the Microsoft Identity Platform](https://docs.microsoft.com/graph/auth-register-app-v2).
### 2.2 Select and create an authentication provider
To start writing code and making requests to the Microsoft Graph service, you need to set up an authentication provider. This object will authenticate your requests to Microsoft Graph. For authentication, the Microsoft Graph Python SDK supports both sync and async credential classes from Azure Identity. Which library to choose depends on the type of application you are building.
> **Note**: For authentication we support both `sync` and `async` credential classes from `azure.identity`. Please see the azure identity [docs](https://learn.microsoft.com/en-us/python/api/azure-identity/azure.identity?view=azure-python) for more information.
The easiest way to make this decision is to look at the permission set you'd use. Microsoft Graph supports two types of permissions, delegated and application:
* Application permissions are used when you don't need a user to sign in to your app; the app performs tasks on its own and runs in the background.
* Delegated permissions, also called scopes, are used when your app requires a user to sign in and interact with their data during a session.
The following table lists common libraries by permissions set.
| MSAL library | Permissions set | Common use case |
|---|---|---|
| [ClientSecretCredential](https://learn.microsoft.com/en-us/python/api/azure-identity/azure.identity.aio.clientsecretcredential?view=azure-python&preserve-view=true) | Application permissions | Daemon apps or applications running in the background without a signed-in user. |
| [DeviceCodeCredential](https://learn.microsoft.com/en-us/python/api/azure-identity/azure.identity.devicecodecredential?view=azure-python) | Delegated permissions | Environments where authentication is triggered on one machine and completed on another, e.g. on a cloud server. |
| [InteractiveBrowserCredentials](https://learn.microsoft.com/en-us/python/api/azure-identity/azure.identity.interactivebrowsercredential?view=azure-python) | Delegated permissions | Environments where a browser is available and the user wants to key in their username/password. |
| [AuthorizationCodeCredentials](https://learn.microsoft.com/en-us/python/api/azure-identity/azure.identity.authorizationcodecredential?view=azure-python) | Delegated permissions | Usually for custom customer applications where the frontend calls the backend and waits for the authorization code at a particular url. |
You can also use [EnvironmentCredential](https://learn.microsoft.com/en-us/python/api/azure-identity/azure.identity.environmentcredential?view=azure-python), [DefaultAzureCredential](https://learn.microsoft.com/en-us/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python), [OnBehalfOfCredential](https://learn.microsoft.com/en-us/python/api/azure-identity/azure.identity.onbehalfofcredential?view=azure-python), or any other [Azure Identity library](https://learn.microsoft.com/en-us/python/api/overview/azure/identity-readme?view=azure-python#credential-classes).
Once you've picked an authentication library, we can initiate the authentication provider in your app. The following example uses ClientSecretCredential with application permissions.
```python
import asyncio
from azure.identity.aio import ClientSecretCredential
credential = ClientSecretCredential("tenantID",
"clientID",
"clientSecret")
scopes = ['https://graph.microsoft.com/.default']
```
The following example uses DeviceCodeCredentials with delegated permissions.
```python
import asyncio
from azure.identity import DeviceCodeCredential
credential = DeviceCodeCredential("client_id",
"tenant_id")
scopes = ['https://graph.microsoft.com/.default']
```
### 2.3 Initialize a GraphServiceClient object
You must create a **GraphServiceClient** object to make requests against the service. To create a new instance of this class, provide the credentials and scopes needed to authenticate requests to Microsoft Graph.
```py
# Example using async credentials and application access.
from azure.identity.aio import ClientSecretCredential
from msgraph import GraphServiceClient
credentials = ClientSecretCredential(
'TENANT_ID',
'CLIENT_ID',
'CLIENT_SECRET',
)
scopes = ['https://graph.microsoft.com/.default']
client = GraphServiceClient(credentials=credentials, scopes=scopes)
```
The above example uses default scopes for [app-only access](https://learn.microsoft.com/en-us/graph/permissions-overview?tabs=http#application-permissions). If using [delegated access](https://learn.microsoft.com/en-us/graph/permissions-overview#delegated-permissions) you can provide custom scopes:
```py
# Example using sync credentials and delegated access.
from azure.identity import DeviceCodeCredential
from msgraph import GraphServiceClient
credentials = DeviceCodeCredential(
'CLIENT_ID',
'TENANT_ID',
)
scopes = ['https://graph.microsoft.com/.default']
client = GraphServiceClient(credentials=credentials, scopes=scopes)
```
> **Note**: Refer to the [following documentation page](https://learn.microsoft.com/graph/sdks/customize-client?tabs=python#configuring-the-http-proxy-for-the-client) if you need to configure an HTTP proxy.
## 3. Make requests against the service
After you have a **GraphServiceClient** that is authenticated, you can begin making calls against the service. The requests against the service look like our [REST API](https://docs.microsoft.com/graph/api/overview?view=graph-rest-1.0).
> **Note**: This SDK offers an asynchronous API by default. Async is a concurrency model that is far more efficient than multi-threading, and can provide significant performance benefits and enable the use of long-lived network connections such as WebSockets. We support popular python async environments such as `asyncio`, `anyio` or `trio`.
The following is a complete example that shows how to fetch a user from Microsoft Graph.
```py
import asyncio
from azure.identity.aio import ClientSecretCredential
from msgraph import GraphServiceClient
credential = ClientSecretCredential(
'tenant_id',
'client_id',
'client_secret'
)
scopes = ['https://graph.microsoft.com/.default']
client = GraphServiceClient(credentials=credential, scopes=scopes)
# GET /users/{id | userPrincipalName}
async def get_user():
user = await client.users.by_user_id('userPrincipalName').get()
if user:
print(user.display_name)
asyncio.run(get_user())
```
Note that calling `me` requires a signed-in user and therefore delegated permissions. See [Authenticating Users](https://learn.microsoft.com/en-us/python/api/overview/azure/identity-readme?view=azure-python#authenticate-users) for more details:
```py
import asyncio
import os
from azure.identity import InteractiveBrowserCredential
from msgraph import GraphServiceClient
credential = InteractiveBrowserCredential(
client_id=os.getenv('client_id'),
tenant_id=os.getenv('tenant_id'),
)
scopes = ["User.Read"]
client = GraphServiceClient(credentials=credential, scopes=scopes)
# GET /me
async def me():
me = await client.me.get()
if me:
print(me.display_name)
asyncio.run(me())
```
### 3.1 Error Handling
Failed requests raise `APIError` exceptions. You can handle these exceptions using `try`/`except` statements.
```py
import asyncio
from kiota_abstractions.api_error import APIError
async def get_user():
try:
user = await client.users.by_user_id('userID').get()
print(user.user_principal_name, user.display_name, user.id)
except APIError as e:
print(f'Error: {e.error.message}')
asyncio.run(get_user())
```
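`APIError` carries the HTTP status of the failed call (`response_status_code` in `kiota_abstractions`), which makes it possible to retry throttled requests (HTTP 429) while re-raising everything else. The following is a generic sketch of that pattern, not SDK API: `FakeAPIError` stands in for the real exception so the snippet is self-contained, and `retry_throttled` is a hypothetical helper name.

```python
import asyncio

class FakeAPIError(Exception):
    """Stand-in for kiota_abstractions.api_error.APIError in this sketch."""
    def __init__(self, status: int):
        super().__init__(f"HTTP {status}")
        self.response_status_code = status

async def retry_throttled(call, attempts: int = 3, delay: float = 0.0):
    """Retry a coroutine factory when it fails with HTTP 429; re-raise otherwise."""
    for attempt in range(attempts):
        try:
            return await call()
        except Exception as e:
            if getattr(e, "response_status_code", None) == 429 and attempt < attempts - 1:
                await asyncio.sleep(delay)  # back off before retrying
                continue
            raise

# Demo: fail twice with 429, then succeed on the third attempt.
calls = {"n": 0}
async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise FakeAPIError(429)
    return "ok"

result = asyncio.run(retry_throttled(flaky))
print(result, calls["n"])
```

In production code, prefer honoring the `Retry-After` response header over a fixed delay when Graph throttles a request.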
### 3.2 Pagination
By default, a maximum of 100 rows is returned per request. If `odata_next_link` is present in the response, it can be used to fetch the next batch of up to 100 rows. The following example fetches the first page of members in a group, then iterates over the remaining pages using `odata_next_link`:
```py
# Get the first page of group members.
members = await client.groups.by_group_id(group_id).members.get()
if members:
    print("########## Members:")
    for member in members.value:
        print(f"display_name: {member.display_name}, mail: {member.mail}, id: {member.id}")
# Follow odata_next_link to fetch the remaining pages (batches of up to 100 rows).
while members is not None and members.odata_next_link is not None:
    members = await client.groups.by_group_id(group_id).members.with_url(members.odata_next_link).get()
    if members:
        print("########## Members:")
        for member in members.value:
            print(f"display_name: {member.display_name}, mail: {member.mail}, id: {member.id}")
```
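The page-following loop can be wrapped in a small helper that accumulates every page into one list. This is a sketch, not SDK API: `collect_all`, `Page`, and the stub fetchers are hypothetical names; the only assumption is that each page exposes `.value` and `.odata_next_link` as in the example above.

```python
import asyncio
from dataclasses import dataclass
from typing import Optional

async def collect_all(fetch_first, fetch_next):
    """Gather items from every page by following odata_next_link."""
    items = []
    page = await fetch_first()
    while page is not None:
        items.extend(page.value or [])
        if not page.odata_next_link:
            break
        page = await fetch_next(page.odata_next_link)
    return items

# Demo with two stub pages; with the SDK you would pass, for example,
#   fetch_first = client.groups.by_group_id(group_id).members.get
#   fetch_next  = lambda url: client.groups.by_group_id(group_id).members.with_url(url).get()
@dataclass
class Page:
    value: list
    odata_next_link: Optional[str]

pages = {None: Page(["a", "b"], "next"), "next": Page(["c"], None)}

async def fetch_first():
    return pages[None]

async def fetch_next(url):
    return pages[url]

everything = asyncio.run(collect_all(fetch_first, fetch_next))
print(everything)
```

Collecting pages eagerly like this is convenient for small result sets; for very large collections, prefer processing each page as it arrives to bound memory use.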
## Documentation and resources
* [Overview](https://docs.microsoft.com/graph/overview)
* [Microsoft Graph website](https://aka.ms/graph)
* [Samples](docs)
### Update Schedule
The Microsoft Graph Python client library is scheduled to be updated during the second and fourth weeks of each month.
## Upgrading
For detailed information on breaking changes, bug fixes, and new functionality introduced during major upgrades, check out our [Upgrade Guide](UPGRADING.md).
## Issues
View or log issues on the [Issues](https://github.com/microsoftgraph/msgraph-sdk-python/issues) tab in the repo.
## Contribute
Please read our [Contributing](CONTRIBUTING.md) guidelines carefully for advice on how to contribute to this repo.
## Copyright and license
Copyright (c) Microsoft Corporation. All Rights Reserved. Licensed under the MIT [license](LICENSE).
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
## Third Party Notices
[Third-party notices](THIRD%20PARTY%20NOTICES)
| text/markdown | null | Microsoft <graphtooling+python@microsoft.com> | null | null | null | msgraph, openAPI, Microsoft, Graph | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.9 | [] | [
"msgraph"
] | [] | [
"azure-identity>=1.12.0",
"microsoft-kiota-serialization-json<2.0.0,>=1.8.0",
"microsoft-kiota-serialization-text<2.0.0,>=1.8.0",
"microsoft-kiota-serialization-form<2.0.0,>=1.8.0",
"microsoft-kiota-serialization-multipart<2.0.0,>=1.8.0",
"msgraph_core>=1.3.1",
"yapf; extra == \"dev\"",
"bumpver; extr... | [] | [] | [] | [
"documentation, https://github.com/microsoftgraph/msgraph-sdk-python/docs",
"homepage, https://github.com/microsoftgraph/msgraph-sdk-python#readme",
"repository, https://github.com/microsoftgraph/msgraph-sdk-python"
] | python-requests/2.32.5 | 2026-02-20T00:32:29.378961 | msgraph_sdk-1.55.0.tar.gz | 6,295,669 | 10/44/0b5a188addf6341b3da10dd207e444417de255f7c1651902ba72016a2843/msgraph_sdk-1.55.0.tar.gz | source | sdist | null | false | ed896e0b42034d2b3096e8412c09e186 | 6df691a31954a050d26b8a678968017e157d940fb377f2a8a4e17a9741b98756 | 10440b5a188addf6341b3da10dd207e444417de255f7c1651902ba72016a2843 | null | [] | 54,640 |