# FreeTrace (version 1.5.16)
[DOI: 10.64898/2026.01.08.698486](https://doi.org/10.64898/2026.01.08.698486)

[DOI: 10.5281/zenodo.13336251](https://doi.org/10.5281/zenodo.13336251)

## FreeTrace
> [!IMPORTANT]
> Requirements<br>
> - Windows (10/11) / GNU/Linux (Debian/Ubuntu) / macOS (Sequoia/Tahoe)<br>
> - C compiler (clang)<br>
> - Python 3.10 or later<br>
> - GPU & CUDA 12 on GNU/Linux with pre-trained [models](https://github.com/JunwooParkSaribu/FreeTrace/blob/main/FreeTrace/models/README.md) (recommended)<br>

> [!NOTE]
> - Prerequisites (pre-installation and compilation): check the [tutorial](https://github.com/JunwooParkSaribu/FreeTrace/blob/main/tutorial.ipynb).<br>
> - Check the [compatibilities](https://github.com/JunwooParkSaribu/FreeTrace/blob/main/FreeTrace/models/README.md) of Python and TensorFlow to run FreeTrace from source code.<br>
> - Without a GPU, FreeTrace is slow when inferring under fractional Brownian motion.<br>
> - The current version is stable with Python 3.10 / 3.11 / 3.12.<br>
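The supported-interpreter constraint above can be checked programmatically before installing. This is an illustrative helper written for this README, not part of FreeTrace itself:

```python
import sys

def python_supported(version=sys.version_info):
    """Return True if the interpreter matches the versions FreeTrace
    is stated to be stable with (3.10, 3.11, 3.12)."""
    return (3, 10) <= tuple(version[:2]) <= (3, 12)
```

For example, `python_supported((3, 11, 4))` is `True`, while 3.9 and 3.13 interpreters are rejected.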
<h2>Visualised FreeTrace results</h2>
<img width="825" src="https://github.com/JunwooParkSaribu/FreeTrace/blob/main/tmps/stars.gif">
<table border="0">
<tr>
<td><img src="https://github.com/JunwooParkSaribu/FreeTrace/blob/main/tmps/trjs0.gif" width="230" height="230"></td>
<td><img src="https://github.com/JunwooParkSaribu/FreeTrace/blob/main/tmps/trjs1.gif" width="230" height="230"></td>
<td><img src="https://github.com/JunwooParkSaribu/FreeTrace/blob/main/tmps/trjs2.gif" width="285" height="230"></td>
</tr>
</table>
<b>[FreeTrace](https://doi.org/10.64898/2026.01.08.698486)</b> infers individual trajectories from time-series images by reconnecting the detected particles under fractional Brownian motion (fBm).<br>
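As a rough illustration of the fBm assumption behind the reconnection step (this sketch is not FreeTrace code; the function names are invented for this example), a 1-D fBm path can be sampled from the fractional Gaussian noise covariance by Cholesky factorisation:

```python
import numpy as np

def fgn_cov(n, H):
    """Covariance matrix of n fractional Gaussian noise increments
    with Hurst index H: gamma(k) = 0.5*(|k+1|^2H + |k-1|^2H - 2|k|^2H)."""
    k = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return 0.5 * ((k + 1.0) ** (2 * H)
                  + np.abs(k - 1.0) ** (2 * H)
                  - 2.0 * k ** (2 * H))

def fbm_trajectory(n, H, rng=None):
    """Sample one 1-D fBm path of n steps (n+1 positions, starting at 0)."""
    rng = np.random.default_rng(rng)
    L = np.linalg.cholesky(fgn_cov(n, H))
    return np.concatenate([[0.0], np.cumsum(L @ rng.standard_normal(n))])
```

With H = 0.5 the increments are independent (the covariance reduces to the identity) and the path is ordinary Brownian motion; H > 0.5 produces the persistent, memory-carrying displacements that FreeTrace's paper is concerned with.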
<h3> Contact person </h3>
<junwoo.park@sorbonne-universite.fr>
<h3> Contributors </h3>
> If you use this software, please cite it as below.
```bibtex
@article{Park2026.01.08.698486,
  author = {Park, Junwoo and Sokolovska, Nataliya and Cabriel, Cl{\'e}ment and Kobayashi, Asaki and Corsin, Enora and Garcia Fernandez, Fabiola and Izeddin, Ignacio and Min{\'e}-Hattab, Judith},
  title = {Novel estimation of memory in molecular dynamics with extended and comprehensive single-molecule tracking software: FreeTrace},
  elocation-id = {2026.01.08.698486},
  year = {2026},
  doi = {10.64898/2026.01.08.698486},
  publisher = {Cold Spring Harbor Laboratory},
  URL = {https://www.biorxiv.org/content/early/2026/01/10/2026.01.08.698486},
  eprint = {https://www.biorxiv.org/content/early/2026/01/10/2026.01.08.698486.full.pdf},
  journal = {bioRxiv}
}
```
<br>
<h3> License </h3>
FreeTrace is distributed under the GNU General Public License, version 3 (29 June 2007).
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
The copyright notice
FreeTrace: single particle tracking software.
Copyright (C) 2024 Junwoo Park
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
contact email: junwoo5071@gmail.com | SPT | [
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click; extra == \"cli\"",
"rich; extra == \"cli\"",
"pyqt6; extra == \"gui\""
] | [] | [] | [] | [
"Homepage, https://github.com/JunwooParkSaribu/FreeTrace",
"Repository, https://doi.org/10.5281/zenodo.13336251"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T18:27:44.525530 | freetrace-1.5.16.tar.gz | 594,213 | 10/84/fc58fe5c2cdec27d857d15177dd4b15a62179ad27345afb986a38e43af3c/freetrace-1.5.16.tar.gz | source | sdist | null | false | 252b2211849387e874531ceab4bb2dc6 | 19d3ab5c6b03076da962823c936af8123850da9625bddd00c8e40e389791d803 | 1084fc58fe5c2cdec27d857d15177dd4b15a62179ad27345afb986a38e43af3c | null | [
"LICENSE"
] | 0 |
2.4 | blandify | 0.1.0 | Unicode normalization for stripping LLM artifacts | # blandify (Python bindings)
Python bindings for the `blandify` Rust Unicode normalization library.
## What it does
`blandify.normalize(...)` replaces common Unicode artifacts with plain ASCII forms, including:
- smart quotes and apostrophes
- Unicode dashes and minus signs
- non-ASCII whitespace (including tab expansion to two spaces)
- zero-width and directional markers
- arrows, vulgar fractions, common math symbols, and common text symbols
- optional German umlaut transliteration (`ä -> ae`, `ö -> oe`, `ü -> ue`, `ß -> ss`)
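The idea behind these replacements can be sketched in pure Python with a translation table. This is only an illustration of the kinds of mappings listed above, not the library's actual implementation (whose tables are far more complete); use `blandify.normalize(...)` for real work:

```python
# Illustrative sketch of the kinds of mappings blandify applies.
# NOT the library's implementation -- just the idea, in plain Python.
REPLACEMENTS = {
    "\u201c": '"', "\u201d": '"',    # smart double quotes
    "\u2018": "'", "\u2019": "'",    # smart single quotes / apostrophes
    "\u2013": "-", "\u2014": "-",    # en/em dashes
    "\u2212": "-",                   # Unicode minus sign
    "\u00a0": " ",                   # non-breaking space
    "\t": "  ",                      # tab expansion to two spaces
    "\u200b": "", "\u200e": "",      # zero-width space, LTR mark
    "\u00e4": "ae", "\u00f6": "oe",  # optional umlaut transliteration
    "\u00fc": "ue", "\u00df": "ss",
}

def naive_normalize(text: str) -> str:
    """Apply each replacement in turn; the real library is smarter and faster."""
    for src, dst in REPLACEMENTS.items():
        text = text.replace(src, dst)
    return text

print(naive_normalize("\u201cHello\u201d \u2014 Gr\u00fc\u00dfe!"))
# -> "Hello" - Gruesse!
```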
## Development
From the repository root:
```bash
cd python
pixi run maturin develop --uv
pixi run pytest tests/
```
Or from the root with the configured task:
```bash
pixi run -e dev python-test
```
| text/markdown; charset=UTF-8; variant=GFM | null | Moritz Wilksch <moritzwilksch@gmail.com> | null | null | MIT | unicode, normalization, text-processing, ascii | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: ... | [] | null | null | >=3.12 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.9 | 2026-02-19T18:27:20.069913 | blandify-0.1.0.tar.gz | 32,415 | 2d/2f/99036e7ee4298ee569768b3369c256a6546e11c61d83119d9fb2a787b600/blandify-0.1.0.tar.gz | source | sdist | null | false | 819584454ed3271a9931cb2d28bebdc5 | 3db407bd8d3dc0f14e04b83bc05d5640fb57e213d871bf3c322948536924bd9a | 2d2f99036e7ee4298ee569768b3369c256a6546e11c61d83119d9fb2a787b600 | null | [] | 530 |
2.4 | tp-sdk | 0.2.1 | Python SDK for TeaserPaste API - Simple, Typed, and Fun. | # TeaserPaste Python SDK
Official Python SDK for TeaserPaste API. Simple, typed, and ready for the Teaserverse.
## Installation
```bash
# Using uv (Recommended)
uv add tp-sdk
# Using pip
pip install tp-sdk
```
## Quick Start
### Standard (Sync)
```python
import tp
# Context Manager (Recommended for connection pooling)
with tp.TeaserPaste("YOUR_API_KEY") as api:
# Create a new paste
note = api.paste(tp.SnippetInput(
title="Teaserverse Logs",
content="System status: All green.",
expires=tp.Expiry.HOUR_1
))
print(f"Created: {note.id}")
# Get a paste
data = api.get(note.id)
print(data.content)
```
### Async (AsyncIO)
```python
import asyncio
import tp
async def main():
async with tp.AsyncTeaserPaste("YOUR_API_KEY") as api:
note = await api.get("xyz_123")
print(note.title)
asyncio.run(main())
```
## Features
### Connection Pooling
The SDK now supports context-manager usage (the `with` statement) to reuse HTTP connections, significantly improving performance across multiple requests.
### Type Hints & Enums
Improved type safety for arguments and models.
```python
# Use Enum for expiry
from tp import Expiry
api.paste(tp.SnippetInput(..., expires=Expiry.WEEK_1))
# Explicit arguments for edit (IDE autocompletion enabled)
api.edit(snippet_id, title="New Title", visibility="private")
```
### Iteration Helpers
Iterate through your snippets (single page only).
```python
# Iterate snippets
for snippet in api.ls_iter(include_deleted=True):
print(snippet.title)
```
## API Reference
"One-word" APIs. Both sync and async clients support these methods.
* `api.paste(input)` — Create a new snippet.
* `api.get(id, pwd=None)` — Get a snippet.
* `api.edit(id, title=..., ...)` — Update a snippet.
* `api.kill(id)` — Soft delete a snippet.
* `api.live(id)` — Restore a deleted snippet.
* `api.fork(id)` — Copy a snippet to your account.
* `api.star(id, on=True)` — Star (or unstar) a snippet.
* `api.ls(limit=20, include_deleted=False)` — List your snippets.
* `api.ls_iter(limit=20, include_deleted=False)` — Iterator for listing snippets (single page).
* `api.user(uid)` — List another user's public snippets.
* `api.find(q)` — Search snippets.
* `api.find_iter(q)` — Iterator for searching snippets (single page).
* `api.me()` — Get your account info.
## Configuration
You can configure the base URL via the environment variable `TP_BASE_URL` or by passing `base_url` to the constructor.
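The resolution order can be sketched as follows. This is a hypothetical helper, not the SDK's internals, and the default URL is a placeholder; only the `TP_BASE_URL` variable and `base_url` argument are from the docs above:

```python
import os
from typing import Optional

DEFAULT_BASE_URL = "https://example.invalid/api"  # placeholder, not the real endpoint

def resolve_base_url(base_url: Optional[str] = None) -> str:
    """Constructor argument wins, then TP_BASE_URL, then the built-in default."""
    return base_url or os.environ.get("TP_BASE_URL") or DEFAULT_BASE_URL
```

In other words, an explicit `base_url` always overrides the environment variable.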
## Development
```bash
# Install dependencies
uv sync
# Build
uv build
```
## License
[MIT](LICENSE)
| text/markdown | null | TeaserPaste <contact@teaserverse.online> | null | null | MIT License
Copyright (c) 2025 TeaserPaste
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.27.0",
"pydantic>=2.0.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:27:06.472116 | tp_sdk-0.2.1.tar.gz | 24,237 | 6d/60/51f884cc308f7479502442c203ed87f6791eda41f7a1d4f284c90b6f965d/tp_sdk-0.2.1.tar.gz | source | sdist | null | false | 29aa9f754686eb788fc9714c88737c83 | 876b69a01dc87f3c3224273588ae3d1faf43c345ed65aad60f42baa86d8a5547 | 6d6051f884cc308f7479502442c203ed87f6791eda41f7a1d4f284c90b6f965d | null | [
"LICENSE"
] | 227 |
2.4 | pysentry-rs | 0.4.2 | Security vulnerability auditing tool for Python packages | # PySentry
[](https://pepy.tech/projects/pysentry-rs)
[Help to test and improve](https://github.com/nyudenkov/pysentry/issues/12) · [Participate in pysentry usage survey](https://tally.so/r/mYNPNv)
Please, send feedback to nikita@pysentry.com
A fast, reliable security vulnerability scanner for Python projects, written in Rust.
PySentry audits Python projects for known security vulnerabilities by analyzing dependency files and cross-referencing them against multiple vulnerability databases.
**[Documentation](https://docs.pysentry.com)** · **[Benchmarks](benchmarks/results/)** · **[Buy Me a Coffee](https://buymeacoffee.com/nyudenkov)**
## Features
- **Multiple formats** — `uv.lock`, `poetry.lock`, `Pipfile.lock`, `pylock.toml`, `pyproject.toml`, `Pipfile`, `requirements.txt`
- **Multiple sources** — PyPA Advisory Database, PyPI JSON API, OSV.dev (all enabled by default)
- **PEP 792 support** — Detects archived, deprecated, and quarantined packages
- **Flexible output** — Human-readable, JSON, SARIF, Markdown
- **Fast** — Written in Rust with async processing and caching
## Installation
```bash
# Using uvx (recommended)
uvx pysentry-rs /path/to/project
# Using pip
pip install pysentry-rs
# Using cargo
cargo install pysentry
# Pre-built binaries available at GitHub Releases
```
See [Installation Guide](https://docs.pysentry.com/getting-started/installation) for all options.
## Quick Start
```bash
# Scan current directory
pysentry
# Scan specific project
pysentry /path/to/project
# Filter by severity
pysentry --severity high
# Output to JSON
pysentry --format json --output report.json
# Fail on critical vulnerabilities only
pysentry --fail-on critical
# Block quarantined packages (malware protection)
pysentry --forbid-quarantined
```
See [Quickstart Guide](https://docs.pysentry.com/getting-started/quickstart) for more examples.
## Pre-commit
```yaml
repos:
- repo: https://github.com/pysentry/pysentry-pre-commit
rev: v0.4.2
hooks:
- id: pysentry
# Use compact mode for minimal pre-commit output
# args: ['--compact']
```
## Configuration
PySentry supports TOML configuration via `.pysentry.toml` or `pyproject.toml`:
```toml
# .pysentry.toml
version = 1
[defaults]
severity = "medium"
fail_on = "high"
[sources]
enabled = ["pypa", "osv"]
[ignore]
ids = ["CVE-2023-12345"]
```
See [Configuration Guide](https://docs.pysentry.com/configuration/config-files) for all options.
## Documentation
Full documentation is available at **[docs.pysentry.com](https://docs.pysentry.com)**:
- [Installation](https://docs.pysentry.com/getting-started/installation)
- [Quickstart](https://docs.pysentry.com/getting-started/quickstart)
- [CLI Options](https://docs.pysentry.com/configuration/cli-options)
- [Configuration Files](https://docs.pysentry.com/configuration/config-files)
- [Environment Variables](https://docs.pysentry.com/configuration/environment-variables)
- [Troubleshooting](https://docs.pysentry.com/troubleshooting)
## Requirements
- **For `requirements.txt` scanning**: Install `uv` (recommended) or `pip-tools` for dependency resolution
- **Python**: 3.9–3.14 (for pip/uvx installation)
- **Rust**: 1.79+ (for cargo installation or building from source)
## Acknowledgments
- Inspired by [pip-audit](https://github.com/pypa/pip-audit) and [uv #9189](https://github.com/astral-sh/uv/issues/9189)
- Vulnerability data from [PyPA](https://github.com/pypa/advisory-database), [PyPI](https://pypi.org/), and [OSV.dev](https://osv.dev/)
| text/markdown; charset=UTF-8; variant=GFM | null | nyudenkov <nyudenkov@pm.me> | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming ... | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/nyudenkov/pysentry",
"Issues, https://github.com/nyudenkov/pysentry/issues",
"Repository, https://github.com/nyudenkov/pysentry"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:26:27.191832 | pysentry_rs-0.4.2.tar.gz | 598,945 | ca/7a/f2a7a46b6aba5f4588d3e38bef106baa3a5df48dd9e614ae68b7d92ce5b6/pysentry_rs-0.4.2.tar.gz | source | sdist | null | false | 0575f6fee1ebcaefb655968e2c310cab | 693e83ba7998fa914c666a15362d27ca674406fa3b6785be85772f063e1f393d | ca7af2a7a46b6aba5f4588d3e38bef106baa3a5df48dd9e614ae68b7d92ce5b6 | null | [
"LICENSE"
] | 4,044 |
2.4 | eric-kraus | 1.0.1 | 20 years of enterprise sales, validated by Pydantic. | # eric-kraus
a resume validated by Pydantic.
## What Is This?
This is the resume of **Eric Kraus** ... structured, typed, and validated using [Pydantic](https://docs.pydantic.dev)!
I've also prepared an interactive terminal (UI) styled by [Rich](https://rich.readthedocs.io).
Every data point comes from a Pydantic model.
The instance of my resume is instantiated from a BaseModel called: `IdealCandidate`.
**Every property is validated.**
> Because unvalidated data is a liability — in pipelines... and in `hiring`.
## Quick Start
**Install Package**
```bash
# With uvx (fastest, cached install)
uvx eric-kraus
#OR
# With pip
pip install eric-kraus
# Then run:
eric-kraus
```
OR
**Clone repo and run from root:**
```bash
uv run python3 -c "from eric_kraus.data import build_eric; print(build_eric().model_dump_json(indent=2))"
```
## For the Curious
```python
from eric_kraus.data import build_eric
eric = build_eric()
# the Pydantic model
print(eric.model_dump_json(indent=2))
# validator that matters
from eric_kraus.models import QuotaResult
QuotaResult(year=2024, attainment_pct=50) # ValidationError: "Candidate without track record. Must not be Eric Kraus!"
```
## Interactive Menu
```
--------------------------------------------------
eric-kraus — a resume, validated by Pydantic
--------------------------------------------------
--- MENU ---
1 Overview
2 Experience
3 Technical Projects
4 Why Pydantic?
5 Education & Languages
6 Contact
7 Export as JSON
a Show Everything
q Quit
```
## License
MIT
| text/markdown | null | Eric Kraus <eric.kraus@gmail.com> | null | null | null | enterprise-sales, eric-kraus, pydantic, resume | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic>=2.0",
"rich>=13.0"
] | [] | [] | [] | [
"Homepage, https://github.com/erickraus/eric-kraus",
"LinkedIn, https://linkedin.com/in/ekraus"
] | uv/0.9.11 {"installer":{"name":"uv","version":"0.9.11"},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T18:25:22.805649 | eric_kraus-1.0.1.tar.gz | 25,311 | a7/25/d2bec2f9b64fdc4b7656f0c2306b5d8bd8fa0d653a0bc265a3935aee1d18/eric_kraus-1.0.1.tar.gz | source | sdist | null | false | 4bf2144bcfb1adf5ad0553fe36f6ffba | 3cab3a62e1c75734342544e611bea8ab94d51fae4fea17ffe020b18e8bf3359c | a725d2bec2f9b64fdc4b7656f0c2306b5d8bd8fa0d653a0bc265a3935aee1d18 | MIT | [
"LICENSE"
] | 238 |
2.1 | bidsschematools | 1.2.1 | Python tools for working with the BIDS schema. | # BIDS Schema Tools
[](https://opensource.org/licenses/MIT)
[](https://codecov.io/gh/bids-standard/bids-specification)
[](https://repology.org/project/bidsschematools/versions)
[](https://pypi.org/project/bidsschematools/)
A Python library (importable as `bidsschematools` once installed)
for working with the [Brain Imaging Data Structure (BIDS)](https://bids.neuroimaging.io/) schema.
Features:
* lightweight
* reference schema parsing implementation used for schema testing
* simple CLI bindings (e.g. `bst export`)
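To illustrate what dot-style access over a parsed schema can look like, here is a minimal stdlib-only sketch of attribute access over nested dictionaries. This is purely illustrative: the real library provides a much richer namespace over the BIDS schema YAML, and the field names below are stand-ins:

```python
class Namespace(dict):
    """Minimal dict subclass allowing attribute-style access (illustrative only)."""
    def __getattr__(self, key):
        value = self[key]
        # Wrap nested dicts so chained dot access keeps working.
        if isinstance(value, dict):
            return Namespace(value)
        return value

# Toy fragment shaped loosely like schema metadata; not real schema content.
schema = Namespace({
    "schema_version": "x.y.z",
    "objects": {"suffixes": {"bold": {"display_name": "Blood-Oxygen-Level Dependent image"}}},
})

print(schema.objects.suffixes.bold.display_name)
```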
If you have questions, you can post them in one of several channels where BIDS members are active:
- the [NeuroStars](https://neurostars.org/tags/bids) discourse forum
- the [BrainHack Mattermost](https://mattermost.brainhack.org),
for instant messaging (see also this [news item](https://bids.neuroimaging.io/blog/2020/06/24/Join%20the%20BIDS%20community%20on%20the%20BrainHack%20Mattermost.html))
- the [Google group](https://groups.google.com/forum/#!forum/bids-discussion),
for broader discussions surrounding BIDS
| text/markdown | bids-standard developers | null | null | bids-standard developers <bids.maintenance@gmail.com> | MIT | null | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Information Analysis",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"... | [] | null | null | >=3.9 | [] | [] | [] | [
"acres",
"click",
"pyyaml",
"jsonschema[format]; extra == \"validation\"",
"bidsschematools[tests]; extra == \"dev\"",
"ruff; extra == \"dev\"",
"pyparsing; extra == \"expressions\"",
"tabulate; extra == \"render\"",
"pandas; extra == \"render\"",
"markdown-it-py; extra == \"render\"",
"pymdown-... | [] | [] | [] | [
"Homepage, https://github.com/bids-standard/bids-specification"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:24:10.939781 | bidsschematools-1.2.1.tar.gz | 1,768,328 | 64/ef/d4ef2e808468078ced2571ebb68b7bb1335b0d43a6592b155f52c63c087a/bidsschematools-1.2.1.tar.gz | source | sdist | null | false | de87627239fab1a4d76940cd0e819793 | f147943812c01a61913e8306d779393e095a58b9e8fb63b48f2366662b702a80 | 64efd4ef2e808468078ced2571ebb68b7bb1335b0d43a6592b155f52c63c087a | null | [] | 8,854 |
2.4 | unifi-network-mcp | 0.6.2 | Unifi Network MCP Server | # 📡 UniFi Network MCP Server
[![License][license-shield]](LICENSE)
![Project Maintenance][maintenance-shield]
[![GitHub Activity][commits-shield]][commits]
[![GitHub Release][release-shield]][releases]
[![issues][issues-shield]][issues-link]
[![test-badge]][test-workflow]
[![validate-badge]][validate-workflow]
[![validate-docker-badge]][validate-docker-workflow]
[](https://www.buymeacoffee.com/sirkirby)
A self-hosted [Model Context Protocol](https://github.com/modelcontextprotocol) (MCP) server that turns your UniFi Network Controller into a rich set of interactive tools. Every capability is exposed via standard MCP **tools** prefixed with `unifi_`, so any LLM or agent that speaks MCP (e.g. Claude Desktop, `mcp-cli`, LangChain, etc.) can query, analyze **and** – when explicitly authorized – modify your network. The server needs direct access to your UniFi Network Controller: run it locally, or host it in the cloud behind a secure reverse proxy. Please consider the [security implications](#security-considerations) of running these tools in the cloud, as they carry sensitive information and access to your network.
---
## Table of Contents
* [Features](#features)
* [Quick Start](#quick-start)
* [Docker](#docker)
* [Python / UV](#python--uv)
* [Install from PyPI](#install-from-pypi)
* [Using with Local LLMs and Agents](#using-with-local-llms-and-agents)
* [Using with Claude Desktop](#using-with-claude-desktop)
* [Code Execution Mode](#code-execution-mode)
* [Overview](#overview)
* [Context Optimization](#context-optimization)
* [Tool Index](#tool-index)
* [Tool Execution](#tool-execution)
* [Runtime Configuration](#runtime-configuration)
* [Diagnostics (Advanced Logging)](#diagnostics-advanced-logging)
* [Developer Console (Local Tool Tester)](#developer-console-local-tool-tester)
* [Security Considerations](#security-considerations)
* [📚 Tool Catalog](#-tool-catalog)
* [📖 Documentation](#-documentation)
* [Testing](#testing)
* [Local Development](#local-development)
* [Contributing: Releasing / Publishing](#contributing-releasing--publishing)
---
## Features
* Full catalog of UniFi controller operations – firewall, traffic-routes, port-forwards, QoS, VPN, WLANs, stats, devices, clients **and more**.
* All mutating tools require `confirm=true` so nothing can change your network by accident.
* **Workflow automation friendly** – set `UNIFI_AUTO_CONFIRM=true` to skip confirmation prompts (ideal for n8n, Make, Zapier).
* Works over **stdio** (FastMCP). Optional HTTP endpoint (Streamable HTTP or legacy SSE) can be enabled via config.
* **Code execution mode** with tool index, async operations, and TypeScript examples.
* One-liner launch via the console-script **`unifi-network-mcp`**.
* Idiomatic Python ≥ 3.13, packaged with **pyproject.toml** and ready for PyPI.
---
## Quick Start
### Docker
```bash
# 1. Retrieve the latest image (published from CI)
docker pull ghcr.io/sirkirby/unifi-network-mcp:latest
# 2. Run – supply UniFi credentials via env-vars or a mounted .env file
# Ensure all UNIFI_* variables are set as needed (see Runtime Configuration table)
docker run -i --rm \
-e UNIFI_HOST=192.168.1.1 \
-e UNIFI_USERNAME=admin \
-e UNIFI_PASSWORD=secret \
-e UNIFI_PORT=443 \
-e UNIFI_SITE=default \
-e UNIFI_VERIFY_SSL=false \
ghcr.io/sirkirby/unifi-network-mcp:latest
# Optional: Set controller type (auto-detected if omitted)
# -e UNIFI_CONTROLLER_TYPE=auto \
```
### Python / UV
```bash
# Install UV (modern pip/venv manager) if you don't already have it
curl -fsSL https://astral.sh/uv/install.sh | bash
# 1. Clone & create a virtual-env
git clone https://github.com/sirkirby/unifi-network-mcp.git
cd unifi-network-mcp
uv venv
source .venv/bin/activate
# 2. Install in editable mode (develop-install)
uv pip install --no-deps -e .
# 3. Provide credentials (either export vars or create .env)
# The server will auto-detect your controller type (UniFi OS vs standard)
# Use UNIFI_CONTROLLER_TYPE to manually override if needed
cp .env.example .env # then edit values
# 4. Launch
unifi-network-mcp
```
### Install from PyPI
*(when published)*
```bash
uv pip install unifi-network-mcp # or: pip install unifi-network-mcp
```
The `unifi-network-mcp` entry-point will be added to your `$PATH`.
---
## Using with Local LLMs and Agents
No internet access is required; everything runs locally. It's recommended that you have an M-series Mac, or a Windows/Linux machine with a very modern GPU (Nvidia RTX 4000 series or better).
### Recommended
Install [LM Studio](https://lmstudio.ai) and edit the mcp.json file (`chat prompt --> tool icon --> edit mcp.json`) to add the unifi-network-mcp server tools, allowing you to prompt using a locally run LLM of your choice. Configure it just as you would for Claude Desktop. I recommend loading a tool-capable model like OpenAI's [gpt-oss](https://lmstudio.ai/models/openai/gpt-oss-20b) and prompting it to use the UniFi tools.
```text
Example prompt: using the unifi tools, list my most active clients on the network and include the type of traffic and total bandwidth used.
```
### Alternative
Use [Ollama](https://ollama.com/) with [ollmcp](https://github.com/jonigl/mcp-client-for-ollama), allowing you to use a locally run LLM capable of tool calling via your favorite [terminal](https://app.warp.dev/referral/EJK58L).
---
## Code Execution Mode
The UniFi Network MCP server supports **code-execution mode**, enabling agents to write code that interacts with tools programmatically. This approach reduces token usage by up to 98% compared to traditional tool calls, as agents can filter and transform data in code before presenting results.
### Overview
Code execution mode consists of three key components:
1. **Tool Index** - Machine-readable catalog of all available tools with JSON schemas
2. **Async Operations** - Background job execution for long-running operations
3. **Reference Implementations** - Example clients showing code-execution patterns
This implementation follows the patterns described in [Anthropic's Code Execution with MCP article](https://www.anthropic.com/engineering/code-execution-with-mcp).
### 🚀 Context Optimization (New in v0.2.0)
The server now supports **lazy tool registration** to dramatically reduce LLM context usage.
**🎯 DEFAULT: Lazy Mode (lazy)** ⭐⭐⭐ **Active in v0.2.0!**
- Registers only 3 meta-tools initially
- ~200 tokens consumed (96% reduction!)
- Tools loaded automatically on first use
- **Seamless UX** - no manual discovery needed
- **Best of both worlds!**
- **Active by default** - no configuration needed
**Eager Mode (eager):**
- Registers all 86 tools immediately
- ~5,000 tokens consumed for tool schemas
- All tools visible in context from start
- **Best for:** Dev console, automation scripts
- **How to enable:** Set `UNIFI_TOOL_REGISTRATION_MODE=eager`
**Meta-Only Mode (meta_only):**
- Registers only 3 meta-tools initially
- ~200 tokens consumed (96% reduction!)
- Requires `unifi_tool_index` call for discovery
- **Best for:** Maximum control
- **How to enable:** Set `UNIFI_TOOL_REGISTRATION_MODE=meta_only`
**Upgrading from v0.1.x?**
If you're upgrading and want to restore the previous behavior (all tools registered immediately), add this to your config:
```json
{
"mcpServers": {
"unifi": {
"command": "uv",
"args": ["--directory", "/path/to/unifi-network-mcp", "run", "python", "-m", "src.main"],
"env": {
"UNIFI_HOST": "192.168.1.1",
"UNIFI_USERNAME": "admin",
"UNIFI_PASSWORD": "password",
"UNIFI_TOOL_REGISTRATION_MODE": "eager"
}
}
}
}
```
**Default behavior (lazy mode - recommended):**
```json
{
"mcpServers": {
"unifi": {
"command": "uv",
"args": ["--directory", "/path/to/unifi-network-mcp", "run", "python", "-m", "src.main"],
"env": {
"UNIFI_HOST": "192.168.1.1",
"UNIFI_USERNAME": "admin",
"UNIFI_PASSWORD": "password"
// UNIFI_TOOL_REGISTRATION_MODE defaults to "lazy" - no need to set!
}
}
}
}
```
**Result:** Claude starts with minimal context, tools load transparently when called - 96% token savings with zero UX compromise!
### Tool Index
The server exposes a special `unifi_tool_index` tool that returns a complete list of all registered tools with their schemas:
```json
{
"name": "unifi_tool_index",
"arguments": {}
}
```
**Response:**
```json
{
"tools": [
{
"name": "unifi_list_clients",
"schema": {
"name": "unifi_list_clients",
"description": "List all network clients",
"input_schema": {
"type": "object",
"properties": {
"filter": {"type": "string"},
"limit": {"type": "integer"}
}
}
}
},
...
]
}
```
**Use Cases:**
- Programmatic tool discovery
- Wrapper/SDK generation
- Dynamic client configuration
- IDE autocomplete support
### Tool Execution
The server provides two execution modes for discovered tools:
**Single Tool Execution (synchronous):**
```json
{
"name": "unifi_execute",
"arguments": {
"tool": "unifi_list_clients",
"arguments": {}
}
}
```
**Batch Execution (parallel, async):**
For bulk operations or long-running tasks, use batch mode:
```json
{
"name": "unifi_batch",
"arguments": {
"operations": [
{"tool": "unifi_get_client_details", "arguments": {"mac": "aa:bb:cc:dd:ee:ff"}},
{"tool": "unifi_get_client_details", "arguments": {"mac": "11:22:33:44:55:66"}}
]
}
}
```
**Response:**
```json
{
"jobs": [
{"index": 0, "tool": "unifi_get_client_details", "jobId": "af33b233cbdc860c"},
{"index": 1, "tool": "unifi_get_client_details", "jobId": "bf44c344dcde971d"}
],
"message": "Started 2 operation(s). Use unifi_batch_status to check progress."
}
```
**Check batch status:**
```json
{
"name": "unifi_batch_status",
"arguments": {
"jobIds": ["af33b233cbdc860c", "bf44c344dcde971d"]
}
}
```
**Response:**
```json
{
"jobs": [
{"jobId": "af33b233cbdc860c", "status": "done", "result": {...}},
{"jobId": "bf44c344dcde971d", "status": "done", "result": {...}}
]
}
```
**Notes:**
- Use `unifi_execute` for single operations (returns result directly)
- Use `unifi_batch` + `unifi_batch_status` for parallel/bulk operations
- Jobs are stored in-memory only (no persistence)
- Job IDs are unique per server session
### Using with Claude Desktop
Claude Desktop has built-in code execution that automatically uses the tool index:
```
You: "Show me the top 10 wireless clients by traffic, excluding guest networks"
```
Claude will:
1. Query `unifi_tool_index` to discover tools
2. Call `unifi_list_clients` to fetch data
3. Write and execute code to filter/sort in its sandbox
4. Show you only the final top 10 results
**Token savings:** Instead of processing 500+ clients in context, Claude processes them in code and shows only the summary.
See [`examples/CLAUDE_DESKTOP.md`](examples/CLAUDE_DESKTOP.md) for detailed usage guide.
### Python Client Examples
Practical examples showing programmatic usage:
```python
from mcp import ClientSession, stdio_client

# Establishing and initializing `session` via stdio_client is omitted here;
# see examples/python/ for the full connection setup.

# Discover tools
tools = await session.call_tool("unifi_tool_index", {})
# Execute a single tool (returns result directly)
result = await session.call_tool("unifi_execute", {
"tool": "unifi_list_clients",
"arguments": {}
})
# Batch execution for parallel operations
batch = await session.call_tool("unifi_batch", {
"operations": [
{"tool": "unifi_get_client_details", "arguments": {"mac": "..."}},
{"tool": "unifi_get_device_details", "arguments": {"mac": "..."}}
]
})
# Check batch status
status = await session.call_tool("unifi_batch_status", {
"jobIds": [j["jobId"] for j in batch["jobs"]]
})
```
**Three complete examples:**
- `query_tool_index.py` - Discover available tools
- `use_async_jobs.py` - Batch operations and status checking
- `programmatic_client.py` - Build custom Python clients
See [`examples/python/README.md`](examples/python/README.md) for complete examples.
### MCP Identity
The server advertises its capabilities via an MCP identity file at [`.well-known/mcp-server.json`](.well-known/mcp-server.json):
```json
{
"name": "unifi-network-mcp",
"version": "0.2.0",
"transports": ["stdio", "streamable-http", "http+sse"],
"capabilities": {
"tools": true,
"tool_index": true,
"batch_operations": true
},
"features": {
"tool_index": {
"tool": "unifi_tool_index"
},
"execution": {
"tool": "unifi_execute"
},
"batch_operations": {
"start_tool": "unifi_batch",
"status_tool": "unifi_batch_status"
}
}
}
```
This enables:
- Programmatic capability discovery
- Future MCP registry integration
- Client auto-configuration
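A client could read the identity file to auto-configure itself. A minimal sketch of that discovery step, using the field names from the file above (the selection logic is illustrative):

```python
import json

# Inlined copy of the relevant parts of .well-known/mcp-server.json
identity = json.loads("""{
  "name": "unifi-network-mcp",
  "version": "0.2.0",
  "transports": ["stdio", "streamable-http", "http+sse"],
  "capabilities": {"tools": true, "tool_index": true, "batch_operations": true},
  "features": {
    "batch_operations": {"start_tool": "unifi_batch", "status_tool": "unifi_batch_status"}
  }
}""")

# Pick a transport the client supports, preferring the spec-default streamable-http
preferred = ["streamable-http", "stdio"]
transport = next(t for t in preferred if t in identity["transports"])
print(transport)  # streamable-http

# Discover the batch tool names only if the server advertises the capability
start_tool = None
if identity["capabilities"].get("batch_operations"):
    start_tool = identity["features"]["batch_operations"]["start_tool"]
print(start_tool)  # unifi_batch
```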
---
## Using with Claude Desktop
Add (or update) the `unifi-network-mcp` block under `mcpServers` in your `claude_desktop_config.json`.
### Option 1 – Claude invokes the local package
```jsonc
"unifi-network-mcp": {
"command": "/path/to/your/.local/bin/uvx",
"args": ["--quiet", "unifi-network-mcp"], // Or "unifi-network-mcp==<version>"
"env": {
"UNIFI_HOST": "192.168.1.1",
"UNIFI_USERNAME": "admin",
"UNIFI_PASSWORD": "secret",
"UNIFI_PORT": "443",
"UNIFI_SITE": "default",
"UNIFI_VERIFY_SSL": "false"
// Optional: "UNIFI_CONTROLLER_TYPE": "auto"
}
}
```
* `uvx` handles installing/running the package in its own environment.
* The `--quiet` flag is recommended if `uvx` outputs non-JSON messages.
* If you want to pin to a specific version, use `"unifi-network-mcp==<version_number>"` as the package name.
* If your script name in `pyproject.toml` differs from the package name, use `["--quiet", "<package-name>", "<script-name>"]`.
### Option 2 – Claude starts a Docker container
```jsonc
"unifi-network-mcp": {
"command": "docker",
"args": [
"run", "--rm", "-i",
"-e", "UNIFI_HOST=192.168.1.1",
"-e", "UNIFI_USERNAME=admin",
"-e", "UNIFI_PASSWORD=secret",
"-e", "UNIFI_PORT=443",
"-e", "UNIFI_SITE=default",
"-e", "UNIFI_VERIFY_SSL=false",
// Optional: "-e", "UNIFI_CONTROLLER_TYPE=auto",
"ghcr.io/sirkirby/unifi-network-mcp:latest"
]
}
```
### Option 3 – Claude attaches to an existing Docker container (recommended for compose)
1) From the repository root, start the container named in `docker-compose.yml`:
```bash
docker-compose up --build
```
2) Then configure Claude Desktop:
```jsonc
"unifi-network-mcp": {
"command": "docker",
"args": ["exec", "-i", "unifi-network-mcp", "unifi-network-mcp"]
}
```
Notes:
* Use `-T` only with `docker compose exec` (it disables TTY for clean JSON). Do not use `-T` with `docker exec`.
* Ensure the compose service is running (`docker compose up -d`) before attaching.
After editing the config **restart Claude Desktop**, then test with:
```text
@unifi-network-mcp list tools
```
### Optional HTTP endpoint (off by default)
For environments where HTTP is acceptable (e.g., local development), you can enable the HTTP server. The default transport is **Streamable HTTP** (the current MCP spec default since 2025-03-26), with legacy **SSE** available as a fallback.
```bash
# Streamable HTTP (default)
docker run -i --rm \
-p 3000:3000 \
-e UNIFI_MCP_HTTP_ENABLED=true \
...
ghcr.io/sirkirby/unifi-network-mcp:latest
# Legacy SSE transport
docker run -i --rm \
-p 3000:3000 \
-e UNIFI_MCP_HTTP_ENABLED=true \
-e UNIFI_MCP_HTTP_TRANSPORT=sse \
...
ghcr.io/sirkirby/unifi-network-mcp:latest
```
- **Streamable HTTP** uses a single `/mcp` endpoint (POST for JSON-RPC, GET for SSE stream, DELETE for session termination)
- **SSE** (legacy) uses `/sse` + `/messages/` endpoints
Security note: Leave this disabled in production or sensitive environments. The stdio transport remains the default and recommended mode.
---
## Runtime Configuration
The server merges settings from **environment variables**, an optional `.env` file, and `src/config/config.yaml` (listed in order of precedence).
### Essential variables
| Variable | Description |
|----------|-------------|
| `CONFIG_PATH` | Full path to a custom config YAML file. If not set, checks CWD for `config/config.yaml`, then falls back to the bundled default (`src/config/config.yaml`). |
| `UNIFI_HOST` | IP / hostname of the controller |
| `UNIFI_USERNAME` | Local UniFi admin |
| `UNIFI_PASSWORD` | Admin password |
| `UNIFI_PORT` | HTTPS port (default `443`) |
| `UNIFI_SITE` | Site name (default `default`) |
| `UNIFI_VERIFY_SSL` | Set to `false` if using self-signed certs |
| `UNIFI_CONTROLLER_TYPE` | Controller API path type: `auto` (detect), `proxy` (UniFi OS), `direct` (standalone). Default `auto` |
| `UNIFI_MCP_HTTP_ENABLED` | Set `true` to enable optional HTTP server (default `false`) |
| `UNIFI_MCP_HTTP_TRANSPORT` | HTTP transport: `streamable-http` (default, current MCP spec) or `sse` (legacy). Only applies when HTTP is enabled |
| `UNIFI_MCP_HOST` | HTTP bind address (default `0.0.0.0`) |
| `UNIFI_MCP_PORT` | HTTP bind port (default `3000`) |
| `UNIFI_AUTO_CONFIRM` | Set `true` to auto-confirm all mutating operations (skips preview step). Ideal for workflow automation (n8n, Make, Zapier). Default `false` |
| `UNIFI_TOOL_REGISTRATION_MODE` | Tool loading mode: `lazy` (default), `eager`, or `meta_only`. See [Context Optimization](#context-optimization) |
| `UNIFI_ENABLED_CATEGORIES` | Comma-separated list of tool categories to load (eager mode). See table below |
| `UNIFI_ENABLED_TOOLS` | Comma-separated list of specific tool names to register (eager mode) |
| `UNIFI_MCP_ALLOWED_HOSTS` | Comma-separated list of allowed hostnames for reverse proxy support. Required when running behind Nginx/Cloudflare/etc. Default `localhost,127.0.0.1` |
| `UNIFI_MCP_ENABLE_DNS_REBINDING_PROTECTION` | Enable/disable DNS rebinding protection. Set to `false` for Kubernetes/proxy deployments where `UNIFI_MCP_ALLOWED_HOSTS` is insufficient. Default `true` |
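The precedence order (environment variables over the `.env` file over `config.yaml`) amounts to a layered merge where later layers win. An illustrative sketch, not the server's actual loader:

```python
def merge_settings(yaml_defaults, dotenv_values, environ):
    """Layered merge: config.yaml < .env file < process environment."""
    merged = dict(yaml_defaults)
    merged.update(dotenv_values)  # .env overrides the YAML defaults
    merged.update(environ)        # real environment variables win over everything
    return merged

settings = merge_settings(
    {"UNIFI_PORT": "443", "UNIFI_SITE": "default"},  # bundled config.yaml
    {"UNIFI_PORT": "8443"},                          # optional .env file
    {"UNIFI_HOST": "192.168.1.1"},                   # environment variables
)
print(settings["UNIFI_PORT"])  # 8443  (.env overrides config.yaml)
print(settings["UNIFI_HOST"])  # 192.168.1.1
```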
### Tool Categories (for UNIFI_ENABLED_CATEGORIES)
When using eager mode with category filtering, these are the valid category names:
| Category | Description | Example Tools |
|----------|-------------|---------------|
| `clients` | Client listing, blocking, guest auth | `unifi_list_clients`, `unifi_block_client` |
| `config` | Configuration management | - |
| `devices` | Device listing, radio config, reboot, locate, upgrade | `unifi_list_devices`, `unifi_get_device_radio` |
| `events` | Events and alarms | `unifi_list_events`, `unifi_list_alarms` |
| `firewall` | Firewall rules and groups | `unifi_list_firewall_rules`, `unifi_create_firewall_rule` |
| `hotspot` | Vouchers for guest network | `unifi_list_vouchers`, `unifi_create_voucher` |
| `network` | Network/VLAN management | `unifi_list_networks`, `unifi_create_network` |
| `port_forwards` | Port forwarding rules | `unifi_list_port_forwards` |
| `qos` | QoS/traffic shaping rules | `unifi_list_qos_rules`, `unifi_create_qos_rule` |
| `routing` | Static routes (V1 API) | `unifi_list_routes`, `unifi_create_route` |
| `stats` | Statistics and metrics | `unifi_get_client_stats`, `unifi_get_device_stats` |
| `system` | System info, health, settings | `unifi_get_system_info`, `unifi_get_network_health` |
| `traffic_routes` | Policy-based routing (V2 API) | `unifi_list_traffic_routes` |
| `usergroups` | Bandwidth profiles/user groups | `unifi_list_usergroups`, `unifi_create_usergroup` |
| `vpn` | VPN servers and clients | `unifi_list_vpn_servers`, `unifi_list_vpn_clients` |
**Example usage:**
```bash
# Load only client and system tools
export UNIFI_TOOL_REGISTRATION_MODE=eager
export UNIFI_ENABLED_CATEGORIES=clients,system
# Or load specific tools only
export UNIFI_ENABLED_TOOLS=unifi_list_clients,unifi_list_devices,unifi_get_system_info
```
**Note:** Tools may also be filtered by the `permissions` section in config.yaml (e.g., `clients.update: false` blocks mutating client tools).
### Controller Type Detection
The server automatically detects whether your UniFi controller requires UniFi OS proxy paths (`/proxy/network/api/...`) or standard direct paths (`/api/...`). This eliminates 404 errors on newer UniFi OS controllers without manual configuration.
#### Automatic Detection (Default)
```bash
# No configuration needed - detection happens automatically
UNIFI_CONTROLLER_TYPE=auto # or omit entirely
```
The server will:
1. Probe both path structures during connection initialization
2. Cache the result for the session lifetime
3. Automatically use the correct paths for all API requests
**Detection Time**: Adds ~300ms to initial connection time (within 2-second target).
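The three detection steps (probe, cache, reuse) can be sketched as follows; `probe` here is a hypothetical stand-in for the actual HTTP check, and the path prefixes come from the description above:

```python
PROXY_PREFIX = "/proxy/network/api"   # UniFi OS controllers
DIRECT_PREFIX = "/api"                # standalone controllers

class PathDetector:
    """Illustrative controller-type detection with per-session caching."""

    def __init__(self, probe):
        self._probe = probe   # callable(prefix) -> bool, e.g. an HTTP GET that succeeds
        self._cached = None   # result cached for the session lifetime

    def prefix(self):
        if self._cached is None:
            # 1. Probe the path structures during connection initialization
            # 2. Cache whichever one responds
            self._cached = PROXY_PREFIX if self._probe(PROXY_PREFIX) else DIRECT_PREFIX
        return self._cached

    def url(self, endpoint):
        # 3. All subsequent API requests reuse the cached prefix
        return f"{self.prefix()}{endpoint}"

# Simulate a UniFi OS controller that only answers on proxy paths
detector = PathDetector(probe=lambda prefix: prefix == PROXY_PREFIX)
print(detector.url("/s/default/stat/sta"))  # /proxy/network/api/s/default/stat/sta
```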
#### Manual Override
If automatic detection fails or you want to force a specific mode:
```bash
# For UniFi OS controllers (Cloud Gateway, UDM-Pro, self-hosted UniFi OS 4.x+)
export UNIFI_CONTROLLER_TYPE=proxy
# For standalone UniFi Network controllers
export UNIFI_CONTROLLER_TYPE=direct
```
#### Troubleshooting
If you encounter connection errors:
1. **Check controller accessibility**: Verify you can reach the controller on the configured port
2. **Try manual override**: Set `UNIFI_CONTROLLER_TYPE=proxy` or `direct` based on your controller type
3. **Check logs**: Look for detection messages in the server output
4. **See issue #19**: [UniFi OS path compatibility](https://github.com/sirkirby/unifi-network-mcp/issues/19)
**When to use manual override**:
- Detection fails (network issues, firewall blocking probes)
- Running in restricted network environment
- Want to skip detection for faster startup
- Testing specific path configuration
### `src/config/config.yaml`
Defines HTTP bind host/port (`0.0.0.0:3000` by default) plus granular permission flags. Examples below assume the default port.
---
## Diagnostics (Advanced Logging)
Enable a global diagnostics mode to emit structured logs for every tool call and controller API request. Only recommended for debugging.
Configuration in `src/config/config.yaml`:
```yaml
server:
diagnostics:
enabled: false # toggle globally
log_tool_args: true # include tool args/kwargs (safely redacted)
log_tool_result: true # include tool results (redacted)
max_payload_chars: 2000 # truncate large payloads
```
Environment overrides:
* `UNIFI_MCP_DIAGNOSTICS` (true/false)
* `UNIFI_MCP_DIAG_LOG_TOOL_ARGS` (true/false)
* `UNIFI_MCP_DIAG_LOG_TOOL_RESULT` (true/false)
* `UNIFI_MCP_DIAG_MAX_PAYLOAD` (integer)
Notes:
* Logs are emitted via standard Python logging under `unifi-network-mcp.diagnostics`.
* Set `server.log_level` (or `UNIFI_MCP_LOG_LEVEL`) to `INFO`/`DEBUG` to surface entries.
* Tool calls log timing and optional redacted args/results; API calls log method, path, timing, and redacted request/response snapshots.
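The redact-then-truncate behaviour can be sketched like this; the exact sensitive-key list and truncation marker are assumptions, only the `max_payload_chars` semantics come from the config above:

```python
SENSITIVE_KEYS = {"password", "x_password", "token", "authorization"}  # assumed list

def _redact(value):
    """Recursively replace values under sensitive keys with a placeholder."""
    if isinstance(value, dict):
        return {k: "***" if k.lower() in SENSITIVE_KEYS else _redact(v)
                for k, v in value.items()}
    if isinstance(value, list):
        return [_redact(v) for v in value]
    return value

def diag_payload(payload, max_chars=2000):
    """Render a payload for a diagnostics log line: redact, then truncate."""
    text = repr(_redact(payload))
    if len(text) > max_chars:
        return text[:max_chars] + "...(truncated)"
    return text

print(diag_payload({"username": "admin", "password": "secret"}))
# {'username': 'admin', 'password': '***'}
```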
---
## Developer Console (Local Tool Tester)
A lightweight interactive console to list and invoke tools locally without LLM tool calling. It uses your normal config and the same runtime, so diagnostics apply automatically when enabled. Grab your [favorite terminal](https://app.warp.dev/referral/EJK58L) to get started.
Location: `devtools/dev_console.py`
Run:
```bash
python devtools/dev_console.py
```
What it does:
* Loads config and initializes the UniFi connection.
* Auto-loads all `unifi_*` tools.
* **Shows ALL tools** (including those disabled by permissions) with status indicators.
* On selection, shows a schema hint (when available) and prompts for JSON arguments.
* Executes the tool via the MCP server and prints the JSON result.
* Prevents execution of disabled tools with helpful permission guidance.
**New in v0.2.0:** The dev console now displays all tools regardless of permission settings:
* Enabled tools are marked with ✓
* Disabled tools are marked with ✗ [DISABLED]
* Attempting to run a disabled tool shows permission instructions
* See [docs/permissions.md](docs/permissions.md) for how to enable specific tools
Tips:
* Combine with Diagnostics for deep visibility: set `UNIFI_MCP_DIAGNOSTICS=true` (or enable in `src/config/config.yaml`).
* For mutating tools, set `{"confirm": true}` in the JSON input when prompted.
* To enable disabled tools, set environment variables like `UNIFI_PERMISSIONS_NETWORKS_CREATE=true` before running the console.
### Supplying arguments
You can provide tool arguments in three ways:
* Paste a JSON object (recommended for complex inputs):
```json
{"mac_address": "14:1b:4f:dc:5b:cf"}
```
* Type a single value when the tool has exactly one required parameter. The console maps it automatically to that key. Example for `unifi_get_client_details`:
```bash
14:1b:4f:dc:5b:cf
```
* Press Enter to skip JSON and the console will interactively prompt for missing required fields (e.g., it will ask for `mac_address`).
Notes:
* For arrays or nested objects, paste valid JSON.
* The console shows a schema hint (when available). Defaults from the schema are used if you press Enter on a prompt.
* If validation fails, the console extracts required fields from the error and prompts for them.
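The single-value shortcut amounts to schema-driven mapping: when a tool's JSON schema lists exactly one required parameter, a bare value is assigned to that key. A simplified model of the console's behaviour:

```python
def map_single_value(schema, raw_input):
    """Map a bare value onto the tool's sole required parameter, if there is one."""
    required = schema.get("required", [])
    if len(required) == 1:
        return {required[0]: raw_input.strip()}
    raise ValueError("Tool has multiple required fields; paste a JSON object instead")

# Schema hint for unifi_get_client_details (simplified)
schema = {
    "type": "object",
    "properties": {"mac_address": {"type": "string"}},
    "required": ["mac_address"],
}
print(map_single_value(schema, "14:1b:4f:dc:5b:cf"))
# {'mac_address': '14:1b:4f:dc:5b:cf'}
```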
### Environment setup
Using UV (recommended):
```bash
# 1) Install UV if needed
curl -fsSL https://astral.sh/uv/install.sh | bash
# 2) Create and activate a virtual environment
uv venv
source .venv/bin/activate # macOS/Linux
# On Windows PowerShell: .venv\Scripts\Activate.ps1
# 3) Install project and dependencies
uv pip install -e .
# 4) (If you see "ModuleNotFoundError: mcp") install the MCP SDK explicitly
uv pip install mcp
# 5) Run the console
python devtools/dev_console.py
```
Using Python venv + pip:
```bash
# 1) Create and activate a virtual environment
python3 -m venv .venv
source .venv/bin/activate # macOS/Linux
# On Windows PowerShell: .venv\Scripts\Activate.ps1
# 2) Install project (and dependencies)
pip install -e .
# 3) (If you see "ModuleNotFoundError: mcp") install the MCP SDK explicitly
pip install mcp
# 4) Run the console
python devtools/dev_console.py
```
---
## Security Considerations
These tools will give any LLM or agent configured to use them full access to your UniFi Network Controller. While this can be very useful for analysis and configuration of your network, there is potential for abuse if not configured correctly. By default, all tools that can modify state or disrupt availability are disabled and must be explicitly enabled via **environment variables**. The tools are built directly on the UniFi Network Controller API, so they can operate with similar functionality to the UniFi web interface.
### Permission System 🔐 **NEW in v0.2.0**
The server includes a comprehensive permission system with **safe defaults**:
> **Permissions control tool visibility.** Tools with disabled permissions are **not registered** with the MCP server and will not appear in your client's tool list. If you're missing expected tools, check that the relevant permissions are enabled. All tools remain discoverable via `unifi_tool_index` regardless of permission settings — but disabled tools cannot be called. See [docs/permissions.md](docs/permissions.md) for full details.
**Disabled by Default (High-Risk):**
- Network creation/modification (`unifi_create_network`, `unifi_update_network`)
- Wireless configuration (`unifi_create_wlan`, `unifi_update_wlan`)
- Device operations (`unifi_adopt_device`, `unifi_upgrade_device`, `unifi_reboot_device`, `unifi_update_device_radio`)
- Client operations (`unifi_block_client`, `unifi_authorize_guest`)
**Enabled by Default (Lower Risk):**
- Firewall policies, traffic routes, port forwards, QoS rules
- All read-only operations
**How to Enable Permissions:**
**Recommended: Environment Variables** (works with Docker, PyPI installs, uvx)
```bash
# For Claude Desktop - add to env section:
"env": {
"UNIFI_PERMISSIONS_NETWORKS_CREATE": "true",
"UNIFI_PERMISSIONS_DEVICES_UPDATE": "true"
}
# For command line:
export UNIFI_PERMISSIONS_NETWORKS_CREATE=true
export UNIFI_PERMISSIONS_DEVICES_UPDATE=true
# For Docker:
docker run -e UNIFI_PERMISSIONS_NETWORKS_CREATE=true ...
```
**Alternative: Config File** (only for local git clone development)
If you're running from a local git clone, you can modify `src/config/config.yaml` and regenerate the manifest:
```bash
# Edit permissions in src/config/config.yaml
make manifest # Regenerate tool manifest
# Restart the server
```
**Note:** Most users should use environment variables. Config file changes require rebuilding the manifest and are primarily for local development.
See [docs/permissions.md](docs/permissions.md) for complete documentation including all permission variables.
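The `UNIFI_PERMISSIONS_<CATEGORY>_<ACTION>` naming maps onto `category.action` permission flags. An illustrative parser for that convention (the server's real implementation may differ):

```python
PREFIX = "UNIFI_PERMISSIONS_"

def permission_overrides(environ):
    """Turn UNIFI_PERMISSIONS_NETWORKS_CREATE=true into {('networks', 'create'): True}."""
    overrides = {}
    for key, value in environ.items():
        if not key.startswith(PREFIX):
            continue
        # Split on the last underscore: the action is the final segment,
        # so multi-word categories like PORT_FORWARDS survive intact.
        category, _, action = key[len(PREFIX):].lower().rpartition("_")
        overrides[(category, action)] = value.strip().lower() in ("true", "1", "yes")
    return overrides

env = {"UNIFI_PERMISSIONS_NETWORKS_CREATE": "true",
       "UNIFI_PERMISSIONS_DEVICES_UPDATE": "false"}
print(permission_overrides(env))
# {('networks', 'create'): True, ('devices', 'update'): False}
```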
### General Recommendations
* Use LM Studio or Ollama to run tool-capable models locally if possible. This is the recommended and safest way to use these tools.
* If you opt to use cloud-based LLMs like Claude, Gemini, and ChatGPT for analysis, stick with read-only tools (the default configuration).
* **Review permissions carefully** before enabling high-risk operations. Use environment variables for runtime control.
* Create, update, and delete tools should be used with caution and only enabled when necessary.
* Do not host outside of your network unless using a secure reverse proxy like Cloudflare Tunnel or Ngrok. Even then, an additional layer of authentication is recommended.
* **Reverse Proxy Configuration:** When running behind a reverse proxy (Kubernetes ingress, Nginx, Cloudflare, etc.):
* First try: Set `UNIFI_MCP_ALLOWED_HOSTS` to include your external domain (e.g., `localhost,127.0.0.1,unifi-mcp.example.com`)
* If that's insufficient: Set `UNIFI_MCP_ENABLE_DNS_REBINDING_PROTECTION=false` to disable host validation entirely. Only use this in trusted network environments.
---
## 📚 Tool Catalog
*All state-changing tools require the extra argument `confirm=true`.*
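The `confirm=true` convention can be modelled as a preview-then-apply gate: without the flag, a mutating tool describes what it would do; with it, the change is applied. An illustrative sketch, not the server's actual code:

```python
def guarded_tool(arguments, apply):
    """State-changing tools preview by default and apply only with confirm=true."""
    if not arguments.get("confirm", False):
        preview = {k: v for k, v in arguments.items() if k != "confirm"}
        return {"status": "preview", "would_apply": preview,
                "hint": "Re-run with confirm=true to apply."}
    return {"status": "applied", "result": apply(arguments)}

block = lambda args: f"blocked {args['mac']}"  # stand-in for e.g. unifi_block_client

# Dry run: no confirm flag, nothing changes
print(guarded_tool({"mac": "aa:bb:cc:dd:ee:ff"}, apply=block)["status"])  # preview
# Confirmed run: the change is applied
print(guarded_tool({"mac": "aa:bb:cc:dd:ee:ff", "confirm": True}, apply=block)["status"])  # applied
```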
### Firewall
* `unifi_list_firewall_policies`
* `unifi_get_firewall_policy_details`
* `unifi_toggle_firewall_policy`
* `unifi_create_firewall_policy`
* `unifi_update_firewall_policy`
* `unifi_create_simple_firewall_policy`
* `unifi_list_firewall_zones`
* `unifi_list_ip_groups`
### Traffic Routes
* `unifi_list_traffic_routes`
* `unifi_get_traffic_route_details`
* `unifi_toggle_traffic_route`
* `unifi_update_traffic_route`
* `unifi_create_traffic_route`
* `unifi_create_simple_traffic_route`
### Port Forwarding
* `unifi_list_port_forwards`
* `unifi_get_port_forward`
* `unifi_toggle_port_forward`
* `unifi_create_port_forward`
* `unifi_update_port_forward`
* `unifi_create_simple_port_forward`
### QoS / Traffic Shaping
* `unifi_list_qos_rules`
* `unifi_get_qos_rule_details`
* `unifi_toggle_qos_rule_enabled`
* `unifi_update_qos_rule`
* `unifi_create_qos_rule`
* `unifi_create_simple_qos_rule`
### Networks & WLANs
* `unifi_list_networks`
* `unifi_get_network_details`
* `unifi_update_network`
* `unifi_create_network`
* `unifi_list_wlans`
* `unifi_get_wlan_details`
* `unifi_update_wlan`
* `unifi_create_wlan`
### VPN
* `unifi_list_vpn_clients`
* `unifi_get_vpn_client_details`
* `unifi_update_vpn_client_state`
* `unifi_list_vpn_servers`
* `unifi_get_vpn_server_details`
* `unifi_update_vpn_server_state`
### Devices
* `unifi_list_devices`
* `unifi_get_device_details`
* `unifi_get_device_radio` – per-radio config & live stats for access points
* `unifi_update_device_radio` – update radio settings (TX power, channel, width, min RSSI)
* `unifi_reboot_device`
* `unifi_rename_device`
* `unifi_adopt_device`
* `unifi_upgrade_device`
### Clients
* `unifi_list_clients`
* `unifi_get_client_details`
* `unifi_list_blocked_clients`
* `unifi_block_client`
* `unifi_unblock_client`
* `unifi_rename_client`
* `unifi_force_reconnect_client`
* `unifi_authorize_guest`
* `unifi_unauthorize_guest`
* `unifi_set_client_ip_settings`
### Events & Alarms
* `unifi_list_events`
* `unifi_list_alarms`
* `unifi_archive_alarm`
* `unifi_archive_all_alarms`
* `unifi_get_event_types`
### Routing (Static Routes)
* `unifi_list_routes`
* `unifi_get_route_details`
* `unifi_create_route`
* `unifi_update_route`
* `unifi_list_active_routes`
### Hotspot (Vouchers)
* `unifi_list_vouchers`
* `unifi_get_voucher_details`
* `unifi_create_voucher`
* `unifi_revoke_voucher`
### User Groups
* `unifi_list_usergroups`
* `unifi_get_usergroup_details`
* `unifi_create_usergroup`
* `unifi_update_usergroup`
### Statistics & Alerts
* `unifi_get_network_stats`
* `unifi_get_client_stats`
* `unifi_get_device_stats`
* `unifi_get_top_clients`
* `unifi_get_dpi_stats`
* `unifi_get_alerts`
### System
* `unifi_get_system_info`
* `unifi_get_network_health`
* `unifi_get_site_settings`
---
## 📖 Documentation
Comprehensive documentation is available in the [docs/](docs/) directory:
### Quick Links
- **[Documentation Index](docs/README.md)** - Complete documentation overview
- **[Quick Start Guide](QUICKSTART.md)** - Get started in 5 minutes
### Key Guides
- **[Context Optimization](docs/context-optimization-comparison.md)** - Visual comparison of modes
- **[Tool Index API](docs/tool-index.md)** - Programmatic tool discovery
---
## Testing
The project includes comprehensive unit and integration tests for all features, including async jobs and lazy tool loading.
### Running Tests Locally
**Prerequisites:**
```bash
# Install UV (if not already installed)
curl -fsSL https://astral.sh/uv/install.sh | bash
# Clone the repository
git clone https://github.com/sirkirby/unifi-network-mcp.git
cd unifi-network-mcp
# Install dependencies (includes test dependencies)
uv sync
```
**Run all tests:**
```bash
uv run pytest tests/ -v
```
**Run only unit tests:**
```bash
uv run pytest tests/unit/ -v
```
**Run only integration tests:**
```bash
uv run pytest tests/integration/ -v
```
**Run with coverage report:**
```bash
uv run pytest tests/ --cov=src --cov-report=term-missing
```
**Run specific test file:**
```bash
uv run pytest tests/unit/test_path_detection.py -v
```
**Run specific test:**
```bash
uv run pytest tests/unit/test_path_detection.py::TestPathDetection::test_detects_unifi_os_correctly -v
```
### Test Structure
```
tests/
├── conftest.py # Pytest configuration
├── unit/ # Unit tests (fast, isolated)
│ └── test_path_detection.py
└── integration/ # Integration tests (slower, with mocks)
└── test_path_interceptor.py
```
### Test Coverage
The test suite includes:
- **8 unit tests** for UniFi OS path detection logic
- **5 integration tests** for path interception and manual override
- Coverage for automatic detection, manual override, retry logic, and error handling
All tests use `pytest-asyncio` for async support and `aioresponses` for HTTP mocking.
### Continuous Integration
Tests run automatically on every push and pull request via GitHub Actions. See [`.github/workflows/test.yml`](.github/workflows/test.yml) for the CI configuration.
---
## Contributing: Releasing / Publishing
This project uses [PyPI Trusted Publishing](https://docs.pypi.org/trusted-publishers/creating-a-project-through-oidc/) via a [GitHub Actions workflow](.github/workflows/publish-to-pypi.yml).
**To publish a new version:**
1. **Bump the `version`** in `pyproject.toml`.
2. **Create a new GitHub Release:** Draft a new release on GitHub, tagging it with the *exact* same version number (e.g., `v0.2.0` if the version in `pyproject.toml` is `0.2.0`).
Once published, users can install it via:
```bash
uv pip install unifi-network-mcp
```
## Local Development
### Option 1: Using Docker
Test with Docker and Claude Desktop:
```bash
docker compose up --build
```
Then configure Claude Desktop to use the Docker container (see [Configuration](#configuration) above).
### Option 2: Using Python/uv (Recommended for Development)
For local development and testing without Docker:
**1. Install dependencies:**
```bash
# Install UV (if not already installed)
curl -fsSL https://astral.sh/uv/install.sh | bash
# Clone and setup
git clone https://github.com/sirkirby/unifi-network-mcp.git
cd unifi-network-mcp
# Install dependencies
uv sync
```
**2. Configure environment:**
```bash
# Create .env file (or set environment variables)
cat > .env << EOF
UNIFI_HOST=your-controller-ip
UNIFI_USERNAME=your-username
UNIFI_PASSWORD=your-password
UNIFI_PORT=443
UNIFI_SITE=default
UNIFI_VERIFY_SSL=false
EOF
```
**3. Test with the dev console (interactive):**
```bash
# Launch interactive tool tester
uv run python devtools/dev_console.py
# You'll see a menu of all tools including:
# - unifi_tool_index (list all tools with schemas)
# - unifi_execute (run any discovered tool)
# - unifi_batch / unifi_batch_status (parallel operations)
# - All 86 UniFi tools (clients, devices, networks, etc.)
```
**4. Test with Python examples:**
```bash
# Query the tool index
uv run python examples/python/query_tool_index.py
# Test async jobs
uv run python examples/python/use_async_jobs.py
# Use the programmatic client
uv run python examples/python/programmatic_client.py
```
**5. Test with Claude Desktop (local Python server):**
Update your Claude Desktop config to use the local Python server instead of Docker:
```json
{
"mcpServers": {
"unifi": {
"command": "uv",
"args": [
"--directory",
"/path/to/unifi-network-mcp",
"run",
"python",
"-m",
"src.main"
],
"env": {
"UNIFI_HOST": "your-controller-ip",
"UNIFI_USERNAME": "your-username",
"UNIFI_PASSWORD": "your-password"
}
}
}
}
```
Then restart Claude Desktop and test:
- "What UniFi tools are available?" (uses `unifi_tool_index`)
- "Show me my top 10 wireless clients" (uses code execution mode)
- "List all my UniFi devices"
**6. Test with LM Studio or other local LLMs:**
For testing with local LLMs that support MCP, you can run the server in stdio mode:
```bash
# Start the MCP server
uv run python -m src.main
# The server will listen on stdin/stdout for MCP protocol messages
# Configure your LLM client to use this as an MCP server
```
**7. Run unit tests:**
```bash
# Run all tests
uv run pytest tests/ -v
# Run just async job tests (new in v0.2.0)
uv run pytest tests/test_async_jobs.py -v
# Run with coverage
uv run pytest tests/ --cov=src --cov-report=term-missing
```
### Alternative: Traditional venv
If you prefer not to use uv:
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
python devtools/dev_console.py
```
---
### License
[MIT](LICENSE)
[commits-shield]: https://img.shields.io/github/commit-activity/y/sirkirby/unifi-network-mcp?style=for-the-badge
[commits]: https://github.com/sirkirby/unifi-network-mcp/commits/main
[license-shield]: https://img.shields.io/github/license/sirkirby/unifi-network-mcp.svg?style=for-the-badge
[maintenance-shield]: https://img.shields.io/badge/maintainer-sirkirby-blue.svg?style=for-the-badge
[releases]: https://github.com/sirkirby/unifi-network-mcp/releases
[release-shield]: https://img.shields.io/github/v/release/sirkirby/unifi-network-mcp?style=flat
[issues-shield]: https://img.shields.io/github/issues/sirkirby/unifi-network-mcp?style=flat
[issues-link]: https://github.com/sirkirby/unifi-network-mcp/issues
[test-badge]: https://github.com/sirkirby/unifi-network-mcp/actions/workflows/test.yml/badge.svg
[test-workflow]: https://github.com/sirkirby/unifi-network-mcp/actions/workflows/test.yml
[validate-badge]: https://github.com/sirkirby/unifi-network-mcp/actions/workflows/publish-to-pypi.yml/badge.svg
[validate-workflow]: https://github.com/sirkirby/unifi-network-mcp/actions/workflows/publish-to-pypi.yml
[validate-docker-badge]: https://github.com/sirkirby/unifi-network-mcp/actions/workflows/docker-publish.yml/badge.svg
[validate- | text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"aiohttp>=3.8.5",
"aiounifi>=88",
"jsonschema>=4.17.0",
"mcp[cli]<2,>=1.26.0",
"omegaconf>=2.3.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"typing-extensions>=4.4.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:23:41.769656 | unifi_network_mcp-0.6.2.tar.gz | 127,592 | bd/a9/c4f2edd8ef44573479fa8148b5270b461cc99290c2ecbc7e91770a5dffbe/unifi_network_mcp-0.6.2.tar.gz | source | sdist | null | false | 45c14649b63819176fbc5d0ff643eb32 | e71d4cc54d997d90e46d25a38efe05a6588ffc2b29288436372c7db30e51ba30 | bda9c4f2edd8ef44573479fa8148b5270b461cc99290c2ecbc7e91770a5dffbe | null | [
"LICENSE"
] | 504 |
2.4 | recce-nightly | 1.37.0.20260219 | Environment diff tool for dbt | <p align="center">
<a href="https://reccehq.com">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://reccehq.com/assets/images/recce-logo-stacked.avif">
<source media="(prefers-color-scheme: light)" srcset="https://reccehq.com/assets/images/recce-logo-stacked.avif">
<img alt="Recce: DataRecce.io" src="https://reccehq.com/assets/images/recce-logo-stacked.avif" width="200" style="display: block; margin: 0 auto 20px;">
</picture>
</a>
</p>
<h3 align="center">Helping data teams preview, validate, and ship data changes with confidence.</h3>
<p align="center">
<a href="https://pypi.org/project/recce/"><img src="https://img.shields.io/badge/pip_install-recce-006DAD?style=flat-square" alt="install"></a>
<a href="https://pypi.org/project/recce/"><img src="https://img.shields.io/pypi/v/recce?style=flat-square" alt="pipy"></a>
<a href="https://pypi.org/project/recce/"><img src="https://img.shields.io/pypi/pyversions/recce?style=flat-square" alt="Python"></a>
<a href="https://pypi.org/project/recce/#files"><img src="https://img.shields.io/pypi/dw/recce?style=flat-square" alt="downloads"></a>
<a href="https://github.com/DataRecce/recce/blob/main/LICENSE"><img src="https://img.shields.io/github/license/DataRecce/recce?style=flat-square" alt="license"></a>
<a href="https://getdbt.slack.com/archives/C05C28V7CPP"><img src="https://img.shields.io/badge/Slack-4A154B?style=flat-square&logo=slack&logoColor=white" alt="Slack"></a>
<a href="https://discord.com/invite/5zb2aK9KBV"><img src="https://img.shields.io/discord/664381609771925514?color=%237289DA&label=chat&logo=discord&logoColor=white&style=flat-square" alt="InfuseAI Discord Invite"></a>
</p>
<p align="center">
<a href="https://cal.com/team/recce/chat?utm_source=banner&utm_campaign=oss">
<img alt="Book us with Cal.com" src="https://cal.com/book-with-cal-light.svg" />
</a>
</p>
## Trust, Verify, Ship
Cut dbt review time by 90% and ship accurate data fast
Recce gives data teams a faster, more reliable way to understand, review, and ship changes without all the guesswork or manual overhead.
## Quick Start
### Installation
Recce offers two packages to fit different use cases:
**For full local development and Recce Cloud features:**
```bash
pip install -U recce
recce server
```
**For CI/CD artifact uploads only (lightweight):**
```bash
pip install -U recce-cloud
recce-cloud upload
```
The `recce-cloud` package is a lightweight CLI tool designed specifically for CI/CD environments where you only need to upload dbt artifacts to Recce Cloud. It has minimal dependencies and installs faster than the full `recce` package.
### Getting Started
You can launch Recce in any dbt project in just two commands:
```bash
# cd into your dbt project
pip install -U recce
recce server
```
(Note: while Recce itself is not version-specific, `dbt-core` is currently [not compatible with Python 3.13](https://docs.getdbt.com/faqs/Core/install-python-compatibility). Please make sure to use Python 3.10 - 3.12.)
This starts Recce locally, where you can explore lineage and run queries. To unlock the full set of diffing tools, such as data comparisons and impact checks, you'll need to prepare two environments to compare against. You can follow our [Getting Started](https://docs.reccehq.com/get-started/) and [5-minute Jaffle Shop tutorial](https://docs.reccehq.com/get-started-jaffle-shop/) to try it out step-by-step.
## What You Get
Recce gives you a clear, fast way to understand what your data changes are doing and why they matter. It helps you catch problems early, verify metrics, and share your findings with others, all as part of your normal workflow.
<a href="https://pr46.demo.reccehq.com/"><img width="1347" alt="readme" src="https://github.com/user-attachments/assets/773e4c3a-0a15-49e0-8d1b-38a55af17cb0" /></a>
<a href="https://reccehq.com"><img src="https://docs.reccehq.com/assets/images/1-whats-recce/diff-readme2.png" style="width: 100%; max-width: 600px; display: block; margin: 0 auto 20px;" alt="Model and column level diff"/></a>
<a href="https://reccehq.com"><img src="https://docs.reccehq.com/assets/images/1-whats-recce/checklist-readme3.png" style="width: 100%; max-width: 600px; display: block; margin: 0 auto 20px;" alt="Checklist for collaboration"/></a>
### Using Recce for Impact Assessment in dbt PR Review
- Select nodes in the lineage to perform Checks (diffs) as part of your impact assessment during development or PR
review.
- Add Checks to your Checklist to note observed impacts.
- Share your Checklist with the PR reviewer.
- (`Recce Cloud`) Automatically sync Check status between Recce Instances
- (`Recce Cloud`) Block PR merging until all Recce Checks have been approved
Read more about using Recce on our [blog](https://blog.reccehq.com).
### What’s Included
- [Lineage and impact mapping](https://docs.reccehq.com/features/lineage/): Quickly see which models and columns are affected by a change. Navigate lineage down to the column level, and spot breaking changes with clear visual cues.
- Metric and data comparisons: Use [Profile, Value, Top-K, and Histogram Diffs](https://docs.reccehq.com/features/lineage/#node-details) to compare results before and after changes. Validate things like row counts, category distributions, and numeric ranges without writing extra SQL.
- [Query diff](https://docs.reccehq.com/features/query/): Write and compare any two queries side by side. This is helpful when validating fixes or reviewing changes with teammates.
- [Checklist for reviews and approvals](https://docs.reccehq.com/features/checklist/): Turn your validation steps into a checklist. Add notes, rerun checks, and share the results with reviewers or stakeholders. In Recce Cloud, checklists can sync automatically and even block PRs until checks are approved.
- Secure by design: Recce is [SOC 2 compliant](https://trust.reccehq.com/) to meet enterprise security standards. It runs locally or in your private environment, and your data stays in your warehouse.
👉 Want to dive deeper? Check out the [full documentation](https://docs.reccehq.com/).
## Recce Cloud
Ready to collaborate and move faster as a team? Recce Cloud adds real-time collaboration, automatic checklist sync, and PR gating, so nothing gets merged without a full review.
- Share checklists across environments
- Invite stakeholders to review data changes
- Block merges until all Checks are approved
- Launch demo links from your CI with full context
Recce Cloud is a hosted version of Recce that standardizes your workflow, keeps teams aligned, and reduces errors—so you can ship data changes with confidence.
👉 [View Pricing and Plans](https://reccehq.com/pricing)
## Developer Documentation
If you want to contribute to Recce or test local changes in your dbt project, follow these steps to install the development version.
### Installing the Local Dev Version
1. **Clone the repository** (if you haven't already):
```bash
git clone https://github.com/DataRecce/recce.git
cd recce
```
2. **Build the project** from the repository root:
```bash
make build
```
This builds the frontend assets and prepares the package for installation.
3. **Install the local dev version** in your dbt project:
```bash
# Navigate to your dbt project
cd /path/to/your/dbt-project
# Install recce in editable mode (replace with your actual path to the recce repo)
pip install -e /path/to/recce
```
Using `-e` (editable mode) means any changes you make to the Recce source code will be immediately available without reinstalling.
4. **Start the Recce server** to verify the installation:
```bash
recce server
```
### Development Tips
- After making frontend changes in `js/`, run `make build` again to rebuild the static assets
- Run `make install-dev` in the recce repository to install development dependencies
- Use `make test` to run the Python test suite
- Use `cd js && pnpm test` to run the frontend test suite
For more detailed development guidelines, see [CONTRIBUTING.md](CONTRIBUTING.md).
## Community & Support
Here's where you can get in touch with the Recce team, find support, and subscribe to our newsletter:
- [Chat on our website](https://reccehq.com/): start a chat or drop us a note. Our team monitors the chat box and will follow up soon.
- [Our discord](https://discord.com/invite/VpwXRC34jz)
- [dbt Slack](https://www.getdbt.com/community/join-the-community) in the [#tools-recce](https://getdbt.slack.com/archives/C05C28V7CPP) channel
- Email us [help@reccehq.com](mailto:help@reccehq.com)
If you believe you have found a bug in our open source code, or there is functionality missing from Recce, please open a [GitHub Issue](https://github.com/DataRecce/recce/issues).
## Recce on the web
You can follow along with news about Recce and blogs from our team in the following places:
- [RecceHQ.com](https://reccehq.com/)
- [LinkedIn](https://www.linkedin.com/company/datarecce)
- [Blog](https://blog.reccehq.com/)
- [@datarecce](https://x.com/DataRecce) on Twitter/X
- [@DataRecce@mastodon.social](https://mastodon.social/@DataRecce) on Mastodon
- [@datarecce.bsky.social](https://bsky.app/profile/datarecce.bsky.social) on BlueSky
| text/markdown | null | InfuseAI Dev Team <dev@infuseai.io> | null | null | Apache-2.0 | null | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programm... | [] | null | null | >=3.9 | [] | [] | [] | [
"boto3",
"click>=7.1",
"deepdiff<9.0,>=7.0",
"fastapi",
"gitpython",
"itsdangerous",
"jinja2",
"packaging",
"portalocker",
"py-markdown-table",
"pydantic",
"pygithub",
"python-dateutil",
"python-multipart",
"pytz",
"requests>=2.28.1",
"rich>=12.0.0",
"ruamel-yaml>=0.18.6",
"sentr... | [] | [] | [] | [
"Bug Tracker, https://github.com/InfuseAI/recce/issues",
"Homepage, https://github.com/InfuseAI/recce"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-19T18:22:24.690115 | recce_nightly-1.37.0.20260219.tar.gz | 2,330,842 | 0b/c0/1086fae403a76441a65e15fbae054c7a642a830e5d9415d8ab14a5ff6ad3/recce_nightly-1.37.0.20260219.tar.gz | source | sdist | null | false | 26d3ab22919fd6b282c69407c1ade73e | 8ff459d0f4bebaec9cbed2672d32d88f3d663ed50a3cbee5aaad9713f220cea9 | 0bc01086fae403a76441a65e15fbae054c7a642a830e5d9415d8ab14a5ff6ad3 | null | [
"LICENSE"
] | 210 |
2.4 | cnpj-processor | 4.3.1 | Sistema de Processamento de Dados CNPJ da Receita Federal do Brasil | # CNPJ Processor 🏢
[](https://badge.fury.io/py/cnpj-processor)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
> Sistema profissional de processamento de dados públicos CNPJ da Receita Federal do Brasil
Automatize o download, processamento e análise dos dados públicos de CNPJ com performance excepcional e API simplificada.
## 🚀 Instalação
```bash
pip install cnpj-processor
```
## ☁️ Suporte ao Nextcloud da Receita Federal
**🆕 Integrado**: O cnpj-processor agora suporta nativamente a infraestrutura **Nextcloud** da Receita Federal!
### 🎯 Como Funciona
O sistema detecta automaticamente URLs Nextcloud e aplica autenticação apropriada:
```python
from cnpj_processor import CNPJProcessor
# Funciona automaticamente com Nextcloud
processor = CNPJProcessor()
success, folder = processor.run()
# O sistema detecta e autentica automaticamente em:
# https://arquivos.receitafederal.gov.br/index.php/s/gn672Ad4CF8N6TK?dir=/Dados/Cadastros/CNPJ
```
### 📦 Dados Disponíveis
O Nextcloud da Receita Federal disponibiliza:
- **33 pastas históricas** desde 2023-05
- **Pasta mais recente**: 2026-01 (37 arquivos, 6.79 GB)
- **Dados completos**: Empresas, Estabelecimentos, Sócios, Simples Nacional
- **Atualização mensal**: Novos dados publicados mensalmente
## ⚡ Início Rápido
### Via Linha de Comando (CLI)
```bash
# Pipeline completo (download + processamento + banco de dados)
cnpj-processor
# Download apenas
cnpj-processor --step download --types empresas estabelecimentos
# Gerar CSVs normalizados (sem processar parquets)
cnpj-processor --step csv --types simples
# Exportar tabelas base da API para CSV
cnpj-processor --step csv --export-csv-base
# Processar com cópia de arquivos base da API
cnpj-processor --step process --types empresas --export-parquet-base
# Processar painel consolidado por UF
cnpj-processor --step painel --painel-uf GO --painel-situacao 2
```
### Via API Python
```python
from cnpj_processor import CNPJProcessor
# Criar processador
processor = CNPJProcessor()
# Pipeline completo
success, folder = processor.run()
# Painel de empresas ativas em Goiás
success, folder = processor.run(
step='painel',
painel_uf='GO',
painel_situacao=2 # Ativas
)
```
## 🎯 Principais Funcionalidades
### 📥 Download Inteligente
- Download assíncrono de alta performance
- Retomada automática em caso de falha
- Verificação de integridade de arquivos
- Cache inteligente para evitar downloads duplicados
### ⚙️ Processamento Otimizado
- Pipeline paralelo: download e processamento simultâneos
- Até **70% mais rápido** que processamento sequencial
- **Padronização automática de colunas**: CSVs renomeados conforme padrão esperado
- Suporte a múltiplos tipos: empresas, estabelecimentos, sócios, simples
- Exportação para Parquet com compressão eficiente
### 💾 Gestão Inteligente de Espaço
- **Limpeza automática**: Remove ZIPs e arquivos temporários por padrão
- **Banco opcional**: DuckDB criado apenas quando solicitado
- **Múltiplas estratégias**: De 15 GB (apenas banco) até 93 GB (tudo)
- **Controle total**: Flags para customizar retenção de artefatos
### 🎨 Painel Consolidado
- Combinação inteligente de dados de múltiplas fontes
- Filtros avançados: UF, situação cadastral, Simples Nacional
- Ideal para análises e dashboards
- Formato otimizado para BI tools
### 💾 Banco de Dados
- Geração automática de banco DuckDB
- Queries SQL de alta performance
- Integração perfeita com ferramentas de análise
## 📚 API Simplificada
A API do `cnpj-processor` foi projetada para ser **simples, poderosa e intuitiva**.
### Métodos Principais
#### `run()` - Método Universal
Execute qualquer operação com um único método:
```python
processor = CNPJProcessor()
# Pipeline completo
processor.run()
# Download específico
processor.run(step='download', tipos=['empresas'], remote_folder='2026-01')
# Processamento com economia de espaço
processor.run(
step='all',
delete_zips_after_extract=True,
cleanup_all_after_db=True
)
# Painel customizado
processor.run(
step='painel',
painel_uf='GO',
painel_situacao=2,
output_subfolder='painel_go_ativas'
)
```
**Parâmetros do `run()`:**
| Parâmetro | Tipo | Descrição |
| --------- | ---- | --------- |
| `step` | str | Etapa: 'download', 'extract', 'csv', 'process', 'database', 'painel', 'all' |
| `tipos` | list | Tipos a processar: ['empresas', 'estabelecimentos', 'simples', 'socios'] |
| `remote_folder` | str | Pasta remota (formato AAAA-MM) |
| `output_subfolder` | str | Subpasta de saída |
| `output_csv_folder` | str | Pasta de saída para CSVs normalizados (step csv) |
| `source_zip_folder` | str | Pasta de origem dos ZIPs (para extract/process) |
| `force_download` | bool | Forçar re-download |
| `keep_artifacts` | bool | Manter ZIPs e arquivos temporários (padrão: False) |
| `create_database` | bool | Criar banco DuckDB (padrão: False) |
| `cleanup_after_db` | bool | Remover parquets após criar banco |
| `keep_parquet_after_db` | bool | Manter parquets após criar banco |
| `export_csv_base` | bool | **NOVO**: Exportar tabelas base da API para CSV (step csv) |
| `export_parquet_base` | bool | **NOVO**: Copiar arquivos parquet base para pasta de saída (steps process/all) |
| `processar_painel` | bool | Processar painel consolidado |
| `painel_uf` | str | Filtrar painel por UF |
| `painel_situacao` | int | Filtrar por situação (1=Nula, 2=Ativa, 3=Suspensa, 4=Inapta, 8=Baixada) |
| `criar_empresa_privada` | bool | Criar subset de empresas privadas |
| `criar_subset_uf` | str | Criar subset por UF |
| `quiet` | bool | Modo silencioso |
| `log_level` | str | Nível de log ('DEBUG', 'INFO', 'WARNING', 'ERROR') |
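Os códigos numéricos de `painel_situacao` podem ser traduzidos para rótulos legíveis com um dicionário simples (esboço ilustrativo; a função `descrever_situacao` é hipotética e não faz parte da API do pacote):

```python
# Códigos de situação cadastral aceitos por `painel_situacao`
SITUACAO_CADASTRAL = {
    1: "Nula",
    2: "Ativa",
    3: "Suspensa",
    4: "Inapta",
    8: "Baixada",
}

def descrever_situacao(codigo: int) -> str:
    """Retorna o rótulo da situação cadastral, ou 'Desconhecida' se o código não existir."""
    return SITUACAO_CADASTRAL.get(codigo, "Desconhecida")

print(descrever_situacao(2))  # Ativa
```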
#### `get_latest_folder()` - Consultar Pasta Mais Recente
```python
processor = CNPJProcessor()
latest = processor.get_latest_folder()
print(f"Pasta mais recente: {latest}") # '2026-01'
```
#### `get_available_folders()` - Listar Pastas Disponíveis
```python
processor = CNPJProcessor()
folders = processor.get_available_folders()
print(f"Disponíveis: {folders}") # ['2026-01', '2025-12', ...]
```
## 💡 Exemplos Práticos
### Exemplo 1: Pipeline Completo
```python
from cnpj_processor import CNPJProcessor
processor = CNPJProcessor()
success, folder = processor.run()
if success:
print(f"✅ Dados processados em: {folder}")
```
### Exemplo 2: Download Seletivo
```python
# Baixar apenas empresas e estabelecimentos
processor = CNPJProcessor()
success, folder = processor.run(
step='download',
tipos=['empresas', 'estabelecimentos'],
remote_folder='2026-01'
)
```
### Exemplo 3: Processamento com Economia de Espaço
```python
# Padrão: Remove ZIPs e temporários automaticamente, mantém apenas parquets
processor = CNPJProcessor()
success, folder = processor.run() # ~20 GB
# Máxima economia: Criar banco e remover parquets
success, folder = processor.run(
create_database=True, # Cria banco DuckDB
cleanup_after_db=True # Remove parquets
) # ~15 GB
```
### Exemplo 4: Painel Analítico Customizado
```python
# Painel apenas de empresas ativas de Goiás
processor = CNPJProcessor()
success, folder = processor.run(
step='painel',
painel_uf='GO',
painel_situacao=2, # Ativas
output_subfolder='painel_go_ativas'
)
```
### Exemplo 5: Processar Múltiplos Períodos
```python
processor = CNPJProcessor()
pastas = ['2025-12', '2026-01']
for pasta in pastas:
print(f"Processando {pasta}...")
success, folder = processor.run(
step='all',
remote_folder=pasta,
output_subfolder=f'dados_{pasta.replace("-", "_")}'
)
print(f"{'✅' if success else '❌'} {pasta}")
```
### Exemplo 6: Geração e Exportação de CSVs
```python
# Gerar apenas CSVs normalizados (sem processar parquets)
processor = CNPJProcessor()
success, folder = processor.run(
step='csv',
tipos=['socios'],
output_csv_folder='csvs_normalizados'
)
# Gerar CSVs normalizados + exportar tabelas base da API para CSV
success, folder = processor.run(
step='csv',
export_csv_base=True # Exporta cnae, motivo, municipio, etc.
)
# Os CSVs base são exportados de cnpj_processor/parquet/base/
# para a pasta de saída especificada (padrão: dados-abertos/base/)
# Útil quando você:
# - Quer CSVs com nomes de colunas padronizados
# - Precisa das tabelas de referência em formato CSV
# - Prefere trabalhar com CSVs ao invés de parquets
# - Usa ferramentas externas que aceitam apenas CSV
```
### Exemplo 7: Processar com Arquivos Base da API
```python
# Processar dados + copiar arquivos base da API
processor = CNPJProcessor()
success, folder = processor.run(
step='process',
tipos=['empresas'],
output_subfolder='2026-01',
export_parquet_base=True # Copia parquets base para o destino
)
# Os arquivos base (cnae, motivo, municipio, natureza_juridica, qualificacao_socios)
# são copiados de cnpj_processor/parquet/base/ para output_subfolder/base/
# Necessário quando:
# - Processar painel em diferentes localizações
# - Ter todos os dados de referência junto com os dados processados
# - Deploy independente com todos os arquivos necessários
```
### Exemplo 8: Descompactação de ZIPs
```python
# Apenas descompactar arquivos ZIP (sem processar)
processor = CNPJProcessor()
success, folder = processor.run(
step='extract',
source_zip_folder='dados-abertos-zip/2026-01'
)
# Gerar CSVs normalizados (sem converter para parquet)
success, folder = processor.run(
step='csv',
tipos=['socios'],
output_csv_folder='csvs_normalizados'
)
# Útil quando você:
# - Quer verificar conteúdo dos ZIPs manualmente (extract)
# - Precisa de CSVs com nomes de colunas padronizados (csv)
# - Prefere fazer o processamento depois
# - Usa ferramentas externas para análise dos CSVs
# NOTA: Durante o processamento normal (step='process' ou 'all'),
# os nomes das colunas dos CSVs são automaticamente padronizados
# para corresponder ao esquema esperado do Parquet, sem necessidade
# de configuração adicional!
```
### Exemplo 9: Subset Especializado
```python
# Apenas empresas privadas
processor = CNPJProcessor()
success, folder = processor.run(
step='all',
tipos=['empresas'],
criar_empresa_privada=True,
output_subfolder='empresas_privadas'
)
# Apenas estabelecimentos de uma UF
success, folder = processor.run(
step='all',
tipos=['estabelecimentos'],
criar_subset_uf='GO',
output_subfolder='estabelecimentos_go'
)
```
### Exemplo 10: Estratégias de Espaço em Disco
```python
processor = CNPJProcessor()
# Estratégia 1: Análise de dados (padrão)
success, folder = processor.run()
# Espaço: ~20 GB (apenas parquets)
# Estratégia 2: Com banco de dados
success, folder = processor.run(create_database=True)
# Espaço: ~35 GB (parquets + banco)
# Estratégia 3: Máxima economia
success, folder = processor.run(
create_database=True,
cleanup_after_db=True
)
# Espaço: ~15 GB (apenas banco)
# Estratégia 4: Manter tudo (desenvolvimento)
success, folder = processor.run(
keep_artifacts=True,
create_database=True,
keep_parquet_after_db=True
)
# Espaço: ~93 GB (ZIPs + temporários + parquets + banco)
```
## 🔧 Uso via CLI
O `cnpj-processor` também oferece interface completa de linha de comando:
```bash
# Pipeline completo
cnpj-processor
# Download de pasta específica
cnpj-processor --step download --remote-folder 2026-01
# Apenas descompactar ZIPs (sem processar)
cnpj-processor --step extract --source-zip-folder dados-abertos-zip/2026-01
# Gerar CSVs normalizados
cnpj-processor --step csv --types socios --output-csv-folder csvs_normalizados
# Exportar tabelas base da API para CSV
cnpj-processor --step csv --export-csv-base
# Processar dados já descompactados
cnpj-processor --step process --source-zip-folder dados-abertos-zip/2026-01 --output-subfolder processados
# Processar com cópia de arquivos base
cnpj-processor --step process --types empresas --export-parquet-base
# Processar apenas estabelecimentos
cnpj-processor --types estabelecimentos
# Painel filtrado
cnpj-processor --step painel --painel-uf GO --painel-situacao 2
# Criar banco de dados (opcional)
cnpj-processor --create-database
# Máxima economia de espaço
cnpj-processor --create-database --cleanup-after-db
# Ver pasta mais recente disponível
cnpj-processor --show-latest-folder
# Ver versão
cnpj-processor --version
# Ajuda completa
cnpj-processor --help
```
### Atalhos de CLI
Interface otimizada com atalhos intuitivos:
```bash
# Equivalentes (forma completa vs. atalho)
cnpj-processor --types empresas --step download --remote-folder 2026-01
cnpj-processor -t empresas -s download -r 2026-01
# Descompactar e processar com atalhos
cnpj-processor --step extract --source-zip-folder dados-abertos-zip/2026-01
cnpj-processor -s extract -z dados-abertos-zip/2026-01
# Gerar CSVs normalizados com atalhos
cnpj-processor --step csv --types socios
cnpj-processor -s csv -t socios
# Exportar base para CSV com atalho
cnpj-processor --step csv --export-csv-base
cnpj-processor -s csv --export-csv-base
# Processar com arquivos base
cnpj-processor --step process --export-parquet-base
cnpj-processor -s process --export-parquet-base
# Criar banco com economia de espaço
cnpj-processor --create-database --cleanup-after-db --quiet
cnpj-processor -D -c -q
# Manter todos os artefatos
cnpj-processor --keep-artifacts --create-database --keep-parquet-after-db
cnpj-processor -k -D -K
# Painel filtrado
cnpj-processor --step painel --painel-uf GO --painel-situacao 2
cnpj-processor -s painel --painel-uf GO --painel-situacao 2
```
## 📊 Estrutura de Dados
### Arquivos Gerados
```text
parquet/
├── 2026-01/ # Pasta por período
│ ├── empresa/ # Dados de empresas
│ ├── estabelecimento/ # Dados de estabelecimentos
│ ├── simples/ # Dados do Simples Nacional
│ ├── socio/ # Dados de sócios
│ ├── painel_dados.parquet # Painel consolidado
│ └── cnpj.duckdb # Banco de dados
```
### Formato Painel
O painel consolidado combina dados de três fontes:
- **Estabelecimento**: CNPJ, razão social, endereço, situação
- **Empresa**: Nome fantasia, capital social, porte
- **Simples**: Opção pelo Simples Nacional, data de inclusão
Campos principais:
- `cnpj_basico`: CNPJ raiz (8 dígitos)
- `cnpj_completo`: CNPJ completo (14 dígitos)
- `razao_social`: Nome empresarial
- `nome_fantasia`: Nome fantasia
- `uf`: Unidade Federativa
- `municipio`: Município
- `situacao_cadastral`: Situação (Ativa, Baixada, etc.)
- `opcao_simples`: Se optante pelo Simples
- `capital_social`: Capital social da empresa
- `porte`: Porte da empresa
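Para conferência rápida, os dígitos verificadores de um `cnpj_completo` (14 dígitos) podem ser validados em Python puro com o algoritmo padrão de módulo 11 (esboço ilustrativo; a função `cnpj_valido` é hipotética e não faz parte da API do pacote):

```python
def cnpj_valido(cnpj: str) -> bool:
    """Valida os dois dígitos verificadores de um CNPJ completo (14 dígitos)."""
    nums = [int(c) for c in cnpj if c.isdigit()]
    if len(nums) != 14:
        return False

    def dv(parcial):
        # Pesos do módulo 11: ciclo 2..9, aplicado da direita para a esquerda
        pesos = [(i % 8) + 2 for i in range(len(parcial))][::-1]
        resto = sum(d * p for d, p in zip(parcial, pesos)) % 11
        return 0 if resto < 2 else 11 - resto

    return nums[12] == dv(nums[:12]) and nums[13] == dv(nums[:13])

print(cnpj_valido("11.222.333/0001-81"))  # True
```

A mesma função aceita o CNPJ com ou sem pontuação, já que apenas os dígitos são considerados.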
## 🎯 Casos de Uso
### 1. Análise de Mercado
```python
# Obter painel de empresas ativas por estado
processor = CNPJProcessor()
success, folder = processor.run(
step='painel',
painel_uf='GO',
painel_situacao=2
)
```
### 2. Compliance e Due Diligence
```python
# Download completo para análise interna
processor = CNPJProcessor()
success, folder = processor.run(
step='all',
tipos=['empresas', 'estabelecimentos', 'socios']
)
```
### 3. Data Science / ML
```python
# Preparar dados para modelos
processor = CNPJProcessor()
success, folder = processor.run(
step='all',
cleanup_after_db=True # Mantém apenas banco final
)
```
### 4. Dashboards BI
```python
# Gerar painel para PowerBI/Tableau
processor = CNPJProcessor()
success, folder = processor.run(
step='painel',
processar_painel=True
)
```
## 🔍 Requisitos do Sistema
- **Python**: 3.9 ou superior
- **Sistema Operacional**: Windows, Linux, macOS
- **Espaço em Disco**: Mínimo de 50 GB recomendado
- **Memória RAM**: Mínimo de 4 GB, recomendado 8 GB ou mais
- **Conexão Internet**: Necessária para download
## 🛡️ Tratamento de Erros
```python
from cnpj_processor import CNPJProcessor
processor = CNPJProcessor()
try:
success, folder = processor.run(
step='all',
tipos=['empresas']
)
if success:
print(f"✅ Sucesso! Dados em: {folder}")
else:
print("⚠️ Concluído com avisos. Verifique os logs.")
except KeyboardInterrupt:
print("\n🛑 Processamento interrompido pelo usuário")
except Exception as e:
print(f"❌ Erro: {e}")
```
## 📈 Performance
### Benchmarks
- **Pipeline Otimizado**: 70% mais rápido que processamento sequencial
- **Download Assíncrono**: Múltiplos arquivos simultâneos
- **Processamento Paralelo**: Utilização eficiente de múltiplos cores
- **Compressão Inteligente**: Arquivos Parquet com zstd
### Tempos Típicos
| Operação | Tempo Estimado |
| -------- | -------------- |
| Download completo | 5-15 minutos |
| Processamento (todos os tipos) | 10-30 minutos |
| Geração de banco | 2-5 minutos |
| Painel consolidado | 5-10 minutos |
> Tempos variam conforme hardware e conexão de rede
## 🤝 Contribuindo
Contribuições são bem-vindas! Por favor:
1. Fork o repositório
2. Crie uma branch para sua feature (`git checkout -b feature/AmazingFeature`)
3. Commit suas mudanças (`git commit -m 'Add some AmazingFeature'`)
4. Push para a branch (`git push origin feature/AmazingFeature`)
5. Abra um Pull Request
## 📝 Licença
Este projeto está licenciado sob a Licença MIT - veja o arquivo [LICENSE](LICENSE) para detalhes.
## 🔗 Links Úteis
- **PyPI**: <https://pypi.org/project/cnpj-processor/>
- **Documentação Completa**: Ver pasta `docs/` no repositório
- **Issues**: Reporte bugs e sugira melhorias
- **Dados CNPJ**: [Receita Federal - Dados Públicos](https://dados.gov.br/dados/conjuntos-dados/cadastro-nacional-da-pessoa-juridica---cnpj)
## 🙏 Agradecimentos
- Receita Federal do Brasil pela disponibilização dos dados públicos
- Comunidade Python pelo ecossistema de ferramentas excepcionais
- Todos os contribuidores do projeto
| text/markdown | null | Wesley Modanez Freitas <wesley.modanez@gmail.com> | null | null | null | cnpj, receita-federal, dados-abertos, brasil, empresas, estabelecimentos, data-processing, duckdb, parquet | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Lang... | [] | null | null | >=3.9 | [] | [] | [] | [
"beautifulsoup4>=4.12.0",
"duckdb>=0.10.0",
"python-dotenv>=1.0.0",
"aiohttp[speedups]>=3.9.5",
"aiofiles>=23.2.0",
"rich>=13.9.0",
"pyarrow>=19.0.0",
"matplotlib>=3.8.0",
"polars>=0.20.0",
"seaborn>=0.13.0",
"requests>=2.31.0",
"psutil>=5.9.8",
"pydantic>=2.0.0",
"pytest>=7.0.0; extra == ... | [] | [] | [] | [
"Homepage, https://github.com/wmodanez/cnpj",
"Documentation, https://github.com/wmodanez/cnpj/blob/develop/README.md",
"Repository, https://github.com/wmodanez/cnpj",
"Issues, https://github.com/wmodanez/cnpj/issues"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-19T18:21:31.106182 | cnpj_processor-4.3.1.tar.gz | 933,740 | 1f/da/0d4ddde3e30b6f972a566a806730cd017be67fa6a1426ef1ecfa63af1862/cnpj_processor-4.3.1.tar.gz | source | sdist | null | false | 9d70bb093dab3f03cfabf69be1d7f2f4 | d85e2cf78603d0d78d53d9a982ca84414ba1e0be33eb4e25ee86ff01f494af44 | 1fda0d4ddde3e30b6f972a566a806730cd017be67fa6a1426ef1ecfa63af1862 | MIT | [
"LICENSE"
] | 271 |
2.4 | restate-sdk | 0.15.0 | A Python SDK for Restate | [](https://docs.restate.dev)
[](https://github.com/restatedev/examples)
[](https://discord.gg/skW3AZ6uGd)
[](https://twitter.com/intent/follow?screen_name=restatedev)
# Restate Python SDK
[Restate](https://restate.dev/) is a system for easily building resilient applications using *distributed durable async/await*. This repository contains the Restate SDK for writing services in **Python**.
## Community
* 🤗️ [Join our online community](https://discord.gg/skW3AZ6uGd) for help, sharing feedback and talking to the community.
* 📖 [Check out our documentation](https://docs.restate.dev) to get quickly started!
* 📣 [Follow us on Twitter](https://twitter.com/restatedev) for staying up to date.
* 🙋 [Create a GitHub issue](https://github.com/restatedev/sdk-python/issues) for requesting a new feature or reporting a problem.
* 🏠 [Visit our GitHub org](https://github.com/restatedev) for exploring other repositories.
## Using the SDK
**Prerequisites**:
- Python >= v3.10
To use this SDK, add the dependency to your project:
```shell
pip install restate_sdk
```
## Versions
The compatibility with Restate is described in the following table:
| Restate Server\sdk-python | < 0.6 | 0.6 - 0.7 | 0.8 - 0.9 | 0.10 - 0.13 |
|---------------------------|------------------|-----------|------------------|------------------|
| < 1.3 | ✅ | ❌ | ❌ | ❌ |
| 1.3 | ✅ | ✅ | ✅ <sup>(1)</sup> | ✅ <sup>(2)</sup> |
| 1.4 | ✅ | ✅ | ✅ | ✅ <sup>(2)</sup> |
| 1.5 | ⚠ <sup>(3)</sup> | ✅ | ✅ | ✅ |
<sup>(1)</sup> **Note** The new Service/Object/Workflow constructor fields and the decorator fields `inactivity_timeout`, `abort_timeout`, `journal_retention`, `idempotency_retention`, `ingress_private`, `workflow_retention` work only from Restate 1.4 onward. Check the in-code documentation for more details.
<sup>(2)</sup> **Note** The new Service/Object/Workflow constructor field and the decorator field `invocation_retry_policy` work only from Restate 1.4 onward. Check the in-code documentation for more details.
<sup>(3)</sup> **Warning** SDK versions < 0.6 are deprecated, and cannot be registered anymore. Check the [Restate 1.5 release notes](https://github.com/restatedev/restate/releases/tag/v1.5.0) for more info.
## Contributing
We’re excited if you join the Restate community and start contributing!
Whether it is feature requests, bug reports, ideas & feedback or PRs, we appreciate any and all contributions.
We know that your time is precious and, therefore, deeply value any effort to contribute!
### Local development
* Python 3
* PyEnv or VirtualEnv
* [just](https://github.com/casey/just)
* [Rust toolchain](https://rustup.rs/)
Set up your virtual environment using the tool of your choice, e.g. VirtualEnv:
```shell
python3 -m venv .venv
source .venv/bin/activate
```
Install the build tools:
```shell
pip install -r requirements.txt
```
Now build the Rust module and include opt-in additional dev dependencies:
```shell
maturin dev -E test,lint
```
You usually need to build the Rust module only once, but you might need to rebuild it after pulling new changes.
For linting and testing:
```shell
just verify
```
## Releasing the package
Pull latest main:
```shell
git checkout main && git pull
```
**Update module version in `Cargo.toml` and run a local build to update the `Cargo.lock` too**, commit it. Then push tag, e.g.:
```
git tag -m "Release v0.1.0" v0.1.0
git push origin v0.1.0
```
| text/markdown; charset=UTF-8; variant=GFM | null | Restate Developers <dev@restate.dev> | null | null | null | null | [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Software Development :: Libraries :: Application Frameworks"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"google-adk>=1.20.0; extra == \"adk\"",
"httpx[http2]; extra == \"client\"",
"testcontainers; extra == \"harness\"",
"hypercorn; extra == \"harness\"",
"httpx; extra == \"harness\"",
"mypy>=1.11.2; extra == \"lint\"",
"pyright>=1.1.390; extra == \"lint\"",
"ruff>=0.6.9; extra == \"lint\"",
"openai-a... | [] | [] | [] | [
"Bug Tracker, https://github.com/restatedev/sdk-python/issues",
"Documentation, https://docs.restate.dev",
"Homepage, https://restate.dev",
"Source, https://github.com/restatedev/sdk-python"
] | maturin/1.12.3 | 2026-02-19T18:21:26.463310 | restate_sdk-0.15.0.tar.gz | 249,610 | f4/ef/b2a730cc9f99f01b6de39c42dd27ef6acb2059d7e592fd2971576c99432a/restate_sdk-0.15.0.tar.gz | source | sdist | null | false | 60911ece4c29e4d240d3a502c9b38aa0 | 8e2ba15244f36b8a8eeeb4d8eb3ae8413c86db85e13d55011c45ffed1ab25eb9 | f4efb2a730cc9f99f01b6de39c42dd27ef6acb2059d7e592fd2971576c99432a | null | [] | 3,849 |
2.4 | recce-cloud-nightly | 1.32.0.20260219 | Lightweight CLI for Recce Cloud operations | # Recce Cloud CLI
Lightweight command-line tool for managing dbt artifacts with Recce Cloud in
CI/CD environments.
## Overview
The Recce Cloud CLI (`recce-cloud`) is a standalone tool designed for CI/CD
pipelines that need to upload and download dbt artifacts (manifest.json and
catalog.json) to/from Recce Cloud without the full `recce` package dependencies.
**Key Features:**
- Lightweight - minimal dependencies for fast CI/CD execution
- Auto-detection - automatically detects CI platform, repository, and PR/MR
context
- Upload/Download - push and pull dbt artifacts to/from Recce Cloud sessions
- Flexible authentication - browser-based login, token-based auth, or CI tokens
- Platform-specific - optimized for GitHub Actions and GitLab CI
## Installation
### Quick Run (no install needed)
Using [uv](https://github.com/astral-sh/uv), you can run `recce-cloud` directly
without installation:
```bash
# Run with uvx (creates temporary isolated environment)
uvx recce-cloud upload --type prod
uvx recce-cloud download --prod --target-path target-base
# Short alias also available
uvx --from recce-cloud rcc upload --type prod
```
### Permanent Install
```bash
# With uv (recommended)
uv tool install recce-cloud
# With pip
pip install recce-cloud
# With pipx
pipx install recce-cloud
```
## Quick Start
### Local Development
```bash
# Login to Recce Cloud (opens browser for authentication)
recce-cloud login
# Initialize project binding (interactive)
recce-cloud init
# Check current status
recce-cloud init --status
# Logout
recce-cloud logout
```
### GitHub Actions
```yaml
- name: Upload to Recce Cloud
  run: recce-cloud upload
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

- name: Download from Recce Cloud
  run: recce-cloud download --prod --target-path target-base
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
### GitLab CI
```yaml
recce-upload:
  script:
    - recce-cloud upload

recce-download:
  script:
    - recce-cloud download --prod --target-path target-base
```
## CI/CD Workflows
### Upload Workflow
The `recce-cloud upload` command automatically creates sessions in supported CI
environments.
```bash
# Basic upload (auto-detects CI context)
recce-cloud upload
# Custom target path
recce-cloud upload --target-path custom-target
# Override PR number or session type
recce-cloud upload --pr 123 --type pr
# Generic workflow with session name (for other CI platforms)
recce-cloud upload --session-name "PR-123" --yes
```
**Options:**
| Option | Description |
| ---------------- | ------------------------------------------------ |
| `--target-path` | Path to dbt target directory (default: `target`) |
| `--session-id` | Session ID for generic workflow |
| `--session-name` | Session name for human-readable workflow |
| `--pr` | Override PR/MR number |
| `--type` | Override session type: `pr`, `prod`, `dev` |
| `--yes` | Auto-confirm session creation |
| `--dry-run` | Preview without uploading |
### Download Workflow
The `recce-cloud download` command retrieves artifacts from Recce Cloud
sessions.
```bash
# Download current PR/MR session
recce-cloud download
# Download production/base session
recce-cloud download --prod
# Download to custom path
recce-cloud download --prod --target-path target-base
# Force overwrite existing files
recce-cloud download --force
# Generic workflow with session ID
recce-cloud download --session-id abc123
```
**Options:**
| Option | Description |
| --------------- | ---------------------------------------- |
| `--target-path` | Download destination (default: `target`) |
| `--session-id` | Session ID for generic workflow |
| `--prod` | Download production/base session |
| `--force`, `-f` | Overwrite existing files |
| `--dry-run` | Preview without downloading |
## Authentication
The CLI supports multiple authentication methods (in priority order):
1. **RECCE_API_TOKEN** - Environment variable (recommended for CI)
2. **GITHUB_TOKEN** - GitHub Actions (must be explicitly set)
3. **CI_JOB_TOKEN** - GitLab CI (auto-detected)
4. **Stored credentials** - From `recce-cloud login`
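The priority order above can be sketched as a simple fall-through check. This is an illustrative sketch only, not the CLI's actual implementation; the environment variable names come from the table in this README, and `resolve_token` and `stored_token` are hypothetical names:

```python
def resolve_token(env, stored_token=None):
    """Hypothetical sketch of the documented credential priority order."""
    # 1. Explicit Recce Cloud API token always wins (recommended for CI)
    if env.get("RECCE_API_TOKEN"):
        return "recce", env["RECCE_API_TOKEN"]
    # 2. GitHub Actions token (must be set explicitly in the workflow)
    if env.get("GITHUB_TOKEN"):
        return "github", env["GITHUB_TOKEN"]
    # 3. GitLab CI job token (injected automatically by GitLab)
    if env.get("CI_JOB_TOKEN"):
        return "gitlab", env["CI_JOB_TOKEN"]
    # 4. Credentials stored locally by `recce-cloud login`
    if stored_token:
        return "stored", stored_token
    return None, None
```

A practical consequence: setting `RECCE_API_TOKEN` overrides any platform token, so a single variable is enough to force a specific identity in any CI environment.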
### Getting API Tokens
**Recce Cloud API Token:**
1. Log in to [Recce Cloud](https://cloud.datarecce.io)
2. Go to Settings → API Tokens
## CI/CD Integration Examples
### GitHub Actions - Complete Workflow
```yaml
name: Recce CI

on:
  pull_request:
    branches: [main]

jobs:
  recce:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.11"

      - name: Install dependencies
        run: pip install dbt-core dbt-snowflake recce-cloud

      # Download production artifacts for comparison
      - name: Download base artifacts
        run: recce-cloud download --prod --target-path target-base
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      # Build current PR
      - name: Build dbt project
        run: |
          dbt deps
          dbt build
          dbt docs generate

      # Upload current PR artifacts
      - name: Upload to Recce Cloud
        run: recce-cloud upload
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
### GitLab CI - Complete Workflow
```yaml
stages:
  - download
  - build
  - upload

recce-download-base:
  stage: download
  image: python:3.11-slim
  script:
    - pip install recce-cloud
    - recce-cloud download --prod --target-path target-base
  artifacts:
    paths:
      - target-base/
  only:
    - merge_requests

dbt-build:
  stage: build
  image: python:3.11-slim
  script:
    - pip install dbt-core dbt-snowflake
    - dbt deps
    - dbt build
    - dbt docs generate
  artifacts:
    paths:
      - target/
  only:
    - merge_requests

recce-upload:
  stage: upload
  image: python:3.11-slim
  script:
    - pip install recce-cloud
    - recce-cloud upload
  dependencies:
    - dbt-build
  only:
    - merge_requests
```
### Generic CI Platform
For other CI platforms, use session name workflow with your PR/MR number:
```bash
export RECCE_API_TOKEN=your_token_here
# Upload PR artifacts (creates session if not exists)
recce-cloud upload --session-name "PR-${PR_NUMBER}" --yes
# Upload production artifacts (in CD pipeline after merge)
recce-cloud upload --type prod --yes
```
The `--session-name` option creates a human-readable session that's easy to
track. Use `--yes` to auto-confirm session creation in CI environments.
## Environment Variables
| Variable | Description |
| ------------------ | ---------------------------------------- |
| `RECCE_API_TOKEN` | Recce Cloud API token |
| `RECCE_SESSION_ID` | Default session ID for generic workflows |
| `GITHUB_TOKEN` | GitHub authentication (Actions) |
| `CI_JOB_TOKEN` | GitLab CI job token (auto-detected) |
## Additional Commands
Beyond upload and download, the CLI provides:
```bash
# List sessions in your project
recce-cloud list
# Delete a session
recce-cloud delete --session-id abc123
# Generate AI review for a session
recce-cloud review --session-id abc123
# Generate PR metrics report
recce-cloud report --since 30d
# Diagnose setup issues
recce-cloud doctor
# Show version
recce-cloud version
```
Run `recce-cloud <command> --help` for detailed options.
## Troubleshooting
### Quick Diagnosis
```bash
recce-cloud doctor
```
This validates login status, project binding, and session availability.
### Common Issues
**Missing dbt artifacts:**
```bash
dbt build
dbt docs generate # Required before upload
recce-cloud upload
```
**Authentication failed:**
- For GitHub Actions: Set `GITHUB_TOKEN` in env
- For GitLab CI: `CI_JOB_TOKEN` is auto-detected
- For generic CI: Set `RECCE_API_TOKEN`
**Platform not supported:**
```bash
# Use session name workflow for unsupported CI platforms
recce-cloud upload --session-name "PR-${PR_NUMBER}" --yes
```
### Debug Mode
```bash
export RECCE_LOG_LEVEL=DEBUG
recce-cloud upload
```
## Support
- **Documentation:** [docs.reccehq.com](https://docs.reccehq.com)
- **Issues:** [GitHub Issues](https://github.com/DataRecce/recce/issues)
- **Community:** [Recce Slack](https://getdbt.slack.com/archives/C05C28V7CPP)
- **Email:** <support@reccehq.com>
## License
Apache License 2.0 - See [LICENSE](../LICENSE) file for details.
| text/markdown | null | InfuseAI Dev Team <dev@infuseai.io> | null | null | Apache-2.0 | null | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programm... | [] | null | null | >=3.9 | [] | [] | [] | [
"click>=7.1",
"cryptography>=3.4",
"pyyaml>=6.0",
"requests>=2.28.1",
"rich>=12.0.0",
"flake8>=7.2.0; extra == \"dev\"",
"pytest>=4.6; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://cloud.reccehq.com",
"Documentation, https://docs.reccehq.com",
"Repository, https://github.com/DataRecce/recce",
"Bug Tracker, https://github.com/DataRecce/recce/issues",
"Changelog, https://github.com/DataRecce/recce/releases"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-19T18:21:19.216868 | recce_cloud_nightly-1.32.0.20260219.tar.gz | 192,830 | b7/f0/c97d0980e428cbee2416cf02c742d28188e3991a14205472f681f0a715ef/recce_cloud_nightly-1.32.0.20260219.tar.gz | source | sdist | null | false | 4e9600ea61fd3ce38915eb1a51f236f7 | d63596fa753b8f7520d2b309aaffa17db6c474492087d898d128c5fb339d6db6 | b7f0c97d0980e428cbee2416cf02c742d28188e3991a14205472f681f0a715ef | null | [] | 204 |
2.4 | imaegete | 0.1.4 | A FOSS image viewer built with PyQt6 | # Imaegete
A FOSS image viewer built with PyQt6.
## Features
- Fast keyboard-driven navigation
- Zoom + pan
- Animated GIF support
- Slideshow with tap-tempo
- On-disk metadata cache + in-memory pixmap cache
- Folder watching (auto-refresh when files change)
- Vim-inspired command mode (`:`) and filename search (`/`)
- Category sorting (move current image into configured folders)

## Install
```bash
# from PyPI
pip install imaegete
# from source (development)
git clone https://github.com/actx4gh/Imaegete.git
cd Imaegete
pip install -r requirements-dev.txt
pip install -e .
```
## Usage
```bash
imaegete [options] [paths...]
```
Examples:
```bash
# scan a directory
imaegete ~/Pictures
# open a single file (also uses its parent directory as the effective root)
imaegete ~/Pictures/foo.jpg
# playlist/selection-only mode: open exactly these files (no directory scanning)
imaegete 0.jpg 1.jpg 2.jpg
```
### Playlist mode (multi-select)
If you launch Imaegete with **multiple file paths** (for example via a file manager “Open With” multi-select), it runs in **selection-only / playlist mode**:
- the image list is exactly the argv file list (deduped, order preserved)
- folder scanning does **not** run
- watchers may refresh/notice deletions, but will **not** inject unrelated files
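The "deduped, order preserved" rule can be expressed as a short order-preserving de-duplication pass. A minimal sketch, not Imaegete's actual code (`build_playlist` is a hypothetical name):

```python
def build_playlist(argv_paths):
    """Keep the first occurrence of each path, preserving argv order."""
    seen = set()
    playlist = []
    for path in argv_paths:
        if path not in seen:
            seen.add(path)
            playlist.append(path)
    return playlist
```

For example, launching with `0.jpg 1.jpg 0.jpg 2.jpg` yields the list `0.jpg 1.jpg 2.jpg`.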
### Common options
- `--start_dirs DIR [DIR ...]` scan one or more folders (default: `.`)
- `--categories CAT [CAT ...]` category folder names (enables move-to-category keys)
- `--sort_dir DIR` base directory to create category folders in (defaults to `start_dirs`)
- `--cache_dir DIR` cache location (default: `~/.config/Imaegete/cache`)
- `--cache_size MB` cache size in MB
- `--clear_cache` clear cache and exit
For the full CLI: `imaegete --help`
## Controls
### Keyboard shortcuts
All shortcuts below are active in **normal mode** (i.e. when you are *not* typing into the `:` or `/` bars). Press `Esc` to leave command/search and return to normal mode.
| Action | Key |
|---|---|
| Next image | `Right` / `j` |
| Previous image | `Left` / `k` |
| First image | `Home` / `gg` |
| Last image | `End` / `G` |
| Random image / toggle shuffle mode | `R` |
| Toggle slideshow | `S` |
| Slideshow/GIF/flood-zoom speed up | `]` |
| Slideshow/GIF/flood-zoom speed down | `[` |
| Zoom in | `+` |
| Zoom out | `-` |
| Reset zoom (fit-to-window) | `=` |
| Flood-zoom (auto-zoom) toggle | `\\` |
| Toggle fullscreen | `F` |
| Delete current image | `Delete` |
| Undo last delete/move | `U` |
| Move to category 1..9 (if configured) | `1` … `9` |
| Enter command mode | `:` |
| Enter filename search | `/` |
| Exit command/search bars | `Esc` |
| Quit | `Q` |
### Mouse
- Zoom: mouse wheel
- Pan: left-click + drag (only when zoomed in past “fit to window”)
### Slideshow tap-tempo
While slideshow is running, **manual navigation taps** set the interval:
- two taps establish a tempo; additional taps refine it
- direction changes reset the tap sequence
- taps time out after inactivity
Keys that count as taps: `Right`/`j` (next), `Left`/`k` (previous), `R` (random).
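The tap-tempo rules above (two taps establish a tempo, later taps refine it, direction changes and timeouts reset the sequence) can be sketched like this. This is a hypothetical illustration of the described behaviour, not Imaegete's implementation:

```python
class TapTempo:
    """Sketch of the described tap-tempo behaviour (hypothetical code).

    Two taps establish an interval; additional taps refine it by
    averaging the gaps; a direction change or a long pause resets.
    """

    def __init__(self, timeout=3.0):
        self.timeout = timeout   # seconds of inactivity before reset
        self.taps = []
        self.direction = None

    def tap(self, t, direction="next"):
        # A direction change or a pause longer than the timeout
        # discards the previous tap sequence.
        if self.direction != direction or (
            self.taps and t - self.taps[-1] > self.timeout
        ):
            self.taps = []
        self.direction = direction
        self.taps.append(t)
        if len(self.taps) < 2:
            return None  # need two taps to establish a tempo
        gaps = [b - a for a, b in zip(self.taps, self.taps[1:])]
        return sum(gaps) / len(gaps)  # refined interval in seconds
```

Tapping at t = 0.0 and t = 0.5 sets a 0.5 s interval; a third tap at t = 1.1 refines it to the average gap of 0.55 s.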
### Category moves
If you pass `--categories` (or set categories in config), you can move the current image into a category folder:
- press `1`..`9` to move to the corresponding category (first 9 only)
- or use `:m <category>` / `:m <N>` in command mode
## Vim-style modes
### Command mode (`:`)
Type `:` to open the command bar, enter a command, and press Enter to run it.
- `:q` / `:quit` / `:exit` — quit
- `:n` / `:next` — next image
- `:p` / `:prev` / `:previous` — previous image
- `:first` / `:last` / `:rand` (`:random`) — jump
- `:del` / `:delete` / `:rm` — delete current image
- `:u` / `:undo` — undo last move/delete
- `:m <category>` or `:m <N>` — move to category by name or 1-based index
- `:fs` (`:fullscreen`) — toggle fullscreen
- `:ss` (`:slideshow`) [`on`/`off`] — toggle or set slideshow
- `:<N>` — jump to 1-based index (e.g. `:12`)
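The alias table above amounts to a small command dispatcher. A sketch of how such parsing could work, using only the commands listed; this is illustrative, not Imaegete's parser, and `parse_command` is a hypothetical name:

```python
ALIASES = {
    "q": "quit", "quit": "quit", "exit": "quit",
    "n": "next", "next": "next",
    "p": "prev", "prev": "prev", "previous": "prev",
    "first": "first", "last": "last",
    "rand": "random", "random": "random",
    "del": "delete", "delete": "delete", "rm": "delete",
    "u": "undo", "undo": "undo",
    "m": "move",
    "fs": "fullscreen", "fullscreen": "fullscreen",
    "ss": "slideshow", "slideshow": "slideshow",
}

def parse_command(text):
    """Map a ':' command string to (action, args) — hypothetical sketch."""
    text = text.lstrip(":").strip()
    if text.isdigit():
        return ("goto", int(text))  # :12 -> jump to 1-based index 12
    parts = text.split()
    action = ALIASES.get(parts[0])
    return (action, parts[1:] or None)
```

So `:q` maps to `("quit", None)`, `:m work` to `("move", ["work"])`, and `:12` to `("goto", 12)`.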
### Filename search (`/`)
Type `/` to open the search bar.
- start typing to live-filter the image list
- select from the popup list (mouse or keyboard) and press Enter
- `Esc` exits search without jumping
## License
AGPLv3 (see LICENSE)
| text/markdown | Aaron Colichia | null | null | null | GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright © 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for software and other kinds of works, specifically designed to ensure cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, our General Public Licenses are intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License which gives you legal permission to copy, distribute and/or modify the software.
A secondary benefit of defending all users' freedom is that improvements made in alternate versions of the program, if they receive widespread use, become available for other developers to incorporate. Many developers of free software are heartened and encouraged by the resulting cooperation. However, in the case of software used on network servers, this result may fail to come about. The GNU General Public License permits making a modified version and letting the public access it on a server without ever releasing its source code to the public.
The GNU Affero General Public License is designed specifically to ensure that, in such cases, the modified source code becomes available to the community. It requires the operator of a network server to provide the source code of the modified version running there to the users of that server. Therefore, public use of a modified version, on a publicly accessible server, gives the public access to the source code of the modified version.
An older license, called the Affero General Public License and published by Affero, was designed to accomplish similar goals. This is a different license, not a version of the Affero GPL, but Affero has released a new version of the Affero GPL which permits relicensing under this license.
The precise terms and conditions for copying, distribution and modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based on the Program.
To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work.
A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.
The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified it, and giving a relevant date.
b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices".
c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.
A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:
a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.
d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.
"Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).
The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or authors of the material; or
e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.
All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.
A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software. This Corresponding Source shall include the Corresponding Source for any work covered by version 3 of the GNU General Public License that is incorporated pursuant to the following paragraph.
Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the work with which it is combined will remain governed by version 3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of the GNU Affero General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU Affero General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU Affero General Public License, you may choose any version ever published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future versions of the GNU Affero General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.
Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer network, you should also make sure that it provides a way for users to get its source. For example, if your program is a web application, its interface could display a "Source" link that leads users to an archive of the code. There are many ways you could offer source, and different solutions will be better for different programs; see section 13 for the specific requirements.
You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU AGPL, see <https://www.gnu.org/licenses/>.
| image-viewer, pyqt6, sway, lxqt | [
"Development Status :: 3 - Alpha",
"Environment :: X11 Applications :: Qt",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :... | [] | null | null | >=3.10 | [] | [] | [] | [
"Pillow>=10",
"PyQt6>=6.7",
"PyYAML>=6.0",
"colorama>=0.4",
"confuse>=2.0",
"natsort>=8.4",
"packaging>=24",
"psutil>=6.0",
"watchfiles>=0.21",
"GlavnaQt>=0.1.0",
"confumo>=0.1.1",
"build>=1.2; extra == \"dev\"",
"twine>=6.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/actx4gh/Imaegete",
"Repository, https://github.com/actx4gh/Imaegete"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:20:37.365324 | imaegete-0.1.4.tar.gz | 100,476 | b9/4f/a99dae8ff9ee57ce631dc007c51deab8ba57e1fd3fe97c1d71e9f39f17c9/imaegete-0.1.4.tar.gz | source | sdist | null | false | 906505a7f508340734daf192e0143e8a | 0b199bd5a099dc567f0afabbd708bfb60205c6c533c7f3defc41754334a08adf | b94fa99dae8ff9ee57ce631dc007c51deab8ba57e1fd3fe97c1d71e9f39f17c9 | null | [
"LICENSE"
] | 240 |
2.4 | python-wekan | 0.3.2 | This is a python client for interacting with the WeKan® REST-API | 
# python-wekan
This is a python client for interacting with the [WeKan®](https://github.com/wekan/wekan) REST-API.
Each WeKan object is represented by a corresponding python object.
For further details about the [WeKan® API](https://wekan.github.io/api) consider the official documentation.
## Installation
The project assumes that you are using a [currently-supported](https://devguide.python.org/versions/) version of Python, which is 3.9+ at the time of writing.
### From OS Packaging
[](https://repology.org/project/python:wekan/versions)
### Via pip
```bash
pip install python-wekan
```
## How to use
### Set the credentials
```bash
export WEKAN_USERNAME="USERNAME"
export WEKAN_PASSWORD="PASSWORD"
```
### Use the module
```python
import os
from wekan import WekanClient
if __name__ == '__main__':
wekan = WekanClient(
base_url='https://your_wekan_instance.com',
username=os.getenv('WEKAN_USERNAME'),
password=os.getenv('WEKAN_PASSWORD'))
boards = wekan.list_boards()
for board in boards:
print(board.title)
```
### Dependencies between the wekan python objects
There are dependencies between these objects: each one is obtained through its parent.
The diagram below shows how the objects relate.
```mermaid
graph TD;
WekanClient-->Board;
WekanClient-->User;
Board-->List;
Board-->Swimlane;
Swimlane-->Card;
Board-->Integration;
Board-->CustomField;
Board-->Label;
List-->Card;
Card-->CardComment;
Card-->CardChecklist;
CardChecklist-->CardChecklistItem;
```
Example:
If you want to fetch the cards within a list, you need to get the board and the list object first.
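The dependency diagram can be read as a lookup chain. A toy sketch (the graph encoding and helper below are illustrative, not part of the python-wekan API) that derives the access path you must follow to reach a given object:

```python
from collections import deque

# Edges from the mermaid diagram above: parent -> children
DEPENDENCIES = {
    "WekanClient": ["Board", "User"],
    "Board": ["List", "Swimlane", "Integration", "CustomField", "Label"],
    "Swimlane": ["Card"],
    "List": ["Card"],
    "Card": ["CardComment", "CardChecklist"],
    "CardChecklist": ["CardChecklistItem"],
}

def access_path(start, target):
    """Return the shortest chain of objects to fetch to reach `target` (BFS)."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for child in DEPENDENCIES.get(path[-1], []):
            queue.append(path + [child])
    return None

# To fetch the cards of a list, the board and the list come first:
print(access_path("WekanClient", "Card"))
# ['WekanClient', 'Board', 'List', 'Card']
```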
## Examples
### Add a new board
```python
import os
from wekan import WekanClient
wekan = WekanClient(
base_url='https://your_wekan_instance.com',
username=os.getenv('WEKAN_USERNAME'),
password=os.getenv('WEKAN_PASSWORD'))
new_board = wekan.add_board(
title="My new Board",
color="midnight",
is_admin=True,
is_no_comments=False,
is_comment_only=False)
print(new_board.created_at)
```
### Create a new list
```python
import os
from wekan import WekanClient
wekan = WekanClient(
base_url='https://your_wekan_instance.com',
username=os.getenv('WEKAN_USERNAME'),
password=os.getenv('WEKAN_PASSWORD'))
board = wekan.list_boards(regex_filter='My new Board')[0]
board.add_list(title="My first list")
board.add_list(title="My second list")
```
### Create a new card
```python
import os
from wekan import WekanClient
wekan = WekanClient(
base_url='https://your_wekan_instance.com',
username=os.getenv('WEKAN_USERNAME'),
password=os.getenv('WEKAN_PASSWORD'))
board = wekan.list_boards(regex_filter='My new Board')[0]
wekan_list = board.list_lists(regex_filter="My first list")[0]
swimlane = board.list_swimlanes()[0]
wekan_list.add_card(
title="My first card",
swimlane=swimlane,
description="My first description")
```
### Move card between lists
```python
import os
from wekan import WekanClient
wekan = WekanClient(
base_url='https://your_wekan_instance.com',
username=os.getenv('WEKAN_USERNAME'),
password=os.getenv('WEKAN_PASSWORD'))
board = wekan.list_boards(regex_filter='My new Board')[0]
src_list = board.list_lists(regex_filter="My first list")[0]
dst_list = board.list_lists(regex_filter="My second list")[0]
card = src_list.list_cards(regex_filter="My first card")[0]
card.edit(new_list=dst_list)
```
### Create a new swimlane
```python
import os
from wekan import WekanClient
wekan = WekanClient(
base_url='https://your_wekan_instance.com',
username=os.getenv('WEKAN_USERNAME'),
password=os.getenv('WEKAN_PASSWORD'))
board = wekan.list_boards(regex_filter='My new Board')[0]
board.add_swimlane(title="My first swimlane")
```
## Development
### Generate requirements
```bash
pipenv requirements > requirements.txt
pipenv requirements --dev-only > requirements_dev.txt
```
## Credits
This project is based on [py-trello](https://github.com/sarumont/py-trello).
Some methods and design structures have been adopted 1:1.
| text/markdown | Bastian Wenske | null | null | null | BSD 3-Clause License | python | [
"Programming Language :: Python :: 3",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"certifi==2024.7.4",
"charset-normalizer==3.3.2",
"idna==3.7",
"python-dateutil==2.9.0.post0",
"requests==2.32.4",
"six==1.16.0",
"urllib3==2.6.3"
] | [] | [] | [] | [
"homepage, https://github.com/bastianwenske/python-wekan"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-19T18:20:36.077552 | python_wekan-0.3.2.tar.gz | 32,172 | 47/aa/b5536f713f58d3cfd5eafc780ea056ee363ec5c58cd71251de327645d397/python_wekan-0.3.2.tar.gz | source | sdist | null | false | 3b2c2b976a48fdd841d8bf237c3d8a2d | 337d2c3785cc62908c020b916fe36870038bbfe9bea4721b65bd1828a19b1a9b | 47aab5536f713f58d3cfd5eafc780ea056ee363ec5c58cd71251de327645d397 | null | [
"LICENSE"
] | 236 |
2.4 | smolvm | 0.0.5a0 | Secure runtime for AI agents and tools -- free and open-source from Celesto AI 🧡 | <div align="center">
# SmolVM
**Secure runtime for AI agents and tools**
[](https://opensource.org/licenses/Apache-2.0)
[](https://www.python.org/downloads/)
[Docs](https://docs.celesto.ai) • [Examples](examples/) • [GitHub](https://github.com/celestoai/smolvm)
</div>
---
SmolVM is a lightning-fast, secure microVM runtime designed for high-density isolation. It provides AI agents and tools with a safe, hardware-virtualized environment to execute untrusted code without risking the host system.
## ✨ Features
- **🔒 Secure Isolation**: Hardware-level virtualization (utilizing Firecracker) for strong sandbox boundaries.
- **⚡ Blazing Fast**: MicroVMs boot in sub-second time with minimal overhead.
- **🐍 Python Native**: Clean, high-level SDK for managing VM lifecycles and command execution.
- **🌐 Automatic Networking**: Built-in NAT, port forwarding, and SSH tunneling.
- **🛠️ Custom Images**: Build specialized Debian-based rootfs images with your own tools.
- **🧹 Auto-Cleanup**: Integrated resource management to keep your host system clean.
## 🤔 Why SmolVM?
AI agents often need to execute arbitrary code (Python, JS, shell scripts) generated by LLMs. Running this code directly on your host or in standard containers can be risky.
- **MicroVM-based Security**: Unlike containers that share the host kernel, SmolVM uses KVM-backed microVMs. This provides a significantly smaller attack surface and stronger hardware-level isolation.
- **Agent-First Design**: SmolVM abstracts away the complexity of microVM networking, storage, and TAP devices into a simple, pythonic API.
## 🚀 Quickstart
### 1. Prerequisites
- **Linux + Firecracker backend**: KVM support (Ubuntu/Debian/Fedora).
- **macOS + QEMU backend**: Homebrew and QEMU (`qemu-system-*`).
### 2. Installation
```bash
# Install the Python package
pip install smolvm
```
Linux (Firecracker):
```bash
sudo ./scripts/system-setup.sh --configure-runtime
```
macOS (QEMU):
```bash
./scripts/system-setup-macos.sh
# Optional explicit backend override:
# export SMOLVM_BACKEND=qemu
```
### 3. Basic Usage
Initialize a VM with no arguments for an auto-configured, SSH-ready environment:
```python
from smolvm import SmolVM
# Start sandboxed runtime
vm = SmolVM()
vm.start()
# Run ANY command like a real system
result = vm.run("echo 'Hello from the sandbox!'")
print(result.output)
# Stop the runtime
vm.stop()
```
Customize auto-config memory and disk size:
```python
from smolvm import SmolVM
# Use with context manager (auto start and deletes after use)
with SmolVM(mem_size_mib=2048, disk_size_mib=4096) as vm:
print(vm.run("free -m").output)
```
### 4. Reconnect to an existing VM
You can also reconnect to a running VM by its ID:
```python
from smolvm import SmolVM
# Reconnect to an existing VM
vm = SmolVM.from_id("vm-abcdef12")
print(f"Status: {vm.status}")
```
### 4.1 Disk isolation defaults
SmolVM now defaults to **isolated per-VM disks** (`disk_mode="isolated"`),
so each VM gets its own writable rootfs clone (sandbox-by-default).
If you intentionally want shared/persistent image behavior across VMs, set:
```python
from smolvm import VMConfig
config = VMConfig(..., disk_mode="shared")
```
### 5. Port Forwarding
Expose a guest application to your local machine securely. `expose_local` prefers host-local nftables forwarding and automatically falls back to an SSH tunnel when needed.
```python
from smolvm import SmolVM
with SmolVM() as vm:
# Example: App in VM listening on port 8080, expose to host port 18080
host_port = vm.expose_local(guest_port=8080, host_port=18080)
print(f"App available at http://localhost:{host_port}")
```
### 6. Environment Variables
Inject environment variables into a running VM. Variables are persisted in
`/etc/profile.d/smolvm_env.sh` and apply to new SSH/login shell sessions.
```python
from smolvm import SmolVM
with SmolVM() as vm:
vm.set_env_vars({"API_KEY": "sk-...", "DEBUG": "1"})
print(vm.list_env_vars())
print(vm.run("echo $API_KEY").output)
```
CLI:
```bash
smolvm env set <vm_id> API_KEY=sk-... DEBUG=1
smolvm env list <vm_id> --show-values
smolvm env unset <vm_id> DEBUG
```
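The CLI accepts environment variables as `KEY=VALUE` arguments. A minimal sketch of how such pairs can be parsed (the helper name is illustrative, not taken from the smolvm codebase):

```python
def parse_env_pairs(pairs):
    """Split 'KEY=VALUE' arguments, keeping any '=' inside the value intact."""
    env = {}
    for pair in pairs:
        key, sep, value = pair.partition("=")
        if not sep or not key:
            raise ValueError(f"expected KEY=VALUE, got {pair!r}")
        env[key] = value
    return env

print(parse_env_pairs(["API_KEY=sk-abc", "DEBUG=1"]))
# {'API_KEY': 'sk-abc', 'DEBUG': '1'}
```

`str.partition` splits on the first `=` only, so values such as connection strings containing `=` survive unchanged.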
Diagnostics:
```bash
# Auto-detect backend (Darwin -> qemu, Linux -> firecracker)
smolvm doctor
# Force backend checks
smolvm doctor --backend firecracker
smolvm doctor --backend qemu
# CI-friendly mode
smolvm doctor --json --strict
```
## ⚡ Performance
SmolVM is optimized for low-latency agent workflows. Latest lifecycle timings (p50) on a standard Linux host:
| Phase | Time |
|---|---|
| Create + Start | ~572ms |
| SSH ready | ~2.1s |
| Command execution | **~43ms** |
| Stop + Delete | ~751ms |
| **Full lifecycle (boot → run → teardown)** | **~3.5s** |
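The full-lifecycle figure is roughly the sum of the individual phases. A quick check of the arithmetic from the table:

```python
# p50 phase timings from the table above, in milliseconds
phases = {
    "create_start": 572,
    "ssh_ready": 2100,
    "command": 43,
    "stop_delete": 751,
}
total_s = sum(phases.values()) / 1000
print(f"{total_s:.2f}s")  # 3.47s, consistent with the ~3.5s reported
```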
Run the benchmark yourself:
```bash
python scripts/benchmarks/bench_subprocess.py --vms 10 -v
```
> Measured on AMD Ryzen 7 7800X3D (8C/16T), Ubuntu Linux, KVM/Firecracker backend.
## 🔐 SSH trust model (important)
SmolVM currently prioritizes zero-touch VM access for local agent workflows.
By default, SSH host keys are not strictly verified during first connection
(`paramiko.AutoAddPolicy`).
- Use SmolVM on trusted/local networks and hosts.
- Do not expose guest SSH endpoints publicly without additional controls.
- See [SECURITY.md](SECURITY.md) for policy and scope details.
## 📄 License
Apache 2.0 License - see [LICENSE](LICENSE) for details.
---
<div align="center">
Built with 🧡 by <a href="https://celesto.ai">Celesto AI</a>
</div>
| text/markdown | null | Celesto AI <hello@celesto.ai> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS :: MacOS X",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Langua... | [] | null | null | >=3.10 | [] | [] | [] | [
"paramiko>=3.0",
"pydantic>=2.0",
"requests-unixsocket>=0.3",
"requests>=2.28",
"fastapi>=0.115.0; extra == \"dashboard\"",
"uvicorn[standard]>=0.34.0; extra == \"dashboard\"",
"websockets>=14.0; extra == \"dashboard\"",
"mypy>=1.0; extra == \"dev\"",
"pre-commit>=4.2.0; extra == \"dev\"",
"pytest... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:20:00.667983 | smolvm-0.0.5a0.tar.gz | 136,970 | e7/80/ef96fbc0e31adb6fe4a9858cf7c94eca5123154df5504d87fde740d5316f/smolvm-0.0.5a0.tar.gz | source | sdist | null | false | 8cf62879577ffe21134e637582f115e9 | f501998f14e3721cac8277c547d365b232d1c08915273cacad7ab1d50bd58385 | e780ef96fbc0e31adb6fe4a9858cf7c94eca5123154df5504d87fde740d5316f | Apache-2.0 | [
"LICENSE"
] | 202 |
2.4 | patrocinio-research-assistant-mcp | 0.1.0 | Research Assistant MCP | # Yahoo Finance MCP Server
A Model Context Protocol (MCP) server that provides access to Yahoo Finance data. This server enables AI assistants to fetch real-time stock prices, financial statements, news, options data, and more through a standardized interface.
PyPI: https://pypi.org/project/yahoo-finance-mcp-server/
## Features
- **Historical Stock Prices**: Get historical price data with customizable periods and intervals
- **Stock Information**: Access comprehensive stock data including financials, metrics, and company info
- **Financial Statements**: Retrieve income statements, balance sheets, and cash flow statements
- **News**: Fetch latest news articles for any ticker
- **Options Data**: Get option chains, expiration dates, and options analytics
- **Holder Information**: Access institutional, mutual fund, and insider holder data
- **Analyst Recommendations**: Get analyst ratings, upgrades, and downgrades
- **Corporate Actions**: Track dividends and stock splits
## Installation
### Using uvx (Recommended)
The easiest way to use this MCP server is with `uvx`, which runs it directly without installation:
```bash
uvx yahoo-finance-mcp-server
```
### Using uv
Install the package using `uv`:
```bash
uv pip install yahoo-finance-mcp-server
```
### Using pip
```bash
pip install yahoo-finance-mcp-server
```
### From Source
```bash
git clone https://github.com/laxmimerit/yahoo-finance-mcp-server.git
cd yahoo-finance-mcp-server
uv pip install -e .
```
## Configuration
### Claude Desktop Configuration
Add this to your Claude Desktop configuration file:
**MacOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
**Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
#### Using uvx (Recommended):
```json
{
"mcpServers": {
"yahoo-finance": {
"command": "uvx",
"args": ["yahoo-finance-mcp-server"]
}
}
}
```
#### Using uv:
```json
{
"mcpServers": {
"yahoo-finance": {
"command": "uv",
"args": ["run", "yahoo-finance-mcp-server"]
}
}
}
```
#### Using Python directly:
```json
{
"mcpServers": {
"yahoo-finance": {
"command": "python",
"args": ["-m", "yahoo_finance_mcp_server.server"]
}
}
}
```
### Other MCP Clients
For other MCP clients that support stdio transport, you can run:
```bash
yahoo-finance-mcp-server
```
Or with uvx:
```bash
uvx yahoo-finance-mcp-server
```
## Available Tools
### 1. get_historical_stock_prices
Get historical stock prices for a ticker symbol.
**Parameters:**
- `ticker` (str): Stock ticker symbol (e.g., "AAPL")
- `period` (str, optional): Valid periods: 1d, 5d, 1mo, 3mo, 6mo, 1y, 2y, 5y, 10y, ytd, max. Default: "1mo"
- `interval` (str, optional): Valid intervals: 1m, 2m, 5m, 15m, 30m, 60m, 90m, 1h, 1d, 5d, 1wk, 1mo, 3mo. Default: "1d"
**Example:**
```
Get historical prices for Apple stock over the last year
```
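Since `period` and `interval` accept only fixed sets of values, a caller can validate arguments before invoking the tool. A minimal client-side sketch (the helper name and error messages are illustrative, not part of this server's API):

```python
VALID_PERIODS = {"1d", "5d", "1mo", "3mo", "6mo", "1y", "2y", "5y", "10y", "ytd", "max"}
VALID_INTERVALS = {"1m", "2m", "5m", "15m", "30m", "60m", "90m",
                   "1h", "1d", "5d", "1wk", "1mo", "3mo"}

def check_price_args(ticker, period="1mo", interval="1d"):
    """Validate arguments for get_historical_stock_prices before the call."""
    if not ticker:
        raise ValueError("ticker is required")
    if period not in VALID_PERIODS:
        raise ValueError(f"invalid period {period!r}")
    if interval not in VALID_INTERVALS:
        raise ValueError(f"invalid interval {interval!r}")
    return {"ticker": ticker.upper(), "period": period, "interval": interval}

print(check_price_args("aapl", period="1y"))
# {'ticker': 'AAPL', 'period': '1y', 'interval': '1d'}
```

Failing fast on the client side gives clearer errors than a round trip to the server followed by a Yahoo Finance rejection.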
### 2. get_stock_info
Get comprehensive stock information including price, company details, financial metrics, and more.
**Parameters:**
- `ticker` (str): Stock ticker symbol (e.g., "TSLA")
**Example:**
```
Get detailed information about Tesla stock
```
### 3. get_yahoo_finance_news
Get latest news articles for a ticker symbol.
**Parameters:**
- `ticker` (str): Stock ticker symbol (e.g., "GOOGL")
**Example:**
```
Get recent news about Google
```
### 4. get_stock_actions
Get dividend and stock split history.
**Parameters:**
- `ticker` (str): Stock ticker symbol (e.g., "MSFT")
**Example:**
```
Get dividend history for Microsoft
```
### 5. get_financial_statement
Get financial statements (income statement, balance sheet, or cash flow).
**Parameters:**
- `ticker` (str): Stock ticker symbol (e.g., "AMZN")
- `financial_type` (str): One of: `income_stmt`, `quarterly_income_stmt`, `balance_sheet`, `quarterly_balance_sheet`, `cashflow`, `quarterly_cashflow`
**Example:**
```
Get Amazon's quarterly income statement
```
### 6. get_holder_info
Get holder and ownership information.
**Parameters:**
- `ticker` (str): Stock ticker symbol (e.g., "NVDA")
- `holder_type` (str): One of: `major_holders`, `institutional_holders`, `mutualfund_holders`, `insider_transactions`, `insider_purchases`, `insider_roster_holders`
**Example:**
```
Get institutional holders of NVIDIA
```
### 7. get_option_expiration_dates
Get available options expiration dates for a ticker.
**Parameters:**
- `ticker` (str): Stock ticker symbol (e.g., "SPY")
**Example:**
```
Get option expiration dates for SPY
```
### 8. get_option_chain
Get option chain data for calls or puts.
**Parameters:**
- `ticker` (str): Stock ticker symbol (e.g., "AAPL")
- `expiration_date` (str): Expiration date in YYYY-MM-DD format
- `option_type` (str): Either "calls" or "puts"
**Example:**
```
Get Apple call options expiring on 2024-12-20
```
### 9. get_recommendations
Get analyst recommendations and upgrades/downgrades.
**Parameters:**
- `ticker` (str): Stock ticker symbol (e.g., "META")
- `recommendation_type` (str): Either "recommendations" or "upgrades_downgrades"
- `months_back` (int, optional): Number of months to look back. Default: 12
**Example:**
```
Get recent analyst upgrades for Meta
```
## Usage Examples
Once configured with Claude Desktop or another MCP client, you can ask questions like:
- "What's the current price of Apple stock?"
- "Show me Tesla's quarterly revenue for the last year"
- "Get the latest news about Microsoft"
- "What are the dividend payments for Coca-Cola?"
- "Show me the institutional holders of NVIDIA"
- "Get Amazon's balance sheet"
- "What options are available for SPY?"
- "Show me recent analyst upgrades for Google"
## Development
### Setup Development Environment
```bash
# Clone the repository
git clone https://github.com/laxmimerit/yahoo-finance-mcp-server.git
cd yahoo-finance-mcp-server
# Install with development dependencies
uv pip install -e ".[dev]"
```
### Running Tests
```bash
pytest
```
### Code Formatting
```bash
black src/
ruff check src/
```
## Technical Details
- **Protocol**: Model Context Protocol (MCP)
- **Transport**: stdio
- **Data Source**: Yahoo Finance via yfinance library
- **Response Format**: JSON
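Because the server communicates over stdio, an MCP client only needs the launch command. A typical Claude Desktop `claude_desktop_config.json` entry might look like the following — the `command`, directory path, and entry-point name here are placeholders, so check the repository's install instructions for the actual values:
```json
{
  "mcpServers": {
    "yahoo-finance": {
      "command": "uv",
      "args": ["--directory", "/path/to/yahoo-finance-mcp-server", "run", "server.py"]
    }
  }
}
```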
## Requirements
- Python 3.10 or higher
- Dependencies:
  - fastmcp >= 0.3.0
  - yfinance >= 0.2.40
  - pandas >= 2.0.0
## Limitations
- Data is provided by Yahoo Finance and subject to their terms of service
- Real-time data may have delays depending on your subscription level
- Some data may not be available for all ticker symbols
- Intraday data is limited to the last 60 days
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Disclaimer
This software is for informational purposes only. It should not be considered financial advice. Always do your own research before making investment decisions.
## Support
If you encounter any issues or have questions:
- Open an issue on GitHub
- Check existing issues for solutions
- Consult the MCP documentation at https://modelcontextprotocol.io
## Acknowledgments
- Built with [FastMCP](https://github.com/jlowin/fastmcp)
- Data provided by [yfinance](https://github.com/ranaroussi/yfinance)
- Implements the [Model Context Protocol](https://modelcontextprotocol.io)
| text/markdown | null | Eduardo Patrocinio <epatro@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"chromadb>=1.1.0",
"faiss-cpu>=1.12.0",
"fastapi>=0.116.1",
"fastmcp>=2.12.3",
"langchain-chroma>=0.2.6",
"langchain-community>=0.3.29",
"langchain-mcp-adapters>=0.1.9",
"langchain-ollama>=0.3.8",
"langchain-openai>=0.3.33",
"langchain>=0.3.27",
"langgraph>=0.6.7",
"matplotlib>=3.10.6",
"mcp... | [] | [] | [] | [
"Homepage, https://github.com/patrocinio/research-assistant-mcp",
"Repository, https://github.com/patrocinio/research-assistant-mcp",
"Issues, https://github.com/patrocinio/research-assistant-mcp"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T18:19:48.950524 | patrocinio_research_assistant_mcp-0.1.0.tar.gz | 7,904 | 74/29/50cc1e413c3631542c6aa31c945eddb328e5c741f17d5d74e82e94ee297d/patrocinio_research_assistant_mcp-0.1.0.tar.gz | source | sdist | null | false | 7ab7ea17b1ecd3b2c187514ea3565a9c | 4d8fcc2911d695673541421e3586ac6b30083ccba3fe166204eb8e84454a5ffc | 742950cc1e413c3631542c6aa31c945eddb328e5c741f17d5d74e82e94ee297d | null | [
"LICENSE"
] | 238 |
2.4 | http-snapshot | 0.1.8 | http-snapshot is a pytest plugin that snapshots requests made with popular Python HTTP clients. | # http-snapshot
`http-snapshot` is a pytest plugin that captures and snapshots HTTP requests/responses made with popular Python HTTP clients like `httpx` and `requests`. It uses [inline-snapshot](https://github.com/15r10nk/inline-snapshot) to store HTTP interactions as JSON files, enabling fast and reliable HTTP testing without making actual network calls.
## Features
- 🚀 **Support for multiple HTTP clients**: `httpx` (async, sync) and `requests` (sync)
- 📸 **Automatic HTTP interaction capture**: Records both requests and responses
- 🔒 **Security-aware**: Automatically excludes sensitive headers like authorization and cookies
- ⚙️ **Configurable**: Control what gets captured and what gets excluded
- 🧪 **pytest integration**: Works seamlessly with your existing pytest test suite
- 📁 **External snapshots**: Stores snapshots in organized JSON files
## Installation
```bash
pip install http-snapshot
```
For specific HTTP client support:
```bash
# For httpx support
pip install http-snapshot[httpx]
# For requests support
pip install http-snapshot[requests]
# For both
pip install http-snapshot[httpx,requests]
```
## Quick Start
### Using Context Managers (Recommended)
The context manager API provides proper resource cleanup and doesn't require any additional dependencies.
#### Using with httpx (async)
```python
import pytest
import inline_snapshot
from http_snapshot.httpx import HttpxAsyncSnapshotClient
@pytest.mark.anyio
@pytest.mark.parametrize(
"http_snapshot",
[inline_snapshot.external("uuid:my-test-snapshot.json")],
)
async def test_api_call(http_snapshot, is_recording: bool) -> None:
async with HttpxAsyncSnapshotClient(
snapshot=http_snapshot,
is_recording=is_recording,
) as client:
response = await client.get("https://api.example.com/users")
assert response.status_code == 200
assert "users" in response.json()
```
#### Using with httpx (sync)
```python
import pytest
import inline_snapshot
from http_snapshot.httpx import HttpxSyncSnapshotClient
@pytest.mark.parametrize(
"http_snapshot",
[inline_snapshot.external("uuid:my-test-snapshot.json")],
)
def test_api_call(http_snapshot, is_recording: bool) -> None:
with HttpxSyncSnapshotClient(
snapshot=http_snapshot,
is_recording=is_recording,
) as client:
response = client.get("https://api.example.com/users")
assert response.status_code == 200
assert "users" in response.json()
```
#### Using with requests (sync)
```python
import pytest
import inline_snapshot
from http_snapshot.requests import RequestsSnapshotSession
@pytest.mark.parametrize(
"http_snapshot",
[inline_snapshot.external("uuid:my-test-snapshot.json")],
)
def test_api_call(http_snapshot, is_recording: bool) -> None:
with RequestsSnapshotSession(
snapshot=http_snapshot,
is_recording=is_recording,
) as session:
response = session.get("https://api.example.com/users")
assert response.status_code == 200
assert "users" in response.json()
```
### Using with pytest fixtures (Deprecated)
> **Note**: The pytest fixture API is deprecated. Please use the context manager API shown above instead.
#### Using with httpx (async)
```python
import httpx
import pytest
import inline_snapshot
@pytest.mark.anyio
@pytest.mark.parametrize(
"http_snapshot",
[inline_snapshot.external("uuid:my-test-snapshot.json")],
)
async def test_api_call(snapshot_async_httpx_client: httpx.AsyncClient) -> None:
# This will be captured on first run, replayed on subsequent runs
response = await snapshot_async_httpx_client.get("https://api.example.com/users")
assert response.status_code == 200
assert "users" in response.json()
```
#### Using with httpx (sync)
```python
import httpx
import pytest
import inline_snapshot
@pytest.mark.parametrize(
"http_snapshot",
[inline_snapshot.external("uuid:my-test-snapshot.json")],
)
def test_api_call(snapshot_sync_httpx_client: httpx.Client) -> None:
# This will be captured on first run, replayed on subsequent runs
response = snapshot_sync_httpx_client.get("https://api.example.com/users")
assert response.status_code == 200
assert "users" in response.json()
```
#### Using with requests (sync)
```python
import requests
import pytest
import inline_snapshot
@pytest.mark.parametrize(
"http_snapshot",
[inline_snapshot.external("uuid:my-test-snapshot.json")],
)
def test_api_call(snapshot_requests_session: requests.Session) -> None:
# This will be captured on first run, replayed on subsequent runs
response = snapshot_requests_session.get("https://api.example.com/users")
assert response.status_code == 200
assert "users" in response.json()
```
## Migration Guide
If you're currently using the deprecated pytest fixtures, here's how to migrate to the context manager API:
### Before (using fixtures):
```python
@pytest.mark.anyio
@pytest.mark.parametrize(
"http_snapshot",
[inline_snapshot.external("uuid:test.json")],
)
async def test_api(snapshot_async_httpx_client: httpx.AsyncClient):
await snapshot_async_httpx_client.get("https://example.com")
```
### After (using context managers):
```python
from http_snapshot.httpx import HttpxAsyncSnapshotClient
@pytest.mark.anyio
@pytest.mark.parametrize(
"http_snapshot",
[inline_snapshot.external("uuid:test.json")],
)
async def test_api(http_snapshot, is_recording: bool):
async with HttpxAsyncSnapshotClient(
snapshot=http_snapshot,
is_recording=is_recording,
) as client:
await client.get("https://example.com")
```
### Key differences:
1. **Import the context manager**: Instead of relying on pytest fixtures, import the context manager class
2. **Use context manager syntax**: Use `async with` (for async) or `with` (for sync)
3. **Pass parameters explicitly**: `snapshot` and `is_recording` are now constructor parameters
4. **Add `is_recording` fixture**: The `is_recording` pytest fixture is still available and works the same way
5. **No additional dependencies needed**: Unlike the fixtures, which had async teardown issues, context managers work without pytest-asyncio
## How It Works
In recording mode, real network calls are made and each request/response pair is saved to a snapshot file via inline-snapshot; in the default replay mode, responses are served from the existing snapshots and no network calls are made:
```bash
# Record new HTTP interactions (makes actual network calls and creates snapshots)
pytest tests/ --http-record --inline-snapshot=create
# Re-record and update existing snapshots (makes actual network calls and updates snapshots)
pytest tests/ --http-record --inline-snapshot=fix
# Replay existing snapshots (default - no network calls made)
pytest tests/
```
## Configuration Options
You can customize what gets captured using `SnapshotSerializerOptions`:
### Using with context managers:
```python
import pytest
import inline_snapshot
from http_snapshot.requests import RequestsSnapshotSession, SnapshotSerializerOptions
@pytest.mark.parametrize(
"http_snapshot",
[inline_snapshot.external("uuid:my-test-snapshot.json")],
)
def test_with_custom_options(http_snapshot, is_recording: bool) -> None:
serializer_options = SnapshotSerializerOptions(
exclude_request_headers=["X-API-Key"],
include_request=True,
)
with RequestsSnapshotSession(
snapshot=http_snapshot,
is_recording=is_recording,
serializer_options=serializer_options,
) as session:
response = session.get(
"https://api.example.com/protected",
headers={"X-API-Key": "secret-key"}
)
assert response.status_code == 200
```
### Using with fixtures (deprecated):
```python
import pytest
import inline_snapshot
from http_snapshot import SnapshotSerializerOptions
@pytest.mark.parametrize(
"http_snapshot, http_snapshot_serializer_options",
[
(
inline_snapshot.external("uuid:my-test-snapshot.json"),
SnapshotSerializerOptions(
exclude_request_headers=["X-API-Key"],
include_request=True,
),
),
],
)
def test_with_custom_options(
snapshot_requests_session: requests.Session,
http_snapshot_serializer_options: SnapshotSerializerOptions,
) -> None:
response = snapshot_requests_session.get(
"https://api.example.com/protected",
headers={"X-API-Key": "secret-key"}
)
assert response.status_code == 200
```
### Available Options
- `include_request`: Whether to include request details in snapshots (default: `True`)
- `exclude_request_headers`: List of request headers to exclude from snapshots
- `exclude_response_headers`: List of response headers to exclude from snapshots
By default, the following sensitive headers are always excluded:
- **Request**: `authorization`, `cookie`
- **Response**: `set-cookie`, `www-authenticate`, `proxy-authenticate`, `authentication-info`, `proxy-authentication-info`, `transfer-encoding`, `content-encoding`
## Snapshot Format
Snapshots are stored as JSON files with the following structure:
```json
[
{
"request": {
"method": "GET",
"url": "https://api.example.com/users",
"headers": {
"host": "api.example.com",
"accept": "*/*",
"accept-encoding": "gzip, deflate",
"connection": "keep-alive",
"user-agent": "python-httpx/0.28.1"
},
"body": ""
},
"response": {
"status_code": 200,
"headers": {
"date": "Thu, 21 Aug 2025 15:49:45 GMT",
"content-type": "application/json; charset=utf-8",
"connection": "keep-alive",
"server": "nginx/1.18.0"
},
"body": {
"users": [
{
"id": 1,
"name": "John Doe",
"email": "john@example.com"
},
{
"id": 2,
"name": "Jane Smith",
"email": "jane@example.com"
}
]
}
}
}
]
```
### Content Encoding
The plugin intelligently handles different content types:
- **JSON**: Formatted with proper indentation for readability
- **Text**: Stored as UTF-8 strings
- **Binary**: Base64 encoded
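The decision logic can be pictured roughly like this — an illustrative sketch of the rules above, not the plugin's actual code:

```python
import base64
import json


def encode_body(content_type: str, body: bytes):
    """Pick a snapshot representation for a body, per the rules above."""
    if "json" in content_type:
        return json.loads(body)  # JSON: stored structured, serialized with indentation
    try:
        return body.decode("utf-8")  # text: stored as a UTF-8 string
    except UnicodeDecodeError:
        return base64.b64encode(body).decode("ascii")  # binary: base64 encoded
```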
## Advanced Examples
### Testing API with Multiple Requests
```python
from http_snapshot.httpx import HttpxAsyncSnapshotClient
@pytest.mark.anyio
@pytest.mark.parametrize(
"http_snapshot",
[inline_snapshot.external("uuid:multi-request-test.json")],
)
async def test_multiple_requests(http_snapshot, is_recording: bool) -> None:
async with HttpxAsyncSnapshotClient(
snapshot=http_snapshot,
is_recording=is_recording,
) as client:
create_response = await client.post(
"https://api.example.com/users",
json={"name": "Alice", "email": "alice@example.com"}
)
assert create_response.status_code == 201
user_id = create_response.json()["id"]
get_response = await client.get(
f"https://api.example.com/users/{user_id}"
)
assert get_response.status_code == 200
assert get_response.json()["name"] == "Alice"
```
### Testing with Authentication
```python
from http_snapshot.requests import RequestsSnapshotSession, SnapshotSerializerOptions
@pytest.mark.parametrize(
"http_snapshot",
[inline_snapshot.external("uuid:auth-test.json")],
)
def test_authenticated_request(http_snapshot, is_recording: bool) -> None:
serializer_options = SnapshotSerializerOptions(
exclude_request_headers=["Authorization"]
)
with RequestsSnapshotSession(
snapshot=http_snapshot,
is_recording=is_recording,
serializer_options=serializer_options,
) as session:
response = session.get(
"https://api.example.com/profile",
headers={"Authorization": "Bearer secret-token"}
)
assert response.status_code == 200
```
## Best Practices
1. **Exclude sensitive data**: Always exclude headers containing secrets, tokens, or personal data
2. **Review snapshots**: Check generated snapshot files into version control and review changes
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"inline-snapshot>=0.27.2",
"httpx>=0.28.1; extra == \"httpx\"",
"requests>=2.32.5; extra == \"requests\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:19:46.727377 | http_snapshot-0.1.8.tar.gz | 12,988 | 37/a4/bd9cea69146855b47aefec010f36b99741208e5f2dcbf522e44f0f66ce2a/http_snapshot-0.1.8.tar.gz | source | sdist | null | false | 6f45a475ded6e3aa6988939a6d4ed979 | 1f0dc437fea4e13179f26290216bb402096a44f3b29969022a78c2e71fddfc28 | 37a4bd9cea69146855b47aefec010f36b99741208e5f2dcbf522e44f0f66ce2a | null | [
"LICENSE"
] | 257 |
2.4 | langgraph | 1.0.9 | Building stateful, multi-actor applications with LLMs | <picture class="github-only">
<source media="(prefers-color-scheme: light)" srcset="https://langchain-ai.github.io/langgraph/static/wordmark_dark.svg">
<source media="(prefers-color-scheme: dark)" srcset="https://langchain-ai.github.io/langgraph/static/wordmark_light.svg">
<img alt="LangGraph Logo" src="https://langchain-ai.github.io/langgraph/static/wordmark_dark.svg" width="80%">
</picture>
<div>
<br>
</div>
[](https://pypi.org/project/langgraph/)
[](https://pepy.tech/project/langgraph)
[](https://github.com/langchain-ai/langgraph/issues)
[](https://docs.langchain.com/oss/python/langgraph/overview)
Trusted by companies shaping the future of agents – including Klarna, Replit, Elastic, and more – LangGraph is a low-level orchestration framework for building, managing, and deploying long-running, stateful agents.
## Get started
Install LangGraph:
```
pip install -U langgraph
```
Create a simple workflow:
```python
from langgraph.graph import START, StateGraph
from typing_extensions import TypedDict
class State(TypedDict):
text: str
def node_a(state: State) -> dict:
return {"text": state["text"] + "a"}
def node_b(state: State) -> dict:
return {"text": state["text"] + "b"}
graph = StateGraph(State)
graph.add_node("node_a", node_a)
graph.add_node("node_b", node_b)
graph.add_edge(START, "node_a")
graph.add_edge("node_a", "node_b")
print(graph.compile().invoke({"text": ""}))
# {'text': 'ab'}
```
Get started with the [LangGraph Quickstart](https://docs.langchain.com/oss/python/langgraph/quickstart).
To quickly build agents with LangChain's `create_agent` (built on LangGraph), see the [LangChain Agents documentation](https://docs.langchain.com/oss/python/langchain/agents).
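To see what the quickstart above is doing conceptually, here is a toy, stdlib-only re-implementation of the same linear two-node run: each node returns a *partial* state update, which the engine merges into the running state. This is only an illustration of the execution model, not LangGraph's actual code:

```python
from typing import Callable, Dict, List, TypedDict


class State(TypedDict):
    text: str


def run_linear(state: State, nodes: Dict[str, Callable], order: List[str]) -> State:
    """Run nodes in sequence, merging each partial update into the state."""
    for name in order:
        update = nodes[name](state)  # nodes return partial updates, not full state
        state = {**state, **update}  # merge, as StateGraph does for plain (non-reducer) keys
    return state


nodes = {
    "node_a": lambda s: {"text": s["text"] + "a"},
    "node_b": lambda s: {"text": s["text"] + "b"},
}
print(run_linear({"text": ""}, nodes, ["node_a", "node_b"]))
# {'text': 'ab'}
```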
## Core benefits
LangGraph provides low-level supporting infrastructure for *any* long-running, stateful workflow or agent. LangGraph does not abstract prompts or architecture, and provides the following central benefits:
- [Durable execution](https://docs.langchain.com/oss/python/langgraph/durable-execution): Build agents that persist through failures and can run for extended periods, automatically resuming from exactly where they left off.
- [Human-in-the-loop](https://docs.langchain.com/oss/python/langgraph/interrupts): Seamlessly incorporate human oversight by inspecting and modifying agent state at any point during execution.
- [Comprehensive memory](https://docs.langchain.com/oss/python/langgraph/memory): Create truly stateful agents with both short-term working memory for ongoing reasoning and long-term persistent memory across sessions.
- [Debugging with LangSmith](http://www.langchain.com/langsmith): Gain deep visibility into complex agent behavior with visualization tools that trace execution paths, capture state transitions, and provide detailed runtime metrics.
- [Production-ready deployment](https://docs.langchain.com/langsmith/app-development): Deploy sophisticated agent systems confidently with scalable infrastructure designed to handle the unique challenges of stateful, long-running workflows.
## LangGraph’s ecosystem
While LangGraph can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools for building agents. To improve your LLM application development, pair LangGraph with:
- [LangSmith](http://www.langchain.com/langsmith) — Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
- [LangSmith Deployment](https://docs.langchain.com/langsmith/deployments) — Deploy and scale agents effortlessly with a purpose-built deployment platform for long running, stateful workflows. Discover, reuse, configure, and share agents across teams — and iterate quickly with visual prototyping in [LangGraph Studio](https://docs.langchain.com/oss/python/langgraph/studio).
- [LangChain](https://docs.langchain.com/oss/python/langchain/overview) – Provides integrations and composable components to streamline LLM application development.
> [!NOTE]
> Looking for the JS version of LangGraph? See the [JS repo](https://github.com/langchain-ai/langgraphjs) and the [JS docs](https://docs.langchain.com/oss/javascript/langgraph/overview).
## Additional resources
- [Guides](https://docs.langchain.com/oss/python/langgraph/guides): Quick, actionable code snippets for topics such as streaming, adding memory & persistence, and design patterns (e.g. branching, subgraphs, etc.).
- [Reference](https://reference.langchain.com/python/langgraph/): Detailed reference on core classes, methods, how to use the graph and checkpointing APIs, and higher-level prebuilt components.
- [Examples](https://docs.langchain.com/oss/python/langgraph/agentic-rag): Guided examples on getting started with LangGraph.
- [LangChain Forum](https://forum.langchain.com/): Connect with the community and share all of your technical questions, ideas, and feedback.
- [LangChain Academy](https://academy.langchain.com/courses/intro-to-langgraph): Learn the basics of LangGraph in our free, structured course.
- [Case studies](https://www.langchain.com/built-with-langgraph): Hear how industry leaders use LangGraph to ship AI applications at scale.
## Acknowledgements
LangGraph is inspired by [Pregel](https://research.google/pubs/pub37252/) and [Apache Beam](https://beam.apache.org/). The public interface draws inspiration from [NetworkX](https://networkx.org/documentation/latest/). LangGraph is built by LangChain Inc, the creators of LangChain, but can be used without LangChain.
| text/markdown | null | null | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programm... | [] | null | null | >=3.10 | [] | [] | [] | [
"langchain-core>=0.1",
"langgraph-checkpoint<5.0.0,>=2.1.0",
"langgraph-prebuilt<1.1.0,>=1.0.8",
"langgraph-sdk<0.4.0,>=0.3.0",
"pydantic>=2.7.4",
"xxhash>=3.5.0"
] | [] | [] | [] | [
"Homepage, https://docs.langchain.com/oss/python/langgraph/overview",
"Documentation, https://reference.langchain.com/python/langgraph/",
"Source, https://github.com/langchain-ai/langgraph/tree/main/libs/langgraph",
"Changelog, https://github.com/langchain-ai/langgraph/releases",
"Twitter, https://x.com/Lan... | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:19:45.228516 | langgraph-1.0.9.tar.gz | 502,800 | dc/63/69373a6721f30026ffa462a62084b11ed4bb5a201d1672366e13a89532f3/langgraph-1.0.9.tar.gz | source | sdist | null | false | c3d13f6a020150408eef7eb3c6ddba41 | feac2729faba7d3c325bef76f240d7d7f66b02d2cbf4fdb1ed7d0cc83f963651 | dc6369373a6721f30026ffa462a62084b11ed4bb5a201d1672366e13a89532f3 | MIT | [
"LICENSE"
] | 1,087,281 |
2.1 | hatchet-sdk | 1.25.2 | This is the official Python SDK for Hatchet, a distributed, fault-tolerant task queue. The SDK allows you to easily integrate Hatchet's task scheduling and workflow orchestration capabilities into your Python applications. | # Hatchet Python SDK
<div align="center">
[](https://badge.fury.io/py/hatchet-sdk)
[](https://docs.hatchet.run)
[](https://opensource.org/licenses/MIT)
</div>
This is the official Python SDK for [Hatchet](https://hatchet.run), a distributed, fault-tolerant task queue. The SDK allows you to easily integrate Hatchet's task scheduling and workflow orchestration capabilities into your Python applications.
## Installation
Install the SDK using pip:
```bash
pip install hatchet-sdk
```
Or using poetry:
```bash
poetry add hatchet-sdk
```
## Quick Start
For examples of how to use the Hatchet Python SDK, including worker setup and task execution, please see our [official documentation](https://docs.hatchet.run/home/setup).
## Features
- 🔄 **Workflow Orchestration**: Define complex workflows with dependencies and parallel execution
- 🔁 **Automatic Retries**: Configure retry policies for handling transient failures
- 📊 **Observability**: Track workflow progress and monitor execution metrics
- ⏰ **Scheduling**: Schedule workflows to run at specific times or on a recurring basis
- 🔄 **Event-Driven**: Trigger workflows based on events in your system
## Documentation
For detailed documentation, examples, and best practices, visit:
- [Hatchet Documentation](https://docs.hatchet.run)
- [Examples](https://github.com/hatchet-dev/hatchet/tree/main/sdks/python/examples)
## Contributing
We welcome contributions! Please check out our [contributing guidelines](https://docs.hatchet.run/contributing) and join our [Discord community](https://hatchet.run/discord) for discussions and support.
## License
This SDK is released under the MIT License. See [LICENSE](https://github.com/hatchet-dev/hatchet/blob/main/LICENSE) for details.
| text/markdown | Alexander Belanger | alexander@hatchet.run | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"grpcio<2.0.0,>=1.76.0",
"grpcio-tools<2.0.0,>=1.76.0",
"protobuf<7.0.0,>=6.30.2",
"pydantic<3.0.0,>=2.6.3",
"python-dateutil<3.0.0,>=2.9.0.post0",
"aiohttp<4.0.0,>=3.10.5",
"tenacity>=8.4.1",
"prometheus-client>=0.21.1",
"pydantic-settings<3.0.0,>=2.7.1",
"opentelemetry-api<2.0.0,>=1.28.0; extra ... | [] | [] | [] | [] | poetry/1.7.1 CPython/3.12.3 Linux/6.14.0-1017-azure | 2026-02-19T18:19:06.796149 | hatchet_sdk-1.25.2.tar.gz | 245,631 | e2/30/85d291c90b09f59ec3d321cb7d449fed592e9ef902be81d4f355814826fe/hatchet_sdk-1.25.2.tar.gz | source | sdist | null | false | 29bddd769aa82e5fba937e7d04c462d1 | 879f7c0e2e20cb17e58df787bd9f160870a592567924d0bec673ab7df071632a | e23085d291c90b09f59ec3d321cb7d449fed592e9ef902be81d4f355814826fe | null | [] | 6,632 |
2.4 | ipygame | 0.1.0 | A pygame API-compatible reimplementation for running `pygame`-style code inside Jupyter notebooks (backed by ipycanvas). | # ipygame
[](https://mybinder.org/v2/gh/Kamuyin/ipygame/master?urlpath=lab)
ipygame is a pygame-style API for writing small games inside Jupyter notebooks, primarily for teaching and classroom use. Instead of SDL2, it renders to an `ipycanvas` canvas output, so it works in environments where you do not have a desktop window (e.g. JupyterLab, hosted JupyterHub).

The goal is API familiarity, not perfect drop-in compatibility. Many common drawing and event patterns work, but the browser and the widget stack impose limits.
## Install
For local notebooks (CPython kernels), install from this repository:
```bash
pip install git+https://github.com/Kamuyin/ipygame.git
```
## Quick start
```python
import ipygame as pygame
screen = pygame.display.set_mode((420, 260))
screen.fill("midnightblue")
pygame.draw.rect(screen, "gold", (30, 30, 140, 80))
pygame.draw.circle(screen, "tomato", (280, 130), 50)
pygame.display.flip()
```
## Examples
The [examples/](examples/) folder contains notebooks for basic drawing, input handling, and small game demos.
## Documentation
Docs and API reference: <https://kamuyin.github.io/ipygame>
## Limitations and known issues
Some pygame features are not applicable in the browser or are not implemented yet. For a high-level view of what is currently covered, check the API coverage page in the docs.
Performance can be noticeably lower than desktop pygame. Rendering happens through the browser canvas and a widget message channel, so high-FPS loops and pixel-heavy effects pay extra overhead, and there is no native SDL2 window/GPU pipeline like on the desktop.
If you run the examples in JupyterLite (Pyodide), you may see rendering work while real-time keyboard/mouse input does not. This is a limitation of the Pyodide kernel + widget message processing for long-running loops, and it is not something ipygame can reliably fix from Python alone.
Audio is work in progress.
## Acknowledgements
ipygame is based on the work by the pygame and pygame-ce projects and aims to provide a familiar API for educational notebooks. This project is not affiliated with, endorsed by, or a replacement for pygame/pygame-ce.
## License
Licensed under the GNU Lesser General Public License v2.1 (LGPL-2.1-only). See [LICENSE](LICENSE).
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"ipycanvas>=0.13",
"ipywidgets>=8.0",
"numpy>=1.24",
"pillow>=9.0"
] | [] | [] | [] | [
"Homepage, https://github.com/Kamuyin/ipygame"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:18:10.171110 | ipygame-0.1.0.tar.gz | 371,275 | 6d/d0/9bf255f6a27234d5497ba6ef6866ac6b55442fc1dc7647cef1574711ae10/ipygame-0.1.0.tar.gz | source | sdist | null | false | e34d1499b18330efb1e82ef3cd7ecddf | 65976215359b71f8fc39cd1e4da76595da9542494c69f40b3f9cc2d974877526 | 6dd09bf255f6a27234d5497ba6ef6866ac6b55442fc1dc7647cef1574711ae10 | LGPL-2.1-only | [
"LICENSE"
] | 251 |
2.4 | schwifty-md | 2026.2.21 | IBAN parsing and validation | .. image:: https://img.shields.io/pypi/v/schwifty.svg?style=flat-square
:target: https://pypi.python.org/pypi/schwifty
.. image:: https://img.shields.io/github/actions/workflow/status/mdomke/schwifty/lint-and-test.yml?branch=main&style=flat-square
:target: https://github.com/mdomke/schwifty/actions?query=workflow%3Alint-and-test
.. image:: https://img.shields.io/pypi/l/schwifty.svg?style=flat-square
:target: https://pypi.python.org/pypi/schwifty
.. image:: https://readthedocs.org/projects/schwifty/badge/?version=latest&style=flat-square
:target: https://schwifty.readthedocs.io
.. image:: https://img.shields.io/badge/code%20style-black-000000.svg?style=flat-square
:target: https://black.readthedocs.io/en/stable/index.html
.. image:: https://img.shields.io/codecov/c/gh/mdomke/schwifty?token=aJj1Yg0NUq&style=flat-square
:target: https://codecov.io/gh/mdomke/schwifty
Gotta get schwifty with your IBANs
==================================
.. teaser-begin
``schwifty`` is a Python library that lets you easily work with IBANs and BICs
as specified by the ISO. IBAN is the International Bank Account Number and BIC
the Business Identifier Code. Both are used for international money transfers.
Features
--------
``schwifty`` lets you
* `validate`_ check-digits and the country specific format of IBANs
* `validate`_ format and country codes from BICs
* `generate`_ BICs from country and bank-code
* `generate`_ IBANs from country-code, bank-code and account-number.
* `generate`_ random valid IBANs
* get the BIC associated to an IBAN's bank-code
* access all relevant components as attributes
See the `docs <https://schwifty.readthedocs.io>`_ for more information.
.. _validate: https://schwifty.readthedocs.io/en/latest/examples.html#validation
.. _generate: https://schwifty.readthedocs.io/en/latest/examples.html#generation
.. teaser-end
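The check-digit validation mentioned above is the ISO 7064 mod-97 scheme: move the country code and the two check digits to the end, map every letter to a two-digit number (A=10 … Z=35), and verify that the resulting integer leaves remainder 1 modulo 97. A minimal sketch in plain Python (illustrative only, not ``schwifty``'s actual implementation):

```python
def iban_checksum_ok(iban: str) -> bool:
    """ISO 7064 mod-97 check for an IBAN given as a string (spaces allowed)."""
    s = iban.replace(" ", "").upper()
    rearranged = s[4:] + s[:4]  # move country code and check digits to the end
    # int(c, 36) maps '0'-'9' to 0-9 and 'A'-'Z' to 10-35
    digits = "".join(str(int(c, 36)) for c in rearranged)
    return int(digits) % 97 == 1


print(iban_checksum_ok("DE89 3704 0044 0532 0130 00"))  # True: valid check digits
print(iban_checksum_ok("DE88 3704 0044 0532 0130 00"))  # False: altered check digits
```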
Versioning
----------
Since the IBAN specification and the mapping from BIC to bank_code is updated from time to time,
``schwifty`` uses `CalVer <http://www.calver.org/>`_ for versioning with the scheme ``YY.0M.Micro``.
.. installation-begin
Installation
------------
To install ``schwifty``, simply:
.. code-block:: bash
$ pip install schwifty
.. installation-end
Development
-----------
We use `black`_ as the code formatter. This avoids discussions about style preferences in the
same way as ``gofmt`` does for Go. Conformance to the formatting rules is checked in the CI
pipeline, so it is recommended to install the configured `pre-commit`_ hook in order to avoid
long feedback cycles.
.. code-block:: bash
$ pre-commit install
You can also use the ``fmt`` Makefile-target to format the code or use one of the available `editor
integrations`_.
Project Information
-------------------
``schwifty-md`` is a fork of `schwifty <https://github.com/mdomke/schwifty>`_ adding support for
Moldova (MD) IBANs. Original author of ``schwifty``: Martin Domke.

``schwifty`` is released under the `MIT`_ license and its documentation lives at `Read the Docs`_.
The code is maintained on `GitHub`_ and packages are distributed on `PyPI`_.
Name
~~~~
Since ``swift`` and ``swiftly`` were already taken by the OpenStack-project, but we somehow wanted
to point out the connection to SWIFT, Rick and Morty came up with the idea to name the project
``schwifty``.
.. image:: https://i.cdn.turner.com/adultswim/big/video/get-schwifty-pt-2/rickandmorty_ep205_002_vbnuta15a755dvash8.jpg
.. _black: https://black.readthedocs.io/en/stable/index.html
.. _pre-commit: https://pre-commit.com
.. _editor integrations: https://black.readthedocs.io/en/stable/editor_integration.html
.. _MIT: https://choosealicense.com/licenses/mit/
.. _Read the Docs: https://schwifty.readthedocs.io
.. _GitHub: https://github.com/mdomke/schwifty
.. _PyPI: https://pypi.org/project/schwifty
| text/x-rst | null | Martin Domke <mail@martindomke.net> | null | Mihai <your-email@example.com> | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: P... | [] | null | null | >=3.9 | [] | [] | [] | [
"importlib-resources>=5.10; python_version <= \"3.11\"",
"pycountry",
"rstr",
"typing-extensions>=4.0.1; python_version <= \"3.10\"",
"pydantic>=2.0; extra == \"pydantic\""
] | [] | [] | [] | [
"Changelog, https://github.com/mdomke/schwifty/blob/main/CHANGELOG.rst",
"Documentation, https://schwifty.readthedocs.io/en/latest/",
"Homepage, http://github.com/mdomke/schwifty"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-19T18:17:49.593061 | schwifty_md-2026.2.21.tar.gz | 754,181 | 6b/0d/20768f25d31a632911ec391bc695c56ec2d34a38050862d4d23ad79bdc5f/schwifty_md-2026.2.21.tar.gz | source | sdist | null | false | 4e6765453181d01b75b0429f14c55dd1 | c8560c7a0b75baaf8ac3429ad2d4dc9907dc6467d8e24dbd20ac87f54e8d366e | 6b0d20768f25d31a632911ec391bc695c56ec2d34a38050862d4d23ad79bdc5f | MIT | [
"LICENSE"
] | 263 |
2.4 | naas-abi-core | 1.17.2 | Abi framework allowing you to build your AI system. | # naas-abi-core
The core implementation library for ABI (Agentic Brain Infrastructure), providing the fundamental building blocks for building unified AI systems. This library implements the core concepts, services, and architecture patterns that enable ontology-driven AI applications.
## Overview
The ABI Library is the core implementation of ABI's concepts, designed to build a unified AI system. This library provides the fundamental building blocks for connecting, processing, and utilizing data across different AI components.
`naas-abi-core` is the foundational library that powers the ABI framework. It provides:
- **Engine**: Central orchestration system that loads and coordinates modules, services, and ontologies
- **Services**: Hexagonal architecture-based services for storage, secrets, and AI capabilities
- **Modules**: Modular system for organizing agents, integrations, pipelines, and workflows
- **Agents**: LangGraph-based AI agents with tool binding and conversation management
- **Applications**: Ready-to-use interfaces (REST API, Terminal, MCP Protocol)
## Installation
```bash
pip install naas-abi-core
```
### Optional Dependencies
```bash
# For Qdrant vector store support
pip install naas-abi-core[qdrant]
# For AWS S3 object storage support
pip install naas-abi-core[aws]
# For SSH tunnel support
pip install naas-abi-core[ssh]
# For OpenRouter integration
pip install naas-abi-core[openrouter]
# Install all optional dependencies
pip install naas-abi-core[all]
```
## Core Architecture
### Engine
The `Engine` is the central orchestrator that:
1. **Loads Configuration**: Reads and validates YAML configuration files
2. **Initializes Services**: Sets up storage, vector, triple store, and secret services based on module dependencies
3. **Loads Modules**: Discovers and loads modules with their agents, integrations, pipelines, and workflows
4. **Loads Ontologies**: Loads RDF ontologies into the triple store for semantic reasoning
5. **Initializes Components**: Calls `on_initialized()` on all modules after everything is loaded
**Example Usage:**
```python
from naas_abi_core.engine.Engine import Engine
# Initialize engine with default configuration (config.yaml)
engine = Engine()
# Load all modules
engine.load()
# Or load specific modules
engine.load(module_names=["naas_abi", "my_custom_module"])
# Access loaded modules
for module_name, module in engine.modules.items():
print(f"Module: {module_name}")
print(f"Agents: {[agent.__name__ for agent in module.agents]}")
# Access services
triple_store = engine.services.triple_store
vector_store = engine.services.vector_store
object_storage = engine.services.object_storage
secret_service = engine.services.secret
```
### Modules
Modules are the primary organizational unit in ABI. Each module can contain:
- **Agents**: AI agents that can be used for conversations and task execution
- **Integrations**: Connections to third-party services and APIs
- **Pipelines**: Data transformation processes that convert raw data into semantic representations
- **Workflows**: Business logic that can be exposed as tools, API endpoints, or scheduled jobs
- **Ontologies**: RDF/Turtle files that define semantic knowledge structures
**Module Structure:**
```python
from naas_abi_core.module.Module import BaseModule, ModuleConfiguration
from naas_abi_core.engine.EngineProxy import EngineProxy
class MyModule(BaseModule):
class Configuration(ModuleConfiguration):
# Module-specific configuration
api_key: str
dependencies = ModuleDependencies(
modules=[], # Other modules this module depends on
services=[TripleStoreService, VectorStoreService] # Required services
)
def on_load(self):
# Called when module is loaded
# Load ontologies, agents, etc.
super().on_load()
def on_initialized(self):
# Called after all modules and services are initialized
# Safe to access other modules and services here
pass
```
### Services
Services form the foundational layer of ABI, implementing the Hexagonal Architecture (Ports & Adapters) pattern to provide flexible and system-agnostic interfaces. This architectural approach allows ABI to seamlessly integrate with existing systems while maintaining clean separation of concerns.
Each service defines a primary port (interface) that specifies its capabilities, while multiple secondary adapters can implement this interface for different backend systems. This means you can:
- Easily swap implementations without changing business logic
- Add new integrations by implementing new adapters
- Test components in isolation using mock adapters
For example, the Secret Service could connect to various backend systems through different adapters:
- Hashicorp Vault
- AWS Secrets Manager
- Azure Key Vault
- Environment Variables
- Local File System
- Google Cloud Secret Manager
- Kubernetes Secrets
This modular approach ensures that ABI can be deployed in any environment while maintaining consistent interfaces and behavior across different infrastructure choices.
#### Triple Store Service
Manages RDF knowledge graphs for semantic reasoning and ontology storage.
**Supported Backends:**
- Oxigraph (default)
- SPARQL endpoints
- Custom adapters
**Example:**
```python
from naas_abi_core.services.triple_store.TripleStoreService import TripleStoreService
triple_store = engine.services.triple_store
# Query the knowledge graph
results = triple_store.query("""
SELECT ?subject ?predicate ?object
WHERE {
?subject ?predicate ?object
}
LIMIT 10
""")
```
#### Vector Store Service
Manages vector embeddings for semantic search and similarity matching.
**Supported Backends:**
- Qdrant (optional, requires `[qdrant]` extra)
- Custom adapters
**Example:**
```python
from naas_abi_core.services.vector_store.VectorStoreService import VectorStoreService
vector_store = engine.services.vector_store
# Store embeddings
vector_store.upsert(
collection_name="intents",
vectors=[embedding],
ids=["intent_1"],
payloads=[{"text": "user query"}]
)
# Search similar vectors
results = vector_store.search(
collection_name="intents",
query_vector=query_embedding,
limit=5
)
```
#### Object Storage Service
Manages file storage for documents, reports, and generated content.
**Supported Backends:**
- AWS S3 (optional, requires `[aws]` extra)
- MinIO
- Local file system
- Custom adapters
**Example:**
```python
from naas_abi_core.services.object_storage.ObjectStorageService import ObjectStorageService
object_storage = engine.services.object_storage
# Upload a file
object_storage.upload(
bucket="my-bucket",
key="documents/report.pdf",
file_path="/path/to/report.pdf"
)
# Download a file
object_storage.download(
bucket="my-bucket",
key="documents/report.pdf",
file_path="/path/to/downloaded.pdf"
)
```
#### Secret Service
Manages secrets and credentials securely across different storage systems.
**Supported Backends:**
- Environment variables
- Naas Secret Manager
- Hashicorp Vault
- AWS Secrets Manager
- Azure Key Vault
- Google Cloud Secret Manager
- Kubernetes Secrets
- Local file system
- Custom adapters
**Example:**
```python
from naas_abi_core.services.secret.Secret import Secret
secret_service = engine.services.secret
# Get a secret
api_key = secret_service.get("OPENAI_API_KEY")
# List all secrets
all_secrets = secret_service.list()
```
#### Cache Service
Provides intelligent caching for API calls, tool results, and model responses to optimize performance and manage rate limits.
**Example:**
```python
from naas_abi_core.services.cache.CacheService import CacheService
cache = CacheService()
# Cache a function result
@cache.cache(ttl=3600)
def expensive_api_call(param: str):
# Expensive operation
return result
# Force refresh
result = expensive_api_call(param, force_refresh=True)
```
## Core Concepts
### Integration
Integrations provide standardized connections to third-party services and data sources. They handle:
- Authentication and authorization
- API communication
- Data format standardization
- Error handling and retries
**Example:**
```python
from naas_abi_core.integration.integration import Integration, IntegrationConfiguration
class MyAPIConfiguration(IntegrationConfiguration):
api_key: str
base_url: str
class MyAPIIntegration(Integration):
def __init__(self, configuration: MyAPIConfiguration):
super().__init__(configuration)
# Initialize connection
def fetch_data(self):
# Implement API call
pass
```
### Pipeline
Pipelines are responsible for data ingestion and transformation into the ontological layer. They:
- Utilize integrations to fetch data
- Transform raw data into semantic representations
- Maintain data consistency and quality
- Map external data models to ABI's ontology
**Example:**
```python
from naas_abi_core.pipeline.pipeline import Pipeline, PipelineConfiguration, PipelineParameters
from rdflib import Graph
class MyPipelineConfiguration(PipelineConfiguration):
integration_config: dict
class MyPipelineParameters(PipelineParameters):
source: str
class MyPipeline(Pipeline):
def __init__(self, configuration: MyPipelineConfiguration):
super().__init__(configuration)
def run(self, parameters: MyPipelineParameters) -> Graph:
# Fetch data from integration
# Transform to RDF
# Return Graph
graph = Graph()
# ... add triples to graph
return graph
def trigger(self, event, ontology_name, triple) -> Graph:
# Event-driven pipeline execution
return self.run(MyPipelineParameters(source="event"))
```
### Workflow
Workflows leverage the ontological layer to implement business logic and provide data to consumers. They can be used by:
- Large Language Models (LLMs)
- Remote APIs and services
- Other automated processes
**Example:**
```python
from naas_abi_core.workflow.workflow import Workflow, WorkflowConfiguration, WorkflowParameters
from pydantic import BaseModel
class MyWorkflowParameters(WorkflowParameters):
input_data: str
class MyWorkflowConfiguration(WorkflowConfiguration):
processing_option: str
class MyWorkflow(Workflow[MyWorkflowParameters]):
def __init__(self, configuration: MyWorkflowConfiguration):
super().__init__(configuration)
def run(self, parameters: MyWorkflowParameters):
# Implement business logic
# Query knowledge graph
# Process data
# Return results
return {"result": "processed"}
```
### Agent
Agents are AI-powered assistants that can have conversations, use tools, and delegate to sub-agents.
**Features:**
- LangGraph-based conversation management
- Tool binding and execution
- Sub-agent delegation
- Intent-based routing
- Conversation persistence (PostgreSQL checkpointing)
- Event streaming for real-time updates
**Example:**
```python
from naas_abi_core.services.agent.Agent import Agent
from langchain_openai import ChatOpenAI
class MyAgent(Agent):
def __init__(self):
super().__init__(
name="MyAgent",
description="An agent that helps with specific tasks",
system_prompt="You are a helpful assistant...",
chat_model=ChatOpenAI(model="gpt-4"),
tools=[my_tool, my_workflow],
agents=[sub_agent] # Optional sub-agents
)
```
## Applications
### REST API
FastAPI-based REST API that automatically exposes all agents, workflows, and pipelines.
**Features:**
- OpenAPI/Swagger documentation
- OAuth2 authentication
- CORS support
- Automatic endpoint generation from agents/workflows/pipelines
**Usage:**
```python
from naas_abi_core.apps.api.api import api
# Run the API server
api()
```
**Endpoints:**
- `GET /` - API landing page
- `GET /docs` - Swagger UI documentation
- `GET /redoc` - ReDoc documentation
- `POST /agents/{agent_name}/completion` - Agent completion endpoint
- `POST /workflows/{workflow_name}/run` - Workflow execution endpoint
- `POST /pipelines/{pipeline_name}/run` - Pipeline execution endpoint
### Terminal Agent
Interactive terminal interface for chatting with agents.
**Usage:**
```python
from naas_abi_core.apps.terminal_agent.main import run_agent
from naas_abi_core.services.agent.Agent import Agent
agent = MyAgent()
run_agent(agent)
```
### MCP Server
Model Context Protocol (MCP) server for integration with Claude Desktop and VS Code.
**Features:**
- Dynamic agent discovery from OpenAPI spec
- HTTP and stdio transport modes
- Automatic tool registration
**Usage:**
```bash
# Start MCP server
python -m naas_abi_core.apps.mcp.mcp_server
# Or with HTTP transport
MCP_TRANSPORT=http python -m naas_abi_core.apps.mcp.mcp_server
```
## Configuration
ABI uses YAML configuration files (typically `config.yaml`) to configure:
- **Services**: Storage backends, connection details
- **Modules**: Which modules to load and their configurations
- **API**: API title, description, CORS settings
- **Global Config**: AI mode (cloud, local, airgap)
**Example Configuration:**
```yaml
api:
title: "My ABI API"
description: "API for my AI system"
cors_origins:
- "http://localhost:9879"
global_config:
ai_mode: "cloud" # or "local" or "airgap"
services:
triple_store:
type: "oxigraph"
url: "http://localhost:7878"
vector_store:
type: "qdrant"
url: "http://localhost:6333"
object_storage:
type: "minio"
endpoint: "http://localhost:9000"
secret:
type: "env"
modules:
- module: "naas_abi"
enabled: true
- path: "./src/my_module"
enabled: true
config:
api_key: "${MY_API_KEY}"
```
## Key Dependencies
- **rdflib**: RDF and ontology management
- **langgraph**: Agent conversation management
- **langgraph-checkpoint-postgres**: Conversation persistence
- **fastapi**: REST API framework
- **sparqlwrapper**: SPARQL query execution
- **pydantic**: Data validation and configuration
- **loguru**: Logging
- **langchain-openai**: OpenAI integration
## Architecture Patterns
### Hexagonal Architecture (Ports & Adapters)
All services follow the Hexagonal Architecture pattern:
- **Primary Port**: Interface defining service capabilities
- **Secondary Adapters**: Implementations for different backends
- **Benefits**: Easy swapping of implementations, testability, system-agnostic design
### Module System
- **Dependency Resolution**: Automatic dependency resolution and ordering
- **Lazy Loading**: Services loaded only when needed by modules
- **Lifecycle Hooks**: `on_load()` and `on_initialized()` for setup
- **Isolation**: Modules can be developed and tested independently
### Event-Driven Pipelines
Pipelines can be triggered by:
- Manual execution via API or code
- Ontology events (triple insertions/updates)
- Scheduled jobs
- Workflow triggers
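The ontology-event case can be pictured with a simple callback registry (illustrative only; in ABI the engine handles this wiring, and the function names below are hypothetical):

```python
from collections import defaultdict
from typing import Callable

# Map (event, ontology_name) -> list of registered pipeline trigger callbacks.
_subscriptions: dict[tuple[str, str], list[Callable]] = defaultdict(list)


def subscribe(event: str, ontology_name: str, callback: Callable) -> None:
    """Register a pipeline's trigger for a given event on a given ontology."""
    _subscriptions[(event, ontology_name)].append(callback)


def publish(event: str, ontology_name: str, triple: tuple) -> list:
    """Invoke every pipeline registered for this event; collect results."""
    return [cb(event, ontology_name, triple)
            for cb in _subscriptions[(event, ontology_name)]]
```

A pipeline's `trigger(event, ontology_name, triple)` method (shown in the Pipeline section above) is exactly the shape of callback such a dispatcher would call on each triple insertion or update.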
## Development
### Running Tests
```bash
pytest
```
### Type Checking
```bash
mypy naas_abi_core
```
### Building
```bash
uv build
```
## See Also
- [ABI Main README](../../README.md) - Complete ABI framework documentation
- [naas-abi-cli](../naas-abi-cli/) - CLI tool for ABI projects
- [naas-abi-marketplace](../naas-abi-marketplace/) - Marketplace modules and agents
## License
MIT License
| text/markdown | null | Maxime Jublou <maxime@naas.ai>, Florent Ravenel <florent@naas.ai>, Jeremy Ravenel <jeremy@naas.ai> | null | null | MIT License | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"click<8.2,>=8.1.1",
"dagster-aws>=0.27.12",
"dagster-postgres>=0.27.12",
"dagster-webserver>=1.11.12",
"dagster>=1.11.12",
"docker>=7.1.0",
"dotenv>=0.9.9",
"fastapi<0.116,>=0.115.5",
"fastmcp>=2.13.2",
"langchain-openai<0.4,>=0.3.3",
"langgraph-checkpoint-postgres>=2.0.21",
"langgraph>=0.6.6... | [] | [] | [] | [
"Homepage, https://github.com/jupyter-naas/abi",
"Repository, https://github.com/jupyter-naas/abi/tree/main/libs/naas-abi-core"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:17:41.066019 | naas_abi_core-1.17.2.tar.gz | 195,975 | ca/b4/c9152a67e0ec19f26feb3835a5b72f66518a043a14e32e353133c5d6ab9d/naas_abi_core-1.17.2.tar.gz | source | sdist | null | false | 0609c7227c8ebb4bf818de956db53a8f | ecdf634a194ae1d5c2405f745898e0ec5677218b6a7f016dfcdd6fc79453db6a | cab4c9152a67e0ec19f26feb3835a5b72f66518a043a14e32e353133c5d6ab9d | null | [] | 292 |
2.4 | stinger-ipc | 0.7.0rc1 | Tools to create code to do IPC over MQTT | # Stinger IPC
StingerIPC provides inter-process communications (IPC) between a server and multiple clients running on the same or separate hosts. It uses an MQTT server to pass messages between processes, implementing several IPC patterns: signals, properties, and procedures.
## Project Status
This project is in early stages of active development. You should not use it in any of your projects, both because it doesn't have enough features yet to be useful, and also because things will probably be horribly broken on future updates.
## Interface Description
StingerIPC takes an interface description file (.stingeripc) and generates code and documentation from it. A very brief example of an interface description is:
```yaml
stingeripc:
version: 0.0.7
interface:
name: Example
version: 0.0.1
signals:
foo:
payload:
- name: message
type: string
methods:
addNumbers:
arguments:
- name: left
type: integer
- name: right
type: integer
returnValues:
- name: sum
type: integer
```
## First class code generation
From the StingerIPC description file, we directly generate server and client code for these languages: Python3, C++11, and Rust.
### Server Code
From the above description file, StingerIPC generates server code, which can be used like this:
```py
# Python
conn = MqttConnection('localhost', 1883)
server = ExampleServer(conn)
server.emit_foo("Hello World")
@server.handle_add_numbers
def add_numbers(left: int, right: int) -> int:
return left + right
```
```c++
// C++
auto conn = std::make_shared<DefaultConnection>("localhost", 1883);
ExampleServer server(conn);
server.emitFoo("Hello World").wait();
server.registerAddNumbersHandler([](int left, int right) -> int
{
return left + right;
});
```
```rust
// Rust
let connection = Connection::new(String::from("tcp://localhost:1883"));
let mut server = SignalOnlyServer::new(connection);
server.emit_foo("Hello World".to_string());
server.register_add_numbers_handler(|left, right| {
left + right
});
```
### Client Code
From the above description file, StingerIPC generates client code which can be used like this:
```py
# Python
conn = MqttConnection('localhost', 1883)
client = ExampleClient(conn)
@client.receive_foo
def print_foo_receipt(message):
print(f"Got a 'foo' signal with message: {message}")
future = client.add_numbers(1, 2)
timeout = 5
print(future.result(timeout))
```
```c++
// C++
auto conn = std::make_shared<DefaultConnection>("localhost", 1883);
ExampleClient client(conn);
client.registerFooCallback([](const std::string& message) {
std::cout << message << std::endl;
});
std::cout << "One plus three is " << client.addNumbers(1, 3).wait() << std::endl;
```
```rust
// Rust
let connection = Connection::new(String::from("tcp://localhost:1883"));
let mut client = ExampleClient::new(connection);
client.set_signal_recv_callbacks_for_foo(|message| {
println!("{}", message);
});
client.add_numbers(1, 4);
```
## AsyncAPI and second-class code generation
[AsyncAPI](https://www.asyncapi.com/) is a specification format for describing asynchronous message APIs. Since StingerIPC uses and abstracts asynchronous messages between server and clients, we can describe a StingerIPC system with an AsyncAPI document.
From that AsyncAPI document, we can [generate code and documentation](https://www.asyncapi.com/tools/generator) in additional languages. While this code generation won't implement our standard IPC design patterns, it does make accessing the communications easier.
## Inter-process communication (IPC)
The motivation for this project is that I've seen embedded Linux projects that run several daemons that need to talk to each other, and for whatever reasons D-Bus wasn't a good option.
So this project is a way for those daemons/programs to communicate with each other through an MQTT broker running on the same device. The design goals of this project have been tuned toward this use case.
That being said, there is nothing prohibiting Stinger-IPC from being used for RPC: remote procedure calls. RPC typically involves being able to call into a system from a different system. That certainly can be done by having the remote systems connect into the same MQTT broker as the local system. However, this isn't the primary use case, and the design goals aren't geared to make Stinger-IPC the best solution for RPC (though don't let that stop you from using it that way). Specifically, Stinger-IPC requires the server and all clients to be running the same version of code. For systems where all the software ships together, this usually isn't a problem, but it could be a problem for systems with remote connections.
### Comparison to gRPC
gRPC is probably a better solution for handling RPC and connections from remote clients. It does a much better job at handling compatibility between different versions, supporting a wider number of languages, and transports messages more efficiently.
But gRPC, as most typically deployed, presents some challenges for use inside an embedded Linux system. gRPC typically wants secured HTTP/2 connections, which are just overkill for communications within a single device. Additionally, the protobuf code generation is more complicated.
## Plugin System
The user can specify templates that override or add to the built-in templates. Those templates can come from a directory (use the `--template-path` command line flag) or from another package (use the `--template-pkg` command line flag). When those templates need a way to extend the model system, a package can add additional symbols by implementing the "stinger_symbols" entry point (see the `pyproject.toml` file for built-in examples of how to do this).
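As a sketch of what such an entry point might look like in a third-party package's `pyproject.toml` (the "stinger_symbols" group name comes from this README; the package, module, and function names are hypothetical):

```toml
[project.entry-points."stinger_symbols"]
# Hypothetical plugin package exposing extra template symbols.
my_symbols = "my_stinger_plugin.symbols:get_symbols"
```

The generator would discover this entry point at runtime (via stevedore, which is among the project's dependencies) and make the returned symbols available to templates.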
## Design goals
* Low learning curve.
* No fancy/tricky code. Generated code should look like a human wrote it for humans to use.
* Usable for embedded Linux systems.
* Be described by an AsyncAPI spec.
## License
### Generator Code
The stinger-ipc generator (Python scripts, templates, and related files) is licensed under the **MIT License**. See the [LICENSE](LICENSE) file for details.
### Generated Code
**Code generated by stinger-ipc is NOT subject to the MIT License.** As a special exception, you may use, modify, and distribute code generated by stinger-ipc under any license of your choosing, including proprietary licenses, without attribution or restriction.
This exception applies to all output produced by running the stinger-ipc code generator, regardless of the input files or configuration used. You are free to:
* Use generated code in commercial projects
* Relicense generated code under any terms
* Modify and distribute without attribution
* Include in proprietary software
The templates in `stingeripc/templates/` automatically include a license notice in generated files to clarify this.
| text/markdown | null | null | null | null | MIT License Copyright © 2018-2025 Jacob Brunson. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ----- Generated Code Exception Notwithstanding any other terms in this License, any code, artifacts, or files that are generated by the software in this repository ("Generated Code") are not subject to the licensing restrictions above and may be re-licensed, redistributed, or used under different terms by their recipients. Recipients of Generated Code may, at their sole discretion, apply any license or terms to such Generated Code, including placing it in the public domain, without seeking permission from the authors of this repository. The Generated Code is provided "AS IS" and without warranty of any kind. 
The authors, contributors, and copyright holders of this repository expressly disclaim all warranties and conditions relating to the Generated Code, whether express, implied, statutory, or otherwise, including but not limited to warranties of merchantability, fitness for a particular purpose, title, and non-infringement. In no event shall the authors, contributors, or copyright holders of this repository be liable for any direct, indirect, incidental, special, exemplary, or consequential damages arising out of or in connection with the use of the Generated Code. This exception does not affect or alter the license status of the original source code and other materials contained in this repository; it applies only to outputs produced by running the code generator(s) included herein. | null | [
"License :: OSI Approved :: MIT License"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"go-task-bin>=3.46.2",
"jacobs-jinja-too>=0.2.12",
"jsonschema-rs>=0.34.0",
"packaging>=25.0",
"pydantic>=2.11.7",
"pydantic-asyncapi>=0.3.0",
"pyyaml>=6.0.2",
"ruamel-yaml>=0.18.14",
"semantic-version>=2.10.0",
"stevedore>=5.5.0",
"typer>=0.16.1",
"yamlloader>=1.5.1"
] | [] | [] | [] | [] | uv/0.9.12 {"installer":{"name":"uv","version":"0.9.12"},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"25.10","id":"questing","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T18:16:35.245969 | stinger_ipc-0.7.0rc1.tar.gz | 136,733 | 10/d6/e16971ff423dfa7bd014374e1ecba29c7f210ca3ad7d428f34cbd139deee/stinger_ipc-0.7.0rc1.tar.gz | source | sdist | null | false | acd04e54ca544d0b097c248703a83238 | 1db6928f37b50f0e36ab666c7f288356e5557d72d18329583fd6541cbb406606 | 10d6e16971ff423dfa7bd014374e1ecba29c7f210ca3ad7d428f34cbd139deee | null | [
"LICENSE"
] | 243 |
2.4 | modelpricing-ai | 2026.2.17 | Python client for ModelPricing.ai cost estimates and tracking | # modelpricing-ai
Python client for the [ModelPricing.ai](https://modelpricing.ai) API — estimate LLM usage costs and track spending with a single call.
## Installation
```bash
pip install modelpricing-ai
```
For async support (requires [aiohttp](https://docs.aiohttp.org)):
```bash
pip install modelpricing-ai[async]
```
## Quick Start
```python
from modelpricing_ai import ModelPricingClient
with ModelPricingClient(api_key="YOUR_API_KEY") as client:
estimate = client.estimate(
model="gpt-4o-mini",
tokens_in=1000,
tokens_out=500,
trace_id={"requestId": "abc-123"},
)
print(f"Cost: ${estimate.total:.6f}")
```
## Async Usage
Install the `async` extra, then use `AsyncModelPricingClient` as an async context manager:
```python
import asyncio
from modelpricing_ai import AsyncModelPricingClient
async def main():
async with AsyncModelPricingClient(api_key="YOUR_API_KEY") as client:
estimate = await client.estimate(
model="gpt-4o-mini",
tokens_in=1000,
tokens_out=500,
trace_id={"requestId": "abc-123"},
)
print(f"Cost: ${estimate.total:.6f}")
asyncio.run(main())
```
## Response Structure
Both `estimate()` and `await estimate()` return an `EstimateResponse` object:
```python
estimate.total # float — total USD cost
estimate.model # str — canonical model name
estimate.traceId # dict | None — your pass-through trace ID
estimate.breakdown # EstimateBreakdownGroup
.input # EstimateBreakdown
.unit # str — e.g. "token"
.branch # str — pricing tier that matched
.qty # int — number of input tokens
.rate # float — per-unit rate
.subtotal # float — input cost
.output # EstimateBreakdown (same fields for output tokens)
```
## Configuration
| Parameter | Default | Description |
| ------------- | ------------------------------- | ------------------------------------------------------------------------ |
| `api_key` | _required_ | Your ModelPricing.ai API key (also reads `MODELPRICING_API_KEY` env var) |
| `base_url` | `"https://api.modelpricing.ai"` | API base URL (also reads `MODELPRICING_BASE_URL` env var) |
| `timeout` | `30.0` | Request timeout in seconds |
| `max_retries` | `3` | Maximum retry attempts for transient errors |
| `session` | `None` | Optional `requests.Session` (sync) or `aiohttp.ClientSession` (async) |
Parameters are resolved in order: constructor argument > environment variable > default.
```python
client = ModelPricingClient(
api_key="YOUR_API_KEY",
base_url="https://api.modelpricing.ai",
timeout=30.0,
max_retries=3,
)
```
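The precedence rule (constructor argument > environment variable > default) can be sketched as a small helper; this is illustrative, not the client's actual internals:

```python
import os


def resolve(value, env_var, default):
    """Constructor argument wins, then the environment variable, then the default."""
    if value is not None:
        return value
    return os.environ.get(env_var, default)


# e.g. base_url = resolve(base_url_arg, "MODELPRICING_BASE_URL",
#                         "https://api.modelpricing.ai")
```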
## Error Handling
The client raises typed exceptions for different failure modes:
| Exception | HTTP Status | When |
| ----------------- | ----------- | ----------------------------- |
| `Unauthorized` | 401 | Invalid or missing API key |
| `ValidationError` | 422 | Invalid model name or metrics |
| `NotFound` | 404 | Unknown endpoint |
| `ServerError` | 5xx | Server-side failures |
All exceptions inherit from `ModelPricingError` and include a `status_code` attribute.
```python
from modelpricing_ai.errors import Unauthorized, ValidationError, ServerError
try:
estimate = client.estimate(model="gpt-4o-mini", tokens_in=1000, tokens_out=500)
except Unauthorized:
print("Check your API key")
except ValidationError as e:
print(f"Bad request: {e}")
except ServerError:
print("Server error — will be retried automatically")
```
## Retry Behavior
The client automatically retries on transient errors with exponential backoff:
- **Retries**: 5xx server errors and network/connection errors
- **No retry**: 4xx client errors (401, 404, 422)
- **Default**: 3 retries with exponential backoff (0.1 s initial, 2 s max)
```python
# Increase retries for unreliable networks
client = ModelPricingClient(api_key="YOUR_API_KEY", max_retries=5)
# Disable retries (no retry attempts)
client = ModelPricingClient(api_key="YOUR_API_KEY", max_retries=0)
```
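The backoff schedule itself is easy to reason about: delays double from the initial value and are capped at the maximum. A sketch of the schedule (not the client's actual retry loop):

```python
def backoff_delays(max_retries: int, initial: float = 0.1, cap: float = 2.0) -> list[float]:
    """Exponential backoff delays in seconds: initial, 2x, 4x, ... capped at `cap`."""
    return [min(initial * (2 ** attempt), cap) for attempt in range(max_retries)]
```

With the defaults, three retries wait roughly 0.1 s, 0.2 s, and 0.4 s, and longer sequences plateau at the 2 s cap.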
## License
MIT
| text/markdown | ModelPricing.ai | null | null | null | MIT | ai, anthropic, cost, estimate, llm, model, openai, pricing, token, tracking, usage | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming La... | [] | null | null | >=3.8 | [] | [] | [] | [
"pydantic>=2.3.0",
"requests>=2.31.0",
"tenacity>=8.0.0",
"aiohttp>=3.9.0; extra == \"async\"",
"aiohttp>=3.9.0; extra == \"dev\"",
"build>=1.2.1; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"twine>=5.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://modelpricing.ai"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:15:33.498313 | modelpricing_ai-2026.2.17.tar.gz | 7,061 | 19/2a/b8229b057a017064a567f2dad456c6c6ad7161f8f63fb552774473a6df07/modelpricing_ai-2026.2.17.tar.gz | source | sdist | null | false | ca4b3c584e68715d31c9f67ea236ac45 | 857aeeca4028eb13fddbe244599cb882bc24ae2eca6de4983dfa66d4d9e082e2 | 192ab8229b057a017064a567f2dad456c6c6ad7161f8f63fb552774473a6df07 | null | [
"LICENSE"
] | 257 |
2.4 | django-stratagem | 2026.2.1b3 | Registry-based plugin architecture for Django applications | <p align="center">
<img src="https://raw.githubusercontent.com/OmenApps/django-stratagem/refs/heads/main/docs/_static/django-stratagem.png" alt="django-stratagem logo" width="400">
</p>
# django-stratagem
[](https://pypi.org/project/django-stratagem/)
[](https://pypi.org/project/django-stratagem/)
[](https://pypi.org/project/django-stratagem/)
[](https://django-stratagem.readthedocs.io/en/latest/)
[](https://opensource.org/licenses/MIT)
Many Django projects reach a point where you want to make the system **configurable** and need some of the app's behavior to be **swappable**. For instance, you might need to support multiple payment processors, where each merchant picks one. Maybe you offer several export formats and users choose CSV, XLSX, or PDF at download time. Maybe different customers get different notification channels depending on their plan.
The usual approach is a mess of nested `if/elif` chains, settings flags, or one-off plugin systems that each work a little differently. django-stratagem replaces all of those with a single pattern: you write each option as a small Python class, and the library auto-discovers it at startup, wires up model fields, populates form and admin dropdowns, and optionally exposes it through DRF.
**How it helps the developer:**
- Add a new option by creating one class in one file. No manual wiring, no migrations.
- Store a user's or tenant's selection in the database with a model field that understands your registry.
- Get dropdowns in forms and the admin automatically - choices stay in sync as you add or remove options.
- Control which options are available to which users using permissions, feature flags, or custom rules.
- Third-party packages can contribute their own options through a plugin entry point.
**What this gives your end users:**
- Admins see a clean dropdown of available options instead of typing class paths or magic strings.
- Options can be enabled, disabled, or restricted per user, role, or tenant without code changes.
- Deploying a new class is enough - no migration needed.
## Example use cases
- **Notification channels** - email, SMS, push, Slack, webhook - let admins pick which channels are active. ([Getting started](https://django-stratagem.readthedocs.io/en/latest/quickstart.html))
- **Payment gateways** - Stripe, PayPal, Braintree - store the chosen gateway per merchant in a model field and swap it at runtime.
- **Export/import formats** - CSV, Excel, PDF, JSON - register each format as an option, then offer them as choices in a [form](https://django-stratagem.readthedocs.io/en/latest/howto-forms-admin.html) or API endpoint.
- **Authentication backends** - LDAP, SAML, OAuth providers - enable or disable per-tenant with [conditional availability](https://django-stratagem.readthedocs.io/en/latest/howto-conditions.html) tied to feature flags or permissions.
- **Pricing / discount strategies** - percentage off, fixed amount, buy-one-get-one - attach the active strategy to a model and let business users pick it in the admin.
- **Report generators** - sales summary, inventory audit, user activity - each report type is a class, and adding a new report is just adding a new module.
## Installation
```bash
pip install django-stratagem
```
Add to `INSTALLED_APPS`:
```python
INSTALLED_APPS = [
"django_stratagem",
# ...
]
```
## Quickstart
### 1. Define a Registry and Interface
```python
# myapp/registry.py
from django_stratagem import Registry, Interface
class NotificationRegistry(Registry):
implementations_module = "notifications"
class NotificationInterface(Interface):
registry = NotificationRegistry
def send(self, message: str, recipient: str) -> bool:
raise NotImplementedError
```
### 2. Create Implementations
```python
# myapp/notifications.py
from myapp.registry import NotificationInterface
class EmailNotification(NotificationInterface):
slug = "email"
description = "Send notifications via email"
priority = 10
def send(self, message, recipient):
# send email...
return True
class SMSNotification(NotificationInterface):
slug = "sms"
description = "Send notifications via SMS"
priority = 20
def send(self, message, recipient):
# send SMS...
return True
```
Implementations are auto-registered when their module is imported. django-stratagem discovers them automatically via `autodiscover_modules("notifications")` on app startup.
### 3. Use in Models
```python
# myapp/models.py
from django.db import models
from myapp.registry import NotificationRegistry
class NotificationConfig(models.Model):
# Stores a reference to the implementation class
strategy = NotificationRegistry.choices_field()
# Or store an instance (instantiated on access)
# strategy = NotificationRegistry.instance_field()
```
### 4. Use in Code
```python
from myapp.registry import NotificationRegistry
# Get all registered implementations
for impl_class in NotificationRegistry:
print(impl_class.slug)
# Get by slug
impl = NotificationRegistry.get(slug="email")
impl.send("Hello!", "user@example.com")
# Get class without instantiation
cls = NotificationRegistry.get_class(slug="email")
# Safe get with fallback
impl = NotificationRegistry.get_or_default(slug="nonexistent", default="email")
# Get choices for forms
choices = NotificationRegistry.get_choices()
# [("email", "Email Notification"), ("sms", "SMS Notification")]
```
## Features
### Conditional Availability
Use conditions to control when implementations are available:
```python
from django_stratagem import ConditionalInterface, PermissionCondition
class AdminNotification(ConditionalInterface):
registry = NotificationRegistry
slug = "admin_only"
condition = PermissionCondition("myapp.admin_notifications")
def send(self, message, recipient):
...
```
Built-in conditions: `FeatureFlagCondition`, `PermissionCondition`, `SettingCondition`, `CallableCondition`, and several more. Conditions support `&` (AND), `|` (OR), and `~` (NOT) operators.
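To illustrate how operator-based condition composition typically works, here is a toy sketch using operator overloading. This is an illustrative pattern only, not django-stratagem's actual `Condition` implementation; the `Condition` class, its `check` method, and the example predicates are all hypothetical:

```python
# Toy sketch of condition composition via `&`, `|`, and `~` (illustrative,
# not the library's real classes). Each condition wraps a predicate that
# receives a context dict and returns a bool.
class Condition:
    def __init__(self, predicate):
        self.predicate = predicate

    def check(self, context):
        return self.predicate(context)

    def __and__(self, other):
        return Condition(lambda ctx: self.check(ctx) and other.check(ctx))

    def __or__(self, other):
        return Condition(lambda ctx: self.check(ctx) or other.check(ctx))

    def __invert__(self):
        return Condition(lambda ctx: not self.check(ctx))


is_staff = Condition(lambda ctx: ctx.get("is_staff", False))
beta_flag = Condition(lambda ctx: ctx.get("beta", False))

# Available to staff users who do NOT have the beta flag enabled.
combined = is_staff & ~beta_flag
print(combined.check({"is_staff": True, "beta": False}))  # True
```

Because each operator returns a new `Condition`, arbitrarily nested expressions like `(a | b) & ~c` compose without any special-case logic.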
### Hierarchical Registries
Define parent-child relationships between registries for advanced needs:
```python
from django_stratagem import HierarchicalRegistry, HierarchicalInterface
class CategoryRegistry(Registry):
implementations_module = "categories"
class SubcategoryRegistry(HierarchicalRegistry):
implementations_module = "subcategories"
parent_registry = CategoryRegistry
class MySubcategory(HierarchicalInterface):
registry = SubcategoryRegistry
slug = "sub_a"
parent_slug = "category_a" # Only valid under category_a
```
### Model Fields
| Field | Description |
|---|---|
| `RegistryClassField` | Stores class reference, returns class on access |
| `RegistryField` | Stores class reference, returns instance on access |
| `MultipleRegistryClassField` | Comma-separated classes |
| `MultipleRegistryField` | Comma-separated instances |
| `HierarchicalRegistryField` | With parent field dependency |
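The core idea behind a "stores class reference, returns class on access" field can be sketched with a plain Python descriptor. This is a hypothetical illustration of the mechanism, not the library's `RegistryClassField` implementation; the `ClassRefField` and `Config` names are invented for the example:

```python
import importlib

# Toy descriptor: persist a dotted import path as a string, resolve it
# back to the class on attribute access (illustrative sketch only).
class ClassRefField:
    def __set_name__(self, owner, name):
        self.attr = "_" + name

    def __set__(self, obj, cls):
        # Store a stable string reference instead of the class object.
        setattr(obj, self.attr, f"{cls.__module__}.{cls.__qualname__}")

    def __get__(self, obj, objtype=None):
        path = getattr(obj, self.attr)
        module, _, qualname = path.rpartition(".")
        return getattr(importlib.import_module(module), qualname)


class Config:
    strategy = ClassRefField()


cfg = Config()
cfg.strategy = dict          # stored internally as "builtins.dict"
assert cfg.strategy is dict  # resolved back to the class on access
```

Storing a string reference is what lets the database column stay a simple text field while code sees real classes (or instances, in the `RegistryField` variant).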
### Django Admin
```python
from django.contrib import admin
from django_stratagem.admin import ContextAwareRegistryAdmin
@admin.register(MyModel)
class MyModelAdmin(ContextAwareRegistryAdmin):
pass
```
### DRF Integration
Install with DRF support:
```bash
pip install django-stratagem[drf]
```
```python
from django_stratagem.drf.serializers import DrfRegistryField
class MySerializer(serializers.Serializer):
strategy = DrfRegistryField(registry=NotificationRegistry)
```
### Template Tags
```html
{% load stratagem %}
{% get_implementations my_registry as implementations %}
{% for slug, impl in implementations.items %}
{{ impl|display_name }} - {{ impl|registry_icon }}
{% endfor %}
```
### Plugin System
External packages can register implementations via entry points:
```toml
# In the plugin's pyproject.toml
[project.entry-points."django_stratagem.plugins"]
my_plugin = "my_plugin.stratagem_plugin"
```
### Management Commands
```bash
# List all registries and implementations
python manage.py list_registries
python manage.py list_registries --format json
# Clear registry caches
python manage.py clear_registries_cache
# Re-initialize registries
python manage.py initialize_registries
```
## Configuration
```python
# settings.py
DJANGO_STRATAGEM = {
"CACHE_TIMEOUT": 300, # Cache TTL in seconds (default: 300)
"SKIP_DURING_MIGRATIONS": True, # Skip registry ops during migrations (default: True)
"ENABLED_PLUGINS": None, # List of enabled plugin names, or None for all
"DISABLED_PLUGINS": [], # List of disabled plugin names
}
```
## License
MIT
| text/markdown | null | Jack Linke <jack@watervize.com> | null | null | null | django, registry, plugin, strategy, pattern, extensible | [
"Development Status :: 4 - Beta",
"Environment :: Web Environment",
"Framework :: Django",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.1",
"Framework :: Django :: 5.2",
"Framework :: Django :: 6.0",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming ... | [] | null | null | >=3.11 | [] | [] | [] | [
"django>=4.2",
"djangorestframework>=3.14; extra == \"drf\"",
"django-waffle>=4.0; extra == \"waffle\""
] | [] | [] | [] | [
"Homepage, https://github.com/OmenApps/django-stratagem",
"Documentation, https://django-stratagem.readthedocs.io/en/latest/",
"Repository, https://github.com/OmenApps/django-stratagem",
"Issues, https://github.com/OmenApps/django-stratagem/issues"
] | uv/0.9.18 {"installer":{"name":"uv","version":"0.9.18","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T18:15:32.687838 | django_stratagem-2026.2.1b3-py3-none-any.whl | 52,799 | 87/a7/b452fdb913bda72f41dc24fc4704049b6f3e12ca7806cef3f4ac15732a6b/django_stratagem-2026.2.1b3-py3-none-any.whl | py3 | bdist_wheel | null | false | b2c9841759eb90adc12979e6866d7000 | ca2ab64180c1d350010584c1ba14a42383074e80c91a446281901626e9170bc1 | 87a7b452fdb913bda72f41dc24fc4704049b6f3e12ca7806cef3f4ac15732a6b | MIT | [
"LICENSE"
] | 226 |
2.4 | dataforge-sdk | 10.0.3 | SDK for creating DataForge extensions | # dataforge-sdk
SDK for creating DataForge extensions.
## Postgres Utilities
The `dataforge.pg` module provides helper functions to execute SQL operations against the DataForge Postgres metastore:
```python
from dataforge.pg import select, update, pull
# Execute a SELECT query and return a Spark DataFrame
df = select("SELECT * FROM my_table")
# Execute an UPDATE/INSERT/DELETE query
update("UPDATE my_table SET col = 'value'")
# Trigger a new data pull for source_id 123
pull(123)
```
## IngestionSession
The `IngestionSession` class manages a custom data ingestion process lifecycle.
```python
from dataforge import IngestionSession
# Initialize a session (production use)
session = IngestionSession()
# Initialize a session (optional source_name/project_name for testing)
session = IngestionSession(source_name="my_source", project_name="my_project")
# Ingest data
# pass a function returning a DataFrame (recommended to integrate logging with DataForge)
session.ingest(lambda: spark.read.csv("s3://bucket/path/input.csv"))
# pass a DataFrame (can be used for testing, not recommended for production deployment)
df = spark.read.csv("s3://bucket/path/input.csv")
session.ingest(df)
# ingest empty dataframe to create 0-record input
session.ingest()
# Fail the process with error message
session.fail("Error message")
# Retrieve latest tracking fields
tracking = session.latest_tracking_fields()
# Retrieve connection parameters for the current source
connection_parameters = session.connection_parameters()
# Retrieve custom parameters for the current source
custom_parameters = session.custom_parameters()
```
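The reason the docs recommend passing a *function* rather than a pre-built DataFrame can be sketched in plain Python: a zero-arg callable lets the session execute the load inside its own error handling, so failures surface through the session's logging instead of before it starts. This toy `ingest` function is illustrative only, not the SDK's implementation:

```python
# Toy sketch (not dataforge-sdk code): accepting either a callable or a
# value lets the session run the load inside its own try/except, so
# errors are captured and reported by the session itself.
def ingest(source, on_failure):
    try:
        data = source() if callable(source) else source
    except Exception as exc:
        on_failure(str(exc))
        return None
    return data


errors = []
ok = ingest(lambda: [1, 2, 3], errors.append)   # load succeeds
bad = ingest(lambda: 1 / 0, errors.append)      # load fails, error recorded
print(ok, bad, errors)  # [1, 2, 3] None ['division by zero']
```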
## ParsingSession
The `ParsingSession` class manages a custom parse process lifecycle.
```python
from dataforge import ParsingSession
# Initialize a session (production use)
session = ParsingSession()
# Initialize a session (optional input_id for testing)
session = ParsingSession(input_id=123)
# Retrieve custom parameters
params = session.custom_parameters()
# Get the path of file to be parsed
path = session.file_path
# Run parsing: pass a DataFrame, a function returning a DataFrame or None (0-record file)
session.run(lambda: spark.read.json(session.file_path))
# Fail the process with error message
session.fail("Error message")
```
## PostOutputSession
The `PostOutputSession` class manages a custom post-output process lifecycle.
```python
from dataforge import PostOutputSession
# Initialize a session (production use)
session = PostOutputSession()
# Initialize a session (optional names for testing)
session = PostOutputSession(output_name="report", output_source_name="my_source", project_name="my_project")
# Get the path of file generated by preceding output process
path = session.file_path()
# Retrieve connection parameters for the current output
connection_parameters = session.connection_parameters()
# Retrieve custom parameters for the current output
custom_parameters = session.custom_parameters()
# Run post-output logic: pass a function encapsulating custom code
session.run(lambda: print(f"Uploading file from {path}"))
# Fail the process with error message
session.fail("Error message")
```
| text/markdown | null | Vadim Orlov <vorlov@dataforgelabs.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"psycopg2-binary>=2.9; extra == \"psycopg2\""
] | [] | [] | [] | [
"Homepage, https://docs.dataforgelabs.com",
"Issues, https://docs.dataforgelabs.com/hc/en-us/requests/new"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-19T18:15:17.863943 | dataforge_sdk-10.0.3.tar.gz | 13,539 | 83/05/9d092de6e855a86a685dab59cd18f03a984821d4f27aaf839b8e1a3db1e9/dataforge_sdk-10.0.3.tar.gz | source | sdist | null | false | 9317416485531a58c51114426585288e | 2f0a7bd8b3141356b6da271d583cd5e92fe7262be38ea81c49ee70898d8c2ca7 | 83059d092de6e855a86a685dab59cd18f03a984821d4f27aaf839b8e1a3db1e9 | null | [] | 264 |
2.1 | subnoto-api-client | 2.5.4 | Python client for the Subnoto Public API | # Subnoto Python SDK
Python client for the Subnoto Public API
Note: the SDK is only available on the linux/amd64 platform for now
## Installation
```bash
pip install subnoto-api-client
```
## Usage
The SDK provides both async and sync clients. Use `SubnotoClient` for async/await code or `SubnotoSyncClient` for synchronous code.
### Async Example
```python
import asyncio
from subnoto_api_client import SubnotoClient, SubnotoConfig
async def main():
config = SubnotoConfig(
api_base_url="https://enclave.subnoto.com",
access_key="your-access-key",
secret_key="your-secret-key-hex"
)
async with SubnotoClient(config) as client:
response = await client.post("/public/workspace/list", json={})
print(f"Workspaces: {response.json()}")
if __name__ == "__main__":
asyncio.run(main())
```
### Sync Example
```python
from subnoto_api_client import SubnotoSyncClient, SubnotoConfig
config = SubnotoConfig(
api_base_url="https://enclave.subnoto.com",
access_key="your-access-key",
secret_key="your-secret-key-hex"
)
with SubnotoSyncClient(config) as client:
response = client.post("/public/workspace/list", json={})
print(f"Workspaces: {response.json()}")
```
## Configuration
| Option | Type | Required | Description |
| -------------- | ----- | -------- | ------------------------------------------------------ |
| `api_base_url` | str | Yes | API base URL (e.g., `https://enclave.subnoto.com`) |
| `access_key` | str | Yes | API access key from your team settings |
| `secret_key` | str | Yes | API secret key (hex-encoded) from your team settings |
| `unattested` | bool | No | Use unattested mode for development (default: `False`) |
| `attester_key` | bytes | No | Public key for attestation verification |
| text/markdown | Subnoto | support@subnoto.com | null | null | Apache-2.0 | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Pr... | [] | https://subnoto.com | null | >=3.8 | [] | [] | [] | [
"httpx",
"attrs",
"http-message-signatures",
"typer"
] | [] | [] | [] | [
"Documentation, https://subnoto.com/documentation/developers/sdks/python",
"Homepage, https://subnoto.com",
"Repository, https://gitlab.com/subnoto/subnoto-monorepo-public"
] | twine/6.2.0 CPython/3.12.11 | 2026-02-19T18:14:59.952401 | subnoto_api_client-2.5.4-py3-none-manylinux2014_x86_64.whl | 1,490,647 | c4/db/be4baa3434dc15541e76f0d577b4b9d0ff60102fa7fb2d1dbe087905e4ba/subnoto_api_client-2.5.4-py3-none-manylinux2014_x86_64.whl | py3 | bdist_wheel | null | false | 25e37ff1bbbba54e5c66c75e9a38cd02 | 0ddfdb12a0e9deb7692bd410c2eb10c6e3215da69e01c2c8371bca6036822c9a | c4dbbe4baa3434dc15541e76f0d577b4b9d0ff60102fa7fb2d1dbe087905e4ba | null | [] | 98 |
2.1 | carconnectivity-webui-by-m7xlab | 1.1.5 | CarConnectivity plugin for a modern Django-based web UI with Apple-inspired design |
# CarConnectivity WebUI by m7xlab
[](https://pypi.org/project/carconnectivity-webui-by-m7xlab/)
[](https://pypi.org/project/carconnectivity-webui-by-m7xlab/)
[](https://pypi.org/project/carconnectivity-webui-by-m7xlab/)
[](https://github.com/m7xlab/CarConnectivity-plugin-webui/blob/main/LICENSE)
[](https://www.djangoproject.com/)
> Modern Django-based WebUI for CarConnectivity with Apple-inspired design
## About
This is a modern Django-based WebUI plugin for [CarConnectivity](https://github.com/tillsteinbach/CarConnectivity) - a Python API to connect to various car services.
**Note**: This is an enhanced fork maintained by m7xlab, featuring a complete rewrite with Django framework and Apple-inspired design system.
### Original Project
Based on the original [CarConnectivity-plugin-webui](https://github.com/tillsteinbach/CarConnectivity-plugin-webui) by Till Steinbach.
## ✨ New: Modern Django-Based UI with Apple-Inspired Design
The WebUI has been completely redesigned with:
- **Modern Framework**: Migrated from Flask to Django 5.0
- **Apple-Inspired Design**: Clean, minimalist interface with glassmorphism effects
- **Dark Mode**: Full dark mode support with automatic detection
- **Responsive**: Mobile-first design that works on all devices
- **Smooth Animations**: 60fps transitions and micro-interactions
- **Modern Icons**: Heroicons SVG icon library
- **Better Performance**: Optimized static file serving with WhiteNoise
- **Enhanced Security**: Django's built-in security features
<img src="https://raw.githubusercontent.com/GedasKr/CarConnectivity-webui-by-m7xlab/main/screenshots/screenshot1.png" width="300">
<img src="https://raw.githubusercontent.com/GedasKr/CarConnectivity-webui-by-m7xlab/main/screenshots/screenshot2.png" width="300">
<img src="https://raw.githubusercontent.com/GedasKr/CarConnectivity-webui-by-m7xlab/main/screenshots/screenshot3.png" width="300">
## How to install
### Install using PIP
If you want to use CarConnectivity Web UI, the easiest way is to obtain it from [PyPI](https://pypi.org/project/carconnectivity-webui-by-m7xlab/). Just install using:
```bash
pip3 install carconnectivity-webui-by-m7xlab
```
after you have installed CarConnectivity.
### Install from Source (Development)
```bash
git clone https://github.com/tillsteinbach/CarConnectivity-plugin-webui.git
cd CarConnectivity-plugin-webui
pip3 install -e .
```
## Configuration
In your carconnectivity.json configuration add a section for the webui plugin like this. A documentation of all possible config options can be found [here](https://github.com/tillsteinbach/CarConnectivity-plugin-webui/tree/main/doc/Config.md).
```
{
    "carConnectivity": {
        "connectors": [
            ...
        ],
        "plugins": [
            {
                "type": "webui",
                "config": {
                    "username": "admin", // Admin username for login
                    "password": "secret" // Admin password for login
                }
            }
        ]
    }
}
```
## How to use
By default, the web interface is available on HTTP port 4000 on the machine hosting CarConnectivity. You can change the listening interface with the `host` parameter and the port with the `port` parameter.
Always set your personal username and password to protect your data from theft.
## Updates
If you want to update, the easiest way is:
```bash
pip3 install carconnectivity-webui-by-m7xlab --upgrade
```
## Features
- 🎨 **Modern Design**: Apple-inspired UI with glassmorphism effects
- 🌓 **Dark Mode**: Automatic dark mode detection with manual toggle
- 📱 **Responsive**: Works perfectly on mobile, tablet, and desktop
- ⚡ **Fast**: Optimized performance with Django and WhiteNoise
- 🔒 **Secure**: Django's built-in security features
- ♿ **Accessible**: WCAG 2.1 AA compliant
- 🎭 **Smooth Animations**: 60fps transitions and micro-interactions
- 🎯 **Modern Icons**: Heroicons SVG icon library
## Logs
The **Log** page shows the **system log** of the CarConnectivity process that runs this WebUI.
- **Source**: Logs are not read from files or other services. The [CarConnectivity](https://github.com/tillsteinbach/CarConnectivity) core attaches an in-memory handler to Python’s `logging` module and appends `LogRecord` objects to a ring buffer. The WebUI reads that buffer and formats each record with a standard formatter (`%(asctime)s - %(name)s - %(levelname)s - %(message)s`). So you see **only logs from this process** (CarConnectivity + connectors + plugins in the same runtime).
- **Buffer size**: The UI shows only the **last N entries** in that buffer (N is defined in CarConnectivity core, often around a dozen). For a longer or full history, use **container logs** (e.g. `docker logs`, pod logs, or a log file if you redirect stdout/stderr to a file). Container logs capture everything the process writes to stdout/stderr—including the same Python log lines plus HTTP server access logs (e.g. Werkzeug), urllib3, and other libraries—so they are more complete and can look different from the UI.
- **Order**: The `?order=` query controls sort order on the log page:
- `order=desc` (default): **Latest first** — most recent entries at the top.
- `order=asc`: **Oldest first** — chronological order from the start of the buffer.
- **Other containers**: Logs from **other containers** (e.g. a separate database container, Grafana, or nginx) are **not** available here. To see those, use the container’s own logging (e.g. `docker logs`, Kubernetes logs, or Grafana’s log datasources).
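The in-memory ring-buffer handler described above can be sketched with the standard `logging` module. This is an illustrative minimal version, not CarConnectivity's actual implementation; the class name, `capacity` default, and `entries` helper are invented for the example (only the formatter string is taken from the text above):

```python
import logging
from collections import deque

# Minimal sketch of the ring-buffer log handler pattern described above
# (illustrative only, not CarConnectivity's real code).
class RingBufferHandler(logging.Handler):
    def __init__(self, capacity=12):
        super().__init__()
        self.buffer = deque(maxlen=capacity)  # oldest records fall off automatically

    def emit(self, record):
        self.buffer.append(record)

    def entries(self, order="desc"):
        fmt = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
        lines = [fmt.format(r) for r in self.buffer]
        return list(reversed(lines)) if order == "desc" else lines


handler = RingBufferHandler(capacity=3)
log = logging.getLogger("carconnectivity.demo")
log.addHandler(handler)
log.setLevel(logging.INFO)
for i in range(5):
    log.info("event %d", i)
# Only the last 3 records survive; order="desc" puts the newest first.
print(handler.entries()[0].endswith("event 4"))  # True
```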
| text/markdown | null | m7xlab <m7xlab@gmail.com> | null | m7xlab <m7xlab@gmail.com> | MIT | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: End Users/Desktop",
"Intended Audience :: System Administrators",
"Framework :: Django",
"Framework :: Django :: 5.0",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
... | [] | null | null | >=3.9 | [] | [] | [] | [
"carconnectivity[images]>=0.11.6",
"Django~=5.0",
"pypng~=0.20220715.0"
] | [] | [] | [] | [
"Homepage, https://github.com/m7xlab/CarConnectivity-plugin-webui",
"Repository, https://github.com/m7xlab/CarConnectivity-plugin-webui",
"Issues, https://github.com/m7xlab/CarConnectivity-plugin-webui/issues",
"Documentation, https://github.com/m7xlab/CarConnectivity-plugin-webui/blob/main/README.md"
] | twine/6.1.0 CPython/3.8.15 | 2026-02-19T18:14:50.867803 | carconnectivity_webui_by_m7xlab-1.1.5.tar.gz | 44,085 | 67/49/5f60f2499bfc50eb92b79bb758fa0be61a04924095d4cf8d9e0c76b3acd2/carconnectivity_webui_by_m7xlab-1.1.5.tar.gz | source | sdist | null | false | 17e33289d878e33cb8a70272dab4cacd | b217365783263caeded47d507333dfbbd72a08da672a31753af586c60cee15bb | 67495f60f2499bfc50eb92b79bb758fa0be61a04924095d4cf8d9e0c76b3acd2 | null | [] | 258 |
2.4 | dissect.target | 3.25.dev62 | This module ties all other Dissect modules together, it provides a programming API and command line tools which allow easy access to various data sources inside disk images or file collections (a.k.a. targets) | # dissect.target
The Dissect module tying all other Dissect modules together. It provides a programming API and command line tools which
allow easy access to various data sources inside disk images or file collections (a.k.a. targets). For more information,
please see [the documentation](https://docs.dissect.tools/en/latest/projects/dissect.target/index.html).
## Requirements
This project is part of the Dissect framework and requires Python.
Information on the supported Python versions can be found in the Getting Started section of [the documentation](https://docs.dissect.tools/en/latest/index.html#getting-started).
## Installation
`dissect.target` is available on [PyPI](https://pypi.org/project/dissect.target/).
```bash
pip install dissect.target
```
This module is also automatically installed if you install the `dissect` package.
If you wish to use the YARA plugin (`target-query -f yara`), you can install `dissect.target[yara]` to automatically
install the `yara-python` dependency.
## Tools inside this project
### target-query
`target-query` is a tool used to query specific data inside one or more targets.
These queries are available in the form of functions that reside within [plugins](https://docs.dissect.tools/en/latest/advanced/plugins.html).
Each plugin is focussed on providing specific functionality.
This functionality can range from parsing log sources, such as command history logs (e.g. bash history,
PowerShell history), to returning the hostname and operating system version.
The most basic usage of `target-query` is to execute a function on a target:
```bash
target-query -f <FUNCTION_NAME> /example_path/target.vmdk
```
You can also use basic path expansion to execute functions over multiple targets. For example, to execute a function
on all ``.vmdk`` files in a directory:
```bash
target-query -f <FUNCTION_NAME> /example_path/*.vmdk
```
Not every target plugin will function on every target, they are OS specific.
More information on how to use `target-query` is found in [the documentation](https://docs.dissect.tools/en/latest/tools/target-query.html).
### target-shell
`target-shell` gives you the ability to access a target using a virtual shell environment. Once a shell is opened
on a target, type `help` to list the available commands. To see the documentation of each command,
you can use `help [COMMAND]`.
Opening a shell on a target is straightforward. You can do so by specifying a path to a target as follows:
```bash
target-shell targets/EXAMPLE.vmx
WIN-EXAMPLE:/$ help
Documented commands (type help <topic>):
========================================
attr cls enter find info man registry volumes
cat cyber exit hash less pwd save zcat
cd debug file help ll python stat zless
clear disks filesystems hexdump ls readlink tree
WIN-EXAMPLE:/$ ls
$fs$
c:
efi
sysvol
```
Further interacting with the target can be done using the commands listed above.
You can exit the shell by running `exit` or by pressing `CTRL+D`.
More information on how to use `target-shell` is found in [the documentation](https://docs.dissect.tools/en/latest/tools/target-shell.html).
### target-fs
With `target-fs` you can interact with the filesystem of a target using a set of familiar Unix commands.
The basic structure of a `target-fs` command is as follows:
```bash
target-fs <path_to_target> <command> <path_for_command>
```
**NOTE:** As with any shell command, you have to properly escape backslashes and spaces, unless you use single or double quotes (`'`, `"`).
More information on how to use `target-fs` is found in [the documentation](https://docs.dissect.tools/en/latest/tools/target-fs.html).
### target-reg
With `target-reg` you can easily query the registry of Windows targets and print the results in a tree. A `+` symbol indicates that it is a registry key (i.e. may have subkeys). A `-` symbol indicates a registry value.
```bash
user@dissect~$ target-reg targets/EXAMPLE.E01 -k "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft"
+ 'Microsoft' (last-modified-date-shows-here)
+ '.NETFramework' (last-modified-date-shows-here)
- 'Enable64Bit' value-shows-here
[...]
```
More information on how to use `target-reg` is found in [the documentation](https://docs.dissect.tools/en/latest/tools/target-reg.html).
### target-dump
With `target-dump` you can export records of a specific `function` used in target-query to a file.
The basic structure of a `target-dump` command is as follows:
```bash
target-dump -f <comma_separated_functions> <path_to_target>
```
Furthermore, the tool can apply certain compression algorithms to the dump, to create small archives of the output.
More information on how to use `target-dump` is found in [the documentation](https://docs.dissect.tools/en/latest/tools/target-dump.html).
### target-dd
With `target-dd` you can export (a part of) a target to a file or to stdout. At the moment, `target-dd` can be used for targets that have only one disk.
The basic structure of a `target-dd` command is as follows:
```bash
target-dd --write <output_file> --offset <offset_on_target_in_bytes> --bytes <nr_of_bytes_to_read> <path_to_target>
```
More information on how to use `target-dd` is found in [the documentation](https://docs.dissect.tools/en/latest/tools/target-dd.html).
### target-mount
With `target-mount` you can mount the filesystem of a target to any arbitrary directory on your analysis machine, similar to the `mount` command on Unix systems.
To perform this function, we use `fusepy` to mount a filesystem on Linux and macOS.
This interacts with `fuselib` to mount disk images in Linux userspace, so no administrative access is required.
`target-mount` has two required positional arguments:
* `TARGET` - Target to mount
* `MOUNT` - Directory to mount the target's filesystem on
The following example command can be used to mount a target to the directory ``mnt``:
```bash
user@dissect~$ target-mount targets/EXAMPLE.vmx ~/mnt/EXAMPLE
user@dissect~$ ls ~/mnt/EXAMPLE/
disks fs volumes
```
When mounting a target using `target-mount` the process is kept in the foreground. This will occupy your current
terminal session. It is recommended to either open a second terminal, let this command run in the background by
appending `&` to the command or use a terminal multiplexer like `tmux` to start a second session. Using one
of these methods enables you to interact with the mountpoint.
More information on how to use `target-mount` is found in [the documentation](https://docs.dissect.tools/en/latest/tools/target-mount.html).
## Build and test instructions
This project uses `tox` to build source and wheel distributions. Run the following command from the root folder to build
these:
```bash
tox -e build
```
The build artifacts can be found in the `dist/` directory.
`tox` is also used to run linting and unit tests in a self-contained environment. To run both linting and unit tests
using the default installed Python version, run:
```bash
tox
```
For a more elaborate explanation on how to build and test the project, please see [the
documentation](https://docs.dissect.tools/en/latest/contributing/tooling.html).
## Contributing
The Dissect project encourages any contribution to the codebase. To make your contribution fit into the project, please
refer to [the development guide](https://docs.dissect.tools/en/latest/contributing/developing.html).
## Copyright and license
Dissect is released as open source by Fox-IT (<https://www.fox-it.com>) part of NCC Group Plc
(<https://www.nccgroup.com>).
Developed by the Dissect Team (<dissect@fox-it.com>) and made available at <https://github.com/fox-it/dissect>.
License terms: AGPL3 (<https://www.gnu.org/licenses/agpl-3.0.html>). For more information, see the LICENSE file.
| text/markdown | null | Dissect Team <dissect@fox-it.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Internet :: Log Analysis",
"Topic :: Scientific/Engineering ... | [] | null | null | >=3.10 | [] | [] | [] | [
"defusedxml",
"dissect.cstruct<5,>=4",
"dissect.database<2,>=1.1.dev4",
"dissect.eventlog<4,>=3",
"dissect.evidence<4,>=3.13.dev3",
"dissect.hypervisor<4,>=3.21.dev5",
"dissect.ntfs<4,>=3.16.dev",
"dissect.regf<4,>=3.13",
"dissect.util<4,>=3",
"dissect.volume<4,>=3.17",
"flow.record~=3.21.0",
... | [] | [] | [] | [
"homepage, https://dissect.tools",
"documentation, https://docs.dissect.tools/en/latest/projects/dissect.target",
"repository, https://github.com/fox-it/dissect.target"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:14:44.418353 | dissect_target-3.25.dev62.tar.gz | 1,299,107 | 51/67/ba4be860ba534059cdf45b2348f2a9b40fe29e78455e0d065912713f1774/dissect_target-3.25.dev62.tar.gz | source | sdist | null | false | 4231c4bbe7409762dc2f16394dd8a5fd | b973dc7e8f5ac794aa45d7eb4df69efdfad3d501f79beaaf58a2402276679f88 | 5167ba4be860ba534059cdf45b2348f2a9b40fe29e78455e0d065912713f1774 | AGPL-3.0-or-later | [
"LICENSE",
"COPYRIGHT"
] | 0 |
2.4 | pycfast | 0.1.1 | Python interface for building, running, and analyzing CFAST fire simulation models | # PyCFAST
[](https://github.com/bewygs/pycfast/actions/workflows/test.yml)
[](https://bewygs.github.io/pycfast)
[](https://github.com/astral-sh/uv)
[](https://github.com/astral-sh/ruff)
[](https://github.com/python/mypy)
[](https://codecov.io/gh/bewygs/pycfast)
[](https://github.com/bewygs/pycfast/blob/main/LICENSE)
**PyCFAST** is a Python interface for the [**Consolidated Fire and Smoke Transport (CFAST)**](https://pages.nist.gov/cfast/)
fire simulation software. Its primary goal is to **automate CFAST calculations at scale**:
running parametric studies, sensitivity analyses, data generation, or optimization loops that would be
impractical through the graphical interface. It also provides a convenient way to
create CFAST input files, execute simulations, and analyze results using the versatility
and extensive ecosystem of Python.
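A parametric study like the one described above boils down to sweeping a parameter grid. A minimal sketch of generating such a grid with the standard library (the parameter names here are illustrative, not part of the PyCFAST API; each combination would then be used to build and run one `CFASTModel`):

```python
from itertools import product

# Illustrative parameter ranges for a hypothetical sensitivity study
room_widths = [4.0, 6.0, 8.0]        # m
peak_hrrs = [250.0, 500.0, 1000.0]   # kW

# One dict per CFAST case to build and run
cases = [
    {"width": w, "peak_hrr": q}
    for w, q in product(room_widths, peak_hrrs)
]
print(len(cases))  # 9 combinations
```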
## From CEdit GUI to Python
PyCFAST can be seen as an alternative to the CEdit graphical interface. It exposes Python objects with **rich interactive representations** that integrate naturally into your Python workflow. Instead of relying on static input files, you define and manipulate CFAST models programmatically.
<table>
<tr>
<td align="center"><strong>CEdit (GUI)</strong></td>
<td align="center"><strong>PyCFAST (Python)</strong></td>
</tr>
<tr>
<td><img src="docs/source/_static/images/cedit-compartments-tab.png" alt="CEdit Compartments Tab" width="400"></td>
<td>
```python
from pycfast import Compartments
room = Compartments(
id="Comp 1",
width=10.0, depth=10.0, height=10.0,
ceiling_mat_id="Gypboard",
wall_mat_id="Gypboard",
floor_mat_id="Gypboard",
)
room # displays interactive HTML card
```
</td>
</tr>
</table>
Every PyCFAST object, such as compartments, fires, vents, devices, and materials, can render as an interactive HTML card when displayed in Jupyter notebooks or VS Code notebooks. These cards provide a visual summary of the component's properties and can be expanded to show more details:
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="docs/source/_static/images/pycfast-all-cards-dark.png">
<img src="docs/source/_static/images/pycfast-all-cards-light.png" alt="PyCFAST component cards" width="700">
</picture>
</p>
The complete model overview displays all components at a glance with expandable details:
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="docs/source/_static/images/pycfast-model-card-dark.png">
<img src="docs/source/_static/images/pycfast-model-card-light.png" alt="PyCFAST model card" width="400">
</picture>
</p>
## Example Usage
You can define your own CFAST model directly in Python by importing the required classes.
```python
from pycfast import (
CeilingFloorVents,
CFASTModel,
Compartments,
Fires,
MaterialProperties,
MechanicalVents,
SimulationEnvironment,
WallVents,
)
simulation_environment = SimulationEnvironment(...)
material_properties = [MaterialProperties(...)]
compartments = [Compartments(...)]
wall_vents = [WallVents(...)]
ceiling_floor_vents = [CeilingFloorVents(...)]
mechanical_vents = [MechanicalVents(...)]
fires = [Fires(...)]
model = CFASTModel(
simulation_environment=simulation_environment,
material_properties=material_properties,
compartments=compartments,
wall_vents=wall_vents,
ceiling_floor_vents=ceiling_floor_vents,
mechanical_vents=mechanical_vents,
fires=fires,
file_name="test_simulation.in",
cfast_exe="/path/to/cfast_executable",
extra_arguments=["-f"],
)
results = model.run()
# results is a dict of pandas DataFrames for each output CSV file
```
Or you can import your existing model from a CFAST input file:
```python
from pycfast.parsers import parse_cfast_file
model = parse_cfast_file("existing_model.in")
```
**Note:** When importing an existing model, ensure that all component names (TITLE, MATERIAL, ID, etc.) use **only alphanumeric characters**. Avoid **special characters** such as quotes and slashes, as these may cause parsing issues; they will be automatically sanitized where possible.
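A sanitization pass like the one described in the note can be sketched with a regular expression. This is an illustration only, not PyCFAST's actual implementation (which may keep a different set of characters):

```python
import re

def sanitize_name(name: str) -> str:
    """Strip characters outside letters, digits, spaces, and underscores."""
    return re.sub(r"[^A-Za-z0-9 _]", "", name)

print(sanitize_name('Comp "1"/main'))  # Comp 1main
```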
You can inspect any model interactively (displays the HTML card shown above), or use text-based methods:
```python
model # interactive HTML card in Jupyter/VS Code
model.summary() # text summary to stdout
model.save() # writes the CFAST input file to disk
model.view_cfast_input_file() # view the generated input file
```
With this library you can easily build a data generation workflow like the one below:
https://github.com/user-attachments/assets/359045a2-4645-4e95-a788-55bb6aff4b6c
Check out the [examples](https://pycfast.org/examples/) for more usage scenarios.
## Installation
PyCFAST requires **Python 3.10 or later**. It is fully tested with **CFAST 7.7.5** and is expected to be compatible with all **CFAST 7.7.x** versions.
### Pip or Conda
PyCFAST can be installed from [PyPI](https://pypi.org/project/pycfast) or [conda-forge](https://anaconda.org/conda-forge/pycfast):
```bash
pip install pycfast
```
```bash
conda install -c conda-forge pycfast
```
### Source
To install PyCFAST from source, clone the repository and install the required dependencies:
```bash
git clone https://github.com/bewygs/pycfast.git
cd pycfast
python -m pip install .
```
### CFAST Installation
Download and install CFAST from the [NIST CFAST website](https://pages.nist.gov/cfast/) or the [CFAST GitHub repository](https://github.com/firemodels/cfast). Follow the installation instructions for your operating system and ensure `cfast` is available in your PATH. If CFAST is installed in a non-standard location, you can manually specify the path with these methods:
- From an environment variable ``CFAST``:
```bash
export CFAST="/path/to/your/cfast/executable" # Linux/MacOS
set CFAST="C:\path\to\your\cfast\executable" # Windows (cmd)
$env:CFAST="C:\path\to\your\cfast\executable" # Windows (PowerShell)
```
- From Python code when defining the ``CFASTModel``:
```python
import pycfast
# set custom CFAST executable path via environment variable
import os
os.environ['CFAST'] = "/path/to/your/cfast/executable"
# Or directly when defining CFASTModel
model = pycfast.CFASTModel(cfast_path="/path/to/your/cfast/executable")
```
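The lookup order described above can be sketched as follows. The helper name is hypothetical and PyCFAST's actual resolution logic may differ; this only illustrates the precedence of explicit path, `CFAST` environment variable, and PATH search:

```python
import os
import shutil

def resolve_cfast(explicit_path=None):
    """Resolve the CFAST executable: explicit argument first, then the
    CFAST environment variable, then a search of PATH."""
    if explicit_path:
        return explicit_path
    env_path = os.environ.get("CFAST")
    if env_path:
        return env_path
    return shutil.which("cfast")  # None if nothing is found on PATH
```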
## Documentation
Full documentation, including the API reference and examples, is available online: [PyCFAST Documentation](https://pycfast.org/stable/)
## Examples
Some examples of how to use PyCFAST with various Python libraries (NumPy, SciPy, SALib, etc.) can be found in the [examples](https://pycfast.org/stable/examples/).
## Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for more information.
## References
If you use PyCFAST in your projects, please consider citing the following:
```bib
@misc{pycfast-zenodo-2025,
title = {PyCFAST},
author = {{Benoit Wygas}},
year = {2025},
publisher = {Zenodo},
version = {0.1.0},
doi = {},
url = {},
note = {Software release}
}
```
## Acknowledgments
This Python package was developed with the support of [**Orano**](https://www.orano.group/).
<a href="https://www.orano.group/">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="docs/source/_static/orano-logo-dark.svg">
<img src="docs/source/_static/orano-logo.svg" alt="Orano logo" width="150" >
</picture>
</a>
PyCFAST is built on top of the work of the CFAST development team at the
[National Institute of Standards and Technology (NIST)](https://www.nist.gov/).
We acknowledge their ongoing efforts in maintaining and improving the CFAST fire
modeling software.
| text/markdown | null | WYGAS Benoît <benoit.wgs@protonmail.com> | null | WYGAS Benoît <benoit.wgs@protonmail.com> | MIT License
Copyright (c) 2025 Benoît WYGAS — Orano.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| fire, simulation, cfast, modeling, fire-safety, engineering | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming La... | [] | null | null | >=3.10 | [] | [] | [] | [
"f90nml>=1.4.5",
"pandas>=2.0.0",
"pycfast[examples]; extra == \"docs\"",
"ipython; extra == \"docs\"",
"sphinx>=5.0; extra == \"docs\"",
"myst-parser; extra == \"docs\"",
"nbsphinx; extra == \"docs\"",
"sphinx-gallery>=0.16; extra == \"docs\"",
"linkify-it-py; extra == \"docs\"",
"numpydoc>=1.9.0... | [] | [] | [] | [
"Homepage, https://github.com/bewygs/pycfast",
"Documentation, https://bewygs.github.io/pycfast",
"Repository, https://github.com/bewygs/pycfast",
"Issues, https://github.com/bewygs/pycfast/issues",
"Changelog, https://github.com/bewygs/pycfast/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:14:40.521448 | pycfast-0.1.1.tar.gz | 65,599 | 91/c5/246a0baf70b5b8159f84657cedb499204df0f20df25be726d9c2ee532382/pycfast-0.1.1.tar.gz | source | sdist | null | false | 5141cfdc25042baf7e08a2b1025ec7d2 | d90a1374315750dadafbdd05c0527d3822fe843a0a0cc9e90d7041536b6c5a28 | 91c5246a0baf70b5b8159f84657cedb499204df0f20df25be726d9c2ee532382 | null | [
"LICENSE"
] | 237 |
2.4 | langgraph-prebuilt | 1.0.8 | Library with high-level APIs for creating and executing LangGraph agents and tools. | # LangGraph Prebuilt
This library defines high-level APIs for creating and executing LangGraph agents and tools.
> [!IMPORTANT]
> This library is meant to be bundled with `langgraph`; do not install it directly
## Agents
`langgraph-prebuilt` provides an [implementation](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent) of a tool-calling [ReAct-style](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/#react-implementation) agent - `create_react_agent`:
```bash
pip install langchain-anthropic
```
```python
from langchain_anthropic import ChatAnthropic
from langgraph.prebuilt import create_react_agent
# Define the tools for the agent to use
def search(query: str):
"""Call to surf the web."""
# This is a placeholder, but don't tell the LLM that...
if "sf" in query.lower() or "san francisco" in query.lower():
return "It's 60 degrees and foggy."
return "It's 90 degrees and sunny."
tools = [search]
model = ChatAnthropic(model="claude-3-7-sonnet-latest")
app = create_react_agent(model, tools)
# run the agent
app.invoke(
{"messages": [{"role": "user", "content": "what is the weather in sf"}]},
)
```
## Tools
### ToolNode
`langgraph-prebuilt` provides an [implementation](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.tool_node.ToolNode) of a node that executes tool calls - `ToolNode`:
```python
from langgraph.prebuilt import ToolNode
from langchain_core.messages import AIMessage
def search(query: str):
"""Call to surf the web."""
# This is a placeholder, but don't tell the LLM that...
if "sf" in query.lower() or "san francisco" in query.lower():
return "It's 60 degrees and foggy."
return "It's 90 degrees and sunny."
tool_node = ToolNode([search])
tool_calls = [{"name": "search", "args": {"query": "what is the weather in sf"}, "id": "1"}]
ai_message = AIMessage(content="", tool_calls=tool_calls)
# execute tool call
tool_node.invoke({"messages": [ai_message]})
```
### ValidationNode
`langgraph-prebuilt` provides an [implementation](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.tool_validator.ValidationNode) of a node that validates tool calls against a pydantic schema - `ValidationNode`:
```python
from pydantic import BaseModel, field_validator
from langgraph.prebuilt import ValidationNode
from langchain_core.messages import AIMessage
class SelectNumber(BaseModel):
a: int
@field_validator("a")
def a_must_be_meaningful(cls, v):
if v != 37:
raise ValueError("Only 37 is allowed")
return v
validation_node = ValidationNode([SelectNumber])
validation_node.invoke({
"messages": [AIMessage("", tool_calls=[{"name": "SelectNumber", "args": {"a": 42}, "id": "1"}])]
})
```
## Agent Inbox
The library contains schemas for using the [Agent Inbox](https://github.com/langchain-ai/agent-inbox) with LangGraph agents. Learn more about how to use Agent Inbox [here](https://github.com/langchain-ai/agent-inbox#interrupts).
```python
from langgraph.types import interrupt
from langgraph.prebuilt.interrupt import HumanInterrupt, HumanResponse
def my_graph_function(state):
# Extract the last tool call from the `messages` field in the state
tool_call = state["messages"][-1].tool_calls[0]
# Create an interrupt
request: HumanInterrupt = {
"action_request": {
"action": tool_call['name'],
"args": tool_call['args']
},
"config": {
"allow_ignore": True,
"allow_respond": True,
"allow_edit": False,
"allow_accept": False
},
"description": _generate_email_markdown(state) # Generate a detailed markdown description.
}
# Send the interrupt request inside a list, and extract the first response
response = interrupt([request])[0]
if response['type'] == "response":
# Do something with the response
...
``` | text/markdown | null | null | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programm... | [] | null | null | >=3.10 | [] | [] | [] | [
"langchain-core>=1.0.0",
"langgraph-checkpoint<5.0.0,>=2.1.0"
] | [] | [] | [] | [
"Source, https://github.com/langchain-ai/langgraph/tree/main/libs/prebuilt",
"Twitter, https://x.com/LangChain",
"Slack, https://www.langchain.com/join-community",
"Reddit, https://www.reddit.com/r/LangChain/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:14:39.083988 | langgraph_prebuilt-1.0.8.tar.gz | 164,442 | 0d/06/dd61a5c2dce009d1b03b1d56f2a85b3127659fdddf5b3be5d8f1d60820fb/langgraph_prebuilt-1.0.8.tar.gz | source | sdist | null | false | 7e3e20f8084242355c07237777e19f0d | 0cd3cf5473ced8a6cd687cc5294e08d3de57529d8dd14fdc6ae4899549efcf69 | 0d06dd61a5c2dce009d1b03b1d56f2a85b3127659fdddf5b3be5d8f1d60820fb | MIT | [
"LICENSE"
] | 1,107,044 |
2.4 | jira-test-reporting | 1.5.1 | A utility to report pytest results to Jira and Slack | ## Description
This repository contains utility scripts for
- reporting automated test results to Jira
- sending a Slack notification with run stats and a Jira URL that shows the failed tests for the current test run ID (TRID)
specifically designed for use in CI/CD pipelines such as Bitbucket Pipelines.
## Prerequisites
- **Python**: Version 3.12 or higher.
- **Json Report**: Calling project should generate json report. How to [create pytest json report](https://pypi.org/project/pytest-json-report/)
- **Jira Access**: A Jira instance with API token authentication. How to [create jira api token](https://id.atlassian.com/manage-profile/security/api-tokens)
- **Slack Webhook**: A Slack webhook URL for sending notifications. How to [create slack incoming webhook](https://api.slack.com/messaging/webhooks#getting_started)
- **Configuration File**: A `_env_configs/third_party.conf` file with Jira and Slack settings.
## Installation
`pip install jira-test-reporting`
## Jira project preparation
### Create new Jira project and configure issue type "Task" with following fields
The script uses the following custom fields in Jira tasks:
- Test Environment : Field Type - Dropdown. `Important - Pre-populate the values`
- Test Area : Field Type - Dropdown - `Important - Pre-populate the values`
- Test Type : Field Type - Labels
- Test Run : Field Type - Short Text
- Test Tags : Field Type - Labels
- Test Status : Field Type - Dropdown `Important - Pre-populate the values`
- TRID : Field Type - Short Text
### Important Instructions
- In the pytest JSON report, check the block `"nodeid": "api_tests/Test_Pilot/test_jira_reporting_scenarios.py::Test_JIRA_Reporting_Scenarios::test_jira_reporting_test_passed",`
  - `api_tests` should be pre-populated under the Test Type field options
  - `Test_Pilot` should be pre-populated under the Test Area field options
- Similarly, in the pytest JSON report, check the block `"outcome": "passed"`
  - `Passed` should be pre-populated under the Test Status field options. For this field, the values should be pre-populated in title case.
- Also, make sure that in your jira project, the issue type "Task" has default fields Description and Status
- In the caller project, create a `_env_configs/third_party.conf` file with the following structure:
```ini
[DEFAULT]
jira_field_id_test_env = customfield_10208
jira_field_id_test_area = customfield_10236
jira_field_id_test_type = customfield_10301
jira_field_id_test_run_name = customfield_10205
jira_field_id_test_tags = customfield_10202
jira_field_id_test_status = customfield_10235
jira_field_id_test_run_id = customfield_10269
scm_url_variable = BITBUCKET_GIT_HTTP_ORIGIN
scm_build_number_variable = BITBUCKET_BUILD_NUMBER
```
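The nodeid and outcome conventions described above can be sketched as a small parser. This is illustrative only; the actual field mapping lives inside the package:

```python
def map_nodeid(nodeid: str, outcome: str) -> dict:
    """Derive Jira field values from a pytest nodeid and outcome."""
    path, _, test_name = nodeid.partition("::")
    parts = path.split("/")
    return {
        "test_type": parts[0],           # e.g. "api_tests"
        "test_area": parts[1],           # e.g. "Test_Pilot"
        "test_name": test_name.split("::")[-1],
        "test_status": outcome.title(),  # "passed" -> "Passed"
    }

nodeid = ("api_tests/Test_Pilot/test_jira_reporting_scenarios.py"
          "::Test_JIRA_Reporting_Scenarios::test_jira_reporting_test_passed")
print(map_nodeid(nodeid, "passed"))
```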
The script also uses the following default fields in Jira tasks:
- `project` - reflects `jira_project_key`
- `summary` - test_name
- `description` - failure or passing description
- `status` - reflects test_status as in `jira_field_id_test_status`
The values for the fields above will be fetched directly from the JSON report:
- New Jira tasks will be created for non-existing tests
- Existing tests will be updated
## Examples
### Jira Reported Tests Example

### Slack Notification Example
```
API Test Results
──────────────
🚀 *Test Run:* Release-X
🌎 *Environment:* Staging
❌ *Failed:* 4
──────────────
🧪 *Total Tests:* 148
✅ *Passed:* 143
🔄 *Executed:* 147
⏸️ *Skipped:* 1
📈 Click to open Test Report in Jira
📡 FYA: @User1 @User2
Execution Date: May-23-2025
```
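The counts in the notification above come straight from the pytest JSON report. A minimal sketch of deriving them, assuming the `tests`/`outcome` layout produced by pytest-json-report:

```python
# Minimal slice of a pytest-json-report payload
report = {
    "tests": [
        {"outcome": "passed"},
        {"outcome": "passed"},
        {"outcome": "failed"},
        {"outcome": "skipped"},
    ]
}

outcomes = [t["outcome"] for t in report["tests"]]
stats = {
    "total": len(outcomes),
    "passed": outcomes.count("passed"),
    "failed": outcomes.count("failed"),
    "skipped": outcomes.count("skipped"),
}
# "Executed" counts everything that actually ran
stats["executed"] = stats["total"] - stats["skipped"]
print(stats)
```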
## Usage
### Standalone
1. Ensure parameters in `_env_configs/third_party.conf` have valid values
2. Export environment variables as follows
```
export jira_host_url=https://my-jira-team.atlassian.net
export jira_username=whoami@my-jira-team.com
export jira_password=XXXXXXXXXXXXXXXXXXXXX
export jira_project_key=TQER
export slack_dev_channel_webhook=https://hooks.slack.com/services/AAAAAA/BBBBBBB/CCCCCCCCC
export slack_prod_channel_webhook=https://hooks.slack.com/services/AAAAAA/BBBBBBB/CCCCCCCCC
export slack_test_webhook=https://hooks.slack.com/services/AAAAAA/BBBBBBB/CCCCCCCCC
```
3. Run the script with command-line arguments to process a pytest report:
```bash
python -m jira_test_reporting.test_results_processor --test-env=Dev --test-run=Release-X --report=test-reports/pytest_report.json --notify-slack=yes
```
### CI-CD hooked example (this copies required files into your test_automation directory)
Assuming you have
- set `Repository Variables` (as in step #2 of the standalone setup above) in your SCM tool. (How to: [Bitbucket](https://support.atlassian.com/bitbucket-cloud/docs/variables-and-secrets/), [GitHub](https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/store-information-in-variables))
- configured pipeline in your SCM tool or a shell script in the caller project to execute the tests.
```bash
#!/bin/bash
# -----------------------------------------------------------------------------------------
# # Test Execution
# -----------------------------------------------------------------------------------------
.. pip install -r requirements.txt > /dev/null 2>&1
.. test execution code here
.. pytest -s --tb=no --no-header api_tests --testenv="$TEST_ENV" --json-report -v --json-report-indent=4 --json-report-omit collectors setup teardown --json-report-file=./test-reports/pytest_report.json
# JUST ADD FOLLOWING CODE BLOCK to report the issues
# -----------------------------------------------------------------------------------------
# Report test results to Jira
# -----------------------------------------------------------------------------------------
echo "Reporting test results into Jira and notifying slack"
if [ -n "$TEST_RUN_NAME" ]; then
python -m jira_test_reporting.test_results_processor --test-env="$TEST_ENV" --test-run="$TEST_RUN_NAME"
else
python -m jira_test_reporting.test_results_processor --test-env="$TEST_ENV"
fi
```
### Arguments
- `--test-env`: Test environment (default: `Dev`). Examples: `--test-env=dev`, `--test-env=stage`.
- `--test-run`: Test run identifier (default: `Daily Run`). Examples: `--test-run=Release-X`, `--test-run="Regression Tests"`.
- `--report`: Test report file path (default: `test-reports/pytest_report.json`). Examples: `--report=my-test-reports/my-pytest_report.json`
- `--notify-slack`: Whether to send a notification to Slack (default: `yes`). Examples: `--notify-slack=yes`, `--notify-slack=no`
- `--comments-cleanup`: Whether to clean up the comments on the test issues once they have piled up. Examples: `--comments-cleanup=yes`, `--comments-cleanup=no`
## Troubleshooting
- **Jira Connection Errors**:
- Verify `jira_host_url`, `jira_username`, and `jira_password` in `_env_configs/third_party.conf`.
- Ensure the API token is valid and has “Create Issues” and “Edit Issues” permissions.
- **Slack Notification Failure**:
- Check the webhook URL in the config file.
- Ensure the Slack app is configured to allow incoming webhooks.
- **Pytest Report Issues**:
- Confirm `test-reports/pytest_report.json` exists and contains valid JSON.
- **Custom Field Errors**:
- Validate field IDs and allowed values in Jira Admin > Issues > Custom Fields.
## Contributing
Please read [CONTRIBUTE.md](https://github.com/sspatwardhan/jira-test-reporting/blob/main/CONTRIBUTE.md)
## License
This project is licensed under the MIT License. See `LICENSE` for details.
| text/markdown | null | Saurabh Patwardhan <patwardhansaurabhs@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"requests>=2.32.3",
"configparser>=7.1.0",
"jira>=3.8.0"
] | [] | [] | [] | [
"Source, https://github.com/sspatwardhan/jira-test-reporting",
"Issues, https://github.com/sspatwardhan/jira-test-reporting/issues",
"Contribute, https://github.com/sspatwardhan/jira-test-reporting/blob/main/CONTRIBUTE.md"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-19T18:14:30.219997 | jira_test_reporting-1.5.1.tar.gz | 14,961 | ee/3f/5f95f3403f07d318a1298f57df8ee86cd9b7efe316e6439fe228dffffef3/jira_test_reporting-1.5.1.tar.gz | source | sdist | null | false | 2a7f6c7f768c510d269e76210e420c80 | 6b139d0a773866837f0023bdd8b068c8c46fe0567fddb6e3d3355d28b46016f6 | ee3f5f95f3403f07d318a1298f57df8ee86cd9b7efe316e6439fe228dffffef3 | null | [
"LICENSE"
] | 270 |
2.4 | raps | 4.7.0 | Rust CLI for Autodesk Platform Services | # RAPS - Rust CLI for Autodesk Platform Services
[](https://badge.fury.io/py/raps)
[](https://opensource.org/licenses/Apache-2.0)
A fast, modern command-line interface for Autodesk Platform Services (APS), built with Rust.
## Installation
```bash
pip install raps
```
## Quick Start
```bash
# Check installation
raps --version
# Get help
raps --help
# Test authentication (requires APS credentials)
raps auth test
# List buckets
raps bucket list
```
## Configuration
Set your APS credentials as environment variables:
```bash
export APS_CLIENT_ID="your-client-id"
export APS_CLIENT_SECRET="your-client-secret"
```
Or use a `.env` file in your project directory.
## Features
- **Object Storage Service (OSS)**: Manage buckets and objects
- **Model Derivative**: Translate and extract model data
- **Data Management**: Work with hubs, projects, and folders
- **Design Automation**: Run Revit, AutoCAD, and Inventor engines
- **Authentication**: Support for 2-legged, 3-legged, and device code flows
- **MCP Server**: AI assistant integration via Model Context Protocol
## Documentation
For full documentation, visit [rapscli.xyz](https://rapscli.xyz).
## Alternative Installation Methods
### Shell Script (Linux/macOS)
```bash
curl -fsSL https://raw.githubusercontent.com/dmytro-yemelianov/raps/main/install.sh | bash
```
### PowerShell (Windows)
```powershell
irm https://raw.githubusercontent.com/dmytro-yemelianov/raps/main/install.ps1 | iex
```
### Homebrew (macOS)
```bash
brew install dmytro-yemelianov/tap/raps
```
### Scoop (Windows)
```powershell
scoop bucket add raps https://github.com/dmytro-yemelianov/scoop-bucket
scoop install raps
```
## License
Apache 2.0 - See [LICENSE](https://github.com/dmytro-yemelianov/raps/blob/main/LICENSE) for details.
| text/markdown; charset=UTF-8; variant=GFM | Dmytro Yemelianov | null | null | null | Apache-2.0 | autodesk, aps, forge, cad, bim, cli, rust | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Rust",
"Topic :: Software Development :: Build Tools"
] | [] | https://rapscli.xyz | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://rapscli.xyz/docs",
"Homepage, https://rapscli.xyz",
"Repository, https://github.com/dmytro-yemelianov/raps"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:13:59.924049 | raps-4.7.0-py3-none-win_amd64.whl | 9,596,313 | e8/44/77748c21d7ce271158874f2e8fdfd8baf35864849fc9144559c69d6a2385/raps-4.7.0-py3-none-win_amd64.whl | py3 | bdist_wheel | null | false | eadaa1f8c3c64cde3a6e3d0bdabd1f82 | bd6cd0d78ec695d1cc900a92ccb0252c0ee91494227257706cdf01cec6f81fa4 | e84477748c21d7ce271158874f2e8fdfd8baf35864849fc9144559c69d6a2385 | null | [] | 293 |
2.4 | xarray-einstats | 0.10.0 | Stats, linear algebra and einops for xarray | # xarray-einstats
[](https://xarray-einstats.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/arviz-devs/xarray-einstats/actions/workflows/test.yml)
[](https://codecov.io/gh/arviz-devs/xarray-einstats)
[](https://pypi.org/project/xarray-einstats)
[](https://anaconda.org/conda-forge/xarray-einstats)
[](https://doi.org/10.5281/zenodo.5895451)
Stats, linear algebra and einops for xarray
## Installation
To install, run
```
(.venv) $ pip install xarray-einstats
```
See the docs for more [extensive install instructions](https://einstats.python.arviz.org/en/latest/installation.html).
## Overview
As stated on the xarray website:
> xarray makes working with multi-dimensional labeled arrays simple, efficient and fun!
The code is often more verbose, but it is generally because it is clearer and thus less error prone
and more intuitive.
Here are some examples of such trade-off where we believe the increased clarity is worth
the extra characters:
| numpy | xarray |
|---------|----------|
| `a[2, 5]` | `da.sel(drug="paracetamol", subject=5)` |
| `a.mean(axis=(0, 1))` | `da.mean(dim=("chain", "draw"))` |
| `a.reshape((-1, 10))` | `da.stack(sample=("chain", "draw"))` |
| `a.transpose(2, 0, 1)` | `da.transpose("drug", "chain", "draw")` |
In some other cases however, using xarray can result in overly verbose code
that often also becomes less clear. `xarray_einstats` provides wrappers
around some numpy and scipy functions (mostly `numpy.linalg` and `scipy.stats`)
and around [einops](https://einops.rocks/) with an api and features adapted to xarray.
Continue at the [getting started page](https://einstats.python.arviz.org/en/latest/getting_started.html).
## Contributing
xarray-einstats is in active development and all types of contributions are welcome!
See the [contributing guide](https://einstats.python.arviz.org/en/latest/contributing/overview.html) for details on how to contribute.
## Relevant links
* Documentation: https://einstats.python.arviz.org/en/latest/
* Contributing guide: https://einstats.python.arviz.org/en/latest/contributing/overview.html
* ArviZ project website: https://www.arviz.org
## Similar projects
Here we list some similar projects we know of. Note that all of
them are complementary and don't overlap:
* [xr-scipy](https://xr-scipy.readthedocs.io/en/latest/index.html)
* [xarray-extras](https://xarray-extras.readthedocs.io/en/latest/)
* [xhistogram](https://xhistogram.readthedocs.io/en/latest/)
* [xrft](https://xrft.readthedocs.io/en/latest/)
## Cite xarray-einstats
If you use this software, please cite it using the following template and the version
specific DOI provided by Zenodo. Click on the badge to go to the Zenodo page
and select the DOI corresponding to the version you used
[](https://doi.org/10.5281/zenodo.5895451)
* Oriol Abril-Pla. (2022). arviz-devs/xarray-einstats `<version>`. Zenodo. `<version_doi>`
or in bibtex format:
```none
@software{xarray_einstats2022,
author = {Abril-Pla, Oriol},
title = {{xarray-einstats}},
year = 2022,
url = {https://github.com/arviz-devs/xarray-einstats},
publisher = {Zenodo},
version = {<version>},
doi = {<version_doi>},
}
```
| text/markdown | null | ArviZ team <arviz.devs@gmail.com> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: Education",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python... | [] | null | null | >=3.12 | [] | [] | [] | [
"numpy>=2.0",
"scipy>=1.13",
"xarray>=2024.02.0",
"furo; extra == \"doc\"",
"myst-parser[linkify]; extra == \"doc\"",
"myst-nb; extra == \"doc\"",
"sphinx-copybutton; extra == \"doc\"",
"numpydoc; extra == \"doc\"",
"sphinx>=5; extra == \"doc\"",
"jupyter-sphinx; extra == \"doc\"",
"sphinx-desig... | [] | [] | [] | [
"documentation, https://einstats.python.arviz.org",
"funding, https://opencollective.com/arviz",
"source, https://github.com/arviz-devs/xarray-einstats",
"tracker, https://github.com/arviz-devs/xarray-einstats/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:13:55.245828 | xarray_einstats-0.10.0.tar.gz | 33,449 | 48/9b/305ee6a2dac75fc9c28105db061408df6ecbf0f7a1de37636e8e4ea47ca7/xarray_einstats-0.10.0.tar.gz | source | sdist | null | false | d3f9df44630328a2773fe5f184c6c164 | d432a363fc8f09baad164f9826dc711551c684b9abd8098c1b961d18663a627d | 489b305ee6a2dac75fc9c28105db061408df6ecbf0f7a1de37636e8e4ea47ca7 | null | [
"LICENSE"
] | 15,516 |
2.4 | geoguessr-async | 2.0.1 | An asynchronous API integrator for Geoguessr | This is a Geoguessr API client written in Python. It allows you to interact with the Geoguessr API, such as getting information about users, challenges, maps, and scores.
To install the package, run the following command:
```
pip install geoguessr-async
```
Once the package is installed, you can create a client object by passing your NCFA cookie to the constructor:
```Python
import asyncio
from geoguessr_async import Geoguessr
client = Geoguessr("your_ncfa_cookie")
```
*To get your NCFA cookie, login to geoguessr, open your dev tools (`Ctrl+Shift+I`), go to Application/Storage/Cookies and copy the value of `_ncfa`.*
You can then use the client object to get information about users, challenges, maps, and scores. For example, to get information about a user by their ID, you can use the following code:
```Python
user = await client.get_user_infos(userId)
```
The returned `Profile` object will contain information such as the user's username, country, and number of games played.
To get information about a challenge, you can use the following code:
```Python
challenge = await client.get_challenge_infos("https://geoguessr.com/challenge/xxxx")
```
The `Challenge` object will contain information such as the challenge name, description, and time limit.
To get information about a map, you can use the following code:
```Python
map = await client.get_map_infos("https://geoguessr.com/maps/xxxx")
```
The `Map` object will contain information such as the map name, size, and location.
To get information about a score, you can use the following code:
```Python
score = await client.get_challenge_score("https://geoguessr.com/challenge/xxxx")
```
Each `Score` object will contain information such as the player's name, score, and time.
*When getting the score of a challenge, the client automatically plays it, guessing (0, 0) each round.*
### Example: Get results of a challenge as a list of dictionaries
```Python
score = await client.get_challenge_score("https://geoguessr.com/challenge/xxxx")
scores = []
for s in score:
d_player = {
"userId": s.userId,
        "userName": s.playerName,
"total": s.totalScore,
"roundPoints": s.gamePlayerGuessesRoundScoreInPoints,
"roundTimes": s.gamePlayerGuessesTime
}
scores.append(d_player)
print(scores)
```
I hope you find this package useful! Please let me know if you have any questions or feedback. | text/markdown | null | Antoine BLAISE <antoine.blaise34@gmail.com> | null | null | null | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"aiohttp>=3.8.5"
] | [] | [] | [] | [
"Homepage, https://github.com/toinoublz/geoguessr_async"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-19T18:13:31.264967 | geoguessr_async-2.0.1.tar.gz | 33,242 | bd/f4/c51acf3167bd1a16e9c8a70c9db6f3880afb3f15b298698d88f3fdd14c38/geoguessr_async-2.0.1.tar.gz | source | sdist | null | false | c0ec4ebeea8252093f6b8c95648be0d8 | 82d72eb8adb03791726b53ced9b4b722e6c09eb4a9aa67bed65692325555dfbd | bdf4c51acf3167bd1a16e9c8a70c9db6f3880afb3f15b298698d88f3fdd14c38 | null | [
"LICENSE"
] | 299 |
2.4 | coldfearlngbundler | 0.0.1 | work with .lng file from Cold Fear game | # ColdFearLngBundler
Work with `.lng` files from the Cold Fear game.
## Installation
`pip install coldfearlngbundler`
## Usage
### Bundle .lng files from input directory
`coldfearlngbundler bundle [-h] [-o OUTPUT] input_dir`
### Unbundle .lng file to output directory
`coldfearlngbundler unbundle [-h] [-o OUTPUT] input_file` | text/markdown | null | null | meltnoexit | meltnoexit <xy@meltnt.org> | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"construct==2.10.70",
"pillow==12.1.1"
] | [] | [] | [] | [
"Homepage, https://github.com/meltnoexit/coldfearlngbundler",
"Issues, https://github.com/meltnoexit/coldfearlngbundler/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:13:26.224407 | coldfearlngbundler-0.0.1.tar.gz | 5,722 | ff/ee/d7259a7424267980dd1b651a9658b9e74d5d676abf393dbe2802994b12e2/coldfearlngbundler-0.0.1.tar.gz | source | sdist | null | false | 3126554fe9a9846c7b0f98e527202216 | 5961b04d8567e97a85a9d3a2130d8d13581e2b9511e2611adfcf7775b1c7b4fe | ffeed7259a7424267980dd1b651a9658b9e74d5d676abf393dbe2802994b12e2 | MIT | [
"LICENSE"
] | 244 |
2.4 | pytest-fkit | 0.9.5 | A pytest plugin that prevents crashes from killing your test suite | # pytest-fkit
**F**ix **K**rashes **I**n **T**ests - A pytest plugin that prevents crashes from killing your entire test suite.
When a test crashes Python (SIGABRT, SIGSEGV, etc.), pytest-fkit catches the crash and converts it into a normal pytest ERROR instead of letting it kill your entire test run.
**Features:**
- Parallel workers with GPU affinity
- **Sliced test distribution** (default) - tests are pre-distributed across workers for deterministic, efficient execution
- Crash isolation - each test runs in its own subprocess
- Automatic GPU error detection and retry
- Fault tolerance (workers can fail without stopping the test run)
## The Problem
When running large test suites (like HuggingFace Transformers), sometimes a test causes Python to crash with a signal like SIGABRT:
```
Fatal Python error: Aborted
Thread 0x0000799e2ea00640 (most recent call first):
File "/transformers/src/transformers/models/dots1/modeling_dots1.py", line 331 in forward
...
```
This kills pytest entirely, and all remaining tests in your suite never run.
## The Solution
pytest-fkit runs each test in an isolated subprocess. If a test crashes:
- ✅ The crash is caught and reported as a pytest ERROR
- ✅ The remaining tests continue running
- ✅ You get a full report with all test results, including which ones crashed
## Installation
```bash
cd pytest-fkit
pip install -e .
```
Or install from your test requirements:
```bash
pip install pytest-fkit
```
## Usage
### Basic Usage
Just add the `--fkit` flag to your pytest command:
```bash
pytest --fkit
```
### With Timeout
Set a timeout per test (default is 600 seconds / 10 minutes):
```bash
pytest --fkit --fkit-timeout=300 # 5 minute timeout per test
```
### Parallel Workers with Sliced Distribution
Run tests in parallel with automatic slicing:
```bash
# Auto-detect workers based on GPU count
pytest --fkit --fkit-workers=auto
# Specific number of workers
pytest --fkit --fkit-workers=4
# Control GPUs per worker (for multi-GPU tests)
pytest --fkit --fkit-workers=4 --fkit-gpus-per-worker=2
```
**Sliced Scheduling (default)**: Tests are pre-distributed across workers:
1. Tests are sorted by nodeid for reproducibility
2. Round-robin distribution: test[i] goes to worker[i % num_workers]
3. Each worker runs its slice with crash isolation (subprocess per test)
4. Workers run in parallel for maximum throughput
**Example with 4 workers and 100 tests:**
- Worker 0: tests 0, 4, 8, 12, ... (25 tests)
- Worker 1: tests 1, 5, 9, 13, ... (25 tests)
- Worker 2: tests 2, 6, 10, 14, ... (25 tests)
- Worker 3: tests 3, 7, 11, 15, ... (25 tests)
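The round-robin distribution above is easy to reproduce. This is a minimal stdlib sketch of the slicing logic (illustrative, not pytest-fkit's actual code):

```python
def slice_tests(test_ids, num_workers):
    """Distribute sorted test IDs round-robin: test[i] -> worker[i % num_workers]."""
    slices = [[] for _ in range(num_workers)]
    for i, test_id in enumerate(sorted(test_ids)):
        slices[i % num_workers].append(test_id)
    return slices

# 10 tests across 4 workers: workers 0-1 get 3 tests, workers 2-3 get 2
tests = [f"test_{n:02d}" for n in range(10)]
for worker_id, chunk in enumerate(slice_tests(tests, 4)):
    print(f"Worker {worker_id}: {chunk}")
```

Because the input is sorted before slicing, the same test always lands on the same worker for a given worker count, which is what makes batch mode reproducible.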
### Execution Modes
```bash
# Batch mode (default) - pre-sliced, deterministic distribution
pytest --fkit --fkit-workers=4 --fkit-mode=batch
# Isolate mode - dynamic queue, on-demand assignment
pytest --fkit --fkit-workers=4 --fkit-mode=isolate
```
| Mode | Description | Best For |
|------|-------------|----------|
| `batch` | Tests pre-sliced to workers | Most use cases, reproducible |
| `isolate` | Dynamic work queue | Highly variable test durations |
### GPU Allocation Examples
**8 GPUs with multi-GPU tests (need 2 GPUs each):**
```bash
pytest --fkit --fkit-workers=4 --fkit-gpus-per-worker=2
# Worker 0: GPU 0,1
# Worker 1: GPU 2,3
# Worker 2: GPU 4,5
# Worker 3: GPU 6,7
```
**8 GPUs with single-GPU tests:**
```bash
pytest --fkit --fkit-workers=8 --fkit-gpus-per-worker=1
# Worker 0: GPU 0
# Worker 1: GPU 1
# ...
# Worker 7: GPU 7
```
### Crash Isolation
Each test runs in its own subprocess, so crashes are contained:
1. **Crash Detection**: SIGABRT, SIGSEGV, and other signals are caught
2. **Error Conversion**: Crashes are converted to pytest ERROR results
3. **Suite Continuation**: Remaining tests continue running on the worker
4. **Full Results**: You get a complete report even if some tests crash
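The core mechanism behind these four steps is a child process plus exit-status inspection. A minimal stdlib sketch with POSIX semantics (the reporting format is illustrative, not pytest-fkit's real implementation):

```python
import signal
import subprocess
import sys

def run_isolated(code, timeout=600):
    """Run `code` in a subprocess; map a fatal signal to an ERROR result."""
    proc = subprocess.run([sys.executable, "-c", code], timeout=timeout)
    if proc.returncode < 0:  # negative returncode = killed by a signal (POSIX)
        sig = signal.Signals(-proc.returncode).name
        return ("ERROR", f"crashed with {sig}")
    return ("PASSED", "") if proc.returncode == 0 else ("FAILED", "")

# A "test" that aborts the interpreter, as a buggy GPU driver or C extension might
print(run_isolated("import os; os.abort()"))  # ('ERROR', 'crashed with SIGABRT')
print(run_isolated("print('ok')"))            # ('PASSED', '')
```

Because the crash happens in the child, the parent process survives to record the result and move on to the next test.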
**Example scenario:**
```
Worker 0 (GPU 0,1): test_bert PASSED → test_llama PASSED → test_crash 💥 CRASH → test_gpt2 PASSED
Worker 1 (GPU 2,3): test_vit PASSED → test_whisper PASSED → test_t5 PASSED
Worker 2 (GPU 4,5): test_clip PASSED → test_blip PASSED → test_stable PASSED
Worker 3 (GPU 6,7): test_sam PASSED → test_dino PASSED → test_mae PASSED
# Crash on Worker 0 is isolated - other tests continue
# Final report shows 1 crash, 11 passed
```
### Skip Crash Isolation for Specific Tests
If you have tests that don't play well with subprocess isolation, mark them:
```python
import pytest
@pytest.mark.fkit_skip
def test_something_special():
# This test will run normally without subprocess isolation
pass
```
### Mark GPU Requirements
Mark tests' GPU requirements, currently for documentation purposes (in future: optimal GPU scheduling):
```python
import pytest
@pytest.mark.fkit_multi_gpu
def test_distributed_training():
# This test needs multiple GPUs
pass
@pytest.mark.fkit_single_gpu
def test_simple_forward():
# This test needs only one GPU
pass
```
## How It Works
### Architecture (Batch Mode - Default)
```
┌─────────────────────────────────────────────┐
│ Test Collection (sorted) │
│ [test0, test1, test2, test3, test4, ...] │
└────────────────────┬────────────────────────┘
│
Round-Robin Slicing
│
┌─────────────────────────┼─────────────────────────┐
│ │ │
▼ ▼ ▼
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ Worker 0 │ │ Worker 1 │ │ Worker 2 │
│ GPU 0,1 │ │ GPU 2,3 │ │ GPU 4,5 │
├───────────────┤ ├───────────────┤ ├───────────────┤
│ Slice: │ │ Slice: │ │ Slice: │
│ test0 │ │ test1 │ │ test2 │
│ test3 │ │ test4 │ │ test5 │
│ test6 │ │ test7 │ │ test8 │
│ ... │ │ ... │ │ ... │
└───────┬───────┘ └───────┬───────┘ └───────┬───────┘
│ │ │
▼ ▼ ▼
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ Subprocess │ │ Subprocess │ │ Subprocess │
│ per test │ │ per test │ │ per test │
│ (isolated) │ │ (isolated) │ │ (isolated) │
└───────────────┘ └───────────────┘ └───────────────┘
```
### Flow
1. **GPU Detection**: Automatically detects AMD (ROCm) or NVIDIA GPUs
2. **Worker Creation**: Creates N worker threads, each with dedicated GPUs
3. **Test Slicing**: Tests sorted and distributed via round-robin
4. **Parallel Execution**: Each worker runs its slice independently
5. **Subprocess Isolation**: Each test runs in its own subprocess (crash protection)
6. **Result Reporting**: Results stream back to pytest as tests complete
## Example Output
```
🚀 pytest-fkit: 4 workers, 8 AMD GPUs, 2 GPU(s)/worker
GPU allocations: ['0,1', '2,3', '4,5', '6,7']
Mode: batch - sliced scheduling (tests pre-distributed to workers)
🔄 Running 1000 tests across 4 workers (sliced scheduling - each worker gets 1/4 of tests)...
📊 Test distribution across 4 workers:
Worker 0: 250 tests
Worker 1: 250 tests
Worker 2: 250 tests
Worker 3: 250 tests
Worker 0 (GPUs: 0,1): 250 tests
Worker 1 (GPUs: 2,3): 250 tests
Worker 2 (GPUs: 4,5): 250 tests
Worker 3 (GPUs: 6,7): 250 tests
tests/models/bert/test_modeling_bert.py::BertModelTest::test_forward PASSED
tests/models/llama/test_modeling_llama.py::LlamaModelTest::test_forward PASSED
tests/models/whisper/test_modeling_whisper.py::WhisperModelTest::test_forward PASSED
======================================================================
✅ Completed 1000 tests
Passed: 950, Failed: 45, Skipped: 5
💥 Crashes: 2
======================================================================
=============== pytest-fkit summary ===============
💥 2 test(s) CRASHED (converted to ERROR by pytest-fkit):
- tests/models/dots1/test_modeling_dots1.py::Dots1ModelTest::test_model_15b
✅ pytest-fkit prevented 2 crashes from killing your test suite!
```
## Command Line Options
| Option | Default | Description |
|--------|---------|-------------|
| `--fkit` | `False` | Enable crash isolation |
| `--fkit-timeout` | `600` | Timeout per test in seconds |
| `--fkit-workers` | `1` | Number of parallel workers (`auto` for GPU-based) |
| `--fkit-gpus-per-worker` | `2` | GPUs assigned to each worker |
| `--fkit-mode` | `batch` | `batch` (pre-sliced) or `isolate` (dynamic queue) |
| `--fkit-threads-per-worker` | `auto` | CPU threads per worker (`auto` = cores/workers) |
| `--fkit-max-retries` | `3` | Max retries for transient errors |
## Environment Variables Set Per Worker
| Variable | Description |
|----------|-------------|
| `CUDA_VISIBLE_DEVICES` | GPU IDs for NVIDIA / compatibility |
| `HIP_VISIBLE_DEVICES` | GPU IDs for AMD ROCm (0-based within ROCR set) |
| `ROCR_VISIBLE_DEVICES` | Physical GPU IDs for AMD ROCm runtime |
| `FKIT_WORKER_ID` | Worker index (0, 1, 2, ...) |
| `FKIT_GPU_IDS` | Assigned physical GPU IDs string |
| `MASTER_PORT` | Per-worker NCCL port (29500 + worker_id) |
| `MASTER_ADDR` | NCCL address (127.0.0.1) |
| `NCCL_ASYNC_ERROR_HANDLING` | Enabled (prevents NCCL hangs) |
| `NCCL_SOCKET_IFNAME` | Loopback interface (avoids NIC issues) |
| `OMP_NUM_THREADS` | CPU threads per worker |
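Every value in the table above can be derived from the worker index. A sketch of that derivation, using the variable names and port base from the table (the exact construction is an assumption, not pytest-fkit's code):

```python
def worker_env(worker_id, gpus_per_worker=2):
    """Compute a per-worker environment from the worker index."""
    first = worker_id * gpus_per_worker
    gpu_ids = ",".join(str(g) for g in range(first, first + gpus_per_worker))
    return {
        "CUDA_VISIBLE_DEVICES": gpu_ids,        # NVIDIA / compatibility
        "ROCR_VISIBLE_DEVICES": gpu_ids,        # physical IDs for ROCm
        "FKIT_WORKER_ID": str(worker_id),
        "FKIT_GPU_IDS": gpu_ids,
        "MASTER_PORT": str(29500 + worker_id),  # unique NCCL port per worker
        "MASTER_ADDR": "127.0.0.1",
    }

print(worker_env(3))  # worker 3 gets GPUs "6,7" and MASTER_PORT 29503
```

Giving each worker its own `MASTER_PORT` is what lets several distributed tests initialize NCCL concurrently on one host without colliding.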
## Crash Recovery
After a test crash (SIGABRT, SIGSEGV, etc.):
1. **5s cooldown** for GPU driver to reclaim resources
2. **GPU health probe** - spawns subprocess to allocate a tensor and sync
3. If probe fails, **10s extended cooldown** + second probe
4. If still unhealthy, **worker disabled** and remaining tests redistributed to healthy workers
5. If healthy, continue with next test
This prevents the cascade where one crash leaves the GPU unusable and all subsequent tests on that worker fail with "No HIP GPUs are available".
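The recovery sequence is essentially probe-with-escalating-cooldown. A compact sketch with a stubbed probe (the real probe allocates a tensor in a subprocess; this stand-in is an assumption):

```python
import time

def recover_gpu(probe, cooldowns=(5, 10)):
    """Cooldown, probe, extend cooldown, probe again; report worker health."""
    for delay in cooldowns:
        time.sleep(delay)   # let the GPU driver reclaim resources
        if probe():
            return True     # healthy: continue with the next test
    return False            # still unhealthy: disable worker, redistribute tests

# Stub probe that fails once then succeeds, mimicking a recovering driver
calls = {"n": 0}
def flaky_probe():
    calls["n"] += 1
    return calls["n"] >= 2

print(recover_gpu(flaky_probe, cooldowns=(0, 0)))  # True after the second probe
```

Returning `False` is the trigger for the redistribution step: the disabled worker's remaining slice is handed to the workers whose probes still pass.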
## GPU Error Patterns Detected
The following error patterns trigger automatic retry:
- `No HIP GPUs are available` / `No CUDA GPUs are available`
- `CUDA out of memory` / `hipErrorOutOfMemory` / `HIP out of memory`
- `hipErrorNoDevice` / `cudaErrorNoDevice`
- `NCCL Error 2: unhandled system error` / `NCCL error`
- Network/DNS errors (DNS resolution, connection refused, timeouts)
- HuggingFace Hub HTTP errors (502, 503, 504)
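Matching these patterns amounts to a regex scan over the captured test output. A sketch with the patterns paraphrased from the list above (the exact expressions pytest-fkit uses are not shown here):

```python
import re

# Paraphrased from the documented error list; not the plugin's literal patterns
GPU_ERROR_PATTERNS = [
    r"No (HIP|CUDA) GPUs are available",
    r"(CUDA|HIP) out of memory|hipErrorOutOfMemory",
    r"hipErrorNoDevice|cudaErrorNoDevice",
    r"NCCL [Ee]rror",
    r"50[234] Server Error",
]

def is_transient_gpu_error(text):
    """True if the output matches a known transient GPU/network failure."""
    return any(re.search(p, text) for p in GPU_ERROR_PATTERNS)

print(is_transient_gpu_error("RuntimeError: No HIP GPUs are available"))  # True
print(is_transient_gpu_error("AssertionError: wrong logits"))             # False
```

Only matches trigger a retry; ordinary assertion failures fall through and are reported as regular test failures.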
## Performance Considerations
- **Overhead**: ~100-500ms per test for subprocess spawning
- **Parallelism**: N workers = ~N× throughput (minus overhead)
- **GPU Memory**: Each worker has dedicated GPUs - no memory contention
- **Deterministic**: Same test distribution every run (batch mode)
- **Crash Isolation**: One crash doesn't affect other tests
### Recommended Configurations
| Scenario | Workers | GPUs/Worker | Mode | Command |
|----------|---------|-------------|------|---------|
| 8 GPUs, multi-GPU tests | 4 | 2 | batch | `--fkit-workers=4 --fkit-gpus-per-worker=2` |
| 8 GPUs, single-GPU tests | 8 | 1 | batch | `--fkit-workers=8 --fkit-gpus-per-worker=1` |
| 4 GPUs, mixed tests | 2 | 2 | batch | `--fkit-workers=2 --fkit-gpus-per-worker=2` |
| No GPUs (CPU tests) | auto | - | batch | `--fkit-workers=auto` |
| Highly variable durations | 4 | 2 | isolate | `--fkit-workers=4 --fkit-mode=isolate` |
## Configuration File
Enable pytest-fkit in `pytest.ini` or `pyproject.toml`:
```ini
# pytest.ini
[pytest]
addopts = --fkit --fkit-timeout=600 --fkit-workers=auto
```
```toml
# pyproject.toml
[tool.pytest.ini_options]
addopts = ["--fkit", "--fkit-timeout=600", "--fkit-workers=auto"]
```
## Comparison with pytest-xdist
| Feature | pytest-fkit | pytest-xdist |
|---------|-------------|--------------|
| Crash isolation | ✅ Yes (per-test subprocess) | ❌ No |
| GPU affinity | ✅ Yes (automatic) | ❌ Manual |
| Parallel execution | ✅ Yes | ✅ Yes |
| Sliced scheduling | ✅ Yes (round-robin) | ✅ Yes (load-based) |
| GPU error retry | ✅ Yes (isolate mode) | ❌ No |
| Worker fault tolerance | ✅ Yes | ⚠️ Limited |
| Memory isolation | ✅ Per-test | ⚠️ Per-worker |
| Reproducible distribution | ✅ Yes (deterministic) | ⚠️ Varies |
| Overhead | Higher | Lower |
**Use pytest-fkit when:**
- Tests can crash Python (GPU drivers, C extensions)
- You need automatic GPU affinity
- You need per-test isolation
- GPU availability is unreliable
- You want automatic retry on GPU errors
**Use pytest-xdist when:**
- Tests are stable (no crashes)
- You need minimal overhead
- Tests don't use GPUs
## Compatibility
- Python 3.8+
- pytest 6.0+
- Linux, macOS (Windows support TBD)
- AMD ROCm GPUs (detected via `rocm-smi`)
- NVIDIA GPUs (detected via `nvidia-smi`)
## License
MIT
| text/markdown | Cemberk | null | null | null | MIT | pytest, testing, crash, isolation, subprocess | [
"Framework :: Pytest",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: MIT ... | [] | null | null | >=3.8 | [] | [] | [] | [
"pytest>=6.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/Cemberk/pytest-fkit",
"Repository, https://github.com/Cemberk/pytest-fkit",
"Issues, https://github.com/Cemberk/pytest-fkit/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T18:12:18.767654 | pytest_fkit-0.9.5.tar.gz | 53,188 | df/6f/55b6f0d1ac58755aacd791466128b611f52a0b338e0172383f21bc9e4c62/pytest_fkit-0.9.5.tar.gz | source | sdist | null | false | 23b55bdfd540f7bf9b5500dd14d4651d | 56eb86e8c13bb3c6018052349dcb114fcdcf094a24cbbf7628c20d07d0eae05d | df6f55b6f0d1ac58755aacd791466128b611f52a0b338e0172383f21bc9e4c62 | null | [
"LICENSE"
] | 237 |
2.4 | bollard | 0.1.2 | A Pythonic client for the Docker/Podman Engine API. | # Bollard
A Pythonic, zero-dependency client for the Docker and Podman Engine APIs.
Prioritizes descriptive naming, context managers, and cross-platform ease of use.
## Installation
```bash
pip install bollard
```
## Project Structure
For a detailed overview of the project's architecture, see [ARCHITECTURE.md](ARCHITECTURE.md).
## Key Features
- **Pythonic API**: `list_containers` instead of `ps`; `remove_image` instead of `rmi`.
- **Zero Dependencies**: Uses only the Python standard library (`http.client`, `socket`, `json`).
- **Smart Connection**: Auto-detects Docker/Podman sockets (Unix, Windows Pipes, `DOCKER_HOST`).
- **Windows Friendly**: Auto-starts the Podman machine on Windows if connection fails.
- **Resource Safety**: Context managers for client connections and ephemeral containers.
- **Streaming Output**: Real-time progress updates for long-running operations like pull and build.
- **.dockerignore Support**: Respects `.dockerignore` files when building images.
- **Full Lifecycle**: Manage Containers, Images, Networks, and Volumes.
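The "Smart Connection" behaviour can be approximated with a few stdlib checks. This is a hedged sketch of a plausible detection order (the candidate paths and precedence are assumptions, not bollard's actual logic):

```python
import os

def detect_endpoint():
    """Return a likely Docker/Podman endpoint: DOCKER_HOST wins, then known sockets."""
    host = os.environ.get("DOCKER_HOST")
    if host:
        return host
    uid = os.getuid() if hasattr(os, "getuid") else 0
    candidates = [
        "/var/run/docker.sock",                 # Docker on Linux/macOS
        f"/run/user/{uid}/podman/podman.sock",  # rootless Podman
    ]
    for path in candidates:
        if os.path.exists(path):
            return f"unix://{path}"
    return None  # caller may fall back to e.g. starting a Podman machine

print(detect_endpoint())
```

Letting `DOCKER_HOST` take precedence keeps the client compatible with remote daemons and CI environments that already export that variable.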
## Usage
### Basic Connection
```python
from bollard import DockerClient
with DockerClient() as client:
for image in client.list_images():
print(f"Image: {image.tags[0]}")
```
### Managing Containers
```python
with DockerClient() as client:
# Run a container and get logs
container = client.run_container("alpine:latest", command="echo 'Hello World'")
print(container.logs())
# Stop and remove
container.stop()
container.remove(force=True)
```
### Ephemeral Containers (Auto-Cleanup)
Use `with client.container(...)` to automatically remove the container after the block exits, even if errors occur.
```python
with DockerClient().container("alpine") as container:
container.exec(["echo", "Running inside container"])
# Container is automatically removed
```
### Streaming Image Operations
Methods `pull_image`, `build_image`, and `push_image`, when called with `progress=True`, return a generator that yields progress updates.
```python
with DockerClient() as client:
# Pull an image with progress
for progress in client.pull_image("alpine:latest", progress=True):
if "status" in progress:
print(f"{progress['status']} {progress.get('progress', '')}")
# Build from directory
for log in client.build_image(".", "my-app:latest", progress=True):
if "stream" in log:
print(log["stream"], end="")
```
### Managing Networks & Volumes
Create and manage Docker networks and volumes using Resource objects.
```python
with DockerClient() as client:
# Networks
net = client.create_network("my-net", driver="bridge")
print(net.id)
net.remove()
# Volumes
vol = client.create_volume("my-data")
print(vol.name)
vol.remove()
```
### File Operations
Copy files and directories in and out of containers directly from the `Container` object.
```python
with DockerClient() as client:
with client.container("alpine:latest", command="sleep 60") as container:
# Copy host -> container
container.copy_to("local_data/", "/dest/path/")
# Copy container -> host
container.copy_from("/src/path/data.txt", "local_output/")
```
### Real-world Example: Stirling-PDF Conversion
This example demonstrates running a Stirling-PDF container to convert a "Hello World" HTML file into a PDF and then retrieving it.
```python
from bollard import DockerClient
from time import sleep
env = {"SECURITY_ENABLE_LOGIN": "false"}
with DockerClient().container("frooodle/s-pdf", environment=env) as container:
# Wait for the container to be ready
res = ""
while "HTTP" not in res:
res = container.exec("curl -s -I http://localhost:8080")
sleep(1)
# Convert the HTML to PDF
container.exec("sh -c 'echo \"<h1>Test PDF</h1>\" > /test.html'")
result = container.exec(
'curl -s -w "%{http_code}" '
"-F 'fileInput=@/test.html' "
"http://localhost:8080/api/v1/convert/html/pdf "
"-o /test.pdf"
)
# Copy the PDF to the host
container.copy_from("/test.pdf", ".")
```
### Kubernetes YAML Support
Execute Kubernetes YAML files directly using Podman's native `play kube` feature.
```python
with DockerClient() as client:
# Requires a valid Kubernetes YAML file (Pod, Deployment, etc.)
result = client.play_kube("pod.yaml")
# Returns the JSON response from Podman describing created resources
print("Created Pods:", result.get("Pods"))
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.14 | [] | [] | [] | [] | [] | [] | [] | [
"Repository, https://github.com/Julynx/bollard"
] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":null,"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T18:11:24.724651 | bollard-0.1.2.tar.gz | 40,311 | 5a/a8/d3b375d396bedae2504bae4dccbbdcfcd49bde093aacb13baec945c7d975/bollard-0.1.2.tar.gz | source | sdist | null | false | dc5e03dcf7937878b8371bf9b20b2dfa | bafdda1f9d87a5724f837ae0e3c36fc2742de5f440fca71f12222ab52e600a49 | 5aa8d3b375d396bedae2504bae4dccbbdcfcd49bde093aacb13baec945c7d975 | null | [] | 249 |
2.4 | pyspiral | 0.10.6 | Python client for Spiral. | # PySpiral
| text/markdown; charset=UTF-8; variant=GFM | null | SpiralDB <hello@spiraldb.com> | null | null | Proprietary License | null | [
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Langua... | [] | https://spiraldb.com | null | >=3.11 | [] | [] | [] | [
"betterproto2>=0.9.0",
"google-re2>=1.1.20240702",
"grpclib>=0.4.7",
"httpx>=0.27.0",
"pyarrow>=21.0.0",
"pydantic[email]<2.13,>=2.12.4",
"pyjwt[crypto]>=2.9.0",
"pyperclip>=1.9.0",
"questionary>=2.0.1",
"typer>=0.16",
"zstandard>=0.23.0",
"orjson>=3.10.0",
"substrait<0.27",
"dask==2025.12... | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T18:10:58.447945 | pyspiral-0.10.6-cp311-abi3-macosx_11_0_arm64.whl | 28,738,445 | 8b/95/5f5ba58ffab13a11207388b1daf081afa6429c9df3012d6435bf33df8e7c/pyspiral-0.10.6-cp311-abi3-macosx_11_0_arm64.whl | cp311 | bdist_wheel | null | false | 4730779b4d6458066d66773f83a7625a | 06fc47d7e4ffc7d1e07468fea614b5d6bdb318f15018fce3dea62d31ffe8bf21 | 8b955f5ba58ffab13a11207388b1daf081afa6429c9df3012d6435bf33df8e7c | null | [] | 367 |
2.4 | spectre-core | 3.1.0 | The core Python package used by the spectre program. | # spectre-core
## Description
Contains server-side implementations for [_Spectre_](https://github.com/jcfitzpatrick12/spectre.git).
**⚠️ Note:**
This repository is not intended for direct consumption.
## Contributing
If you'd like to raise an issue, or want to make a change, please refer to the _Contributing_ section in the [README](https://github.com/jcfitzpatrick12/spectre/blob/main/README.md) for _Spectre_.
| text/markdown | null | null | null | Jimmy Fitzpatrick <jcfitzpatrick12@gmail.com> | GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
| null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: POSIX :: Linux"
] | [] | null | null | >=3.10.2 | [] | [] | [] | [
"numpy==1.24.0",
"pyfftw==0.15.0",
"astropy==6.0.1",
"pydantic==2.12.3",
"matplotlib==3.5.0",
"watchdog==4.0.0",
"pytest; extra == \"test\"",
"mypy; extra == \"test\"",
"black; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/jcfitzpatrick12/spectre-core",
"Issues, https://github.com/jcfitzpatrick12/spectre-core/issues"
] | twine/6.2.0 CPython/3.10.12 | 2026-02-19T18:09:56.850114 | spectre_core-3.1.0.tar.gz | 99,204 | 22/0d/4b33b438fba3ffdf0ee285bb3ed4f62fc9cdb9e44ae160226ddd71ac6c18/spectre_core-3.1.0.tar.gz | source | sdist | null | false | 61810fdeb9b542dcad244241e8aeb032 | 9b2e854ec1b90612f34873af932454014512d2816ed7f965a9fd96806600eaa8 | 220d4b33b438fba3ffdf0ee285bb3ed4f62fc9cdb9e44ae160226ddd71ac6c18 | null | [
"LICENSE"
] | 241 |
2.4 | vericorp-company-verify | 2.0.0 | Python SDK for the VeriCorp API — European company verification | # vericorp-company-verify
Python SDK for the [VeriCorp Company Verify API](https://rapidapi.com/vericorptestcollab/api/vericorp) — European company verification.
## Install
```bash
pip install vericorp-company-verify
```
## Quick Start
```python
from vericorp_company_verify import VeriCorp
client = VeriCorp("your-rapidapi-key")
# Look up a company
company = client.lookup("PT502011378")
print(company.name) # UNIVERSIDADE DO MINHO
print(company.address) # Address(street='LG DO PACO', city='BRAGA', ...)
# Validate a VAT number
result = client.validate("DE811871080")
print(result.vat_valid) # True
# List supported countries
countries = client.countries()
print(countries.total) # 29
```
## Async
```python
from vericorp_company_verify import AsyncVeriCorp
async with AsyncVeriCorp("your-rapidapi-key") as client:
    company = await client.lookup("DK10150817")
    print(company.name)
```
## Methods
| Method | Description |
|--------|-------------|
| `lookup(tax_id)` | Look up company by tax ID |
| `lookup_gb(company_number)` | Look up UK company by number |
| `validate(tax_id)` | Validate a VAT number |
| `batch(tax_ids)` | Batch lookup (max 10) |
| `countries()` | List supported countries |
| `health()` | API health check |
## Error Handling
```python
from vericorp_company_verify.errors import InvalidTaxIdError, NotFoundError, RateLimitError
try:
    company = client.lookup("INVALID")
except InvalidTaxIdError:
    print("Bad tax ID format")
except NotFoundError:
    print("Company not found")
except RateLimitError as e:
    print(f"Rate limited, retry after {e.retry_after}s")
```
## License
MIT
| text/markdown | VeriCorp | null | null | null | null | api, company, europe, tax-id, validation, vat, vies | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming L... | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.24",
"pydantic>=2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/vericorptest-collab/vericorp-python",
"Documentation, https://rapidapi.com/vericorptestcollab/api/vericorp"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-19T18:09:44.738390 | vericorp_company_verify-2.0.0.tar.gz | 5,370 | 5d/72/0f199841fe741b66d29fbf3b137831a0d9927eb76d80ffff3fe1c8db72cb/vericorp_company_verify-2.0.0.tar.gz | source | sdist | null | false | 4737fc71e3cdcd7304f08d003b9d643b | d53cd70aa0cf65b9558571ab6eaa0347e96efa2b9ce244032c7dbfe5af760afc | 5d720f199841fe741b66d29fbf3b137831a0d9927eb76d80ffff3fe1c8db72cb | MIT | [] | 238 |
2.4 | pulumi-kubernetes-coredns | 0.2.0a1771522639 | Strongly-typed CoreDNS installation | # Pulumi Kubernetes CoreDNS Component
This repo contains the Pulumi CoreDNS component for Kubernetes. CoreDNS is a fast and flexible
DNS server, providing DNS services to your cluster.
This component wraps [the official CoreDNS Helm Chart](https://github.com/coredns/helm),
and offers a Pulumi-friendly and strongly-typed way to manage CoreDNS installations.
For examples of usage, see [the official documentation](https://coredns.io/),
or refer to [the examples](/examples) in this repo.
## To Use
To use this component, first install the Pulumi package, then import the library and instantiate it within your Pulumi program.
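The install and usage snippets are not included in the packaged README. A minimal sketch follows, installed with `pip install pulumi-kubernetes-coredns`; note that the module alias, the `CoreDNS` class name, and the `helm_options` argument are assumptions based on common Pulumi component conventions, not confirmed API:

```python
"""Sketch only: resource/class names below are assumed, not confirmed."""
import pulumi
import pulumi_kubernetes_coredns as coredns

# Instantiate CoreDNS with the chart's defaults. The (hypothetical)
# `helm_options` argument would override chart name/repo if needed,
# mirroring the `helmOptions` parameter described below.
dns = coredns.CoreDNS("coredns")

# Export the resource URN so `pulumi up` shows what was created.
pulumi.export("coredns_urn", dns.urn)
```

This declares infrastructure rather than computing a result, so it only takes effect when run under the Pulumi engine (`pulumi up`).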
## Configuration
This component supports all of the configuration options of the [official Helm chart](
https://github.com/coredns/helm#configuration), except that these
are strongly typed so you will get IDE support and static error checking.
The Helm deployment uses reasonable defaults, including the chart name and repo URL; however, if you need to override them, you may do so using the `helmOptions` parameter. Refer to
[the API docs for the `kubernetes:helm/v3:Release` Pulumi type](
https://www.pulumi.com/docs/reference/pkg/kubernetes/helm/v3/release/#inputs) for a full set of choices.
For complete details, refer to the Pulumi Package details within the Pulumi Registry.
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, kubernetes, coredns, kind/component, category/infrastructure | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.0.0",
"pulumi-kubernetes<5.0.0,>=4.19.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-kubernetes-coredns"
] | twine/5.0.0 CPython/3.11.8 | 2026-02-19T18:09:37.019668 | pulumi_kubernetes_coredns-0.2.0a1771522639.tar.gz | 21,607 | b3/ad/2e3caa84f884a471b056d0ec11b46a80477825678cc2ec4a7c465c6a1d11/pulumi_kubernetes_coredns-0.2.0a1771522639.tar.gz | source | sdist | null | false | f746787d346d0e5b6b4b9b0d81eb8e3a | d1ef2c5cbb0ce07a5129b8a40e9ffc50b589d9f98cd8a876a9486ba4b5b583e6 | b3ad2e3caa84f884a471b056d0ec11b46a80477825678cc2ec4a7c465c6a1d11 | null | [] | 219 |
2.4 | scpn-fusion | 3.5.0 | SCPN Fusion Core - Advanced Plasma Physics & Control Suite | # SCPN Fusion Core
<p align="center">
<img src="docs/assets/repo_header.png" alt="SCPN Fusion Core — Neuro-Symbolic Tokamak Control">
</p>
[](https://github.com/anulum/scpn-fusion-core/actions/workflows/ci.yml) [](https://github.com/anulum/scpn-fusion-core/actions/workflows/docs.yml) [](https://codecov.io/gh/anulum/scpn-fusion-core) [](https://anulum.github.io/scpn-fusion-core/) [](https://pypi.org/project/scpn-fusion/) [](https://zenodo.org/) [](https://arxiv.org/) [](LICENSE)    
A **neuro-symbolic control framework for tokamak fusion reactors** with
physics-informed surrogate models and optional Rust acceleration. SCPN
Fusion Core compiles plasma control logic — expressed as stochastic Petri
nets — into spiking neural network controllers that run at sub-millisecond
latency, backed by a Grad-Shafranov equilibrium solver, VMEC 3D equilibrium
interface, 2D MPI domain decomposition, 1.5D radial transport, BOUT++
coupling, and AI surrogates for turbulence, disruption prediction, and
real-time digital twins.
**What makes it different:** Most fusion codes are physics-first (solve
equations, then bolt on control). SCPN Fusion Core is **control-first** —
it provides a contract-checked neuro-symbolic compilation pipeline where
plasma control policies are expressed as Petri nets, compiled to stochastic
LIF neurons, and executed against physics-informed plant models. The physics
modules are deliberately reduced-order (not gyrokinetic) to enable
real-time control loop closure at 1 kHz+ rates.
> **Honest scope:** This is not a replacement for TRANSP, JINTRAC, or GENE.
> It does not solve 5D gyrokinetics or full 3D MHD. It is a
> **control-algorithm development and surrogate-modeling framework** with
> enough physics fidelity to validate reactor control strategies against
> real equilibrium data (8 SPARC EFIT GEQDSKs, 100+ multi-machine
> synthetic equilibria, 20-shot ITPA H-mode confinement database, 16
> DIII-D reference disruption shots). Validated against IPB98(y,2)
> confinement scaling with 28.6% full-physics relative RMSE
> (13.5% neural-surrogate fit lane) and >60% disruption prevention rate on
> 10-shot reference replay. Physics hardened in v3.1.0: Greenwald density
> limit, 25 keV temperature cap, Q <= 15 ceiling, TBR corrected to
> [1.0, 1.4] range (Fischer/DEMO), per-timestep energy conservation
> enforcement.
## Design Philosophy
| Principle | Implementation |
|-----------|---------------|
| **Control-first** | Petri net → SNN compilation pipeline is the core innovation, not an add-on |
| **Graceful degradation** | Every module works without Rust, without SC-NeuroCore, without GPU |
| **Explicit over silent** | 263 hardening tasks replaced silent clamping/coercion with explicit errors |
| **Formal safety interlocks** | Inhibitor-arc safety net disables control transitions on hard-limit violations |
| **Real data validation** | 8 SPARC EFIT + 100 multi-machine GEQDSKs + 20-shot ITPA database + 10 disruption shots |
| **Reduced-order by design** | Physics models are fast enough for real-time control (ms, not hours) |
## Architecture
```
scpn-fusion-core/
├── src/scpn_fusion/ # Python package (46 modules)
│ ├── core/ # Plasma physics engines
│ │ ├── fusion_kernel.py Grad-Shafranov + transport solver
│ │ ├── compact_reactor_optimizer MVR-0.96 compact reactor search
│ │ ├── mhd_sawtooth.py MHD sawtooth crash simulator
│ │ ├── rf_heating.py ICRH/ECRH/LHCD heating models
│ │ ├── divertor_thermal_sim.py Divertor heat-flux solver
│ │ ├── hall_mhd_discovery.py Hall-MHD two-fluid effects
│ │ ├── sandpile_fusion_reactor Legacy SOC research lane (not in validated transport path)
│ │ ├── neural_equilibrium.py Neural-network equilibrium solver
│ │ ├── fno_turbulence_suppressor Fourier Neural Operator turbulence model
│ │ ├── turbulence_oracle.py ITG/TEM turbulence predictor
│ │ ├── wdm_engine.py Warm dense matter EOS
│ │ ├── geometry_3d.py 3D flux-surface geometry
│ │ ├── global_design_scanner.py Multi-objective design space explorer
│ │ └── integrated_transport Coupled transport solver
│ ├── control/ # Reactor control & AI
│ │ ├── tokamak_flight_sim.py Real-time flight simulator
│ │ ├── tokamak_digital_twin.py Digital twin with live telemetry
│ │ ├── fusion_optimal_control Model-predictive controller
│ │ ├── fusion_sota_mpc.py State-of-the-art MPC
│ │ ├── disruption_predictor.py ML disruption early-warning
│ │ ├── spi_mitigation.py Shattered pellet injection
│ │ ├── fusion_control_room.py Integrated control room sim
│ │ ├── neuro_cybernetic_controller SNN-based feedback controller
│ │ └── advanced_soc_fusion_learning Legacy SOC RL utilities
│ ├── nuclear/ # Nuclear engineering
│ │ ├── blanket_neutronics.py Tritium breeding ratio solver
│ │ ├── nuclear_wall_interaction PMI / first-wall damage
│ │ ├── pwi_erosion.py Plasma-wall erosion model
│ │ └── temhd_peltier.py Thermoelectric MHD effects
│ ├── diagnostics/ # Synthetic diagnostics
│ │ ├── synthetic_sensors.py Virtual instrument suite
│ │ └── tomography.py Soft X-ray tomographic inversion
│ ├── engineering/ # Balance of plant
│ │ └── balance_of_plant.py Thermal cycle, turbine, cryo
│ ├── scpn/ # Neuro-symbolic compiler
│ │ ├── compiler.py Petri nets → stochastic neurons
│ │ ├── controller.py SNN-driven plasma control
│ │ ├── structure.py Petri net data structures
│ │ ├── contracts.py Formal verification contracts
│ │ ├── safety_interlocks.py Inhibitor-arc safety interlock runtime
│ │ └── artifact.py Compilation artifact storage
│ ├── hpc/ # High-performance computing
│ │ └── hpc_bridge.py C++/Rust FFI bridge
│ └── ui/ # Dashboard
│ └── app.py Streamlit real-time dashboard
├── scpn-fusion-rs/ # Rust workspace (11 crates)
│ ├── crates/
│ │ ├── fusion-types/ # Shared data types
│ │ ├── fusion-math/ # Linear algebra, FFT, interpolation
│ │ ├── fusion-core/ # Grad-Shafranov, transport in Rust
│ │ ├── fusion-physics/ # MHD, heating, turbulence
│ │ ├── fusion-nuclear/ # Neutronics, wall erosion
│ │ ├── fusion-engineering/ # Balance of plant
│ │ ├── fusion-control/ # PID, MPC, disruption predictor
│ │ ├── fusion-diagnostics/ # Sensor models
│ │ ├── fusion-ml/ # Inference engine
│ │ ├── fusion-gpu/ # GPU abstraction layer
│ │ └── fusion-python/ # PyO3 bindings → scpn_fusion_rs.pyd
│ └── Cargo.toml # Workspace manifest
├── tests/ # Python test suite
├── docs/ # Technical documentation
├── validation/ # ITER validation configurations
├── calibration/ # Optimization tools
└── schemas/ # JSON schemas
```
## Quick Start
```bash
# Clone
git clone https://github.com/anulum/scpn-fusion-core.git
cd scpn-fusion-core
# Install (Python)
pip install -e .
# Run a simulation
scpn-fusion kernel # Grad-Shafranov equilibrium
scpn-fusion optimizer # Compact reactor search (MVR-0.96)
scpn-fusion flight # Tokamak flight simulator
scpn-fusion neural --surrogate # Neural equilibrium surrogate
scpn-fusion all --surrogate --experimental # one command for full unlocked suite
python examples/run_3d_flux_quickstart.py --toroidal 24 --poloidal 24
python examples/run_3d_flux_quickstart.py --toroidal 24 --poloidal 24 --preview-png artifacts/SCPN_Plasma_3D_quickstart.png
# Run tests
pytest tests/ -v
# Generate validation RMSE dashboard
python validation/rmse_dashboard.py
# Benchmark transport source MW->keV/s power-balance contract
python validation/benchmark_transport_power_balance.py
```
The 3D quickstart writes an OBJ mesh to `artifacts/SCPN_Plasma_3D_quickstart.obj` and can optionally render a PNG preview.
### Docker (One-Click Run)
```bash
# One-click dashboard
docker compose up --build
# Or build and run manually
docker build -t scpn-fusion-core .
docker run -p 8501:8501 scpn-fusion-core
# With dev dependencies (for running tests inside the container)
docker build --build-arg INSTALL_DEV=1 -t scpn-fusion-core:dev .
docker run scpn-fusion-core:dev pytest tests/ -v
```
### Public Demo (Shot Replay)
- Demo playbook: [`docs/STREAMLIT_DEMO_PLAYBOOK.md`](docs/STREAMLIT_DEMO_PLAYBOOK.md)
- One-click container launch: `docker compose up --build`
- YouTube embed: pending upload for v3.5.0 release notes
### Pure Python (No Rust Toolchain Required)
The entire simulation suite works without Rust. Every module auto-detects the
Rust extension and falls back to NumPy/SciPy:
```bash
pip install "scpn-fusion[full]" # from PyPI (pulls optional physics + Rust wheel)
# OR
pip install -e . # from source (pure Python, no cargo needed)
```
Legacy wrapper remains available:
```bash
python run_fusion_suite.py kernel
```
If the Rust extension is not available, you'll see a one-time info message at
import and all computations run on NumPy. The only difference is speed
(Rust kernels are ~10-50x faster for equilibrium solves).
### Rust Acceleration (Optional)
```bash
cd scpn-fusion-rs
cargo build --release
cargo test
# Build Python bindings (requires maturin)
pip install maturin
cd crates/fusion-python
maturin develop --release
```
The Python package auto-detects the Rust extension and falls back to NumPy if unavailable.
### Testing
```bash
# Python unit + property-based tests
pip install -e ".[dev]"
pytest tests/ -v
# Rust unit + property-based tests
cd scpn-fusion-rs
cargo test --all-features
# Rust benchmarks
cargo bench
```
The test suites include property-based tests powered by [Hypothesis](https://hypothesis.readthedocs.io/) (Python) and [proptest](https://crates.io/crates/proptest) (Rust), covering numerical invariants, topology preservation, and solver convergence properties.
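To give a flavour of what those property-based tests assert, here is a dependency-free sketch (plain `random`, no Hypothesis) of one such numerical invariant: on strictly diagonally dominant systems, repeated relaxation sweeps must drive the residual toward zero. The `sor_step` helper below is illustrative, not the package's actual solver:

```python
import random

def sor_step(A, b, x, omega=1.0):
    """One in-place SOR sweep for A x = b (omega=1.0 is Gauss-Seidel)."""
    n = len(b)
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i][i]

def residual(A, b, x):
    n = len(b)
    return max(abs(sum(A[i][j] * x[j] for j in range(n)) - b[i])
               for i in range(n))

# Property: for random strictly diagonally dominant systems, relaxation
# sweeps shrink the residual (convergence invariant, 20 random cases).
rng = random.Random(0)  # fixed seed, in keeping with deterministic replay
for _ in range(20):
    n = 5
    A = [[rng.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(n)]
    for i in range(n):
        A[i][i] = sum(abs(v) for v in A[i]) + 1.0  # enforce dominance
    b = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    x = [0.0] * n
    for _ in range(100):
        sor_step(A, b, x)
    assert residual(A, b, x) < 1e-6
```

Hypothesis and proptest generalise this pattern: instead of a hand-rolled seeded loop, the framework generates, shrinks, and replays counterexamples automatically.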
## Tutorial Notebooks
| Notebook | Description |
|----------|-------------|
| `01_compact_reactor_search` | MVR-0.96 compact reactor optimizer walkthrough |
| `02_neuro_symbolic_compiler` | Petri net → stochastic neuron compilation pipeline |
| `neuro_symbolic_control_demo_v2` | Golden Base v2 hero control demo (formal proofs + closed-loop + replay) |
| `03_grad_shafranov_equilibrium` | Free-boundary equilibrium solver tutorial |
| `04_divertor_and_neutronics` | Divertor heat flux & tritium breeding ratio |
| `05_validation_against_experiments` | Cross-validation vs SPARC GEQDSK & ITPA scaling |
| `06_inverse_and_transport_benchmarks` | Inverse solver & neural transport surrogate benchmarks |
## Validation Against Experimental Data
The `validation/` directory contains reference data from real tokamaks for cross-checking simulation outputs:
| Dataset | Source | Contents |
|---------|--------|----------|
| **SPARC GEQDSK** | [SPARCPublic](https://github.com/cfs-energy/SPARCPublic) | 8 EFIT equilibrium files (B=12.2 T, I_p up to 8.7 MA) |
| **Multi-machine GEQDSK** | Synthetic Solov'ev | 100 equilibria across DIII-D, JET, EAST, KSTAR, ASDEX-U |
| **ITPA H-mode** | Verdoolaege et al., NF 61 (2021) | Confinement data from 11 tokamaks, 20 shots |
| **IPB98(y,2)** | ITER Physics Basis | Scaling law coefficients + published uncertainties |
| **DIII-D disruption shots** | Reference profiles (10 shots) | 5 disruptions + 5 safe, locked mode/VDE/tearing/density/beta |
| **ITER configs** | Internal | 4 coil-optimised ITER configurations |
| **SPARC config** | Creely et al., JPP 2020 | Machine parameters for compact high-field design |
| **DIII-D config** | Luxon, NF 42 (2002) | Medium-size US tokamak parameters |
| **JET config** | Pamela et al. (2007) | Largest tokamak, DT fusion data |
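The IPB98(y,2) scaling referenced in the table can be evaluated directly. A minimal sketch using the standard ITER Physics Basis coefficients follows; the machine parameters in the example call are illustrative ITER-like values, not rows from the bundled datasets:

```python
def tau_e_ipb98y2(ip_ma, bt_t, n19, p_mw, r_m, kappa, epsilon, m_amu):
    """IPB98(y,2) H-mode thermal energy confinement time [s].

    ip_ma: plasma current [MA]; bt_t: toroidal field [T];
    n19: line-averaged density [1e19 m^-3]; p_mw: loss power [MW];
    r_m: major radius [m]; kappa: elongation;
    epsilon: inverse aspect ratio a/R; m_amu: ion mass [AMU].
    """
    return (0.0562 * ip_ma**0.93 * bt_t**0.15 * n19**0.41
            * p_mw**-0.69 * r_m**1.97 * kappa**0.78
            * epsilon**0.58 * m_amu**0.19)

# Illustrative ITER-like inputs (not taken from the bundled ITPA dataset)
tau = tau_e_ipb98y2(ip_ma=15.0, bt_t=5.3, n19=10.0, p_mw=87.0,
                    r_m=6.2, kappa=1.7, epsilon=2.0 / 6.2, m_amu=2.5)
print(f"tau_E ~ {tau:.2f} s")
```

For these inputs the scaling comes out near 3.6 s, in line with commonly quoted ITER-class confinement predictions.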
```bash
# Run validation script
python validation/validate_against_sparc.py
# Run real-shot validation gate (v2.0.0)
python validation/validate_real_shots.py
# Generate RMSE dashboard
python validation/rmse_dashboard.py
# Benchmark transport source MW->keV/s power-balance contract
python validation/benchmark_transport_power_balance.py
# Run disturbance rejection benchmark
python validation/benchmark_disturbance_rejection.py
# Read a GEQDSK equilibrium
python -c "from scpn_fusion.core.eqdsk import read_geqdsk; eq = read_geqdsk('validation/reference_data/sparc/lmode_vv.geqdsk'); print(f'B={eq.bcentr:.1f}T, Ip={eq.current/1e6:.1f}MA')"
```
## Simulation Modes (Tiered by Maturity)
### Production — Hardened, CI-gated, validated against real data
| Mode | Description | Tests | Hardening |
|------|-------------|-------|-----------|
| `kernel` | Grad-Shafranov equilibrium (Picard+SOR/Multigrid) + coupled 1.5D transport | Converges on 8 SPARC GEQDSKs | H8: 94 Rust validation tasks |
| `neuro-control` | SNN-based cybernetic controller (SC-NeuroCore or NumPy LIF fallback) | Deterministic replay, fault injection | H5: 37 SCPN controller tasks |
| `optimal` | Model-predictive controller with gradient-descent trajectory optimization | Disturbance rejection, bounded actions | H7+H8: strict input guards |
| `flight` | Real-time tokamak flight simulator with actuator lag dynamics | Deterministic summary API | H7: RNG isolation + guards |
| `digital-twin` | Live digital twin with RL-trained MLP policy + chaos monkey faults | Fault campaigns, bit-flip resilience | H6+H7+H8: 20+ tasks |
| `safety` | ML disruption predictor (deterministic scoring + optional Transformer) | Anomaly campaigns, checkpoint fallback | H7: scoped RNG + guards |
| `control-room` | Integrated control room with analytic/kernel-backed equilibrium | CI-safe non-plot mode | H7: deterministic runtime |
### Validated — Real implementations, tested, but not yet hardened to production level
| Mode | Description | Status |
|------|-------------|--------|
| `optimizer` | Compact reactor design search (MVR-0.96) | Multi-objective, validated constraints |
| `breeding` | Tritium breeding blanket neutronics (1D transport) | Real albedo model, TBR trends |
| `nuclear` | Plasma-wall interaction & first-wall erosion | PWI angle-energy invariants tested |
| `diagnostics` | Synthetic sensors + soft X-ray tomographic inversion | Forward models, SciPy fallback |
| `spi` | Shattered pellet injection mitigation | Z_eff + CQ time constant |
| `learning` | Legacy self-organized criticality RL utilities | Maintained for research reproducibility |
| `divertor` | Divertor thermal load simulation | TEMHD Peltier effects |
| `heating` | RF heating (ICRH / ECRH / LHCD ray tracing) | Resonance layer + deposition |
| `sawtooth` | MHD sawtooth crash dynamics | Spectral solver |
| `scanner` | Multi-objective global design scanner | Scoped RNG |
| `sandpile` | Legacy SOC sandpile criticality model | Not part of release-gated transport metrics |
### Reduced-order / Surrogate — Functional but limited physics scope
| Mode | Description | Limitation |
|------|-------------|------------|
| `neural` | Neural-network equilibrium solver (PCA + MLP) | Baseline pretrained bundles shipped (ITPA MLP + EUROfusion-proxy FNO); facility-specific retraining still recommended |
| `geometry` | 3D flux-surface geometry (Fourier boundary) | Parameterization only; no force-balance solve |
| `wdm` | Warm dense matter equation of state | Reduced EOS model |
### Experimental — Requires external SCPN framework components
```bash
scpn-fusion quantum --experimental
SCPN_EXPERIMENTAL=1 scpn-fusion vibrana
```
These modes (quantum, vibrana, lazarus, director) are integration bridges
to external components not shipped in this repo.
## Minimum Viable Reactor (MVR-0.96)
The compact reactor optimizer (`scpn-fusion optimizer`) performs multi-objective design-space exploration to find the smallest tokamak configuration that achieves Q >= 10 ignition. The "0.96" refers to the normalized minor radius target. Key parameters explored:
- Major/minor radius, elongation, triangularity
- Magnetic field strength, plasma current
- Heating power allocation (NBI, ICRH, ECRH)
- Tritium breeding ratio constraints
- Divertor heat-flux limits
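The scan structure can be sketched in a few lines. The gain proxy below is a deliberately toy scaling, not the MVR-0.96 physics objective; it only illustrates the "smallest machine satisfying a Q constraint" search pattern:

```python
import itertools

def toy_q(r_major, aspect, b_field):
    """Toy fusion-gain proxy: NOT the MVR-0.96 objective. Rough
    monotone scaling with field and size, purely illustrative."""
    a_minor = r_major / aspect
    return 0.02 * (b_field ** 3) * (r_major ** 1.5) * a_minor

best = None
for r, asp, b in itertools.product(
        [1.8, 2.2, 2.6, 3.0],      # major radius [m]
        [1.8, 2.2, 2.6],           # aspect ratio R/a
        [8.0, 10.0, 12.0]):        # on-axis field [T]
    if toy_q(r, asp, b) >= 10.0:   # ignition-style constraint Q >= 10
        if best is None or r < best[0]:
            best = (r, asp, b)     # keep the smallest qualifying machine

print("smallest qualifying design (R, R/a, B):", best)
```

The real optimizer replaces the grid with a multi-objective search and the proxy with the validated physics constraints listed above.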
## Neuro-Symbolic Compiler
The `scpn/` subpackage implements a **Petri net → stochastic neuron** compiler —
the core innovation that distinguishes SCPN Fusion Core from conventional fusion codes.
### Pipeline
```
Petri Net (places + transitions + contracts)
│
▼ compiler.py — structure-preserving mapping
Stochastic LIF Network (neurons + synapses + thresholds)
│
▼ controller.py — closed-loop execution
Real-Time Plasma Control (sub-ms latency, deterministic replay)
│
▼ artifact.py — versioned, signed compilation artifact
Deployment Package (JSON + schema version + git SHA)
```
### Stages
1. **Petri Net Definition** — plasma control logic expressed as place/transition nets with formal contracts (`structure.py`, `contracts.py`)
2. **Compilation** — Petri net transitions mapped to stochastic LIF neurons using [SC-NeuroCore](https://github.com/anulum/sc-neurocore) when available, NumPy fallback otherwise (`compiler.py`)
3. **Execution** — SNN-driven real-time plasma control with sub-millisecond latency and deterministic replay (`controller.py`)
4. **Verification** — formal contract checking ensures compiled artifacts preserve Petri net invariants (boundedness, liveness, reachability)
5. **Artifact Export** — versioned compilation artifacts with package version, schema version, and git SHA stamping (`artifact.py`)
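The core mapping in stage 2 can be sketched compactly. All names below (`LIFNeuron`, `compile_petri_net`, the transition labels) are hypothetical illustrations of the idea, not the actual `compiler.py` API or the SC-NeuroCore neuron model:

```python
class LIFNeuron:
    """Minimal leaky integrate-and-fire unit (illustrative only)."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.v, self.threshold, self.leak = 0.0, threshold, leak

    def step(self, current):
        self.v = self.v * self.leak + current
        if self.v >= self.threshold:
            self.v = 0.0          # reset on spike
            return 1
        return 0

def compile_petri_net(transitions):
    """Map each Petri-net transition to one LIF neuron whose firing
    threshold encodes the transition's input-token requirement."""
    return {name: LIFNeuron(threshold=float(tokens_needed))
            for name, tokens_needed in transitions.items()}

# Toy control net: two transitions gated on input-place token counts
net = compile_petri_net({"raise_heating": 2, "trigger_spi": 3})
marking = {"raise_heating": 2, "trigger_spi": 1}  # tokens currently available

fired = {t: neuron.step(marking[t]) for t, neuron in net.items()}
print(fired)  # only the transition with enough tokens fires
```

The production pipeline adds what this sketch omits: stochastic bitstream encoding, synaptic weights derived from arc multiplicities, and contract checks that the compiled network preserves the net's invariants.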
### Why This Matters
Most fusion control systems bolt a PID or MPC controller onto a physics code.
SCPN Fusion Core inverts this: **control logic is the primary artifact**, expressed
in a formally verifiable Petri net formalism, then compiled to a spiking neural
network that executes at hardware-compatible latencies. The physics modules exist
to provide a realistic plant model for the controller to operate against.
This architecture enables:
- **Formal verification** of control policies before deployment
- **Hardware targeting** — the same Petri net compiles to NumPy (simulation), SC-NeuroCore (FPGA-accurate), or future neuromorphic silicon
- **Graceful degradation** — every path has a pure-Python fallback
- **Deterministic replay** — identical inputs produce identical outputs across platforms (37 dedicated hardening tasks in H5 wave)
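The deterministic-replay discipline amounts to scoping every random source to an explicitly seeded generator rather than global RNG state. A toy illustration of the guarantee (not the SCPN controller itself):

```python
import random

def run_controller(seed, steps=100):
    """Toy stochastic control loop. With a scoped, seeded RNG the whole
    trajectory is reproducible bit-for-bit; illustrative sketch only."""
    rng = random.Random(seed)          # scoped RNG: no global state touched
    state, trajectory = 0.0, []
    for _ in range(steps):
        noise = rng.gauss(0.0, 0.1)    # plant disturbance
        action = -0.5 * state          # proportional feedback
        state = state + action + noise
        trajectory.append(state)
    return trajectory

# Identical seeds -> identical trajectories; different seeds diverge
assert run_controller(42) == run_controller(42)
assert run_controller(42) != run_controller(43)
```

The H5 hardening wave applies this pattern across the compiler and controller paths so that fault-injection campaigns can be replayed exactly.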
## SC-NeuroCore Integration
SCPN Fusion Core has an **optional** dependency on [sc-neurocore](https://github.com/anulum/sc-neurocore). When installed, the neuro-symbolic compiler uses hardware-accurate stochastic LIF neurons and Bernoulli bitstream encoding. Without it, all paths fall back to NumPy float computation:
```python
try:
from sc_neurocore import StochasticLIFNeuron, generate_bernoulli_bitstream
_HAS_SC_NEUROCORE = True
except ImportError:
_HAS_SC_NEUROCORE = False # NumPy float-path fallback
```
## Rust Workspace
The `scpn-fusion-rs/` directory contains an 11-crate Rust workspace that mirrors the Python package structure. Key features:
- **Performance**: `opt-level = 3`, fat LTO, single codegen unit for maximum optimization
- **FFI**: `fusion-python` crate provides PyO3 bindings producing `scpn_fusion_rs.so/.pyd`
- **2D MPI domain decomposition**: Additive Schwarz overlapping-domain solver with Rayon-parallel subdomain solves
- **VMEC 3D equilibrium interface**: Fourier-mode stellarator/tokamak equilibrium coupling
- **BOUT++ coupling**: Data exchange interface for edge/SOL turbulence codes
- **Dependencies**: `ndarray`, `nalgebra`, `rayon` (parallelism), `rustfft`, `serde`
- **No external runtime**: pure Rust with no C/Fortran dependencies
## Benchmarks
### What's Validated
| Component | Status | Evidence |
|-----------|--------|----------|
| **Grad-Shafranov solver** | Converges on SPARC GEQDSK equilibria | `validation/validate_against_sparc.py` — axis position, q-profile, GS operator checks |
| **IPB98(y,2) scaling** | Confinement time matches published law | `tests/test_uncertainty.py` — regression against ITPA 20-shot dataset |
| **Inverse reconstruction** | Levenberg-Marquardt with Tikhonov + Huber | Criterion benchmarks: `inverse_bench.rs` (FD vs analytical Jacobian) |
| **SOR solver** | Criterion-benchmarked | `sor_bench.rs` — 65×65 and 128×128 grid sizes |
| **GMRES(30) solver** | Criterion-benchmarked | `gmres_bench.rs` — 33×33 and 65×65 grids, SOR-preconditioned |
| **Multigrid V-cycle** | Criterion-benchmarked | `multigrid_bench.rs` — 33×33, 65×65, 129×129 grids; head-to-head vs SOR & GMRES |
| **Property-based tests** | Hypothesis + proptest | Numerical invariants, topology preservation, convergence |
### Performance Estimates (Not Yet Independently Verified)
These numbers are internal measurements. We encourage you to reproduce them
with `cargo bench` and `benchmarks/collect_results.sh` on your hardware.
| Metric | Value | How Measured | Caveat |
|--------|-------|-------------|--------|
| **SOR step** @ 65×65 | µs-range | Criterion `sor_bench.rs` | Single relaxation step, not full solve |
| **GMRES(30)** @ 65×65 | ~45 iters to converge | Criterion `gmres_bench.rs` | SOR-preconditioned, restart=30 |
| **Multigrid V(3,3)** @ 65×65 | ~8 cycles to converge | Criterion `multigrid_bench.rs` | Standard V-cycle with 3 pre/post-smoothing sweeps |
| **Multigrid V(3,3)** @ 129×129 | ~10 cycles to converge | Criterion `multigrid_bench.rs` | Near-optimal O(N) complexity |
| **Full equil. (Picard+SOR)** | ~5 s (Python) | `profiling/profile_kernel.py` | Jacobi + Picard, not multigrid |
| **Inverse reconstruction** | ~4 s (5 LM iters, Rust) | Criterion `inverse_bench.rs` | Dominated by forward solve time |
| **Neural transport MLP** | ~5 µs/point (synthetic baseline weights) | Criterion `neural_transport_bench.rs` | Baseline pretrained bundle shipped; retrain for facility-specific regimes |
| **Memory** | ~0.7 MB (65×65 equil.) | Estimated from array sizes | — |
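The memory row is simple arithmetic over float64 field arrays. The working-array count below is an illustrative guess consistent with the ~0.7 MB figure, not an audit of the solver's actual allocations:

```python
# Back-of-envelope for the "~0.7 MB at 65x65" table entry.
grid = 65 * 65                 # psi-grid points
bytes_per_point = 8            # float64
n_arrays = 20                  # psi, RHS, residual, coefficients, ... (guess)
total_mb = grid * bytes_per_point * n_arrays / 1e6
print(f"{total_mb:.2f} MB")    # -> 0.68 MB
```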
### Solver Comparison (65×65 grid, ITER-like config)
Run `cargo bench -p fusion-math` to reproduce on your hardware. Python
comparison: `python benchmarks/solver_comparison.py`.
| Solver | Grid | Convergence | Benchmark File |
|--------|------|-------------|----------------|
| SOR (ω=1.8) | 65×65 | 200 iters (fixed) | `sor_bench.rs` |
| GMRES(30) + SOR precond | 65×65 | ~45 iters | `gmres_bench.rs` |
| Multigrid V(3,3) | 65×65 | ~8 cycles | `multigrid_bench.rs` |
| Multigrid V(3,3) | 129×129 | ~10 cycles | `multigrid_bench.rs` |
| SOR (Python) | 65×65 | 200 iters | `benchmarks/solver_comparison.py` |
| Newton-K (Python) | 65×65 | ~15 iters | `benchmarks/solver_comparison.py` |
> **Note on comparisons:** Earlier versions of this README cited "50× faster
> than Python" and "200,000× faster than gyrokinetic." These comparisons mixed
> different algorithms (multigrid vs SOR) and compared a microsecond-latency
> MLP surrogate against first-principles gyrokinetic solvers, an
> apples-to-oranges comparison. The Criterion benchmarks above provide reproducible
> head-to-head solver comparisons on identical grids and problems.
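In that spirit, here is a self-contained toy version of a same-grid, same-problem comparison: relaxing a 2D Laplace problem with Gauss-Seidel versus over-relaxed SOR (ω = 1.8, matching the SOR row above). This is a pedagogical rig, not the package's Grad-Shafranov kernel:

```python
def laplace_iters(n, omega=1.0, tol=1e-6, max_iter=5000):
    """Sweeps needed to relax an n x n Laplace problem (top edge held
    at 1) to a max-update tolerance. omega=1.0 is Gauss-Seidel,
    omega=1.8 is over-relaxed SOR. Toy comparison rig only."""
    u = [[0.0] * n for _ in range(n)]
    u[0] = [1.0] * n                        # Dirichlet boundary on one edge
    for it in range(1, max_iter + 1):
        delta = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                gs = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
                new = (1.0 - omega) * u[i][j] + omega * gs
                delta = max(delta, abs(new - u[i][j]))
                u[i][j] = new
        if delta < tol:
            return it
    return max_iter

for label, omega in [("Gauss-Seidel", 1.0), ("SOR w=1.8", 1.8)]:
    print(f"{label}: {laplace_iters(33, omega=omega)} sweeps")
```

On a 33×33 grid the over-relaxed sweep converges in far fewer iterations than plain Gauss-Seidel, which is the qualitative gap the Criterion benchmarks quantify on identical problems.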
### Published Task-2 Surrogate Snapshot
Task-2 includes a reproducible benchmark lane that publishes:
- TM1 and TokamakNET proxy disruption AUC metrics (`AUC >= 0.95` gate)
- host-measured latency metrics (estimate + wall clock)
- consumer-hardware latency projections (RTX 3060/4090 class, model-based)
- explicit pretrained-surrogate coverage vs lanes that still need user training
```bash
python validation/task2_pretrained_surrogates_benchmark.py --strict
```
Outputs:
- `validation/reports/task2_pretrained_surrogates_benchmark.json`
- `validation/reports/task2_pretrained_surrogates_benchmark.md`
### Community Context
For context, here are representative runtimes from published fusion codes
(2024–2025 literature). These are not direct comparisons with SCPN.
| Code | Category | Typical Runtime | Language | Reference |
|------|----------|-----------------|----------|-----------|
| GENE | 5D gyrokinetic | ~10⁶ CPU-h | Fortran/MPI | Jenko 2000 |
| JINTRAC | Integrated modelling | ~10 min/shot | Fortran/Python | Romanelli 2014 |
| CHEASE | Fixed-boundary equilibrium | ~5 s | Fortran | Lütjens 1996 |
| EFIT | Current-filament reconstruction | ~2 s | Fortran | Lao 1985 |
| TORAX | Integrated (JAX) | ~30 s (GPU) | Python/JAX | — |
| DREAM | Disruption / runaway electrons | ~1 s | C++ | Hoppe 2021 |
Struggling with convergence? See the [Solver Tuning Guide](docs/SOLVER_TUNING_GUIDE.md) and Part F of the benchmarks notebook.
### Results
The full benchmark outputs (with actual numbers from a real run) are published
in [`RESULTS.md`](RESULTS.md). Key highlights include ITER-like Q ≥ 10
operating-point identification, TBR > 1 from the 3-group blanket model,
sub-ms hardware-in-the-loop control latency, and a 50-run disruption
mitigation ensemble. Re-run `python validation/collect_results.py` on your
own hardware to reproduce.
Controller benchmarks are best read as a set of trade-offs: the SNN wins on
latency, MPC on disruption rate and reward, H-infinity offers the strongest
robust middle ground, and PID remains the classical baseline.
### Physics Model Limitations (Honest Assessment)
This section documents the **actual** fidelity of each physics module.
Run `pytest tests/test_ipb98y2_benchmark.py -v` and
`pytest tests/test_gs_convergence.py -v` to reproduce the numbers below.
| Module | What It Is | What It Is Not |
|--------|-----------|----------------|
| **Equilibrium** | Picard iteration + Red-Black SOR (+ optional Anderson acceleration). GMRES(30) and multigrid V-cycle available in Rust. Newton-Kantorovich available in Python. Converges on 3 SPARC L-mode GEQDSKs. Default 65×65 grid. | Not EFIT-quality inverse reconstruction. Not free-boundary (coil currents are fixed). Rust multigrid not yet wired into the Python kernel path (use Rust API directly). |
| **Transport** | 1.5D Bohm/gyro-Bohm critical-gradient model with Chang-Hinton neoclassical option. CN temperature evolution. Unit-consistent MW->keV/s auxiliary source normalisation with per-step power-balance telemetry (`_last_aux_heating_balance`). IPB98(y,2) confinement time evaluation. | No ITG/TEM/ETG turbulent transport channels. No NBI slowing-down. No impurity transport (beyond simple diffusion). No sawtooth mixing in transport. Actual RMSE vs IPB98(y,2) on the 20-shot ITPA dataset is printed by `test_ipb98y2_benchmark.py`; source-power contract benchmark is `validation/benchmark_transport_power_balance.py`. |
| **Stability** | Vertical n-index stability analysis. | No kink mode analysis. No peeling-ballooning (no access to edge bootstrap current calculation). No Mercier criterion. No resistive wall modes. |
| **Neural Equilibrium** | PCA + MLP surrogate trained on 78 samples (3 SPARC L-mode configs at varying currents). | 78 training samples is far below what is needed for generalization. The surrogate is useful for fast controller prototyping on the specific SPARC L-mode family it was trained on, not for arbitrary equilibria. |
| **FNO Turbulence** | Fourier Neural Operator trained on synthetic data (not real gyrokinetic output). | Not a replacement for GENE/GS2. The FNO learns a proxy mapping, not real turbulent transport coefficients. |
| **Neural Transport MLP** | 20-row illustrative dataset from ITPA. Baseline pretrained bundle shipped. | 20 rows cannot capture the full H-mode confinement parameter space. Facility-specific retraining is mandatory for any quantitative use. |
| **Grid Resolution** | Default 65×65 for prototyping. 129×129 and 257×257 tested in edge-case suite. | Production equilibrium codes use 257+ with multigrid. Our 65×65 default is appropriate for control-loop closure testing, not for publication-quality equilibrium reconstruction. |
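The MW → keV/s source normalisation mentioned in the transport row is a unit-consistency statement: power deposited into a plasma of density n and volume V heats a 3/2 nT thermal energy density. A minimal sketch of that conversion (illustrative numbers, not the package's normalisation code or its `_last_aux_heating_balance` telemetry):

```python
E_KEV_J = 1.602e-16   # joules per keV

def aux_heating_rate_kev_s(p_aux_mw, n_e_m3, volume_m3):
    """Convert auxiliary heating power [MW] into a volume-averaged
    temperature source [keV/s] for a 3/2 n T thermal energy density.
    Unit-consistency sketch only."""
    p_watts = p_aux_mw * 1e6
    # d/dt (3/2 n T V) = P  =>  dT/dt = P / (3/2 n V), with T in keV
    return p_watts / (1.5 * n_e_m3 * volume_m3 * E_KEV_J)

# Illustrative ITER-like numbers: 50 MW into ~800 m^3 at 1e20 m^-3
rate = aux_heating_rate_kev_s(50.0, 1.0e20, 800.0)
print(f"dT/dt ~ {rate:.2f} keV/s")
```

The power-balance benchmark (`validation/benchmark_transport_power_balance.py`) checks that the integrated source term recovers the injected megawatts to within tolerance at every step.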
### Resources
- **Full comparison tables:** [`docs/BENCHMARKS.md`](docs/BENCHMARKS.md)
- **Repro tooling:** [`benchmarks/`](benchmarks/) (Criterion collection + hardware metadata + Python solver comparison)
- **Static figures for PDF/arXiv:** [`docs/BENCHMARK_FIGURES.md`](docs/BENCHMARK_FIGURES.md) (includes LaTeX table snippets)
- **Interactive notebook:** [`examples/06_inverse_and_transport_benchmarks.ipynb`](examples/06_inverse_and_transport_benchmarks.ipynb)
- **Pre-built HTML notebooks:** [`docs/notebooks/`](docs/notebooks/) (also served via [GitHub Pages](https://anulum.github.io/scpn-fusion-core/notebooks/))
## Documentation
Full documentation is hosted on **[GitHub Pages](https://anulum.github.io/scpn-fusion-core/)**.
| Resource | Description |
|----------|-------------|
| [Python API Reference](https://anulum.github.io/scpn-fusion-core/python/) | Sphinx-generated docs for all Python modules |
| [Rust API Reference](https://anulum.github.io/scpn-fusion-core/rust/fusion_core/) | Rustdoc for the 11-crate workspace |
| [Tutorial Notebooks](https://anulum.github.io/scpn-fusion-core/notebooks/) | 6 interactive Jupyter tutorials |
### User Guides (on GitHub Pages)
| Guide | Topics |
|-------|--------|
| [Equilibrium Solver](https://anulum.github.io/scpn-fusion-core/python/userguide/equilibrium.html) | Grad-Shafranov, boundary conditions, GEQDSK I/O |
| [Transport & Stability](https://anulum.github.io/scpn-fusion-core/python/userguide/transport.html) | 1.5D transport, IPB98 scaling, MHD stability |
| [Control Systems](https://anulum.github.io/scpn-fusion-core/python/userguide/control.html) | PID, MPC, SNN controllers, digital twin, SOC learning |
| [Nuclear Engineering](https://anulum.github.io/scpn-fusion-core/python/userguide/nuclear.html) | Blanket neutronics, PWI, divertor, TEMHD |
| [Diagnostics](https://anulum.github.io/scpn-fusion-core/python/userguide/diagnostics.html) | Synthetic sensors, forward models, SXR tomography |
| [Neuro-Symbolic Compiler](https://anulum.github.io/scpn-fusion-core/python/userguide/scpn_compiler.html) | Petri net → SNN 5-stage pipeline |
| [HPC / Rust Acceleration](https://anulum.github.io/scpn-fusion-core/python/userguide/hpc.html) | 11-crate workspace, FFI, GPU roadmap |
| [Validation](https://anulum.github.io/scpn-fusion-core/python/userguide/validation.html) | SPARC, ITER, ITPA benchmarks |
### Technical Documents
- [Solver Tuning Guide](docs/SOLVER_TUNING_GUIDE.md) (relaxation, Tikhonov, Huber, grid sizing, common pitfalls)
- [Benchmarks & Comparisons](docs/BENCHMARKS.md)
- [Benchmark Figures (static export)](docs/BENCHMARK_FIGURES.md)
- [HIL Demo Register Map & Latency Budget](docs/hil_demo.md)
- [Compact Reactor Findings](docs/COMPACT_REACTOR_FINDINGS.md)
- [Physics Methods](docs/PHYSICS_METHODS_COMPLETE.md)
- [ITER Validation](docs/VALIDATION_AGAINST_ITER.md)
- [Neuro-Symbolic Compiler Architecture](docs/NEURO_SYMBOLIC_LOGIC_COMPILER_REPORT.md)
- [Packet C Control API](docs/PACKET_C_CONTROL_API_COMPREHENSIVE_STUDY.md)
- [Future Applications](docs/FUTURE_APPLICATIONS.md)
- [Phase 1 3D Execution Plan](docs/PHASE1_3D_EXECUTION_PLAN.md)
- [3D Gap Audit](docs/3d_gaps.md)
- [Next Sprint Execution Queue](docs/NEXT_SPRINT_EXECUTION_QUEUE.md)
- [Profiling Quickstart](profiling/README.md)
- [Comprehensive Technical Study](SCPN_FUSION_CORE_COMPREHENSIVE_STUDY.md) (30,000+ words)
### Paper Manuscripts
Two companion papers are in preparation:
1. **Equilibrium Solver Paper** -- Grad-Shafranov + multigrid + inverse reconstruction, validated against 8 SPARC GEQDSK equilibria
2. **SNN Controller Paper** -- Petri net to spiking neural network compilation pipeline with formal verification and deterministic replay
## Code Health & Hardening
The codebase has undergone **8+ systematic hardening waves** (263 tasks total, all
completed across S2-S4 and H5-H8 waves) that replaced silent clamping, `unwrap()`
calls, and implicit coercion with explicit `FusionResult<T>` error propagation
throughout the Rust workspace.
| Wave | Scope | Tasks | Highlights |
|------|-------|-------|------------|
| **S2** | Scaffold integrity | 8 | Module wiring, import consistency |
| **S3** | CI pipeline | 6 | `cargo fmt --check`, `clippy`, test gates |
| **S4** | Baseline coverage | 4 | Property-based tests (Hypothesis + proptest) |
| **H5** | SCPN compiler & controller | 37 | Deterministic replay, fault injection, contract verification |
| **H6** | Digital twin + RL | 9 | Chaos monkey campaigns, bit-flip resilience |
| **H7** | Control + diagnostics | 90 | Scoped RNG isolation, sensor model guards, MPC input validation |
| **H8** | All 10 Rust crates | 94 | Every `unwrap()` → `FusionResult`, input validation guards, shape checks |
Every production-path module now returns structured errors rather than panicking.
The full task registry is at [`docs/PHASE3_EXECUTION_REGISTRY.md`](docs/PHASE3_EXECUTION_REGISTRY.md).
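The `unwrap()` → `FusionResult` migration described above can be sketched as follows. This is a minimal illustration of the pattern, not the crate's actual API: the error variant, field names, and `mean_flux` helper are all hypothetical.

```rust
use std::fmt;

// Hypothetical error type illustrating structured, non-panicking errors.
#[derive(Debug, PartialEq)]
enum FusionError {
    InvalidShape { expected: usize, got: usize },
}

impl fmt::Display for FusionError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            FusionError::InvalidShape { expected, got } => {
                write!(f, "invalid shape: expected {expected}, got {got}")
            }
        }
    }
}

type FusionResult<T> = Result<T, FusionError>;

// Before hardening: `samples.get(i).unwrap()` panics on bad input.
// After: validate the shape and propagate a structured error instead.
fn mean_flux(samples: &[f64], expected_len: usize) -> FusionResult<f64> {
    if samples.len() != expected_len {
        return Err(FusionError::InvalidShape {
            expected: expected_len,
            got: samples.len(),
        });
    }
    Ok(samples.iter().sum::<f64>() / samples.len() as f64)
}

fn main() {
    assert_eq!(mean_flux(&[1.0, 2.0, 3.0], 3), Ok(2.0));
    assert!(mean_flux(&[1.0], 3).is_err());
}
```

Callers then use `?` to propagate the error upward rather than crashing mid-simulation.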
## Known Limitations & Roadmap
This project is honest about what it does and does not do.
### What it does not do (yet)
| Gap | Status | Notes |
|-----|--------|-------|
| **3D MHD / stellarator equilibrium** | VMEC interface + Fourier parameterization | `vmec_interface.rs` + `equilibrium_3d.py`; external VMEC binary required for full solve |
| **Gyrokinetic turbulence** | Not planned | Use GENE/GS2 externally; SCPN provides surrogate coupling points |
| **5D kinetic transport** | Not planned | Deliberately reduced-order for real-time control |
| **GPU acceleration** | Deterministic runtime bridge + optional torch fallback ([GPU Roadmap](docs/GPU_ACCELERATION_ROADMAP.md)) | CUDA-native kernels remain roadmap work |
| **Pre-trained neural weights** | 3 of 7 shipped (MLP ITPA, FNO JET, Neural Equilibrium SPARC) | Remaining 4 surrogate lanes (neural transport, heat ML shadow, gyro-Swin, turbulence oracle) still require site-specific user training |
| **Point-wise RMSE validation** | Partial | Topology checks (axis, q-profile, GS sign) on 8 SPARC files; not yet point-wise psi comparison |
### What it does well
| Strength | Evidence |
|----------|----------|
| **Neuro-symbolic control pipeline** | Petri net → SNN compilation with formal verification (37 hardening tasks) |
| **Surrogate modeling** | FNO turbulence (41 KB trained weights), neural transport MLP, neural equilibrium |
| **Digital twin + RL** | In-situ Q-learning policy training with chaos monkey fault injection |
| **Code health** | 263 hardening tasks, 100% explicit error handling in Rust, property-based tests |
| **Real data validation** | 8 SPARC GEQDSK files (CFS), 20-shot ITPA H-mode confinement database |
| **Graceful degradation** | Every module works without Rust, without SC-NeuroCore, without GPU |
### Alignment with DOE Fusion S&T Roadmap
The project's control-first architecture aligns with DOE priorities for:
- **Plasma control systems** needed for ITER and pilot plant operations
- **AI/ML integration** in fusion (surrogate models, disruption prediction, real-time optimization)
- **Digital twin** capabilities for reactor design validation
- **Workforce development** — accessible Python+Rust codebase with 6 tutorial notebooks
Physics-first capabilities (gyrokinetics, 3D MHD, kinetic transport) are explicitly
deferred to established codes. SCPN Fusion Core is designed to **consume** their
outputs as training data for surrogates, not to **replace** them.
## Validation Data Licensing
The `validation/reference_data/` directory contains third-party data used
exclusively for regression testing. Each dataset has its own licensing terms:
| Dataset | License / Source | Redistribution |
|---------|-----------------|----------------|
| **SPARC GEQDSK** | MIT ([cfs-energy/SPARCPublic](https://github.com/cfs-energy/SPARCPublic)) | See `validation/reference_data/sparc/LICENSE` |
| **ITPA H-mode** | 20-row illustrative subset from Verdoolaege et al., NF 61 (2021) | See `validation/reference_data/itpa/README.md` |
| **ITER configs** | Internally generated from published parameters | No restrictions |
| **JET / DIII-D** | Manually constructed from published literature | No restrictions |
| **EU-DEMO / K-DEMO** | Synthetic reference configurations | No restrictions |
The SPARC data carries an MIT license from Commonwealth Fusion Systems. The
ITPA subset is a small illustrative extract from a published paper and is not
the full ITPA global confinement database. For the authoritative ITPA dataset,
contact the ITPA Confinement Database Working Group.
## Citation
If you use SCPN Fusion Core in your research, please cite using the [CITATION.cff](CITATION.cff) file or:
```bibtex
@software{scpn_fusion_core,
title = {SCPN Fusion Core: Tokamak Plasma Physics Simulation and Neuro-Symbolic Control Suite},
author = {Sotek, Miroslav and Reiprich, Michal},
year = {2026},
url = {https://github.com/anulum/scpn-fusion-core},
version = {3.0.0}
}
```
This software is archived on **Zenodo** (DOI pending first release deposit) and published on **Academia.edu**.
## Authors
- **Miroslav Sotek** — ANULUM CH & LI — [ORCID](https://orcid.org/0009-0009-3560-0851)
- **Michal Reiprich** — ANULUM CH & LI
## License
GNU Affero General Public License v3.0 — see [LICENSE](LICENSE).
For commercial licensing inquiries, contact: protoscience@anulum.li
| text/markdown | Miroslav Sotek, Michal Reiprich | null | null | null | AGPL-3.0-or-later | null | [
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Programming Language :: Python :: 3",
"Programming Language :: Rust",
"Topic :: Scientific/Engineering :: Physics",
"Development Status :: 5 - Production/Stable"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"click>=8.1",
"numpy",
"matplotlib",
"scipy",
"streamlit",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"hypothesis>=6.0; extra == \"dev\"",
"mypy>=1.10; extra == \"dev\"",
"nengo>=4.0; extra == \"snn\"",
"freegs>=0.6; extra == \"benchmark\"",
"freegs>=0.6; extra == \"... | [] | [] | [] | [
"Homepage, https://github.com/anulum/scpn-fusion-core",
"Documentation, https://anulum.github.io/scpn-fusion-core/",
"Repository, https://github.com/anulum/scpn-fusion-core",
"Bug Tracker, https://github.com/anulum/scpn-fusion-core/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:09:20.892640 | scpn_fusion-3.5.0.tar.gz | 548,549 | 4a/ab/cf25875ab1d1d443f17f55c6c14c8b95c322cbb3cdd15c31205c1a31316b/scpn_fusion-3.5.0.tar.gz | source | sdist | null | false | 24c76212fcc1aeb5e9be05568d914d68 | 98da331ccd2ab6a5ced0559519ed9efed5997fb95d6d8f281d6f160d1e527223 | 4aabcf25875ab1d1d443f17f55c6c14c8b95c322cbb3cdd15c31205c1a31316b | null | [
"LICENSE"
] | 238 |
2.4 | xscribe | 0.2.0 | Transcribe video and audio to markdown with timestamps. Supports local files and streams. | # xscribe
**Download and transcribe any online video in minutes.**
Turn any video or audio file into a clean, timestamped markdown transcript. Just point xscribe at a file or stream URL and get a readable transcript — no cloud services, no subscriptions, everything runs locally on your machine.
Powered by [faster-whisper](https://github.com/SYSTRAN/faster-whisper).
## Install
```bash
pip install xscribe
```
Missing dependencies (ffmpeg, yt-dlp) are detected automatically — xscribe will offer to install them for you on first run.
Optionally, pre-download the transcription model so your first transcription is fast:
```bash
xscribe setup
```
## Quick start
**Transcribe a video file on your computer:**
```bash
xscribe interview.mp4
```
This creates `interview.md` in your current folder with the full transcript and timestamps.
**Transcribe an online video stream:**
```bash
xscribe "https://stream.example.com/video/playlist.m3u8"
```
xscribe will download the video first, then transcribe it.
## Usage examples
```bash
# Transcribe a podcast episode
xscribe episode-42.mp3
# Transcribe a lecture recording
xscribe lecture.mov
# Transcribe a YouTube stream URL
xscribe "https://manifest.googlevideo.com/.../playlist.m3u8"
# Use a more accurate model (slower but better for tricky audio)
xscribe meeting.mp4 -m large-v3
# Save the transcript to a specific file
xscribe keynote.mp4 -o keynote-notes.md
# Force a specific language instead of auto-detect
xscribe video.mp4 -l es
# Transcribe multiple files at once
xscribe recording1.mp4 recording2.mp4 recording3.mp4
# Pre-download a specific model
xscribe setup -m large-v3
```
**Supported file types:** mp4, mp3, wav, mov, mkv, webm, m4a, flac, ogg, and anything else ffmpeg can read.
## How to get an .m3u8 URL from any website
Most streaming videos use .m3u8 playlist URLs behind the scenes. Here's how to find them:
1. Open the website with the video in Chrome or any browser
2. Right-click anywhere on the page and select **Inspect** (or press `F12`)
3. Click the **Network** tab in the developer tools panel
4. Play the video on the page
5. In the Network tab's filter/search bar, type `.m3u8`
6. You'll see one or more requests appear — right-click the URL and select **Copy URL**
7. Paste it into xscribe: `xscribe "https://...your-copied-url.m3u8"`
## Options
| Flag | Description |
|------|-------------|
| `-m, --model` | Whisper model size (see below) |
| `-l, --lang` | Force language code (e.g. `en`, `es`, `fr`, `de`, `ja`) |
| `-o, --output` | Custom output file path |
| `-v, --version` | Show version |
## Model sizes
| Model | Flag | Best for |
|-------|------|----------|
| Tiny | `-m tiny` | Quick and dirty, when you just need the gist |
| Base | *(default)* | General use, good balance of speed and quality |
| Small | `-m small` | Better accuracy, still reasonably fast |
| Medium | `-m medium` | High accuracy for important transcripts |
| Large | `-m large-v3` | Best possible accuracy, but slowest |
The model downloads automatically the first time you use it and gets cached for future runs. Use `xscribe setup -m <model>` to pre-download.
## Output format
xscribe saves transcripts as markdown files with timestamps:
```markdown
# Transcription
**Source:** `interview.mp4`
---
**[00:03]** Hello and welcome to the show.
**[00:07]** Today we're joined by a special guest...
**[01:24]** Let's dive into the first topic.
```
## License
MIT
| text/markdown | null | null | null | null | null | audio, markdown, transcription, video, whisper | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Topic :: Multimedia :: Sound/Audio :: Speech"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"faster-whisper"
] | [] | [] | [] | [
"Homepage, https://github.com/edbutlerx/xscribe",
"Issues, https://github.com/edbutlerx/xscribe/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:09:06.528544 | xscribe-0.2.0.tar.gz | 7,090 | e8/70/6f4e314cc8fc02bd58caea2f4136256d202ef2002afd144795a07be27820/xscribe-0.2.0.tar.gz | source | sdist | null | false | d84e558bf320d49eb7a0d7bf3605db19 | 02c34d488cf34bc6d1ab83b4c7f3d88c1ef4b2df57e3a7006cdffcab7b230ca5 | e8706f4e314cc8fc02bd58caea2f4136256d202ef2002afd144795a07be27820 | MIT | [
"LICENSE"
] | 241 |
2.4 | opencode-autopilot | 0.1.0b2 | Autonomous overnight engineer for OpenCode projects. Bootstrap, build, and improve while you sleep. | # opencode-autopilot
> Autonomous overnight engineer for OpenCode projects.
## Why I built this
I'm **mystic9t** — I work a day job and build hobby projects in whatever time I have at night. One of those is [Vibes](https://vibes.mystic9t.fyi), an astrology and wellness web app I genuinely enjoy but rarely have time to iterate on.
I kept seeing people talk about OpenClaw — an autonomous agent loop built on top of Claude Code. It looked powerful, but every time I tried to make sense of it, I hit a wall. The setup was confusing, the derivatives didn't click for me either, and I wanted something that worked without a steep learning curve.
So I did what any developer does: I went down a rabbit hole. I read what other people were doing — the loops, the session triggers, the heartbeat memory patterns — and I stitched together my own version built specifically for OpenCode and its default model, Big Pickle.
The first night I ran it on Vibes, I woke up to **two fully functional new features**, committed and working. They weren't perfect — I ran a dedicated cleanup session — but the core work was done while I slept. That felt like something worth packaging up.
**opencode-autopilot** is the result. A structured way to give OpenCode a persistent memory, a blueprint system, and a session loop so it keeps working instead of stopping after 10 minutes waiting for you.
---
## Roadmap
### v0.1.0 (Current Beta)
- ✅ **Kilocode support** — Auto-detect and use Kilocode as an alternative to OpenCode
- ✅ **Smart tool switching** — Automatically fallback to the other tool when one hits rate limits
- ✅ **Rate limit resume** — Pause overnight runs when both tools are rate-limited, auto-resume when limits reset
### Coming Soon
- Interactive mode for step-by-step approval
- Session resume with `--resume` flag improvements
- Plugin system for custom agents
---
## Commands
| Command | What it does |
| --- | --- |
| `opencode-autopilot` | Show help and available commands |
| `opencode-autopilot run` | Run autonomous improvement sessions on existing projects |
| `opencode-autopilot run --gg [topic]` | Full trust mode — agent researches, decides what to build, and builds it |
| `opencode-autopilot config` | Set persistent defaults for model/agent |
---
## Requirements
- [OpenCode](https://opencode.ai) or [Kilocode](https://kilocode.ai) installed and in PATH
- Python 3.12+
---
## Installation
```bash
pip install opencode-autopilot
```
Or run directly with uvx:
```bash
uvx opencode-autopilot gg
```
---
## Default model
Defaults to **`opencode/big-pickle`** — OpenCode's built-in free model, available to every OpenCode user with no setup.
Switch models per-run or permanently:
```bash
# Per-run
opencode-autopilot run --model anthropic/claude-sonnet-4-5
# Set project default
opencode-autopilot config --model anthropic/claude-sonnet-4-5
# Set global default (all projects)
opencode-autopilot config --model anthropic/claude-sonnet-4-5 --global
# Check what's active
opencode-autopilot config --show
```
---
## Usage
```bash
# Existing project — agent improves what's there
opencode-autopilot run
# Full trust — agent researches, decides, and builds with no input from you
opencode-autopilot run --gg
# Full trust with a loose nudge
opencode-autopilot run --gg "something for people who read too much"
# Resume an interrupted cycle from session 6
opencode-autopilot run --resume 6
# Fewer sessions, shorter intervals
opencode-autopilot run --sessions 6 --interval 15
# Show help
opencode-autopilot --help
opencode-autopilot run --help
```
---
## How the memory system works
The agent writes to `HEARTBEAT/` — never committed to git, always local. It tracks:
- What was done each session and the build status
- **Settled Decisions** — things tried and abandoned so it never repeats them
- Paid feature ideas logged separately for you to review
- Plans written during exploration sessions
---
## License
MIT © [mystic9t](https://github.com/mystic9t/opencode-autopilot)
| text/markdown | null | mystic9t <53944357+mystic9t@users.noreply.github.com> | null | null | MIT | agent, ai, autonomous, autopilot, big-pickle, bootstrap, coding-agent, opencode, overnight | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"jinja2>=3.1.0",
"typer>=0.15.0"
] | [] | [] | [] | [
"Homepage, https://github.com/mystic9t/opencode-autopilot",
"Repository, https://github.com/mystic9t/opencode-autopilot"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:08:23.431021 | opencode_autopilot-0.1.0b2.tar.gz | 21,339 | a1/7e/d027b4df48a615271baeb3cfc134155cc93d5f6c5e5fc2bf71e1a42e89ac/opencode_autopilot-0.1.0b2.tar.gz | source | sdist | null | false | f92fca1650abe2a45294d6019fb6d0e2 | 975de0b8fdd277059339e5d6bcb7804c633160a9292b7eda061e6348faa2a0f9 | a17ed027b4df48a615271baeb3cfc134155cc93d5f6c5e5fc2bf71e1a42e89ac | null | [
"LICENSE"
] | 214 |
2.4 | monocle-test-tools | 0.7.5 | Testing and validation framework for monocle AI agent tracing | # Monocle Test Tools
A comprehensive testing and validation framework for monocle AI agent tracing. This package provides tools for validating agent behavior, tool invocations, inference responses, and overall AI system performance.
## Features
- **Agentic Response**: Verify that an agent request got the appropriate response.
- **Agent Invocation**: Verify that specific agents are invoked and delegate tasks correctly.
- **Tool Validation**: Ensure tools are called with expected inputs and produce expected outputs
- **Inference Testing**: Test model inference responses against expected schemas or content
- **Cost/Performance/Quality**: Verify token usage, error states, warnings
- **Evaluation**: Integrate with any third party or custom evaluation tools to validate LLM responses
## How does it work
The test tool runs your agent or workflow code with Monocle instrumentation enabled. It examines the traces generated by the genAI components used in your code (e.g. Google ADK, LangGraph) and verifies the test conditions you want to validate.
## Installation
```bash
pip install monocle_test_tools
```
## Quick Start
Here's a test that executes a ```root_travel_agent``` with a few inputs and validates its response and the tools invoked.
```python
import pytest

from monocle_test_tools import TestCase, MonocleValidator
from adk_travel_agent import root_travel_agent
# Test cases for testing travel booking agent
agent_test_cases:list[TestCase] = [
{
"test_input": ["Book a flight from San Francisco to Mumbai for 26th Nov 2025. Book a two queen room at Marriot Intercontinental at Juhu, Mumbai for 27th Nov 2025 for 4 nights."],
"test_output": "A flight from San Francisco to Mumbai has been booked, along with a four night stay in a two queen room at the Marriot Intercontinental in Juhu, Mumbai, starting November 27th, 2025.",
"comparer": "similarity",
},
{
"test_input": ["Book a flight from San Francisco to Mumbai for 26th Nov 2025. Book a two queen room at Marriot Intercontinental at Juhu, Mumbai for 27th Nov 2025 for 4 nights."],
"test_spans": [
{
"span_type": "agentic.tool.invocation",
"entities": [
{"type": "tool", "name": "adk_book_hotel"},
{"type": "agent", "name": "adk_hotel_booking_agent"}
],
}
]
}
]
# Run test cases using Monocle test framework
@MonocleValidator().monocle_testcase(agent_test_cases)
async def test_run_workflows(my_test_case: TestCase):
await MonocleValidator().test_workflow_async(root_travel_agent, my_test_case)
if __name__ == "__main__":
pytest.main([__file__])
```
## Test format
### Testcase
A TestCase defines the input, expected output, and evaluation criteria for testing
AI agent behaviors. It can contain multiple test spans representing different
interaction points (tool invocations, agent delegations, etc.) within the test.
Each test case can specify comparison methods for evaluating test results against
expected outcomes and can be configured to expect certain errors or warnings.
```json
{
"test_input": "Input data provided to the test case, can be a prompt or structured data.",
"test_output": "Expected output that the test should produce.",
"comparer": "Method used to compare actual results with expected results. The default comparer does an exact match; the 'similarity' comparer does a fuzzy match using BERT score",
"test_spans": "Array of TestSpan objects defining specific interactions to test."
}
```
### TestSpan
Represents a specific interaction or event within a test case in the Monocle testing framework.
A TestSpan defines a single testable unit of interaction such as a tool invocation,
agent delegation, or inference process. Each span captures the entities involved,
inputs and outputs, and validation criteria for that specific interaction.
Test spans enforce specific validation rules based on their type. For example:
- Tool invocation spans must include at least one tool entity
- Agentic delegation spans must include at least two agent entities (delegator and delegatee)
- Agentic invocation spans must include at least one agent entity
```json
{
    "span_type": "Type of interaction this span represents (e.g., tool invocation, agent delegation)",
    "entities": "List of entities (tools, agents) involved in this interaction. Each entity has two attributes, name and type. The type can be 'tool', 'agent' or 'inference'",
    "input": "Input provided to this interaction",
    "output": "Expected output from this interaction",
    "test_type": "Whether this is a 'positive' (expected to succeed) or 'negative' (expected to fail) test"
}
```
Check out these [examples](tests/integration/test_adk_travel_agent.py) of test cases.
| text/markdown | null | "Okahu Inc." <okahu-pypi@okahu.ai> | null | null | Apache-2.0 | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Testing... | [] | null | null | >=3.8 | [] | [] | [] | [
"bert-score>=0.3.0",
"gitpython==3.1.45",
"jsonschema>=4.0.0",
"monocle-apptrace>=0.5.0",
"pydantic>=2.11.7",
"pytest-asyncio>=0.26.0",
"pytest>=8.0.0",
"sentence-transformers==3.3.0",
"transformers>=4.0.0",
"black>=23.0.0; extra == \"all\"",
"crewai==0.95.0; extra == \"all\"",
"flake8>=6.0.0;... | [] | [] | [] | [
"Homepage, https://github.com/monocle2ai/monocle",
"Issues, https://github.com/monocle2ai/monocle/issues",
"Repository, https://github.com/monocle2ai/monocle",
"Documentation, http://monocle2ai.org"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-19T18:08:14.507817 | monocle_test_tools-0.7.5.tar.gz | 38,855 | 74/48/ea569b7b3e6f9432c2b3bb40a152389f866b7e8d23c852ba734c37882bee/monocle_test_tools-0.7.5.tar.gz | source | sdist | null | false | 2655aaf6de496b677ef90bb1004698b5 | 2ea47f5ef18c930371dee5513fe6f7793a5c01af5397f88a5ba1cc0e88162e2a | 7448ea569b7b3e6f9432c2b3bb40a152389f866b7e8d23c852ba734c37882bee | null | [] | 234 |
2.4 | monocle-mcp | 0.7.5 | Monocle MCP server: prompts and tools for enabling and analyzing Monocle tracing in GenAI apps. | # Monocle MCP server
**Monocle** is a community-driven OSS framework for tracing GenAI app code governed as a [Linux Foundation AI & Data project](https://lfaidata.foundation/projects/monocle/) that helps developers and platform engineers building or managing GenAI apps monitor these in prod by making it easy to instrument their code to capture traces that are compliant with open-source cloud-native observability ecosystem.
## What is Monocle MCP server
The MCP server provided by monocle includes curated prompts and tools that help you enable Monocle tracing in your application and analyze the traces generated by Monocle.
## Use Monocle MCP
First install monocle-mcp:

```bash
pip install monocle-mcp
```

Then run the following command to start the Monocle MCP server over stdio:

```bash
monocle_apptrace
```
If you are using VS Code you can add following entry to your .vscode/mcp.json
```json
"monocle-mcp-server": {
"type": "stdio",
"command": "uvx",
"args": [
"monocle_apptrace"
],
"env": {}
}
```
| text/markdown | null | "Okahu Inc." <okahu-pypi@okahu.ai> | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"click>=8.0.0",
"mcp>=1.12.1",
"monocle-apptrace>=0.5.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.12 | 2026-02-19T18:08:13.315468 | monocle_mcp-0.7.5.tar.gz | 6,936 | 4e/4b/ffb330b01d8d59ed8b646fbbd3b187db7fbd73ed770df8bfea6685570642/monocle_mcp-0.7.5.tar.gz | source | sdist | null | false | 408534f289429842047bfb0006afe659 | 8e49520195e6d006a0d9821b7838ff1f355ebc9f3b05f6b633916ad98c0f1d94 | 4e4bffb330b01d8d59ed8b646fbbd3b187db7fbd73ed770df8bfea6685570642 | null | [] | 230 |
2.4 | monocle-apptrace | 0.7.5 | package with monocle genAI tracing | # Monocle Apptrace
**Monocle** helps developers and platform engineers building or managing GenAI apps monitor these in prod by making it easy to instrument their code to capture traces that are compliant with open-source cloud-native observability ecosystem.
**Monocle** is a community-driven OSS framework for tracing GenAI app code governed as a [Linux Foundation AI & Data project](https://lfaidata.foundation/projects/monocle/).
## Use Monocle
- Get the Monocle package
```bash
pip install monocle_apptrace
```
- Instrument your app code
- Import the Monocle package
```python
from monocle_apptrace.instrumentor import setup_monocle_telemetry
```
- Setup instrumentation in your ```main()``` function
```python
setup_monocle_telemetry(workflow_name="your-app-name")
```
- (Optionally) Modify config to alter where traces are sent
See [Monocle user guide](Monocle_User_Guide.md) for more details.
| text/markdown | null | "Okahu Inc." <okahu-pypi@okahu.ai> | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"opentelemetry-api<2.0.0,>=1.20.0",
"opentelemetry-instrumentation>=0.41b0",
"opentelemetry-sdk<2.0.0,>=1.20.0",
"requests",
"rfc3986>=2.0.0",
"wrapt>=1.14.0",
"bert-score; extra == \"ai-test\"",
"transformers; extra == \"ai-test\"",
"boto3==1.40.52; extra == \"aws\"",
"azure-ai-inference; extra =... | [] | [] | [] | [
"Homepage, https://github.com/monocle2ai/monocle",
"Issues, https://github.com/monocle2ai/monocle/issues"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-19T18:08:12.506017 | monocle_apptrace-0.7.5.tar.gz | 135,733 | cf/bd/ba3faa7c72a614653b58f8f70f55995688cf720cb51a9270cb9ebd4c7019/monocle_apptrace-0.7.5.tar.gz | source | sdist | null | false | eb00d15a55187cc6854ad8f9ae0c4ec7 | c20c2291eaa223ac1cd5d9d4ca500e0c71ef0ee80f30e81e4e89be5c20dd47d3 | cfbdba3faa7c72a614653b58f8f70f55995688cf720cb51a9270cb9ebd4c7019 | null | [] | 660 |
2.4 | Topsis-Ansh-12345678 | 0.1 | TOPSIS implementation using Python | # Topsis-Ansh-12345678
A Python package implementing the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method.
## Installation
```bash
pip install Topsis-Ansh-12345678
```
## Usage
```bash
topsis <InputDataFile> <Weights> <Impacts> <OutputFile>
```

Example:

```bash
topsis data.csv 1,1,1,2 +,+,-,+ result.csv
```
## Input Requirements
- Minimum 3 columns
- First column: Alternatives (non-numeric allowed)
- Remaining columns: Numeric values only
- Weights separated by commas
- Impacts must be + or -
## Output
Adds:
- Topsis Score
- Rank
Higher score = better alternative
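For intuition, here is a minimal NumPy sketch of the TOPSIS computation the package performs. This is an illustration of the standard algorithm, not this package's internal code; the sample matrix, weights, and impacts are made up.

```python
import numpy as np

def topsis_scores(matrix, weights, impacts):
    """Compute TOPSIS closeness scores for a decision matrix.

    matrix:  (alternatives x criteria) numeric array
    weights: one weight per criterion
    impacts: '+' for benefit criteria, '-' for cost criteria
    """
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # 1. Vector-normalize each column, then apply the weights.
    v = m / np.sqrt((m ** 2).sum(axis=0)) * w
    # 2. Ideal best/worst per column depend on the impact direction.
    is_benefit = np.array(impacts) == '+'
    best = np.where(is_benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(is_benefit, v.min(axis=0), v.max(axis=0))
    # 3. Euclidean distance of each alternative to both ideal points.
    d_best = np.sqrt(((v - best) ** 2).sum(axis=1))
    d_worst = np.sqrt(((v - worst) ** 2).sum(axis=1))
    # 4. Closeness score: higher means closer to the ideal solution.
    return d_worst / (d_best + d_worst)

scores = topsis_scores([[250, 16, 12], [200, 16, 8], [300, 32, 16]],
                       [1, 1, 2], ['-', '+', '+'])
print(scores.argsort()[::-1] + 1)  # alternatives ranked best-first
```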
| text/markdown | Ansh | null | null | null | null | null | [] | [] | null | null | >=3.6 | [] | [] | [] | [
"pandas",
"numpy"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.9 | 2026-02-19T18:07:30.858674 | topsis_ansh_12345678-0.1.tar.gz | 1,701 | 18/d8/e59d4d1901d297fc2f325e3a255dcddcf0ba7951268108a2bf76488a59be/topsis_ansh_12345678-0.1.tar.gz | source | sdist | null | false | 790afbdd6b6dcce0e078ed8faa31c6db | a182fb914dcec51902fedd5fa6450ee457479ab65570080ad9f087732f8e4aa9 | 18d8e59d4d1901d297fc2f325e3a255dcddcf0ba7951268108a2bf76488a59be | null | [
"LICENSE"
] | 0 |
2.1 | notbank | 2.3.0a1 | Notbank API client library | # Notbank Python SDK
[main page](https://notbank.exchange)
[Sign up in Notbank](https://www.cryptomkt.com/account/register).
## Installation
To install Notbank use pip
```bash
pip install notbank
```
## Documentation
This SDK makes use of the Notbank [API](https://apidoc.notbank.exchange).
## Quick start
### Client creation
There are two communication protocols supported by the Notbank client: websocket and REST. Communication via websocket requires a connection and permits subscriptions; other than that, the two are equivalent.
```python
from notbank_python_sdk.requests_models import *
from notbank_python_sdk.client_connection_factory import new_rest_client_connection
from notbank_python_sdk.error import NotbankException
from notbank_python_sdk.notbank_client import NotbankClient
try:
# a rest client via http
rest_connection = new_rest_client_connection()
client = NotbankClient(rest_connection)
except NotbankException as e:
print(e)
```
### Error handling
All internal Notbank client and server errors inherit from `NotbankException`, and all client methods may throw it (e.g. invalid request, request timeout, ...).
```python
# client : NotbankClient : ....
try:
orderbook = client.get_order_book(OrderBookRequest("BTCUSDT", 1, 1))
except NotbankException as e:
print(e)
```
### Put order at the top of book example
```python
import random
from decimal import Decimal

from notbank_python_sdk.requests_models import *
from notbank_python_sdk.notbank_client import NotbankClient
from notbank_python_sdk.client_connection_factory import new_rest_client_connection
account_id: int = 13 # must be user account id
# instantiate client
connection = new_rest_client_connection()
client = NotbankClient(connection)
# authentication (same for rest client or websocket client)
authenticate = client.authenticate(
AuthenticateRequest(
api_public_key="api-public-key",
api_secret_key="api-secret-key",
user_id="user-id",
)
)
if not authenticate.authenticated:
raise Exception("client not authenticated")
# get USDT user balance (also known as position)
positions = client.get_account_positions(
GetAccountPositionsRequest(account_id))
usdt_balance = None
product = "USDT"
market_pair = "BTCUSDT"
for position in positions:
if position.product_symbol == product:
usdt_balance = position
if usdt_balance is None:
raise Exception("user has no balance")
# define order_amount (between all usdt_balance and half usdt_balance)
total_balance = usdt_balance.amount
quantity_to_spend = total_balance - \
Decimal(random.random()) * (total_balance/2)
# define order_price (around market top)
orderbook = client.get_order_book(
OrderBookRequest(market_pair, level=2, depth=5))
top_orderbook = orderbook.bids[0]
delta = Decimal(random.randrange(10, 100))/1000
order_price = top_orderbook.price + delta
order_quantity = quantity_to_spend / order_price
# send order
instrument = client.get_instrument_by_symbol(market_pair)
request = SendOrderRequest(
instrument=instrument,
account_id=account_id,
time_in_force=TimeInForce.GTC,
side=Side.Buy,
quantity=order_quantity,
limit_price=order_price,
order_type=OrderType.Limit,
)
response = client.send_order(request)
# handle order result
if response.status == SendOrderStatus.REJECTED:
# order was rejected
raise Exception("rejected order")
else:
# order was accepted
order_id = response.order_id
print(order_id)
# close client
client.close()
```
### websocket
There are two websocket clients; they can be instantiated with the functions `new_websocket_client_connection` and `new_restarting_websocket_client_connection`.
The restarting websocket will reconnect indefinitely when the connection goes down unexpectedly, re-authenticating if it was authenticated and re-subscribing to already established subscriptions. While reconnecting, calls to the websocket will throw. On reconnection, the snapshot hooks of existing subscriptions are called again.
```python
from notbank_python_sdk.requests_models import *
from notbank_python_sdk.client_connection_factory import new_websocket_client_connection, new_restarting_websocket_client_connection
from notbank_python_sdk.error import NotbankException
from notbank_python_sdk.notbank_client import NotbankClient
try:
# a websocket client
websocket_connection = new_websocket_client_connection()
client = NotbankClient(websocket_connection)
except NotbankException as e:
print(e)
try:
# a restarting websocket client
restarting_websocket_connection = new_restarting_websocket_client_connection()
client = NotbankClient(restarting_websocket_connection)
except NotbankException as e:
print(e)
```
| text/markdown | Notbank | null | null | null | null | api, notbank, cryptomkt, cryptomarket, bitcoin, client, cryptocurrency | [
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3 :: Only",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Software Development :: Libraries :: Python Modules"... | [] | https://github.com/notbank-exchange/notbank-python | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [] | twine/4.0.2 CPython/3.7.16 | 2026-02-19T18:06:48.251065 | notbank-2.3.0a1.tar.gz | 64,340 | 53/67/3f136178340b09876e0030a8406be2ee76a3e1ee235377260ac7ff280d1a/notbank-2.3.0a1.tar.gz | source | sdist | null | false | a988f656420629c6b110cd5f6d5bac57 | a17d7188dab2fc3b152ddb2f9788990bbd7cd46233d92e5be0fa8462c7cf0646 | 53673f136178340b09876e0030a8406be2ee76a3e1ee235377260ac7ff280d1a | null | [] | 205 |
2.4 | policyengine-us-data | 1.69.3 | A package to create representative microdata for the US. | # PolicyEngine US Data
## Installation
While it is possible to install via PyPI:
```bash
pip install policyengine-us-data
```
the recommended installation is an editable install:
```
pip install -e .[dev]
```
which installs the development dependencies and links the package in place, so that changes
to the package code are reflected immediately; `policyengine-us-data` is a development package
and is not intended for direct use.
## SSA Data Sources
The following SSA data sources are used in this project:
- [Latest Trustee's Report (2025)](https://www.ssa.gov/oact/TR/2025/index.html) - Source for `social_security_aux.csv` (extracted via `extract_ssa_costs.py`)
- [Single Year Supplementary Tables (2025)](https://www.ssa.gov/oact/tr/2025/lrIndex.html) - Long-range demographic and economic projections
- [Single Year Age Demographic Projections (2024 - latest published)](https://www.ssa.gov/oact/HistEst/Population/2024/Population2024.html) - Source for `SSPopJul_TR2024.csv` population data
## Building the Paper
### Prerequisites
The paper requires a LaTeX distribution (e.g., TeXLive or MiKTeX) with the following packages:
- graphicx (for figures)
- amsmath (for mathematical notation)
- natbib (for bibliography management)
- hyperref (for PDF links)
- booktabs (for tables)
- geometry (for page layout)
- microtype (for typography)
- xcolor (for colored links)
On Ubuntu/Debian, you can install these with:
```bash
sudo apt-get install texlive-latex-base texlive-latex-recommended texlive-latex-extra texlive-fonts-recommended
```
On macOS with Homebrew:
```bash
brew install --cask mactex
```
### Building
To build the paper:
```bash
make paper
```
To clean LaTeX build files:
```bash
make clean-paper
```
The output PDF will be at `paper/main.pdf`.
## Building the Documentation
### Prerequisites
The documentation uses Jupyter Book 2 (pre-release) with MyST. To install:
```bash
# Install Jupyter Book 2 pre-release
pip install --pre "jupyter-book==2.*"
# Install MyST CLI
npm install -g mystmd
```
### Building
To build and serve the documentation locally:
```bash
cd docs
myst start
```
Or alternatively from the project root:
```bash
jupyter book start docs
```
Both commands will start a local server at http://localhost:3001 where you can view the documentation.
The legacy Makefile command:
```bash
make documentation
```
Note: The Makefile uses the older `jb` command syntax which may not work with Jupyter Book 2. Use `myst start` or `jupyter book start docs` instead.
| text/markdown | null | PolicyEngine <hello@policyengine.org> | null | null | null | null | [
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14.0,>=3.12 | [] | [] | [] | [
"policyengine-us>=1.516.0",
"policyengine-core>=3.23.6",
"pandas>=2.3.1",
"requests>=2.25.0",
"tqdm>=4.60.0",
"microdf_python>=1.2.1",
"setuptools>=60",
"microimpute>=1.1.4",
"pip-system-certs>=3.0",
"google-cloud-storage>=2.0.0",
"google-auth>=2.0.0",
"scipy>=1.15.3",
"statsmodels>=0.14.5",... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:05:57.314943 | policyengine_us_data-1.69.3.tar.gz | 50,930,435 | 71/0f/dd14e8173cb350b650d90b2556523e0c10da00047e013a11454998d5e2e0/policyengine_us_data-1.69.3.tar.gz | source | sdist | null | false | cb1a7db52352b4d0d67797466fee944f | 7352b03fe5ddd13d624c254f943504488e91d8177ca3016368903703234bc9dd | 710fdd14e8173cb350b650d90b2556523e0c10da00047e013a11454998d5e2e0 | null | [] | 243 |
2.4 | cade-cli | 0.5.0 | Cade - The CLI Agent from Arcade.dev | # Cade
An AI-powered CLI agent for coding and everyday tasks. Powered by [Arcade.dev](https://arcade.dev).
## Installation
### Prerequisites
- Python 3.11+
- Arcade account ([arcade.dev](https://arcade.dev))
- AI provider API key: `OPENAI_API_KEY` or `ANTHROPIC_API_KEY`
### Homebrew (macOS/Linux)
```bash
brew tap ArcadeAI/tap
brew install cade
```
### Install with uv
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
uv tool install cade-cli
```
### Install with pip
```bash
pip install cade-cli
```
### From Source
```bash
git clone https://github.com/arcadeai-labs/cade.git
cd cade
uv venv --python 3.11
source .venv/bin/activate
uv sync
```
### Authenticate
```bash
cade login
```
## Usage
### Start a Chat
```bash
cade
```
### Resume a Thread
```bash
cade -r # Resume most recent
cade resume "my-project" # Resume by name
```
### Authentication
Cade uses Arcade Cloud for authentication and shares credentials with arcade-cli.
```bash
cade login # Log in to Arcade Cloud
cade logout # Log out
cade whoami # Show current login status
```
### Context Management
Switch between organizations and projects for Arcade Cloud features.
```bash
cade context show # Show current org/project
cade context list # List available orgs and projects
cade context switch -i # Interactive selection
cade context switch --org my-org --project my-project
```
### Single Message Mode
```bash
cade -m "What files are in this directory?"
cat error.log | cade -m "What went wrong?"
```
### Options
| Option | Description |
|--------|-------------|
| `-r`, `--resume` | Resume the most recent thread |
| `-m`, `--message` | Single message mode (non-interactive) |
| `-L`, `--local-only` | Disable remote tools (use only local tools) |
| `-v`, `--verbose` | Enable debug logging |
| `--version` | Show version |
### In-Chat Commands
| Command | Description |
|---------|-------------|
| `/help` | Show available commands |
| `/logs` | View recent log entries |
| `/clear` | Clear the screen |
| `/copy` | Copy last response to clipboard |
| `Ctrl+C` | Exit |
## Thread Management
```bash
cade thread list # List all threads
cade thread list --branch main # Filter by branch
cade thread get <thread-id> # Get thread details
cade thread get <thread-id> --messages # Show messages
cade thread delete <thread-id> # Delete thread
```
## Tool Management
Tools come from three sources: local, Arcade Cloud, and MCP servers.
```bash
cade tool list # List all tools
cade tool list --source local # Filter by source
cade tool search "file" # Search tools
cade tool info Local_ReadFile # Tool details
```
### Built-in Tools
| Tool | Description |
|------|-------------|
| `Local_ReadFile` | Read file contents |
| `Local_WriteFile` | Write or append to files |
| `Local_ListFiles` | List directory contents |
| `Local_SearchText` | Search for text patterns |
| `Local_ExecuteShell` | Run shell commands |
| `Local_CreateDirectory` | Create directories |
| `Local_DeleteFile` | Delete files |
| `Local_GetGitStatus` | Get git status |
## MCP Servers
Connect to [MCP](https://modelcontextprotocol.io/) servers for extended capabilities.
```bash
cade mcp list # List servers
cade mcp add my-server http://localhost:8080 # Add server
cade mcp add my-server http://localhost:8080 --auth bearer -t <token> # With auth
cade mcp test my-server # Test connection
cade mcp enable my-server # Enable
cade mcp disable my-server # Disable
cade mcp rm my-server # Remove
```
## Configuration
Config is stored in `~/.cadecoder/`:
| File | Description |
|------|-------------|
| `cadecoder.toml` | Settings |
| `cadecoder_history.db` | Thread history |
| `cadecoder.log` | Logs |
| `mcp_servers.yaml` | MCP server configs |
### Environment Variables
| Variable | Description |
|----------|-------------|
| `OPENAI_API_KEY` | OpenAI API key |
| `OPENAI_BASE_URL` | Custom OpenAI-compatible API endpoint |
| `ANTHROPIC_API_KEY` | Anthropic API key |
| `ARCADE_API_KEY` | Arcade API key (alternative to OAuth) |
| `ARCADE_BASE_URL` | Custom Arcade API endpoint |
| `CADE_LOCAL_ONLY` | Set to `1` to disable remote tools |
| `CADECODER_HOME` | Override config directory |
### Example Config
```toml
# ~/.cadecoder/cadecoder.toml
default_model = "gpt-4.1"
debug_mode = false
use_responses_api = true
[responses_config]
enabled = true
streaming_enabled = true
[model_settings]
provider = "openai"
model = "gpt-4.1"
[tool_settings]
# Tool filtering is managed via MCP server configuration
# See: cade mcp add --help
```
## Using Local or Custom LLMs
Cade works with any OpenAI-compatible API, including local servers (Ollama, vLLM, llama.cpp) and alternative cloud providers (Together AI, Groq, Fireworks).
### Local-Only Mode
When using local LLMs, you can skip Arcade Cloud authentication entirely with `--local-only`:
```bash
# Local Ollama server without Arcade Cloud
cade chat --local-only --endpoint "http://localhost:11434/v1" --model "llama3"
# Or via environment variable
CADE_LOCAL_ONLY=1 cade chat --endpoint "http://localhost:11434/v1" --model "llama3"
```
This disables remote tools and uses only local tools. Cade will also gracefully fall back to local-only mode if Arcade Cloud authentication is not configured.
### Via CLI Flags
```bash
# Local Ollama server
cade chat --endpoint "http://localhost:11434/v1" --model "glm-4.7-flash:latest"
# vLLM server
cade chat -e http://localhost:8000/v1 -m mistral-7b
```
### Via Environment Variables
```bash
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama" # Dummy key for local model
cade chat --model glm-4.7-flash:latest
```
### Via Config File
```toml
# ~/.cadecoder/cadecoder.toml
default_model = "glm-4.7-flash:latest"
[model_settings]
host = "http://localhost:11434/v1"
api_key = "ollama"
```
After configuring the TOML file:
```bash
cade chat
```
### `cade chat` Configuration Precedence
Settings are resolved in this order (first is used):
1. CLI flags (`--endpoint`, `--model`)
2. Environment variables (`OPENAI_BASE_URL`, `OPENAI_API_KEY`)
3. Config file (`model_settings.host`, `model_settings.api_key`)
4. Hardcoded defaults
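The precedence above can be expressed as a small resolver. This is an illustrative sketch only: the helper name `resolve_endpoint` and the config dict shape are assumptions for this example, not Cade's actual internals.

```python
# Illustrative sketch of the precedence order listed above.
# resolve_endpoint and the config dict shape are hypothetical, not Cade's API.
def resolve_endpoint(cli_flag=None, env=None, config=None,
                     default="https://api.openai.com/v1"):
    env = env or {}
    config = config or {}
    if cli_flag:                      # 1. CLI flag (--endpoint)
        return cli_flag
    if env.get("OPENAI_BASE_URL"):    # 2. environment variable
        return env["OPENAI_BASE_URL"]
    host = (config.get("model_settings") or {}).get("host")
    if host:                          # 3. config file (model_settings.host)
        return host
    return default                    # 4. hardcoded default
```

For example, a `--endpoint` flag always wins, even when `OPENAI_BASE_URL` is also set.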
## Contributing
### Development Setup
```bash
git clone https://github.com/arcadeai-labs/cade.git
cd cade
uv sync --extra dev
```
### Run Tests
```bash
pytest
ruff check src/
ruff format src/
```
### Code Style
- Python 3.11+ with modern type hints (`dict`, `list`, `| None`)
- Ruff for linting and formatting
- Pytest for testing
- Docstrings for public functions and classes
### Submitting Changes
1. Fork the repository
2. Create a feature branch
3. Make changes with tests
4. Run `ruff check . && pytest`
5. Open a Pull Request
## Resources
- [arcade.dev](https://arcade.dev)
- [Documentation](https://docs.arcade.dev)
- [Issues](https://github.com/arcadeai-labs/cade/issues)
- [Releases](https://github.com/arcadeai-labs/cade/releases)
## License
MIT
| text/markdown | null | "Arcade AI Inc." <dev@arcade.dev> | null | null | MIT License
Copyright (c) 2024 Arcade AI Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | agent, ai, arcade, cli, coding-assistant, llm, mcp | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"T... | [] | null | null | >=3.11 | [] | [] | [] | [
"anthropic<1.0.0,>=0.34.0",
"arcade-core<5.0.0,>=4.1.0",
"arcade-mcp-server>=1.0.0",
"arcade-tdk>=2.0.0",
"authlib<2.0.0,>=1.6.0",
"httpx<1.0.0,>=0.27.0",
"openai<2.0.0,>=1.0.0",
"prompt-toolkit<4.0.0,>=3.0.52",
"pydantic[email]<3.0.0,>=2.0.0",
"pyperclip<2.0.0,>=1.8.0",
"pyyaml<7.0.0,>=6.0",
... | [] | [] | [] | [
"Homepage, https://arcade.dev",
"Documentation, https://docs.arcade.dev",
"Repository, https://github.com/arcadeai-labs/cade",
"Issues, https://github.com/arcadeai-labs/cade/issues",
"Changelog, https://github.com/arcadeai-labs/cade/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:05:33.150233 | cade_cli-0.5.0.tar.gz | 95,896 | 02/22/a34b7c683a1f4bbd908fd2e2f24f672f19554972747fa43dbfcd5fac62a0/cade_cli-0.5.0.tar.gz | source | sdist | null | false | fac0bbcf6ca32df1ca0539c9845bf844 | c42a845e301815525fea6cbc5a32b228ce6d6bc4c0dfa2efcce254f8609024f4 | 0222a34b7c683a1f4bbd908fd2e2f24f672f19554972747fa43dbfcd5fac62a0 | null | [
"LICENSE"
] | 238 |
2.4 | pwrforge | 0.0.2 | C/C++ package and software development life cycle manager. Continuation of scargo project. | # pwrforge
The pwrforge project was written by the PWR team and is a continuation of the Spyrosoft Solutions S.A. scargo project. Find more information at [tft.pwr.edu.pl](https://tft.pwr.edu.pl/).
<p align="center">
<img src="docs/source/_static/pwr_logo_color.png" alt="pwr" width="200"/>
</p>
# Overview
This is the documentation for pwrforge - a Python-based C/C++ package and software development life cycle manager inspired by RUST cargo idea.
pwrforge can:
- Create a new project (binary or library) for embedded systems and x86
- Build the project
- Run static code analyzers
- Fix chosen problem automatically based on the checker analysis
- Run unit tests
- Generate documentation from the source code
- Work with the predefined docker environment depending on the chosen architecture
- Generate mocks and test skeletons
# Installation
## Installing pwrforge on Ubuntu 24.04+ (PEP 668-compliant systems)
Ubuntu 24.04 and newer follow PEP 668, which restricts the use of pip in the system Python environment to prevent accidental damage to system-managed packages.
To safely install pwrforge, use a virtual environment:
```
python3.12 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install pwrforge
```
This ensures isolated and conflict-free usage of pwrforge without requiring elevated privileges or --break-system-packages.
## Install on Ubuntu <= 22.04, Windows, or macOS
pwrforge is available on [pypi](https://pypi.org/project/pwrforge/), so you can install it with pip:
```pip install pwrforge```
If the system does not find the `pwrforge` command after installing, add the installation directory to your PATH. On Ubuntu you can find the installation directory by running:
```$ pip show "pwrforge"```
Then add to PATH e.g.:
```$ export PATH=~/.local/bin:${PATH}```
# Working with pwrforge

# Project dependencies
## Working with docker (recommended)
- docker with docker-compose - https://docs.docker.com/engine/install/ubuntu/
- pip
- python3 - `sudo apt install python3.12-venv python3.12-distutils -y`
# Work environment
You can always switch the work environment between docker and native after the project is created.
Just edit the pwrforge.toml file ([project] -> build-env = "docker" or build-env = "native").
For native builds you may need to install manually the dependencies listed in `.devcontainer/Dockerfile`.
It is recommended to work in a virtual environment (venv) or a conda environment, e.g.:
- `pip install virtualenv`
- `virtualenv -p /usr/bin/python3.12 venv`
- `source venv/bin/activate`
## Working in docker
1) If you create a new project, run `docker compose run pwrforge-dev` to run project development image depending on chosen architecture. All dependencies should be already there.
Run pwrforge commands as you would do natively.
2) If you create a project with --docker flag (`pwrforge new <my_proj> --docker ...`) or with any docker flag, by default each pwrforge command will be triggered in docker.
## Working natively
1) Create a project with --no-docker flag (`pwrforge new <my_proj> --no-docker ...`).
## Create the requirements for docker env
From version 2.3.2, pwrforge is installed in the docker image but overridden by a docker compose volume, so the container uses the version present in your native environment.
During deployment the requirements files are created using the following commands
- `pip-compile --all-extras --output-file=ci/requirements.txt pyproject.toml`
- `pip-compile --output-file=pwrforge/file_generators/templates/docker/requirements.txt.j2 pyproject.toml`
to have all the newest dependencies. This solution allows us to have pwrforge installed in docker for CI/CD and to use the newest features without official releases.
## Testing custom pwrforge generated project locally
You can make changes in pwrforge and install it locally using the ```pip install .``` command from the main project folder.
To test the custom pwrforge version and have it available also inside the docker container (crucial for testing), update docker-compose.yaml in the created project:

```yaml
volumes:
  - ..:/workspace
  - /dev:/dev
  - ~/.local/lib/python3.12/site-packages/pwrforge:/usr/local/lib/python3.12/dist-packages/pwrforge
```

Where ```~/.local/lib/python3.12/site-packages/pwrforge``` is the path to pwrforge on your local machine. If this path does not work, find the installation directory using ```pip show pwrforge```.
To keep this setup between ```pwrforge update``` commands, also update ```update-exclude``` in the pwrforge.toml file as in the following example:

```toml
update-exclude = [".devcontainer/docker-compose.yaml"]
```

To build a wheel locally:

```bash
pip install --upgrade build
python -m build --wheel
```
# Known Issues
## MacOs with ARM processors
- On macOS devices with ARM processors (such as M1 and M3), USB device passthrough to Docker containers is not supported. While most development tasks can be performed within the Docker container, actions that involve direct interaction with USB devices, such as flashing firmware or monitoring hardware, must be executed natively on the host system.
## Windows
- On Windows devices, USB device passthrough is not supported in Docker containers when using Docker Desktop. To work around this limitation, you can use WSL2 (Windows Subsystem for Linux) or run a virtual machine with a Linux distribution like Ubuntu 22.04 to enable USB device access.
# Potential issues
## Docker permissions on Ubuntu
When using the `docker-compose` command, you may encounter permission errors due to insufficient permissions for accessing the Docker daemon socket. To resolve this issue, ensure that your user has the necessary permissions by adding your user to the `docker` group or granting appropriate access rights to the Docker daemon socket.
To add your user to the `docker` group and apply the change, run the following commands:
- `sudo usermod -aG docker $USER`
- `newgrp docker`
- `sudo systemctl restart docker`
# Contributing
See contributing guide on https://pwr.github.io/pwrforge/contributing.html
| text/markdown | Andrzej Aksenczuk | andrzej.aksenczuk@pwr.edu.pl | null | null | Copyright (c) 2025 Wroclaw University of Science and Technology.
Copyright (c) 2022 Spyrosoft Solution S.A.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | firmware, package, embedded, cli | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: POSIX",
"Operating System :: Microsoft :: Windows",
"Operating System :: MacOS :: MacOS X",
"Environment :: Console",
"License :: OSI Approved :: MIT License",
"Programming Langua... | [] | null | null | >=3.10 | [] | [] | [] | [
"Sphinx; extra == \"doc\"",
"black==24.8.0; extra == \"dev\"",
"clang==17.0.6",
"click==8.1.3",
"cmake==3.30.5",
"coloredlogs==15.0.1",
"conan==2.8.1",
"coverage<7.7.0,>=7.6.1; extra == \"dev\"",
"docker==7.1.0",
"esptool==4.7.0",
"flake8>=6.1.0; extra == \"dev\"",
"flit==3.8.0; extra == \"dev... | [] | [] | [] | [
"Documentation, https://pwr.github.io/pwrforge/index.html",
"Source, https://github.com/pwr/pwrforge",
"Tracker, https://github.com/pwr/pwrforge/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T18:05:25.924605 | pwrforge-0.0.2.tar.gz | 2,714,374 | 27/4a/88e55e1c746feb118e45330da18564e328f109bc239fc313a716af39bf67/pwrforge-0.0.2.tar.gz | source | sdist | null | false | 29efd827795cb5d56448aa90722a4675 | 64c90e701df995f7f8c9e28ab5290b576c646b849ff71b3e0f7d12770bcb8420 | 274a88e55e1c746feb118e45330da18564e328f109bc239fc313a716af39bf67 | null | [
"LICENSE"
] | 223 |
2.4 | apprentice-ai | 0.2.0 | Adaptive model distillation with coaching — progressively replace expensive API calls with a fine-tuned local model | # Apprentice
Adaptive model distillation with coaching. Start with frontier API models, progressively train a local model, then withdraw the expensive dependency — while maintaining quality guarantees.
## How It Works
Apprentice manages the full lifecycle of distilling knowledge from remote frontier models (Claude, GPT, etc.) into specialized local models:
1. **Phase 1 — Cold Start**: Every request goes to the remote API. Responses are collected as training data.
2. **Phase 2 — Reinforcement**: The local model begins attempting responses alongside the remote. Outputs are compared via the confidence engine.
3. **Phase 3 — Steady State**: The local model handles most requests. Adaptive sampling periodically checks quality against the remote, adjusting frequency based on correlation.
The caller submits a request and gets a response. They don't know whether it came from a local model, a remote API, or a blend of both.
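The three-phase routing decision described above can be sketched as a small state machine. This is a hypothetical illustration: the names (`Phase`, `decide_route`) and the sampling logic are assumptions for this sketch, not Apprentice's actual API.

```python
# Hypothetical sketch of the phase-based routing described above.
# Phase and decide_route are illustrative names, not the library's API.
from enum import Enum
import random

class Phase(Enum):
    COLD_START = 1      # everything goes remote; responses become training data
    REINFORCEMENT = 2   # local and remote run side by side and are compared
    STEADY_STATE = 3    # mostly local, with adaptive quality sampling

def decide_route(phase: Phase, sample_rate: float) -> str:
    if phase is Phase.COLD_START:
        return "remote"
    if phase is Phase.REINFORCEMENT:
        return "dual"  # both models answer; the confidence engine compares them
    # Steady state: occasionally shadow the remote to re-check correlation
    return "dual" if random.random() < sample_rate else "local"
```

In steady state, `sample_rate` would shrink as the local/remote correlation stays high and grow again if it drops.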
## Installation
```bash
pip install -e .
```
## Quick Start
```python
from apprentice import Apprentice
# Initialize from config (inside an async context, since create is awaited)
app = await Apprentice.create("apprentice.yaml")
# Send a request — routing is automatic
response = await app.run("classify_ticket", {
"text": "My payment didn't go through",
"metadata": {"source": "email"}
})
print(response.result) # {"category": "billing", "priority": 2}
print(response.source) # "local" or "remote" or "dual"
await app.close()
```
## Configuration
See [`examples/apprentice.yaml`](examples/apprentice.yaml) for a complete example. Key sections:
```yaml
tasks:
- name: classify_ticket
prompt_template: "Classify: {text}"
evaluator: structured_match
match_fields: [category, priority]
confidence_thresholds:
phase2: 50 # examples before Phase 2
phase3: 0.85 # correlation for Phase 3
remote:
provider: anthropic
model: claude-sonnet-4-5-20250929
api_key: env:ANTHROPIC_API_KEY
local:
backend: ollama
base_model: llama3.1:8b
budget:
monthly_limit_usd: 150.00
```
## Architecture
25 components organized in two layers — 18 leaf implementations with zero cross-dependencies, wired together by 7 integration compositions:
### Leaf Components
| Component | Purpose |
|-----------|---------|
| `config_loader` | Load and validate YAML configuration |
| `task_registry` | Manage task type definitions and schemas |
| `data_models` | Shared Pydantic models across all components |
| `remote_api_client` | Multi-provider API abstraction (Anthropic, OpenAI, etc.) |
| `local_model_server` | Local model inference (Ollama, vLLM, llama.cpp) |
| `evaluators` | Response quality scoring (exact match, semantic, structured) |
| `phase_manager` | Phase 1/2/3 lifecycle and transitions |
| `rolling_window` | Sliding window correlation tracking |
| `sampling_scheduler` | Adaptive sampling frequency control |
| `training_data_store` | Training example collection and management |
| `fine_tuning_orchestrator` | Fine-tuning pipeline (LoRA, OpenAI, HuggingFace) |
| `model_validator` | Pre-promotion model quality validation |
| `budget_manager` | Multi-window spend tracking and enforcement |
| `router` | Request routing (local, remote, dual) |
| `apprentice_class` | Core Apprentice class — run, status, report |
| `cli` | Command-line interface |
| `audit_log` | Structured event logging (JSONL) |
| `report_generator` | Reports, metrics, and observability |
### Integration Compositions
| Composition | Children | Purpose |
|-------------|----------|---------|
| `config_and_registry` | config_loader, task_registry, data_models | Configuration + type system |
| `confidence_engine` | evaluators, phase_manager, rolling_window | Quality tracking pipeline |
| `external_interfaces` | remote_api_client, local_model_server | External service adapters |
| `training_pipeline` | training_data_store, fine_tuning_orchestrator, model_validator | Training lifecycle |
| `unified_interface` | apprentice_class, cli | User-facing API + CLI |
| `reporting` | audit_log, report_generator | Observability layer |
| `root` | all 6 compositions above | Full system composition root |
## CLI
```bash
apprentice run config.yaml # Start the system
apprentice status config.yaml # Show current phase, confidence, budget
apprentice report config.yaml # Generate summary report
```
## Development
```bash
make dev # Install with dev + lint dependencies
make test # Run all 2,064 tests
make test-quick # Stop on first failure
make lint # Run ruff linter
make lint-fix # Auto-fix lint issues
make clean # Remove build artifacts
```
## Built With
This project was built using [Pact](https://github.com/jmcentire/pact) — a contract-first multi-agent software engineering framework. Pact decomposed the task into 25 components, generated contracts and tests for each, then implemented them using iterative Claude Code sessions that write code, run tests, and fix failures autonomously.
## License
MIT
| text/markdown | Jeremy McEntire | null | null | null | MIT | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"httpx>=0.25.0",
"pydantic>=2.0",
"pyyaml>=6.0",
"pytest-asyncio>=0.21; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"python-dateutil>=2.8; extra == \"dev\"",
"google-cloud-storage>=2.10; extra == \"gke\"",
"kubernetes>=28.0; extra == \"gke\"",
"ruff>=0.4.0; extra == \"lint\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:04:26.826318 | apprentice_ai-0.2.0.tar.gz | 394,850 | 9f/29/6d2480ffe704169ee9c3ab1b9befb9e80b35bb1bcf695dafb1dd7df929c0/apprentice_ai-0.2.0.tar.gz | source | sdist | null | false | 689ea81cda8c3c3c241c939fbded3b8c | 69227ecd89bf89c94d6f97cd3fad2dd693c49543f099abe03ff68441499c7a8c | 9f296d2480ffe704169ee9c3ab1b9befb9e80b35bb1bcf695dafb1dd7df929c0 | null | [
"LICENSE"
] | 248 |
2.4 | bussdcc-system-health | 0.4.0 | bussdcc-system-health | # BussDCC System Health
**bussdcc-system-health** is a reference application demonstrating how to build a real system using **bussdcc** — a deterministic cybernetic runtime for Python.
It monitors host system health and exposes:
* live metrics via a web dashboard
* structured event streams
* historical JSONL logging
* real-time UI updates through WebSockets
The project is intentionally small but complete. It shows how **services, processes, interfaces, and sinks** work together inside a bussdcc runtime.
## Overview
This application collects and visualizes system telemetry:
* CPU usage
* Memory usage
* Disk usage
* System load averages
* CPU temperature
* Network throughput
* Hardware throttling / undervoltage status (Raspberry Pi compatible)
* Host identity information
The runtime emits events continuously, which are:
1. processed into state
2. streamed to a web interface
3. logged to disk
This demonstrates bussdcc’s core pattern:
```
Service → Events → Process → State → Interface → UI
↓
Sinks
```
## Architecture
The project intentionally mirrors bussdcc’s runtime model.
### Services
**`SystemService`**
Runs periodically and emits system telemetry events:
```
system.memory.usage.updated
system.cpu.usage.updated
system.disk.usage.updated
system.temperature.updated
system.network.usage.updated
system.throttling.updated
```
Services are responsible only for **observing the world** and emitting events.
### Processes
**`SystemProcess`**
Consumes events and updates runtime state:
```python
ctx.state.set("system.cpu.usage", evt.data)
```
Processes transform event streams into structured shared state.
### Interface
**`SystemWebInterface`**
* Runs a Flask + Socket.IO server
* Streams runtime events to the browser
* Renders state snapshots on page load
Interfaces expose the system externally without coupling to services.
### Event Sinks
Two sinks demonstrate observability patterns:
#### ConsoleSink
Prints structured JSON events to stdout.
#### JsonlSink
Writes rotating JSONL event logs:
```
data/history/YYYY-MM-DD/HH-MM-SS.jsonl
```
Each line is a single immutable event record.
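The rotation scheme above can be sketched in a few lines. This is an illustrative sketch of the path layout, assuming the names `jsonl_path` and `append_event`; the real `JsonlSink` implementation may differ.

```python
# Illustrative sketch of the JSONL rotation layout shown above;
# jsonl_path and append_event are hypothetical names, not the sink's API.
import json
from datetime import datetime
from pathlib import Path

def jsonl_path(root: str, now: datetime) -> Path:
    # data/history/YYYY-MM-DD/HH-MM-SS.jsonl
    day = now.strftime("%Y-%m-%d")
    stamp = now.strftime("%H-%M-%S")
    return Path(root) / "history" / day / f"{stamp}.jsonl"

def append_event(path: Path, event: dict) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a") as fh:
        fh.write(json.dumps(event) + "\n")  # one immutable event per line
```

Because each line is an independent JSON object, logs can be replayed or filtered with standard tools like `jq`.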
## Dashboard
The web UI provides live system visibility:
* ✅ Health status indicator
* CPU usage breakdown
* Memory & disk utilization
* Load averages
* Network throughput per interface
* Thermal & power throttling detection
Updates occur in real time using Socket.IO events emitted directly from the runtime.
## Installation
Requires Python 3.11+.
```bash
pip install bussdcc-system-health
```
Or install locally:
```bash
pip install -e .
```
## Running
Start the runtime:
```bash
bussdcc-system-health
```
Then open:
```
http://localhost:8086
```
## Example Event Output
Console sink output:
```json
{"time":"2026-01-01T12:00:00Z","name":"system.cpu.usage.updated","data":{"user":12.4,"system":3.1,"idle":84.5}}
```
This illustrates bussdcc’s core idea:
> the system is an event stream first, UI second.
## Project Structure
```
bussdcc_system_health/
├── cli.py # runtime entrypoint
├── runtime/ # custom runtime lifecycle
├── services/ # telemetry collection
├── processes/ # state projection
├── interfaces/ # web UI
└── sinks/ # event logging
```
## What This Example Demonstrates
This project is designed as a learning reference for bussdcc concepts:
| Concept | Demonstrated By |
| --------------------- | ---------------- |
| Deterministic runtime | custom Runtime |
| Periodic services | SystemService |
| Event-driven state | SystemProcess |
| External interfaces | Flask web UI |
| Observability | sinks |
| Real-time updates | Socket.IO bridge |
## Why bussdcc?
Traditional applications couple:
```
logic ↔ UI ↔ IO ↔ background work
```
bussdcc separates responsibilities through events:
```
observe → emit → transform → expose
```
This leads to systems that are:
* easier to reason about
* deterministic
* observable by default
* naturally extensible
## Hardware Notes
Some features depend on Linux system interfaces:
| Feature | Platform |
| -------------------- | ------------------------ |
| CPU temperature | Linux SBC / Raspberry Pi |
| Throttling detection | Raspberry Pi firmware |
| Network metrics | Linux |
The application still runs on non-Pi systems, but certain fields may be unavailable.
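On a Raspberry Pi, the firmware exposes throttling state as a bitfield (conventionally read via `vcgencmd get_throttled`). Decoding it looks roughly like the sketch below; the bit meanings follow Raspberry Pi firmware documentation, and whether this package reads them this way is an assumption:

```python
# Decode the `vcgencmd get_throttled` bitfield, e.g. "throttled=0x50005".
# Low bits are current conditions; bits 16+ are sticky "has occurred" flags.
FLAGS = {
    0: "under-voltage now",
    1: "ARM frequency capped now",
    2: "currently throttled",
    3: "soft temperature limit active",
    16: "under-voltage has occurred",
    17: "ARM frequency capping has occurred",
    18: "throttling has occurred",
    19: "soft temperature limit has occurred",
}


def decode_throttled(raw: str) -> list[str]:
    value = int(raw.split("=", 1)[1], 16)
    return [name for bit, name in FLAGS.items() if value & (1 << bit)]
```

On non-Pi systems the command is absent, which is one way fields end up unavailable.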
## Development
Install dependencies:
```bash
pip install -e .[dev]
```
Run directly:
```bash
python -m bussdcc_system_health.cli
```
## License
MIT License
## Related
* bussdcc runtime: [https://github.com/jbussdieker/bussdcc](https://github.com/jbussdieker/bussdcc)
| text/markdown | null | "Joshua B. Bussdieker" <jbussdieker@gmail.com> | null | "Joshua B. Bussdieker" <jbussdieker@gmail.com> | null | system-health | [
"Topic :: Software Development :: Libraries :: Python Modules",
"Development Status :: 2 - Pre-Alpha",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Operating System :: OS Independent"... | [] | null | null | >=3.11 | [] | [] | [] | [
"bussdcc==0.22.0",
"smbus2",
"psutil",
"flask",
"flask-socketio",
"bootstrap-flask",
"types-Flask-SocketIO; extra == \"dev\"",
"types-psutil; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/jbussdieker/bussdcc-system-health",
"Documentation, https://github.com/jbussdieker/bussdcc-system-health/blob/main/README.md",
"Repository, https://github.com/jbussdieker/bussdcc-system-health",
"Issues, https://github.com/jbussdieker/bussdcc-system-health/issues",
"Changelog, ... | twine/6.2.0 CPython/3.12.12 | 2026-02-19T18:04:20.913697 | bussdcc_system_health-0.4.0.tar.gz | 18,889 | c3/f7/5e569b39722ff2b02de35e78e1eee30bc61b9e9aeb55cf118b5de86addd7/bussdcc_system_health-0.4.0.tar.gz | source | sdist | null | false | 49dce6390f4b5506b371bf36bb25ec69 | 52a954553211915d877dadb0d873812d96a46c8bee7a249cbcbdff4429626cb1 | c3f75e569b39722ff2b02de35e78e1eee30bc61b9e9aeb55cf118b5de86addd7 | MIT | [
"LICENSE"
] | 239 |
2.1 | ntc-templates | 9.0.0 | TextFSM Templates for Network Devices, and Python wrapper for TextFSM's CliTable. | # NTC Templates
<p align="center">
<img src="https://raw.githubusercontent.com/networktocode/ntc-templates/master/docs/images/icon-ntc-templates.png" class="logo" height="200px">
<br>
<a href="https://github.com/networktocode/ntc-templates/actions"><img src="https://github.com/networktocode/ntc-templates/actions/workflows/ci.yml/badge.svg?branch=main"></a>
<a href="https://ntc-templates.readthedocs.io/en/latest"><img src="https://readthedocs.org/projects/ntc-templates/badge/"></a>
<a href="https://pypi.org/project/ntc-templates/"><img src="https://img.shields.io/pypi/v/ntc-templates"></a>
<a href="https://pypi.org/project/ntc-templates/"><img src="https://img.shields.io/pypi/dm/ntc-templates"></a>
<br>
</p>
## Overview
Repository of TextFSM templates for network devices, and a Python wrapper for TextFSM's CliTable. TextFSM is a tool that makes parsing CLI command output more manageable.
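For a feel of what template-driven parsing produces, here is a tiny plain-`re` sketch (not TextFSM or this library's API) that turns `show ip interface brief`-style output into the list-of-dicts shape ntc-templates returns:

```python
import re

SHOW_IP_INT_BRIEF = """\
Interface              IP-Address      OK? Method Status                Protocol
GigabitEthernet0/0     192.0.2.1       YES NVRAM  up                    up
GigabitEthernet0/1     unassigned      YES NVRAM  administratively down down
"""

# One regex per row type; TextFSM templates generalize this into a
# declarative state machine instead of hand-written patterns.
ROW = re.compile(
    r"^(?P<interface>\S+)\s+(?P<ip_address>\S+)\s+\S+\s+\S+\s+"
    r"(?P<status>up|down|administratively down)\s+(?P<protocol>up|down)\s*$"
)


def parse(output: str) -> list[dict]:
    # Each matching line becomes one record, like a TextFSM parsed row.
    return [m.groupdict() for m in map(ROW.match, output.splitlines()) if m]


records = parse(SHOW_IP_INT_BRIEF)
```

The real templates handle vendor quirks, multi-line records, and hundreds of commands; this only illustrates the output shape.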
## Documentation
Full web-based HTML documentation for this library can be found over on the [NTC Templates Docs](https://ntc-templates.readthedocs.io) website:
- [User Guide](https://ntc-templates.readthedocs.io/en/latest/user/lib_overview/) - Overview, Using the library, Getting Started.
- [Administrator Guide](https://ntc-templates.readthedocs.io/en/latest/admin/install/) - How to Install, Configure, Upgrade, or Uninstall the library.
- [Developer Guide](https://ntc-templates.readthedocs.io/en/latest/dev/contributing/) - Extending the library, Code Reference, Contribution Guide.
- [Release Notes / Changelog](https://ntc-templates.readthedocs.io/en/latest/admin/release_notes/).
- [Frequently Asked Questions](https://ntc-templates.readthedocs.io/en/latest/user/faq/).
### Contributing to the Docs
All the Markdown source for the library documentation can be found under the [docs](https://github.com/networktocode/ntc-templates/tree/develop/docs) folder in this repository. For simple edits, a Markdown capable editor is sufficient - clone the repository and edit away.
If you need to view the fully generated documentation site, you can build it with [mkdocs](https://www.mkdocs.org/). A container hosting the docs will be started using the invoke commands (details in the [Development Environment Guide](https://ntc-templates.readthedocs.io/en/latest/dev/dev_environment/#docker-development-environment)) on [http://localhost:8001](http://localhost:8001). As your changes are saved, the live docs will be automatically reloaded.
Any PRs with fixes or improvements are very welcome!
## Questions
For any questions or comments, please check the [FAQ](https://ntc-templates.readthedocs.io/en/latest/user/faq/) first. Feel free to also swing by the [Network to Code Slack](https://networktocode.slack.com/) (channel `#networktocode`), sign up [here](http://slack.networktocode.com/) if you don't have an account.
## Additional Automation Resources
There are situations where one solution or tool might not fulfill needs or as well as another. Fortunately there are often alternatives and the [Awesome Network Automation](https://github.com/networktocode/awesome-network-automation) list can help introduce you to additional resources and solutions!
| text/markdown | Network to Code | info@networktocode.com | null | null | Apache-2.0 | textfsm, network parsers | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming L... | [] | https://ntc-templates.readthedocs.io | null | <4.0,>=3.10 | [] | [] | [] | [
"textfsm>=1.1.0"
] | [] | [] | [] | [
"Documentation, https://ntc-templates.readthedocs.io",
"Repository, https://github.com/networktocode/ntc-templates"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:04:17.290837 | ntc_templates-9.0.0.tar.gz | 322,291 | db/9f/92d8d13e21d9b3dab816e47cf3383939ce526429fa2ef0a5b23fccfdde9c/ntc_templates-9.0.0.tar.gz | source | sdist | null | false | f9403e5374cbc47ea560b51c20c87cf6 | eb4a6f55a02be3db284e4da470907f52f0dd04ef2c721e6f031c2128c9fbcb98 | db9f92d8d13e21d9b3dab816e47cf3383939ce526429fa2ef0a5b23fccfdde9c | null | [] | 32,779 |
2.4 | solids | 0.3.5 | Solids: A tool for crystal structure prediction | # SolidASE_0.0
Crystal Structure Prediction for web-based frameworks
This code is intended to explore the energy landscape of crystalline structures using Python and its libraries such as ASE, PyXtal, Dscribe, and Aegon.
Solids relies on two separate schemes: the Stochastic Algorithm and the Evolutionary Algorithm.
The Stochastic Algorithm builds and relaxes a set of point-group-based structures to preliminarily explore the energy landscape of crystalline structures. Because this process is staged, the level of theory can be refined after each stage.
The Evolutionary Algorithm improves on the Stochastic one by transmitting already-available good structural traits to new generations using crossover and mutation operators. Each set of crossovers and mutants is relaxed and, in turn, passes on its characteristics to new candidates until the halting criteria are met.
In version 1.0, Solids interfaces with ESM optimization via ASE, GULP, and VASP.
## Usage
To install on UNIX-based systems, use `pip install solids`.
Once installed, run `./run_solids.py input_emt` to create the inputEMT file.
Then execute the code by typing `./run_solids.py inputEMT`.
| text/markdown | null | Carlos Lopez-Castro <filiberto.ortiz@cinvestav.mx> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"ase==3.23",
"aegon==1.2.8",
"pyxtal",
"dscribe"
] | [] | [] | [] | [
"Homepage, https://github.com/Carloast0790/SolidASE_0.0"
] | twine/6.2.0 CPython/3.10.18 | 2026-02-19T18:04:09.837127 | solids-0.3.5.tar.gz | 27,877 | a0/d9/813f4f2a3f8b71907883f7075dc887acd8311398e1a2cb28a8e1f95722f1/solids-0.3.5.tar.gz | source | sdist | null | false | 818ca0aeb4a923911d326d9517b144b5 | 4338d1fb6870fdf15893413b464d7e863751ee6d74d89ecf18eadedce141ff3b | a0d9813f4f2a3f8b71907883f7075dc887acd8311398e1a2cb28a8e1f95722f1 | MIT | [
"LICENSE"
] | 243 |
2.4 | o1twosum | 0.1.4 | Algorithmic O(1) streaming Two Sum solver using bitwise intersection. | # O1TwoSum
An experimental Python library achieving algorithmic O(1) Time and Space complexity for the Two Sum problem over data streams.
*Note: Limited to integers 0-50. Leverages parallel bitwise intersection. While algorithmic O(1), it relies on Python's arbitrary-precision integers under the hood.*
### Usage
```python
from o1twosum import O1TwoSum
system = O1TwoSum()
system.ingest(2)
system.ingest(11)
print(system.query(13))  # Output: (2, 11)
```
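The "parallel bitwise intersection" can be sketched as follows. This is a guess at the underlying idea (the class and its details are assumptions), not the library's actual code:

```python
class BitmaskTwoSum:
    """Sketch: one arbitrary-precision int records every value seen."""

    def __init__(self, max_value=50):
        self.max_value = max_value
        self.seen = 0  # bit i set <=> integer i has been ingested

    def ingest(self, value):
        if not 0 <= value <= self.max_value:
            raise ValueError("only integers in 0..max_value are supported")
        self.seen |= 1 << value

    def query(self, target):
        width = target + 1
        low = self.seen & ((1 << width) - 1)         # keep bits 0..target
        mirrored = int(f"{low:0{width}b}"[::-1], 2)  # bit a -> bit (target - a)
        hits = low & mirrored                        # a survives iff target-a also seen
        if hits == 0:
            return None
        a = (hits & -hits).bit_length() - 1          # lowest matching value
        if 2 * a == target:                          # a bitmask cannot count duplicates
            return None
        return (a, target - a)
```

Each operation touches a fixed number of machine-word-sized chunks only because the domain is capped at 0-50; that cap is what makes the "O(1)" claim algorithmic rather than general.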
| text/markdown | Praveen SP | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T18:03:56.702587 | o1twosum-0.1.4.tar.gz | 2,240 | 85/24/787fe3a88f28651c59d32c96be6a7c5d640a466e45ba4174b7a160cbcae0/o1twosum-0.1.4.tar.gz | source | sdist | null | false | 8e71f92932f26e2cd87d976d4f5f266b | cc2ffe8298e8fadb73bc6d9ae89f69d4cdb80ae4c633817ed057db4a8d2387b4 | 8524787fe3a88f28651c59d32c96be6a7c5d640a466e45ba4174b7a160cbcae0 | null | [] | 234 |
2.4 | leeroopedia-mcp | 0.1.4 | MCP server for Leeroopedia ML/AI knowledge search | # Leeroopedia MCP Server
<p align="center">
<strong>Give your AI coding agent access to best-practices of ML and AI.</strong>
</p>
<p align="center">
<a href="https://pypi.org/project/leeroopedia-mcp/"><img src="https://img.shields.io/pypi/v/leeroopedia-mcp?color=blue" alt="PyPI"></a>
<a href="https://discord.gg/hqVbPNNEZM"><img src="https://dcbadge.limes.pink/api/server/hqVbPNNEZM?style=flat" alt="Discord"></a>
<a href="https://github.com/Leeroo-AI/leeroopedia-mcp"><img src="https://img.shields.io/github/commit-activity/m/Leeroo-AI/leeroopedia-mcp" alt="GitHub commit activity"></a>
<a href="https://www.ycombinator.com/companies/leeroo"><img src="https://img.shields.io/badge/Y%20Combinator-X25-orange?logo=ycombinator&logoColor=white" alt="Y Combinator X25"></a>
</p>
---
> **$20 free credit on sign-up** : that's plenty of searches, plans, and diagnoses. Skip the guesswork on your next fine-tuning run or inference deployment. No credit card required. [Get your API key →](https://app.leeroopedia.com)
## What is Leeroopedia?
**Your ML & AI Knowledge Wiki.** Learnt by AI, built by AI, for AI.
Expert-level knowledge across the full ML & AI stack — from fine-tuning and distributed training, to inference serving and GPU kernel optimization, to building agents and RAG pipelines. **1000+ frameworks and libraries**, all in one place.
This MCP server turns your AI coding agent (Claude Code, Cursor) into an ML/AI expert engineer.
Browse the full knowledge base at [leeroopedia.com](https://leeroopedia.com).
### Want to go end-to-end?
Leeroopedia gives your agent the **knowledge**. [**Kapso**](https://github.com/leeroo-ai/kapso) gives it the **ability to act on it** : research, experiment, and deploy. Together: a complete ML/AI engineer agent.
## Benchmarks
We measured the effect of Leeroopedia MCP on real ML tasks built by Claude Code.
- **ML Inference Optimization** — Write CUDA/Triton kernels for 10 KernelBench problems. **2.11x** geomean speedup vs 1.80x (**+17%**), with/without Leeroopedia MCP. [→ results](examples/ml_inference_optimization/)
- **LLM Post-Training** — End-to-end SFT + DPO + LoRA merge + vLLM serving + IFEval on 8×A100. **21.3 vs 18.5** IFEval strict-prompt accuracy, **34.6 vs 30.9** strict-instruction accuracy, **272.7 vs 231.6** throughput. [→ results](examples/llm_post_training/)
- **Self-Evolving RAG** — Build a RAG service that automatically improves itself over multiple rounds. **45.16 vs 40.51** Precision@5, **40.32 vs 35.29** Recall@5, in **52 vs 62 min** wall time. [→ results](examples/self_evolve_rag/)
- **Customer Support Agent** — Multi-agent triage system classifying 200 tickets into 27 intents. **98 vs 83** benchmark performance, **11s vs 61s** per query. [→ results](examples/customer_support_agent/)
## Quick Start
### 1. Install
No installation needed if you have [uv](https://docs.astral.sh/uv/). The MCP configs below use `uvx` to auto-download and run.
**Alternative** (manual install):
```bash
pip install leeroopedia-mcp
```
### 2. Get Your API Key
1. Go to [app.leeroopedia.com](https://app.leeroopedia.com)
2. Create an account or log in
3. Navigate to **Dashboard > API Keys**
4. Copy your API key (format: `kpsk_...`)
### 3. Configure Claude Code
Add to your `~/.claude.json` or project `.mcp.json`:
```json
{
"mcpServers": {
"leeroopedia": {
"command": "uvx",
"args": ["leeroopedia-mcp"],
"env": {
"LEEROOPEDIA_API_KEY": "kpsk_your_key_here"
}
}
}
}
```
### 4. Configure Cursor
Add to your Cursor settings (`.cursor/mcp.json`):
```json
{
"mcpServers": {
"leeroopedia": {
"command": "uvx",
"args": ["leeroopedia-mcp"],
"env": {
"LEEROOPEDIA_API_KEY": "kpsk_your_key_here"
}
}
}
}
```
## Available Tools
The MCP server provides **8 agentic tools**. Each tool (except `get_page`) triggers an AI agent on the backend that searches the knowledge base from multiple angles, reads relevant pages, and synthesizes a structured response.
### Search & Retrieve
<details>
<summary><b><code>search_knowledge</code></b> — Search the KB for framework docs, APIs, and best practices</summary>
<br>
An AI agent synthesizes a grounded answer with `[PageID]` citations.
| Parameter | Required | Description |
|-----------|----------|-------------|
| `query` | Yes | What you want to find out |
| `context` | No | Optional context about what you're building |
</details>
<details>
<summary><b><code>get_page</code></b> — Retrieve a specific KB page by ID</summary>
<br>
Direct lookup — no AI agent needed. Use this to drill into `[PageID]` citations from other tools.
| Parameter | Required | Description |
|-----------|----------|-------------|
| `page_id` | Yes | Exact page ID (e.g., `Workflow/QLoRA_Finetuning`, `Principle/LoRA_Rank_Selection`) |
</details>
### Plan & Review
<details>
<summary><b><code>build_plan</code></b> — Build a step-by-step ML execution plan</summary>
<br>
Returns an overview, key specs, numbered steps, and validation criteria — all grounded in KB evidence.
| Parameter | Required | Description |
|-----------|----------|-------------|
| `goal` | Yes | What you want to accomplish |
| `constraints` | No | Constraints or requirements (e.g., hardware limits, time budget) |
</details>
<details>
<summary><b><code>review_plan</code></b> — Review a plan against KB best practices</summary>
<br>
Catches incorrect assumptions before you write code. Returns approvals, risks, and improvement suggestions.
| Parameter | Required | Description |
|-----------|----------|-------------|
| `proposal` | Yes | The plan or proposal to review |
| `goal` | Yes | The intended goal of the plan |
</details>
### Verify & Debug
<details>
<summary><b><code>verify_code_math</code></b> — Verify code against ML/math concepts</summary>
<br>
Checks your code against documented behavior and reference implementations. Returns a Pass/Fail verdict with analysis.
| Parameter | Required | Description |
|-----------|----------|-------------|
| `code_snippet` | Yes | The code to verify |
| `concept_name` | Yes | The mathematical/ML concept being implemented |
</details>
<details>
<summary><b><code>diagnose_failure</code></b> — Diagnose training/deployment failures</summary>
<br>
Matches symptoms against known failure patterns and misconfigurations. Returns diagnosis, fix steps, and prevention advice.
| Parameter | Required | Description |
|-----------|----------|-------------|
| `symptoms` | Yes | Description of the failure symptoms |
| `logs` | Yes | Relevant log output or error messages |
</details>
### Explore & Optimize
<details>
<summary><b><code>propose_hypothesis</code></b> — Propose ranked next-step hypotheses</summary>
<br>
When you're stuck, get alternative approaches ranked by fit — backed by documented patterns. Returns ranked ideas with rationale and suggested experiments.
| Parameter | Required | Description |
|-----------|----------|-------------|
| `current_status` | Yes | Where the project stands now |
| `recent_experiments` | No | Description of recent experiments and their outcomes |
</details>
<details>
<summary><b><code>query_hyperparameter_priors</code></b> — Query hyperparameter values, ranges & heuristics</summary>
<br>
Start with battle-tested defaults instead of guessing. Returns a suggestion table with KB-grounded justification.
| Parameter | Required | Description |
|-----------|----------|-------------|
| `query` | Yes | Hyperparameter question (e.g., "learning rate for LoRA fine-tuning Llama-3 8B") |
</details>
## Environment Variables
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `LEEROOPEDIA_API_KEY` | Yes | — | Your Leeroopedia API key |
| `LEEROOPEDIA_API_URL` | No | `https://api.leeroopedia.com` | API endpoint |
| `LEEROOPEDIA_POLL_MAX_WAIT` | No | `300` | Max seconds to wait for a search task |
| `LEEROOPEDIA_POLL_INTERVAL` | No | `0.5` | Initial poll interval in seconds (grows via backoff) |
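The two poll settings interact roughly as in this sketch; the growth factor and cap are assumptions, since only the max-wait and initial-interval knobs are documented:

```python
import time


def poll_with_backoff(fetch, max_wait=300.0, interval=0.5, factor=2.0, cap=10.0):
    """Poll fetch() until it returns a non-None result or max_wait elapses.

    max_wait and interval mirror LEEROOPEDIA_POLL_MAX_WAIT and
    LEEROOPEDIA_POLL_INTERVAL; factor and cap are illustrative guesses.
    """
    deadline = time.monotonic() + max_wait
    delay = interval
    while time.monotonic() < deadline:
        result = fetch()
        if result is not None:
            return result
        # Never sleep past the deadline.
        time.sleep(min(delay, max(0.0, deadline - time.monotonic())))
        delay = min(delay * factor, cap)  # exponential backoff, capped
    raise TimeoutError("search task did not complete within max_wait")
```

Raising `LEEROOPEDIA_POLL_MAX_WAIT` gives slow searches more room; raising `LEEROOPEDIA_POLL_INTERVAL` reduces request volume at the cost of latency.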
## Troubleshooting
| Error | Fix |
|-------|-----|
| `LEEROOPEDIA_API_KEY is required` | Set `LEEROOPEDIA_API_KEY` in your MCP config `env` block |
| `Invalid or revoked API key` (401) | Re-copy your key from [app.leeroopedia.com](https://app.leeroopedia.com) |
| `Insufficient credits` (402) | Purchase more credits at [app.leeroopedia.com](https://app.leeroopedia.com) |
| `Rate limit exceeded` (429) | Wait for the retry period before making more requests |
| `Search timed out` (504) | Try a more specific query, or increase `LEEROOPEDIA_POLL_MAX_WAIT` |
## Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details on how to get started.
This project follows our [Code of Conduct](CODE_OF_CONDUCT.md).
## License
This project is licensed under the [MIT License](LICENSE).
| text/markdown | null | Kapso Team <team@kapso.dev> | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"mcp>=1.0.0",
"httpx>=0.27.0"
] | [] | [] | [] | [
"Homepage, https://leeroopedia.com",
"Documentation, https://docs.leeroopedia.com"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-19T18:03:47.391658 | leeroopedia_mcp-0.1.4.tar.gz | 16,367 | 4a/04/feb035d36305d68523ecc43d8c9cb96d7d0596e6373a8e72ffc677c8fe7f/leeroopedia_mcp-0.1.4.tar.gz | source | sdist | null | false | b7b570fe3eae183d36d75127ee033218 | b5de002e4390eabca071ed3cdda0b4bbeaa5c6333583f34432904be4edd562f6 | 4a04feb035d36305d68523ecc43d8c9cb96d7d0596e6373a8e72ffc677c8fe7f | null | [
"LICENSE"
] | 239 |
2.4 | nsj-integracao-api-client | 2.2502.0.30 | Client em Python desenvolvido para facilitar a extração de dados de bancos de dados de ERPs e a integração com as APIs de integração da Nasajon. Ele automatiza o fluxo de comunicação entre sistemas, oferecendo funcionalidades para consulta, transformação e envio de dados de maneira eficiente. | # Integracao api client
Cliente em Python desenvolvido para facilitar a extração de dados de bancos de dados de ERPs e a integração com as APIs de integração da Nasajon. Permite que se extraia dados do ERP e efetue integrações na base do bancos web para um tenant previamente configurado, para isso, utiliza-se do mesmo mecanismo de identificação da sincronia pelo Symmetrics, onde um token de tenant é utilizado para prover a comunicação.
---
## Overview
The client provides a console mode exposing its main operations:
* [Installation](#installation)
* [Initial Load](#initial-load)
* [Integration](#integration)
* [Reload](#reload)
* [Integrity Check](#integrity-check)
### Installation
Using an **activation key**, configures the integration and lets you choose which Business Groups will be integrated. The activation key can be obtained under Directory -> Tenants, following the same flow as the synchronization activation key. If a synchronization is already installed, the same key is used in the process.
### Initial Load
Performs the initial load of all data from the entities registered in the client, for the selected business groups, to the web APIs.
### Integration
Considering the entities with data pending submission, selects the data of the configured groups and sends it to Nasajon's integration APIs.
### Reload
Same principle as the initial load, selecting only the entities you want to reload.
### Integrity Check
Compares the environment data in the API against the local data. The output makes it possible to identify the differences and correct them by sending the data to the integration API.
---
## Architecture
The diagram below shows the architecture of the integration system, highlighting the main components and their interactions:
1. **ERP User**: Configures and monitors the integration process.
2. **Nasajon ERP Environment**:
- **ERP Database**: Contains the ERP's operational data.
- **Integrator**: Local Python application that extracts data from the ERP and sends it to the web environment.
3. **Nasajon WEB Environment**:
- **Web Integration APIs**: Receive the data sent by the integrator.
- **Web Database**: Stores the data processed by the APIs.
4. **Entities Library**: Models the data to be extracted and sent, ensuring consistency between the integrator and the APIs.
The relationships between the components show how data flows from the ERP to the web environment, passing through the integrator and using the entities library for standardization.
```mermaid
%%{init: {'theme':'default'}}%%
C4Container
Person(usuario, "ERP User", "Configures and runs the integration.")
System_Boundary(erp, "Nasajon ERP Environment") {
ContainerDb(erp_db, "ERP Database", "PostgreSQL", "Contains the ERP's operational data")
Container(integrador, "Integrator", "Local Python Application", "Extracts data from the ERP and sends it to the Web")
}
System_Boundary(web, "Nasajon WEB Environment") {
System_Ext(apis, "Web Integration APIs", "Receive the data sent by the integrator")
ContainerDb_Ext(db_web, "Web Database", "Stores the data processed by the APIs")
}
Container(entidades_lib, "Entities Library", "Python", "Models the data to be extracted and sent")
Rel(usuario, integrador, "Configures and monitors")
Rel(integrador, erp_db, "Reads data to integrate")
Rel(integrador, entidades_lib, "Entities modeled for integration")
Rel(integrador, apis, "Sends entities via HTTP")
Rel(apis, db_web, "Persists received data")
Rel(apis, entidades_lib, "Entities modeled for integration")
```
### Components
**Nasajon ERP**
* Application with a Postgres database, where a customer's data resides.
**Web Databases**
* Multi-tenant Postgres database that stores the data consumed by the web applications.
**Integrator**
* Captures data from the ERP and sends it to the integration APIs, using **entity libraries**.
**Integration APIs**
* Receive the data sent by the integrator and store it in the web database. There may be several APIs, one per Tribe, for example [Integração Pessoas API](https://github.com/Nasajon/integracao-pessoas-api).
**Entities Library**
* Models the data that will be captured and sent to the integration APIs. The integrator and the APIs use the same entities library ([nsj_integracao_api_entidades](https://github.com/Nasajon/nsj_integracao_api_entidades)) to guarantee data integrity. Built with the [nsj_rest_lib](https://github.com/Nasajon/nsj_rest_lib) library.
### Data model
| Module | Description |
|---------------------------------|---------------------------------------------|
| `util.entidades_integracao` | Manages the entities to be integrated. Populated by the `TRG_registra_entidade_integracao` trigger and the `util.registra_entidade_integracao()` function. |
| `util.grupos_empresariais_integracao` | Manages the business groups for integration. |
## Running locally
To run locally, have the following available:
* A local Nasajon ERP database;
* Copy the `env.dist` file to `.env` and fill in the connection details for the local database;
* Use the commands available in the Makefile to run the operations.
> You can use either the QA/DEV APIs or spin up a local API instance. To choose, pass the `--env=local|dev|qa|prod` parameter at execution time.
> If you want to run the APIs locally, you will need a bancosweb test database and to run the API project locally, such as [integracao-pessoas-api](https://github.com/Nasajon/integracao-pessoas-api).
## Distribution
There is a version of JobManager in which this library is distributed as a Job, available both for scheduling and for execution via the [`run_job`](https://github.com/Nasajon/Jobmanager?#integracao_apis) method.
### FAQ
#### How do I add or change entities in the integration?
To add or change entities in the integration, follow this procedure:
1. **Identify the Entity**
Check which entity needs to be added or changed. Make sure it is properly modeled in the [entities library](https://github.com/Nasajon/nsj_integracao_api_entidades), following the rest_lib convention.
1. **Update the Entities Library**
If necessary, update the entities library ([nsj_integracao_api_entidades](https://github.com/Nasajon/nsj_integracao_api_entidades)) in [requirements.txt](requirements.txt) to include the new version of the library.
Update the [entity list in the Integrator](./src/nsj_integracao_api_client/service/integrador.py#L45), respecting the foreign-key dependency order.
1. **Configure it in the ERP Database**
Add or update the records in the `util.entidades_integracao` table so the new entity is recognized by the integrator. Create a trigger for the `util.registra_entidade_integracao()` function pointing at the entity.
```sql
CREATE TRIGGER "TRG_registra_entidade_integracao" AFTER INSERT OR UPDATE ON esquema.tabela FOR EACH ROW EXECUTE PROCEDURE util.registra_entidade_integracao();
```
1. **Test Locally**
Run the integrator locally to check that the new entity is being captured and sent correctly to the APIs. Use the appropriate Makefile command, such as `make carga-inicial`.
1. **Validate it in the Integration API**
Make sure the submitted data is being processed correctly by the integration API. Check the logs and the web database to confirm.
1. **Update the Documentation**
Document the new entity and any changes made, so the team is aware of them.
1. **Distribute the Changes**
If you are using JobManager, update the corresponding job to include the new entity or the changes made.
> **Tip:** Always keep a QA environment to validate changes before applying them in production.
# Notes
> On Linux, install Designer with:
```sh
sudo apt install qttools5-dev-tools
sudo apt install pyqt5-dev-tools
sudo apt install qt5-base-dev
sudo apt install qt5-tools-dev
```
Reference: https://gist.github.com/r00tdaemon/1dcd57542bdaf3c9d1b0dd526ccd44ff
| text/markdown | Nasajon Sistemas | contact.dev@nasajon.com.br | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries",
"Programming Language :: Python :: 3"
] | [] | https://github.com/Nasajon/nsj_integracao_api_client | null | >=3.4 | [] | [] | [] | [
"nsj-integracao-api-entidades==1.0.0a25-48",
"colorama==0.4.6",
"tzdata==2025.1",
"PyQt5==5.15.11",
"sentry-sdk==1.7.1",
"psutil>=5.9.0"
] | [] | [] | [] | [
"Source, https://github.com/Nasajon/nsj_integracao_api_client"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T18:03:14.541539 | nsj_integracao_api_client-2.2502.0.30.tar.gz | 79,932 | c0/aa/8070d0877f8aecd9be79b54db9177728e854e198d746c90ae3f18240ed3d/nsj_integracao_api_client-2.2502.0.30.tar.gz | source | sdist | null | false | 0b5ba481beee668cc93c734a61d992f8 | 15016c1f5a81a14fda6ffa602d80de11cdaf8eecba0bf06baa7c3d216fc8fe67 | c0aa8070d0877f8aecd9be79b54db9177728e854e198d746c90ae3f18240ed3d | null | [] | 237 |
2.4 | django-cloudflare | 1.0.0 | A middleware for Django applications using Cloudflare as a proxy. Allows you to extract and access CF headers. | [](https://github.com/tomwojcik/django-cloudflare/actions/workflows/ci.yml)
[](https://pypi.org/project/django-cloudflare/)
[](https://pypi.org/project/django-cloudflare/)
[](https://django-cloudflare.readthedocs.io/en/latest/)
# django-cloudflare
A reusable Django middleware that allows you to easily extract Cloudflare headers from incoming requests.
Resources:
* **Source**: https://github.com/tomwojcik/django-cloudflare
* **Documentation**: https://django-cloudflare.readthedocs.io/
* **Changelog**: https://django-cloudflare.readthedocs.io/en/latest/changelog/
## Supported headers
| Cloudflare header | Default attribute | Setting to enable |
|---|---|---|
| `Cdn-Loop` | `request.cf_cdn_loop` | `CF_HEADER_CDN_LOOP_ENABLED` |
| `Cf-Connecting-Ip` | `request.cf_ip` | `CF_HEADER_IP_ENABLED` |
| `Cf-Connecting-Ipv6` | `request.cf_ipv6` | `CF_HEADER_IPV6_ENABLED` |
| `Cf-Ipcountry` | `request.cf_country` | `CF_HEADER_COUNTRY_ENABLED` |
| `Cf-Ray` | `request.cf_ray` | `CF_HEADER_RAY_ENABLED` |
| `Cf-Visitor` | `request.cf_visitor` | `CF_HEADER_VISITOR_ENABLED` |
| `Cf-Warp-Tag-Id` | `request.cf_warp_tag` | `CF_HEADER_WARP_TAG_ENABLED` |
| `X-Forwarded-For` | `request.cf_forwarded_for` | `CF_HEADER_FORWARDED_FOR_ENABLED` |
| `X-Forwarded-Proto` | `request.cf_forwarded_proto` | `CF_HEADER_FORWARDED_PROTO_ENABLED` |
All headers are disabled by default. Each attribute name can be customized via the corresponding `*_ATTR_NAME` setting.
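For instance, renaming `request.cf_ip` to `request.client_ip` would plausibly look like this in `settings.py` (the exact `CF_HEADER_IP_ATTR_NAME` setting name is inferred from the `*_ATTR_NAME` pattern above; check the documentation before relying on it):

```python
# settings.py (sketch): expose the Cloudflare IP under a custom attribute name.
# CF_HEADER_IP_ATTR_NAME is inferred from the `*_ATTR_NAME` pattern, not verified.
CF_HEADER_IP_ENABLED = True
CF_HEADER_IP_ATTR_NAME = "client_ip"  # views would then read request.client_ip
```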
## Installation
```bash
pip install -U django-cloudflare
```
## Usage
Add the middleware and enable the headers you need in your `settings.py`:
```python
MIDDLEWARE += ["django_cloudflare.CloudflareMiddleware"]
CF_HEADER_IP_ENABLED = True
CF_HEADER_COUNTRY_ENABLED = True
```
Then access the values on the request object:
```python
def my_view(request):
ip = request.cf_ip
country = request.cf_country
```
Both sync and async views are supported.
## Requirements
Python 3.10+, Django 4.2+
| text/markdown | null | Tom Wojcik <django-cloudflare-pkg@tomwojcik.com> | null | null | MIT | cloudflare, django | [
"Development Status :: 5 - Production/Stable",
"Framework :: Django",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.1",
"Framework :: Django :: 5.2",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: System Administrators",
"License :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"django>=4.2"
] | [] | [] | [] | [
"Homepage, https://github.com/tomwojcik/django-cloudflare",
"Repository, https://github.com/tomwojcik/django-cloudflare",
"Documentation, https://django-cloudflare.readthedocs.io/",
"Issues, https://github.com/tomwojcik/django-cloudflare/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T18:02:10.272841 | django_cloudflare-1.0.0.tar.gz | 64,182 | c1/15/0ce3e90ca7992287f191436c335c1d11c915254ce1101c935857e09d943e/django_cloudflare-1.0.0.tar.gz | source | sdist | null | false | 966eab26df776b45023898b226c935fa | f6d60184cf7c6486dd15d9cf87f0cf8ec790428e2a2ba0f84fcd7d28baf5132b | c1150ce3e90ca7992287f191436c335c1d11c915254ce1101c935857e09d943e | null | [
"LICENSE"
] | 297 |
2.4 | tm-ai | 1.6.2 | Time Machine for AI Agents — Cognitive Version Control for LLM context | <div align="center">
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- HEADER & LOGO -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://img.shields.io/badge/%E2%8F%B3-CVC-8B0000?style=for-the-badge&labelColor=000000&logoColor=white">
<source media="(prefers-color-scheme: light)" srcset="https://img.shields.io/badge/%E2%8F%B3-CVC-8B0000?style=for-the-badge&labelColor=FFFFFF&logoColor=black">
<img alt="CVC Logo" src="https://img.shields.io/badge/%E2%8F%B3-CVC-8B0000?style=for-the-badge&labelColor=000000&logoColor=white" width="200">
</picture>
# 🧠 **CVC** — Cognitive Version Control
### *Time Machine for AI Agents*
> Git for code. **CVC for context.**
> Save. Branch. Rewind. Merge. — **Your AI agent just got an undo button.**
<br>
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- QUICK INSTALL -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
```bash
# 🎧 macOS / Linux / WSL
curl -fsSL https://jaimeena.com/cvc/install.sh | bash
# 🪟 Windows PowerShell
irm https://jaimeena.com/cvc/install.ps1 | iex
```
<br>
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- BADGES -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
[](https://pypi.org/project/tm-ai/)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/)
[](https://github.com/mannuking/CVC)
[](https://pypi.org/project/tm-ai/)
[](http://makeapullrequest.com)
[](https://github.com/psf/black)
[](https://github.com/mannuking/CVC)
[](https://jaimeena.com/cvc)
<br>
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- SOCIAL & STATS -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
[](https://github.com/mannuking/CVC/stargazers)
[](https://github.com/mannuking/CVC/network/members)
[](https://github.com/mannuking/CVC/watchers)
[](https://github.com/mannuking/CVC/graphs/contributors)
<br>
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- NAVIGATION MENU -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<table>
<tr>
<td align="center"><a href="#-why-cvc"><b>Why CVC?</b></a></td>
<td align="center"><a href="#-features"><b>Features</b></a></td>
<td align="center"><a href="#-quick-start"><b>Quick Start</b></a></td>
<td align="center"><a href="#-documentation"><b>Docs</b></a></td>
<td align="center"><a href="#-architecture"><b>Architecture</b></a></td>
</tr>
<tr>
<td align="center"><a href="#-integrations"><b>Integrations</b></a></td>
<td align="center"><a href="#-community"><b>Community</b></a></td>
<td align="center"><a href="#-contributing"><b>Contribute</b></a></td>
<td align="center"><a href="#-roadmap"><b>Roadmap</b></a></td>
<td align="center"><a href="#-faq"><b>FAQ</b></a></td>
</tr>
<tr>
<td align="center" colspan="5"><a href="https://jaimeena.com/cvc"><b>🌐 jaimeena.com/cvc — Full Website, Docs & Installation</b></a></td>
</tr>
</table>
</div>
<br>
<br>
---
<br>
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- THE PROBLEM -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<div align="center">
## 💥 **The Problem**
<br>
### Your AI coding agent is brilliant — for about 20 minutes.
Then it **forgets** what it already fixed, **contradicts** its own plan,
and **loops** on the same error for eternity.
<br>
### ***Sound familiar?***
<br>
</div>
<table align="center">
<tr>
<td align="center" width="25%">
<h3>😵💫</h3>
<b>Context Rot</b><br>
<sub>After 60% context fill,<br>quality falls off a cliff</sub>
</td>
<td align="center" width="25%">
<h3>🔁</h3>
<b>Error Loops</b><br>
<sub>Same mistake,<br>different turn</sub>
</td>
<td align="center" width="25%">
<h3>🧠</h3>
<b>No Memory</b><br>
<sub>Can't remember<br>what it just did</sub>
</td>
<td align="center" width="25%">
<h3>💸</h3>
<b>Token Waste</b><br>
<sub>Re-processing the<br>same context</sub>
</td>
</tr>
</table>
<br>
<div align="center">
> ### **Bigger context windows don't fix this.**
> ### **They just give the problem more room to spread.**
<br>
## ✨ **The Solution: CVC**
Give AI agents what they've never had: **memory that actually works.**
</div>
<br>
---
<br>
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- WHY CVC? -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
## 🎯 **Why CVC?**
<div align="center">
**CVC** is Git for the AI's brain.
Instead of versioning source code, it versions the agent's **entire cognitive state** — every thought, every decision, every conversation turn — as an immutable, cryptographic **Merkle DAG**.
</div>
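To make the idea concrete, here is a minimal, illustrative sketch of content-addressed commits. The names and format are invented for illustration and are not CVC's actual storage layer:

```python
import hashlib
import json

def commit_hash(parents: list, snapshot: dict) -> str:
    """Content-address a commit: the hash covers both the snapshot and the
    parent hashes, so every commit pins its entire history (a Merkle DAG).
    Illustrative only -- not CVC's on-disk format."""
    payload = json.dumps(
        {"parents": sorted(parents), "snapshot": snapshot},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()

# Each commit's hash depends on all of its ancestors:
root = commit_hash([], {"turn": 1, "plan": "refactor auth"})
child = commit_hash([root], {"turn": 2, "plan": "refactor auth"})
```

Because the hash chains through the parents, tampering with any ancestor changes every descendant's identity, which is what makes the history immutable.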
<br>
<table>
<thead>
<tr>
<th align="center" width="20%">💾<br><b>Save</b></th>
<th align="center" width="20%">🌿<br><b>Branch</b></th>
<th align="center" width="20%">⏪<br><b>Rewind</b></th>
<th align="center" width="20%">🔀<br><b>Merge</b></th>
<th align="center" width="20%">🔍<br><b>Search</b></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">Checkpoint the agent's brain at any stable moment</td>
<td align="center">Explore risky ideas in isolation without polluting main context</td>
<td align="center">Stuck in a loop? Time-travel back instantly</td>
<td align="center">Merge <em>learnings</em> back — semantic, not syntactic</td>
<td align="center">Find when the agent solved similar problems before</td>
</tr>
</tbody>
</table>
<br>
### 📊 **Research-Backed Results**
<div align="center">
<table>
<tr>
<td align="center" width="25%">
<h2>58.1%</h2>
<sub>Context reduction<br>via branching<br><br><a href="https://arxiv.org/abs/2512.13914">ContextBranch paper →</a></sub>
</td>
<td align="center" width="25%">
<h2>3.5×</h2>
<sub>Success rate improvement<br>with rollback<br><br><a href="https://arxiv.org/abs/2508.00031">GCC paper →</a></sub>
</td>
<td align="center" width="25%">
<h2>~90%</h2>
<sub>Cost reduction through<br>prompt caching<br><br><a href="https://www.prompthub.us/blog/prompt-caching">Caching study →</a></sub>
</td>
<td align="center" width="25%">
<h2>~85%</h2>
<sub>Latency reduction<br>on restores<br><br><a href="https://anthropic.com/news/prompt-caching">Anthropic docs →</a></sub>
</td>
</tr>
</table>
</div>
<br>
---
<br>
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- FEATURES -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
## 🚀 **Features**
<div align="center">
<table>
<thead>
<tr>
<th align="center" width="33%">🤖 <b>Built-in Agent</b></th>
<th align="center" width="33%">🔌 <b>Universal Proxy</b></th>
<th align="center" width="34%">⏱️ <b>Time Machine</b></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center">
Just type <code>cvc</code> for a powerful<br>AI coding assistant with<br><b>17 built-in tools</b> and<br><b>4-provider support</b>
</td>
<td align="center">
Run <b>any AI tool</b> through<br>CVC's time machine via<br><b>API proxy</b> or <b>MCP server</b><br>Zero configuration required
</td>
<td align="center">
<b>Auto-checkpoint</b> every N turns<br><b>Never lose context</b><br><b>Crash recovery</b> built-in<br>Time-travel to any point
</td>
</tr>
</tbody>
</table>
</div>
<br>
### 🆚 **How CVC Compares**
<div align="center">
<table>
<thead>
<tr>
<th width="20%"><b>FEATURE</b></th>
<th align="center" width="25%"><b>CLAUDE CODE / CODEX</b></th>
<th align="center" width="30%"><b>ANTIGRAVITY / CURSOR / VS CODE</b></th>
<th align="center" width="25%"><b>🔥 CVC AGENT</b></th>
</tr>
</thead>
<tbody>
<tr>
<td><b>Context Memory</b></td>
<td align="center">Linear (Lost on restart)</td>
<td align="center">Session-based / Linear</td>
<td align="center">✅ <b>Time-Travel (Project Isolated)</b></td>
</tr>
<tr>
<td><b>Branching</b></td>
<td align="center">No branching / rollback</td>
<td align="center">Not supported</td>
<td align="center">✅ <b>Full Branch Support</b></td>
</tr>
<tr>
<td><b>Undo Capability</b></td>
<td align="center">Single step (maybe)</td>
<td align="center">Single step / Ctrl+Z</td>
<td align="center">✅ <b>Instant Rewind (Any State)</b></td>
</tr>
<tr>
<td><b>Search History</b></td>
<td align="center">Current session only</td>
<td align="center">Current session only</td>
<td align="center">✅ <b>Global Semantic Search</b></td>
</tr>
<tr>
<td><b>Providers</b></td>
<td align="center">Single provider (usually)</td>
<td align="center">Vendor locked often</td>
<td align="center">✅ <b>Agnostic (4+ Providers)</b></td>
</tr>
<tr>
<td><b>Cost</b></td>
<td align="center">Full Re-prompting</td>
<td align="center">Full Re-prompting</td>
<td align="center">✅ <b>~90% Cheaper (Cached)</b></td>
</tr>
<tr>
<td><b>Local / Offline</b></td>
<td align="center">Cloud Dependent</td>
<td align="center">Cloud Dependent</td>
<td align="center">✅ <b>100% Local / Offline Capable</b></td>
</tr>
<tr>
<td><b>Context Merging</b></td>
<td align="center">Not available</td>
<td align="center">Not available</td>
<td align="center">✅ <b>Semantic Merge</b></td>
</tr>
<tr>
<td><b>Image Analysis</b></td>
<td align="center">Limited</td>
<td align="center">Limited</td>
<td align="center">✅ <b>Built-in Vision</b></td>
</tr>
<tr>
<td><b>Auto-checkpoint</b></td>
<td align="center">Not available</td>
<td align="center">Not available</td>
<td align="center">✅ <b>Configurable</b></td>
</tr>
<tr>
<td><b>Crash Recovery</b></td>
<td align="center">Session lost</td>
<td align="center">Session lost</td>
<td align="center">✅ <b>Full Restoration</b></td>
</tr>
</tbody>
</table>
</div>
<br>
---
<br>
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- QUICK START -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
## ⚡ **Quick Start**
### **Installation**
#### 🎯 **Recommended: One-Line Installer** (Installs Python if needed)
```bash
# 🎧 macOS / Linux / WSL
curl -fsSL https://jaimeena.com/cvc/install.sh | bash
# 🪟 Windows PowerShell
irm https://jaimeena.com/cvc/install.ps1 | iex
```
<details>
<summary><b>📦 Alternative: pip / uv</b></summary>
<br>
```bash
# pip (all providers)
pip install "tm-ai[all]"
# uv (faster, recommended)
uv tool install "tm-ai[all]"
# Specific provider only
pip install "tm-ai[anthropic]" # Claude
pip install "tm-ai[openai]" # GPT
pip install "tm-ai[google]" # Gemini
```
</details>
<br>
### **Usage**
```bash
# 🤖 Agent Mode - Just type 'cvc'
cvc
# 🔌 Proxy Mode - Zero-config launch
cvc launch claude # Claude Code CLI
cvc launch aider # Aider
cvc launch cursor # Cursor IDE
# Or manual setup
cvc up # Setup + init + serve
```
<br>
### **Set API Keys**
```bash
# Environment variables
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export GOOGLE_API_KEY="AIza..."
# Or interactive setup
cvc setup
```
<br>
**That's it! 🎉**
<br>
---
<br>
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- DOCUMENTATION -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
## 📖 **Documentation**
<div align="center">
<table>
<tr>
<td align="center" width="25%">
<h3>🤖</h3>
<b><a href="docs/CLI_AGENT_GUIDE.md">Agent CLI Guide</a></b><br>
<sub>Complete guide to the built-in agent</sub>
</td>
<td align="center" width="25%">
<h3>⌨️</h3>
<b><a href="docs/CLI_SLASH_COMMANDS.md">Slash Commands</a></b><br>
<sub>All slash commands reference</sub>
</td>
<td align="center" width="25%">
<h3>🔧</h3>
<b><a href="docs/CVC_TOOLS_REFERENCE.md">Tools Reference</a></b><br>
<sub>17 built-in tools documentation</sub>
</td>
<td align="center" width="25%">
<h3>🔌</h3>
<b><a href="docs/MCP_DOCUMENTATION.md">MCP Integration</a></b><br>
<sub>Model Context Protocol setup</sub>
</td>
</tr>
<tr>
<td align="center" width="25%">
<h3>🗺️</h3>
<b><a href="docs/CROSS_MODE_GUIDE.md">Cross-Mode Guide</a></b><br>
<sub>Agent + Proxy + MCP workflows</sub>
</td>
<td align="center" width="25%">
<h3>📁</h3>
<b><a href="docs/MULTI_WORKSPACE.md">Multi-Workspace</a></b><br>
<sub>Multiple project management</sub>
</td>
<td align="center" width="25%">
<h3>📚</h3>
<b><a href="docs/documentation.md">Full Docs</a></b><br>
<sub>Complete documentation</sub>
</td>
<td align="center" width="25%">
<h3>📝</h3>
<b><a href="docs/CHANGELOG.md">Changelog</a></b><br>
<sub>Version history</sub>
</td>
</tr>
<tr>
<td align="center" colspan="4">
<h3>🌐</h3>
<b><a href="https://jaimeena.com/cvc">Official Website</a></b><br>
<sub>Full docs, installation guides, examples & more at <a href="https://jaimeena.com/cvc">jaimeena.com/cvc</a></sub>
</td>
</tr>
</table>
</div>
<br>
---
<br>
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- INTEGRATIONS -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
## 🔌 **Integrations**
<div align="center">
### **Run ANY AI tool through CVC's Time Machine**
</div>
<br>
<table>
<thead>
<tr>
<th width="25%">Tool</th>
<th width="40%">How to Connect</th>
<th width="35%">Command</th>
</tr>
</thead>
<tbody>
<tr><td><b>💎 Claude Code CLI</b></td><td><code>export ANTHROPIC_BASE_URL=http://127.0.0.1:8000</code></td><td><code>cvc launch claude</code></td></tr>
<tr><td><b>🛠️ Aider</b></td><td>Standard OpenAI-compatible endpoint</td><td><code>cvc launch aider</code></td></tr>
<tr><td><b>⌨️ Codex CLI</b></td><td><code>model_provider = "cvc"</code> in config</td><td><code>cvc launch codex</code></td></tr>
<tr><td><b>🖱️ Cursor</b></td><td>Settings → Override Base URL</td><td><code>cvc launch cursor</code></td></tr>
<tr><td><b>💎 VS Code + Copilot</b></td><td>BYOK or MCP integration</td><td><code>cvc launch code</code></td></tr>
<tr><td><b>🏄 Windsurf</b></td><td>MCP integration</td><td><code>cvc launch windsurf</code></td></tr>
<tr><td><b>🚀 Antigravity</b></td><td>MCP integration</td><td><code>cvc mcp</code></td></tr>
<tr><td><b>🔄 Continue.dev</b></td><td>Base URL → <code>http://127.0.0.1:8000/v1</code></td><td><code>cvc serve</code></td></tr>
<tr><td><b>🤖 Cline</b></td><td>Base URL → <code>http://127.0.0.1:8000/v1</code></td><td><code>cvc serve</code></td></tr>
<tr><td><b>🦜 LangChain</b></td><td>Use CVC's function-calling tools</td><td><code>cvc serve</code></td></tr>
<tr><td><b>👥 CrewAI</b></td><td>Use CVC's function-calling tools</td><td><code>cvc serve</code></td></tr>
<tr><td><b>🤝 AutoGen</b></td><td>Use CVC's function-calling tools</td><td><code>cvc serve</code></td></tr>
<tr><td><b>🌐 Open WebUI</b></td><td>Standard OpenAI-compatible endpoint</td><td><code>cvc serve</code></td></tr>
</tbody>
</table>
<br>
> **🔑 Auth pass-through:** External tools can send their own API keys — CVC forwards them to the upstream provider.
<br>
Run `cvc connect` for interactive setup instructions.
<br>
---
<br>
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- ARCHITECTURE -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
## 🗝️ **Architecture**
<div align="center">
### **Three-Tiered Local Storage**
<br>
<table>
<tr>
<td align="center" width="33%">
<h3>🗄️ SQLite</h3>
<b>Commit graph</b><br>
<b>Branch pointers</b><br>
<b>Metadata</b><br>
<sub>Fast traversal, zero-config</sub>
</td>
<td align="center" width="33%">
<h3>📦 CAS Blobs</h3>
<b>Context snapshots</b><br>
<b>Zstandard compression</b><br>
<b>Content-addressable</b><br>
<sub>Deduplicated, efficient</sub>
</td>
<td align="center" width="34%">
<h3>🔍 Chroma</h3>
<b>Semantic embeddings</b><br>
<b>Vector search</b><br>
<b>Optional</b><br>
<sub>"Have I solved this before?"</sub>
</td>
</tr>
</table>
<br>
✨ **Everything stays in `.cvc/` inside your project**
🔒 **No cloud • No telemetry • Your agent's thoughts are yours**
</div>
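A toy sketch of the CAS tier's behavior (deduplicated, content-addressed blobs). Here `zlib` stands in for Zstandard, and none of these names are CVC's real API:

```python
import hashlib
import zlib

class ToyCAS:
    """Content-addressable blob store: the key is the sha256 of the raw
    content, so identical snapshots are stored exactly once.
    Illustrative only -- CVC uses Zstandard, not zlib."""
    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key not in self._blobs:      # duplicate writes are no-ops
            self._blobs[key] = zlib.compress(data)
        return key

    def get(self, key: str) -> bytes:
        return zlib.decompress(self._blobs[key])

store = ToyCAS()
k1 = store.put(b"context snapshot")
k2 = store.put(b"context snapshot")  # same content -> same key, no new blob
```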
<br>
### 📦 **Project Structure**
```
cvc/
├── agent/ # Built-in AI coding agent
│ ├── chat.py # REPL loop, slash commands
│ ├── llm.py # Unified LLM client (4 providers)
│ ├── tools.py # 17 tool definitions
│ └── executor.py # Tool execution engine
├── adapters/ # Provider-specific formatting
│ ├── anthropic.py # Prompt caching support
│ ├── openai.py
│ ├── google.py
│ └── ollama.py
├── core/ # Data layer
│ ├── models.py # Merkle DAG, Pydantic schemas
│ └── database.py # SQLite + CAS + Chroma
├── operations/ # CVC engine
│ ├── engine.py # Commit, branch, merge, restore
│ └── state_machine.py # LangGraph routing
└── vcs/ # Git bridge
└── bridge.py # Shadow branches, Git notes, hooks
```
<br>
---
<br>
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- PROVIDERS -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
## 🤖 **Supported Providers**
<table>
<thead>
<tr>
<th width="15%">Provider</th>
<th width="30%">Default Model</th>
<th width="35%">Alternatives</th>
<th width="20%">Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td><b>🟣 Anthropic</b></td>
<td><code>claude-opus-4-6</code></td>
<td><code>claude-opus-4-5</code>, <code>claude-sonnet-4-5</code>, <code>claude-haiku-4-5</code></td>
<td>Prompt caching</td>
</tr>
<tr>
<td><b>🟢 OpenAI</b></td>
<td><code>gpt-5.2</code></td>
<td><code>gpt-5.2-codex</code>, <code>gpt-5-mini</code>, <code>gpt-4.1</code></td>
<td>Auto prefix caching</td>
</tr>
<tr>
<td><b>🔵 Google</b></td>
<td><code>gemini-3-pro-preview</code></td>
<td><code>gemini-3-flash-preview</code>, <code>gemini-2.5-pro</code>, <code>gemini-2.5-flash</code></td>
<td>Multimodal + reasoning</td>
</tr>
<tr>
<td><b>⚪ Ollama</b></td>
<td><code>qwen2.5-coder:7b</code></td>
<td><code>qwen3-coder:30b</code>, <code>devstral:24b</code>, <code>deepseek-r1:8b</code></td>
<td>100% local, no API key</td>
</tr>
</tbody>
</table>
<br>
---
<br>
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- COMMUNITY -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
## 💬 **Community**
<div align="center">
### **Join thousands of developers using CVC**
<br>
<table>
<tr>
<td align="center" width="25%">
<h3>🌐</h3>
<b>Website</b><br>
<sub><a href="https://jaimeena.com/cvc">jaimeena.com/cvc →</a></sub>
</td>
<td align="center" width="25%">
<h3>🛠</h3>
<b>Issues</b><br>
<sub><a href="https://github.com/mannuking/CVC/issues">Report bugs →</a></sub>
</td>
<td align="center" width="25%">
<h3>💡</h3>
<b>Discussions</b><br>
<sub><a href="https://github.com/mannuking/CVC/discussions">Share ideas →</a></sub>
</td>
<td align="center" width="25%">
<h3>𝕏</h3>
<b>Twitter</b><br>
<sub><a href="https://twitter.com/cvc_ai">Follow updates →</a></sub>
</td>
</tr>
</table>
</div>
<br>
---
<br>
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- CONTRIBUTING -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
## 🤝 **Contributing**
<div align="center">
**This repo is public and open to collaboration.**
Whether you're fixing a typo or building an entirely new provider adapter —
**contributions are welcome.**
<br>
**Fork** → **Branch** → **Commit** → **Push** → **PR**
</div>
<br>
### 🎯 **Areas Where Help Is Needed**
<table>
<thead>
<tr>
<th width="60%">Area</th>
<th width="40%" align="center">Difficulty</th>
</tr>
</thead>
<tbody>
<tr><td>🔌 Additional Provider Adapters (Mistral, Cohere)</td><td align="center">🟡 Medium</td></tr>
<tr><td>🧪 Tests & edge cases</td><td align="center">🟢 Easy–Medium</td></tr>
<tr><td>🖥️ VS Code Extension (graph visualization)</td><td align="center">🔴 Hard</td></tr>
<tr><td>📊 Metrics & observability dashboard</td><td align="center">🟡 Medium</td></tr>
<tr><td>🔒 Security audit</td><td align="center">🟠 Medium–Hard</td></tr>
<tr><td>📚 Documentation improvements</td><td align="center">🟢 Easy</td></tr>
<tr><td>🌐 Web UI for visualization</td><td align="center">🔴 Hard</td></tr>
<tr><td>🐳 Docker/Kubernetes deployment</td><td align="center">🟡 Medium</td></tr>
</tbody>
</table>
<br>
### 🛠️ **Dev Setup**
```bash
git clone https://github.com/mannuking/CVC.git
cd CVC
uv sync --extra dev
```
<br>
---
<br>
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- ROADMAP -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
## 🗺️ **Roadmap**
<table>
<thead>
<tr>
<th width="60%">Feature</th>
<th width="40%">Status</th>
</tr>
</thead>
<tbody>
<tr><td>🤖 Built-in Agent CLI</td><td>✅ <b>Shipped v1.5.4</b> — 17 tools, 4 providers</td></tr>
<tr><td>☁️ All 4 Provider Adapters</td><td>✅ <b>Shipped</b> — Anthropic, OpenAI, Google, Ollama</td></tr>
<tr><td>🔌 MCP Server</td><td>✅ <b>Shipped</b> — stdio + SSE transports</td></tr>
<tr><td>🚀 Zero-config Launch</td><td>✅ <b>Shipped</b> — <code>cvc launch</code> for all tools</td></tr>
<tr><td>🔗 Git Bridge</td><td>✅ <b>Shipped</b> — Shadow branches, hooks, notes</td></tr>
<tr><td>🎨 VS Code Extension</td><td>📜 <b>Q2 2026</b> — Visual commit graph, time-travel</td></tr>
<tr><td>🌐 Web UI</td><td>📜 <b>Q2 2026</b> — Browser visualization & management</td></tr>
<tr><td>👥 Multi-agent support</td><td>📜 <b>Q3 2026</b> — Shared DB with conflict resolution</td></tr>
<tr><td>☁️ Cloud sync</td><td>📜 <b>Q3 2026</b> — S3/MinIO for teams</td></tr>
<tr><td>📊 Metrics dashboard</td><td>📜 <b>Q4 2026</b> — Cache hits, context utilization, analytics</td></tr>
</tbody>
</table>
<br>
---
<br>
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- FAQ -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
## ❓ **FAQ**
<details>
<summary><b>What is CVC?</b></summary>
<br>
CVC (Cognitive Version Control) is Git for AI agent context. It versions the agent's entire cognitive state as an immutable Merkle DAG, enabling time-travel, branching, and merging for AI conversations.
</details>
<details>
<summary><b>How is CVC different from Claude Code or Cursor?</b></summary>
<br>
CVC adds: ⏪ Time-travel • 🌿 Branching • 🔍 Semantic search • 🔀 Context merging • 💾 Auto-checkpoint • 🔄 Crash recovery • 🤖 Multi-provider support (4 vs 1)
</details>
<details>
<summary><b>Does CVC work with my existing AI tools?</b></summary>
<br>
**Yes!** CVC works in two modes:
1. **Agent mode** — built-in AI assistant
2. **Proxy mode** — transparent proxy for Claude Code, Aider, Cursor, VS Code, Windsurf, Continue, Cline, LangChain, CrewAI, AutoGen, and more
</details>
<details>
<summary><b>Is my data sent to the cloud?</b></summary>
<br>
**No.** Everything stays local in `.cvc/`. CVC has:
🔒 No telemetry • 🔒 No cloud sync • 🔒 No data collection
Your agent's thoughts are **yours**.
</details>
<details>
<summary><b>How much does it cost?</b></summary>
<br>
CVC itself is **free and open source**. You only pay for LLM API calls (Anthropic/OpenAI/Google), but CVC makes them **~90% cheaper** via prompt caching and **58.1% less context** via branching.
</details>
<details>
<summary><b>Can I use CVC offline?</b></summary>
<br>
**Yes!** Use Ollama with local models like `qwen2.5-coder:7b`, `deepseek-r1:8b`, etc. 100% local, no API key needed.
</details>
<details>
<summary><b>What providers are supported?</b></summary>
<br>
- 🟣 **Anthropic** (Claude Opus 4.6, Sonnet 4.5, Haiku 4.5)
- 🟢 **OpenAI** (GPT-5.2, GPT-5.2-Codex, GPT-5-mini)
- 🔵 **Google** (Gemini 3 Pro, Gemini 2.5 Pro/Flash)
- ⚪ **Ollama** (Qwen, DeepSeek, Devstral, etc.)
</details>
<details>
<summary><b>Is there a GUI?</b></summary>
<br>
Not yet! But coming soon:
🎨 **VS Code extension** (Q2 2026)
🌐 **Web UI** (Q2 2026)
For now, use the beautiful terminal UI with Rich formatting.
</details>
<details>
<summary><b>Can I use CVC in production?</b></summary>
<br>
**Yes!** CVC is production-ready (v1.5.4) with crash-resistant local storage, used by solo developers, teams, and enterprises.
</details>
<br>
---
<br>
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- RESEARCH -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
## 📚 **Research**
<div align="center">
**CVC is grounded in published research**
</div>
<br>
<table>
<thead>
<tr>
<th width="35%">Paper</th>
<th width="65%">Key Finding</th>
</tr>
</thead>
<tbody>
<tr><td><a href="https://arxiv.org/abs/2512.13914"><b>ContextBranch</b></a></td><td>58.1% context reduction via branching</td></tr>
<tr><td><a href="https://arxiv.org/abs/2508.00031"><b>GCC</b></a></td><td>11.7% → 40.7% success with rollback (3.5× improvement)</td></tr>
<tr><td><a href="https://research.protocol.ai/publications/merkle-crdts-merkle-dags-meet-crdts/psaras2020.pdf"><b>Merkle-CRDTs</b></a></td><td>Structural deduplication for content-addressable DAGs</td></tr>
<tr><td><a href="https://www.prompthub.us/blog/prompt-caching-with-openai-anthropic-and-google-models"><b>Prompt Caching</b></a></td><td>Anthropic/OpenAI/Google token reuse patterns</td></tr>
</tbody>
</table>
<br>
---
<br>
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- LICENSE -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
## 📜 **License**
<div align="center">
**MIT License** — see [LICENSE](LICENSE)
This project is free and open source. Use it however you want.
</div>
<br>
---
<br>
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<!-- FOOTER -->
<!-- ═══════════════════════════════════════════════════════════════════════ -->
<div align="center">
<br>
## ✨ **Because AI agents deserve an undo button** ✨
<br>
**Made with ❤️ by developers who got tired of AI agents forgetting what they just did.**
<br>
---
<br>
### **⭐ Star this repo if you believe in giving AI agents memory that actually works**
<br>
[](https://github.com/mannuking/CVC/stargazers)
[](https://github.com/mannuking/CVC/network/members)
[](https://github.com/mannuking/CVC/watchers)
<br>
<table>
<tr>
<td align="center"><a href="https://jaimeena.com/cvc"><b>🌐 Website</b></a></td>
<td align="center"><a href="https://github.com/mannuking/CVC"><b>⭐ Star</b></a></td>
<td align="center"><a href="https://github.com/mannuking/CVC/issues"><b>🛠 Bug Report</b></a></td>
<td align="center"><a href="https://github.com/mannuking/CVC/issues"><b>💡 Feature Request</b></a></td>
<td align="center"><a href="https://github.com/mannuking/CVC/pulls"><b>🔀 Pull Request</b></a></td>
<td align="center"><a href="https://discord.gg/cvc"><b>💬 Discord</b></a></td>
</tr>
</table>
<br>
---
<br>
<sub>Built with Python 🐍 • FastAPI ⚡ • SQLite 🗄️ • LangGraph 🦜 • Rich 🎨</sub>
<br>
</div>
| text/markdown | CVC Contributors | null | null | null | null | agents, ai, context, llm, merkle, version-control | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
... | [] | null | null | >=3.11 | [] | [] | [] | [
"aiosqlite>=0.20.0",
"chromadb>=1.0.0",
"click>=8.1.0",
"fastapi>=0.115.0",
"gitpython>=3.1.0",
"httpx[http2]>=0.27.0",
"langchain-core>=0.3.0",
"langgraph>=0.2.0",
"prompt-toolkit>=3.0.0",
"pydantic>=2.9.0",
"rich>=13.0.0",
"uvicorn[standard]>=0.30.0",
"zstandard>=0.23.0",
"anthropic>=0.3... | [] | [] | [] | [
"Homepage, https://github.com/mannuking/AI-Cognitive-Version-Control",
"Repository, https://github.com/mannuking/AI-Cognitive-Version-Control",
"Issues, https://github.com/mannuking/AI-Cognitive-Version-Control/issues"
] | uv/0.8.18 | 2026-02-19T18:02:05.667949 | tm_ai-1.6.2.tar.gz | 404,681 | c1/0a/ed6c76e31ef5c355a504e24d0f89f67675ebbdf530fa1d8bf58246315d36/tm_ai-1.6.2.tar.gz | source | sdist | null | false | dbea2596f2115dbece0ce255793f56f4 | 154fb6ed73c6c95c0ac620bbb0bb0b4df0114763a081ab0b89e7837446cfc7ad | c10aed6c76e31ef5c355a504e24d0f89f67675ebbdf530fa1d8bf58246315d36 | MIT | [] | 246 |
2.4 | dagmc-h5m-file-inspector | 0.6.5 | Extracts information from DAGMC h5m files including volume numbers, material tags |
[](https://www.python.org)
[](https://github.com/fusion-energy/dagmc_h5m_file_inspector/actions/workflows/ci_with_install.yml)
[](https://codecov.io/gh/fusion-energy/dagmc_h5m_file_inspector)
[](https://github.com/fusion-energy/dagmc_h5m_file_inspector/actions/workflows/python-publish.yml)
[](https://pypi.org/project/dagmc_h5m_file_inspector/)
# dagmc-h5m-file-inspector
A minimal Python package that inspects DAGMC h5m files to extract volume IDs,
material tags, bounding boxes, and geometric volumes.
# Installation
```bash
pip install dagmc-h5m-file-inspector
```
The package uses h5py as the default backend. Optionally, pymoab can be used
as an alternative backend if installed.
# Python API Usage
## Finding volume IDs
```python
import dagmc_h5m_file_inspector as di
di.get_volumes_from_h5m("dagmc.h5m")
>>> [1, 2]
```
## Finding material tags
```python
import dagmc_h5m_file_inspector as di
di.get_materials_from_h5m("dagmc.h5m")
>>> ['big_box', 'small_box']
```
## Finding volume IDs with their materials
```python
import dagmc_h5m_file_inspector as di
di.get_volumes_and_materials_from_h5m("dagmc.h5m")
>>> {1: 'small_box', 2: 'big_box'}
```
## Getting the bounding box
```python
import dagmc_h5m_file_inspector as di
lower_left, upper_right = di.get_bounding_box_from_h5m("dagmc.h5m")
>>> lower_left
array([-5., -10., -10.])
>>> upper_right
array([25., 10., 10.])
```
Optionally filter by material tag to get the bounding box for specific materials:
```python
import dagmc_h5m_file_inspector as di
# Bounding box for a single material
lower_left, upper_right = di.get_bounding_box_from_h5m("dagmc.h5m", materials="small_box")
>>> lower_left
array([-5., -5., -5.])
>>> upper_right
array([5., 5., 5.])
# Bounding box for multiple materials (combined)
lower_left, upper_right = di.get_bounding_box_from_h5m("dagmc.h5m", materials=["small_box", "big_box"])
>>> lower_left
array([-5., -10., -10.])
>>> upper_right
array([25., 10., 10.])
```
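The returned corners are plain NumPy arrays, so extents and centers follow directly from them. The values below reuse the illustrative box from the example above:

```python
import numpy as np

# Illustrative corner values matching the example output above
lower_left = np.array([-5., -10., -10.])
upper_right = np.array([25., 10., 10.])

extent = upper_right - lower_left        # box size along x, y, z
center = (lower_left + upper_right) / 2  # geometric center of the box
```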
## Getting geometric volume sizes by cell ID
```python
import dagmc_h5m_file_inspector as di
di.get_volumes_from_h5m_by_cell_id("dagmc.h5m")
>>> {1: 1000.0, 2: 8000.0}
```
## Getting geometric volume sizes by material name
```python
import dagmc_h5m_file_inspector as di
di.get_volumes_from_h5m_by_material_name("dagmc.h5m")
>>> {'small_box': 1000.0, 'big_box': 8000.0}
```
## Getting geometric volume sizes by cell ID and material name
```python
import dagmc_h5m_file_inspector as di
di.get_volumes_from_h5m_by_cell_id_and_material_name("dagmc.h5m")
>>> {(1, 'small_box'): 1000.0, (2, 'big_box'): 8000.0}
```
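Since these functions return plain dictionaries, aggregating is ordinary Python. The values here are the illustrative ones from the example above:

```python
# Illustrative (cell ID, material) -> volume mapping from the example above
volumes = {(1, 'small_box'): 1000.0, (2, 'big_box'): 8000.0}

total = sum(volumes.values())  # total geometric volume across all cells
per_material = {}
for (cell_id, material), v in volumes.items():
    per_material[material] = per_material.get(material, 0.0) + v
```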
## Setting OpenMC material volumes from DAGMC geometry
This function reads the DAGMC file, matches materials by name, and sets the
`volume` attribute on the corresponding OpenMC Material objects.
```python
import openmc
import dagmc_h5m_file_inspector as di
# Create OpenMC materials with names matching the DAGMC file
small_box = openmc.Material(name='small_box')
big_box = openmc.Material(name='big_box')
materials = openmc.Materials([small_box, big_box])
# Set volumes from DAGMC geometry
di.set_openmc_material_volumes_from_h5m(materials, "dagmc.h5m")
>>> small_box.volume
1000.0
>>> big_box.volume
8000.0
```
## Getting triangle connectivity and coordinates for each volume
This function extracts the triangle mesh data for each volume, returning the
connectivity (vertex indices) and coordinates (3D points) needed for visualization
or mesh processing.
```python
import dagmc_h5m_file_inspector as di
data = di.get_triangle_conn_and_coords_by_volume("dagmc.h5m")
>>> data
{1: (array([[0, 1, 2], [0, 2, 3], ...]), array([[0., 0., 0.], [10., 0., 0.], ...])),
2: (array([[0, 1, 2], [0, 2, 3], ...]), array([[-5., -10., -10.], [25., -10., -10.], ...]))}
# Access data for a specific volume
connectivity, coordinates = data[1]
>>> connectivity.shape
(12, 3) # 12 triangles, each with 3 vertex indices
>>> coordinates.shape
(8, 3) # 8 unique vertices, each with x, y, z coordinates
```
## Convert h5m file to vtkhdf
Convert DAGMC h5m files to the VTKHDF format, which can be opened directly in ParaView 5.13+.
The resulting files are colored by the cell IDs and material tags present within the h5m file.
```python
import dagmc_h5m_file_inspector as di
di.convert_h5m_to_vtkhdf(h5m_filename='dagmc.h5m', vtkhdf_filename='dagmc.vtkhdf')
```

## Using the pymoab backend
All functions support an optional `backend` parameter. The default is `"h5py"`,
but `"pymoab"` can be used if pymoab is installed:
```python
import dagmc_h5m_file_inspector as di
di.get_volumes_from_h5m("dagmc.h5m", backend="pymoab")
>>> [1, 2]
```
| text/markdown | null | The dagmc_h5m_file_inspector Development Team <mail@jshimwell.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"h5py",
"numpy",
"pymoab; extra == \"pymoab\"",
"pytest>=5.4.3; extra == \"tests\"",
"requests; extra == \"tests\"",
"pytest-cov; extra == \"tests\"",
"cad-to-dagmc; extra == \"tests\"",
"cadquery; extra == \"tests\""
] | [] | [] | [] | [
"Homepage, https://github.com/fusion-energy/dagmc_h5m_file_inspector",
"Source, https://github.com/fusion-energy/dagmc_h5m_file_inspector",
"Tracker, https://github.com/fusion-energy/dagmc_h5m_file_inspector/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T18:00:25.881220 | dagmc_h5m_file_inspector-0.6.5.tar.gz | 17,519,161 | c9/29/d6d75699385cd4a99bd462eb9f1649dcdd631423ab387d1ea27be3e8479a/dagmc_h5m_file_inspector-0.6.5.tar.gz | source | sdist | null | false | 81931c934c652572e16b7d309294dbcd | f133b3dc4128fe2edb1e3ae296cba286a19f4b500fb561a727e03292b49c7cd4 | c929d6d75699385cd4a99bd462eb9f1649dcdd631423ab387d1ea27be3e8479a | MIT | [
"LICENSE.txt"
] | 265 |
2.4 | sadcompressor | 0.1.1 | Streamed Array Data compressor | # sadcompressor
`sadcompressor` is a compact archival format and Python library for streamed time-series data (mainly NumPy arrays). It stores frames with delta updates, optional prediction, and quantization-based compression.
## Quick Start
```python
import numpy as np
import sadcompressor as sad
filename = "example.sad"
# Create archive
with sad.SADWriter(filename, prec_nbits=20, prec_maxexp=8) as writer:
writer["x"] = np.array([1.0, 2.0, 3.0], dtype=np.float32)
writer.next_key(0.1)
writer["x"] = np.array([1.1, 2.1, 3.1], dtype=np.float32)
# Read archive sequentially
with sad.SADReader(filename) as reader:
while not reader.next_key():
print(f"t={reader.t:.3f}", reader["x"])
```
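The delta-update idea in the description can be illustrated with plain NumPy: keep the first frame in full, then store only the per-step changes, and recover frames by cumulative summation. This is a conceptual sketch only, not `sadcompressor`'s actual on-disk encoding:

```python
import numpy as np

def delta_encode(frames):
    """Keep the first frame in full; store only the change at each later step."""
    return [frames[0]] + [b - a for a, b in zip(frames, frames[1:])]

def delta_decode(deltas):
    """Cumulative sum of the deltas reconstructs every frame."""
    return list(np.cumsum(np.stack(deltas), axis=0))

frames = [np.array([1.0, 2.0, 3.0], dtype=np.float32),
          np.array([1.1, 2.1, 3.1], dtype=np.float32)]
roundtrip = delta_decode(delta_encode(frames))  # recovers both frames
```

When consecutive frames change slowly, the deltas are small and compress far better than the raw frames.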
| text/markdown; charset=UTF-8; variant=GFM | null | Igor Lobanov <lobanov.igor@gmail.com> | null | null | null | compression, archive, stream | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Progra... | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy",
"py-ubjson",
"packaging",
"rich",
"termcolor",
"pytest; extra == \"tests\""
] | [] | [] | [] | [
"Repository, https://gitlab.com/alepoydes/sadcompressor.git"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-19T17:59:19.717523 | sadcompressor-0.1.1-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl | 239,719 | 69/e9/ec09dec07261c384853959fff7b56d729bcedec852b9141e4bffed17f1fc/sadcompressor-0.1.1-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl | cp38 | bdist_wheel | null | false | 629488c679460ed6b9721f1f415c31b4 | 0ad1628a1eefc0c20916550f9631bda978d291c7aaf4dd65c6afd6f0b168a4b8 | 69e9ec09dec07261c384853959fff7b56d729bcedec852b9141e4bffed17f1fc | MIT | [
"LICENSE.md"
] | 164 |
2.4 | llm-gemini | 0.29 | LLM plugin to access Google's Gemini family of models | # llm-gemini
[](https://pypi.org/project/llm-gemini/)
[](https://github.com/simonw/llm-gemini/releases)
[](https://github.com/simonw/llm-gemini/actions?query=workflow%3ATest)
[](https://github.com/simonw/llm-gemini/blob/main/LICENSE)
API access to Google's Gemini models
## Installation
Install this plugin in the same environment as [LLM](https://llm.datasette.io/).
```bash
llm install llm-gemini
```
## Usage
Configure the model by setting a key called "gemini" to your [API key](https://aistudio.google.com/app/apikey):
```bash
llm keys set gemini
```
```
<paste key here>
```
You can also set the API key by assigning it to the environment variable `LLM_GEMINI_KEY`.
Now run the model using `-m gemini-2.0-flash`, for example:
```bash
llm -m gemini-2.0-flash "A short joke about a pelican and a walrus"
```
> A pelican and a walrus are sitting at a bar. The pelican orders a fishbowl cocktail, and the walrus orders a plate of clams. The bartender asks, "So, what brings you two together?"
>
> The walrus sighs and says, "It's a long story. Let's just say we met through a mutual friend... of the fin."
You can set the [default model](https://llm.datasette.io/en/stable/setup.html#setting-a-custom-default-model) to avoid the extra `-m` option:
```bash
llm models default gemini-2.0-flash
llm "A joke about a pelican and a walrus"
```
## Available models
<!-- [[[cog
import cog
from llm import cli
from click.testing import CliRunner
runner = CliRunner()
result = runner.invoke(cli.cli, ["models", "-q", "gemini/"])
lines = reversed(result.output.strip().split("\n"))
to_output = []
NOTES = {
"gemini/gemini-3.1-pro-preview": "Gemini 3.1 Pro Preview",
"gemini/gemini-3-pro-preview": "Gemini 3 Pro Preview",
"gemini/gemini-flash-latest": "Latest Gemini Flash",
"gemini/gemini-flash-lite-latest": "Latest Gemini Flash Lite",
"gemini/gemini-2.5-flash": "Gemini 2.5 Flash",
"gemini/gemini-2.5-pro": "Gemini 2.5 Pro",
"gemini/gemini-2.5-flash-lite": "Gemini 2.5 Flash Lite",
"gemini/gemini-2.5-flash-preview-05-20": "Gemini 2.5 Flash preview (priced differently from 2.5 Flash)",
"gemini/gemini-2.0-flash-thinking-exp-01-21": "Experimental \"thinking\" model from January 2025",
"gemini/gemini-1.5-flash-8b-latest": "The least expensive model",
}
for line in lines:
model_id, rest = line.split(None, 2)[1:]
note = NOTES.get(model_id, "")
to_output.append(
"- `{}`{}".format(
model_id,
': {}'.format(note) if note else ""
)
)
cog.out("\n".join(to_output))
]]] -->
- `gemini/gemini-3.1-pro-preview-customtools`
- `gemini/gemini-3.1-pro-preview`: Gemini 3.1 Pro Preview
- `gemini/gemini-3-flash-preview`
- `gemini/gemini-3-pro-preview`: Gemini 3 Pro Preview
- `gemini/gemini-2.5-flash-lite-preview-09-2025`
- `gemini/gemini-2.5-flash-preview-09-2025`
- `gemini/gemini-flash-lite-latest`: Latest Gemini Flash Lite
- `gemini/gemini-flash-latest`: Latest Gemini Flash
- `gemini/gemini-2.5-flash-lite`: Gemini 2.5 Flash Lite
- `gemini/gemini-2.5-pro`: Gemini 2.5 Pro
- `gemini/gemini-2.5-flash`: Gemini 2.5 Flash
- `gemini/gemini-2.5-pro-preview-06-05`
- `gemini/gemini-2.5-flash-preview-05-20`: Gemini 2.5 Flash preview (priced differently from 2.5 Flash)
- `gemini/gemini-2.5-pro-preview-05-06`
- `gemini/gemini-2.5-flash-preview-04-17`
- `gemini/gemini-2.5-pro-preview-03-25`
- `gemini/gemini-2.5-pro-exp-03-25`
- `gemini/gemini-2.0-flash-lite`
- `gemini/gemini-2.0-pro-exp-02-05`
- `gemini/gemini-2.0-flash`
- `gemini/gemini-2.0-flash-thinking-exp-01-21`: Experimental "thinking" model from January 2025
- `gemini/gemini-2.0-flash-thinking-exp-1219`
- `gemini/gemma-3n-e4b-it`
- `gemini/gemma-3-27b-it`
- `gemini/gemma-3-12b-it`
- `gemini/gemma-3-4b-it`
- `gemini/gemma-3-1b-it`
- `gemini/learnlm-1.5-pro-experimental`
- `gemini/gemini-2.0-flash-exp`
- `gemini/gemini-exp-1206`
- `gemini/gemini-exp-1121`
- `gemini/gemini-exp-1114`
- `gemini/gemini-1.5-flash-8b-001`
- `gemini/gemini-1.5-flash-8b-latest`: The least expensive model
- `gemini/gemini-1.5-flash-002`
- `gemini/gemini-1.5-pro-002`
- `gemini/gemini-1.5-flash-001`
- `gemini/gemini-1.5-pro-001`
- `gemini/gemini-1.5-flash-latest`
- `gemini/gemini-1.5-pro-latest`
- `gemini/gemini-pro`
<!-- [[[end]]] -->
All of these models have aliases that omit the `gemini/` prefix, for example:
```bash
llm -m gemini-1.5-flash-8b-latest --schema 'name,age int,bio' 'invent a dog'
```
### Images, audio and video
Gemini models are multi-modal. You can provide images, audio or video files as input like this:
```bash
llm -m gemini-2.0-flash 'extract text' -a image.jpg
```
Or with a URL:
```bash
llm -m gemini-2.0-flash-lite 'describe image' \
-a https://static.simonwillison.net/static/2024/pelicans.jpg
```
Audio works too:
```bash
llm -m gemini-2.0-flash 'transcribe audio' -a audio.mp3
```
And video:
```bash
llm -m gemini-2.0-flash 'describe what happens' -a video.mp4
```
The Gemini prompting guide includes [extensive advice](https://ai.google.dev/gemini-api/docs/file-prompting-strategies) on multi-modal prompting.
### YouTube videos
You can provide YouTube video URLs as attachments as well:
```bash
llm -m gemini-3-pro-preview -a 'https://www.youtube.com/watch?v=9o1_DL9uNlM' \
'Produce a summary with relevant URLs and code example snippets, then an accurate transcript with timestamps.'
```
[Example output here](https://gist.github.com/simonw/1b07aafb2bfc112b180ab68c864511cb).
These will be processed with media resolution `low` by default. You can use the `-o media_resolution X` option to set that to `medium`, `high`, or `unspecified`.
### JSON output
Use `-o json_object 1` to force the output to be JSON:
```bash
llm -m gemini-2.0-flash -o json_object 1 \
'3 largest cities in California, list of {"name": "..."}'
```
Outputs:
```json
{"cities": [{"name": "Los Angeles"}, {"name": "San Diego"}, {"name": "San Jose"}]}
```
### Code execution
Gemini models can [write and execute code](https://ai.google.dev/gemini-api/docs/code-execution) - they can decide to write Python code, execute it in a secure sandbox and use the result as part of their response.
To enable this feature, use `-o code_execution 1`:
```bash
llm -m gemini-2.0-flash -o code_execution 1 \
'use python to calculate (factorial of 13) * 3'
```
### Google search
Some Gemini models support [Grounding with Google Search](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/ground-gemini#web-ground-gemini), where the model can run a Google search and use the results as part of answering a prompt.
Using this feature may incur additional requirements in terms of how you use the results. Consult [Google's documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/ground-gemini#web-ground-gemini) for more details.
To run a prompt with Google search enabled, use `-o google_search 1`:
```bash
llm -m gemini-2.0-flash -o google_search 1 \
'What happened in Ireland today?'
```
Use `llm logs -c --json` after running a prompt to see the full JSON response, which includes [additional information](https://github.com/simonw/llm-gemini/pull/29#issuecomment-2606201877) about grounded results.
### URL context
Gemini models support a [URL context](https://ai.google.dev/gemini-api/docs/url-context) tool which, when enabled, allows the models to fetch additional content from URLs as part of their execution.
You can enable that with the `-o url_context 1` option - for example:
```bash
llm -m gemini-2.5-flash -o url_context 1 'Latest headline on simonwillison.net'
```
Extra tokens introduced by this tool will be charged as input tokens. Use `--usage` to see details of those:
```bash
llm -m gemini-2.5-flash -o url_context 1 --usage \
'Latest headline on simonwillison.net'
```
Outputs:
```
The latest headline on simonwillison.net as of August 17, 2025, is "TIL: Running a gpt-oss eval suite against LM Studio on a Mac.".
Token usage: 9,613 input, 87 output, {"candidatesTokenCount": 57, "promptTokensDetails": [{"modality": "TEXT", "tokenCount": 10}], "toolUsePromptTokenCount": 9603, "toolUsePromptTokensDetails": [{"modality": "TEXT", "tokenCount": 9603}], "thoughtsTokenCount": 30}
```
The `"toolUsePromptTokenCount"` key shows how many tokens were used for that URL context.
### Chat
To chat interactively with the model, run `llm chat`:
```bash
llm chat -m gemini-2.0-flash
```
### Timeouts
By default no timeout is applied to requests against the Gemini API. You can use the `timeout` option to protect against API requests that hang indefinitely.
With the CLI tool, set a 1.5 second timeout like this:
```bash
llm -m gemini-2.5-flash-preview-05-20 'epic saga about mice' -o timeout 1.5
```
In the Python library timeouts are used like this:
```python
import httpx, llm
model = llm.get_model("gemini/gemini-2.5-flash-preview-05-20")
try:
response = model.prompt(
"epic saga about mice", timeout=1.5
)
print(response.text())
except httpx.TimeoutException:
print("Timeout exceeded")
```
An `httpx.TimeoutException` subclass will be raised if the timeout is exceeded.
## Embeddings
The plugin also adds support for the `gemini-embedding-exp-03-07` and `text-embedding-004` embedding models.
Run one of them against a single string like this:
```bash
llm embed -m text-embedding-004 -c 'hello world'
```
This returns a JSON array of 768 numbers.
The `gemini-embedding-exp-03-07` model is larger, returning 3072 numbers. You can also use variants of it that are truncated down to smaller sizes:
- `gemini-embedding-exp-03-07` - 3072 numbers
- `gemini-embedding-exp-03-07-2048` - 2048 numbers
- `gemini-embedding-exp-03-07-1024` - 1024 numbers
- `gemini-embedding-exp-03-07-512` - 512 numbers
- `gemini-embedding-exp-03-07-256` - 256 numbers
- `gemini-embedding-exp-03-07-128` - 128 numbers
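Conceptually, a truncated variant keeps a prefix of the full embedding vector and re-normalizes it to unit length. A sketch of that idea (an illustration, not the plugin's implementation):

```python
import math

def truncate_embedding(vector, size):
    """Keep the first `size` values, then re-normalize to unit length."""
    prefix = vector[:size]
    norm = math.sqrt(sum(v * v for v in prefix)) or 1.0
    return [v / norm for v in prefix]

full = [0.5, 0.5, 0.5, 0.5, 0.0, 0.0]  # stand-in for a 3072-number embedding
short = truncate_embedding(full, 2)  # 2 numbers, unit length
```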
This command will embed every `README.md` file in child directories of the current directory and store the results in a SQLite database called `embed.db` in a collection called `readmes`:
```bash
llm embed-multi readmes -d embed.db -m gemini-embedding-exp-03-07-128 \
--files . '*/README.md'
```
You can then run similarity searches against that collection like this:
```bash
llm similar readmes -c 'upload csvs to stuff' -d embed.db
```
See the [LLM embeddings documentation](https://llm.datasette.io/en/stable/embeddings/cli.html) for further details.
## Listing all Gemini API models
The `llm gemini models` command lists all of the models that are exposed by the Gemini API, some of which may not be available through this plugin.
```bash
llm gemini models
```
You can add a `--key X` option to use a different API key.
To filter models by their supported generation methods use `--method` one or more times:
```bash
llm gemini models --method embedContent
```
If you provide multiple methods you will see models that support any of them.
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
```bash
cd llm-gemini
python3 -m venv venv
source venv/bin/activate
```
Now install the dependencies and test dependencies:
```bash
llm install -e '.[test]'
```
To run the tests:
```bash
pytest
```
This project uses [pytest-recording](https://github.com/kiwicom/pytest-recording) to record Gemini API responses for the tests.
If you add a new test that calls the API you can capture the API response like this:
```bash
PYTEST_GEMINI_API_KEY="$(llm keys get gemini)" pytest --record-mode once
```
You will need to have stored a valid Gemini API key using this command first:
```bash
llm keys set gemini
# Paste key here
```
| text/markdown | Simon Willison | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"llm>=0.27",
"httpx",
"ijson",
"pytest; extra == \"test\"",
"pytest-recording; extra == \"test\"",
"pytest-asyncio; extra == \"test\"",
"cogapp; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/simonw/llm-gemini",
"Changelog, https://github.com/simonw/llm-gemini/releases",
"Issues, https://github.com/simonw/llm-gemini/issues",
"CI, https://github.com/simonw/llm-gemini/actions"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T17:58:31.567317 | llm_gemini-0.29.tar.gz | 23,322 | 40/23/5760b0b48161beec559cae9e6d0bbbab8bd70539cbea6056d7997d10ea94/llm_gemini-0.29.tar.gz | source | sdist | null | false | ad8bde8e521a3796f7572915d41d61e3 | 3f1f7da7f3765d5c3422ff208e9a1996c401e86fca4fb7b9fbd3bfdc372aea18 | 40235760b0b48161beec559cae9e6d0bbbab8bd70539cbea6056d7997d10ea94 | Apache-2.0 | [
"LICENSE"
] | 1,658 |
2.4 | policyengine-uk | 2.73.1 | PolicyEngine tax and benefit system for the UK. | # PolicyEngine UK
[](https://badge.fury.io/py/policyengine-uk)
[](https://github.com/psf/black)
PolicyEngine UK is [PolicyEngine](https://policyengine.org)'s microsimulation model of the UK tax-benefit system.
It uses the PolicyEngine Core microsimulation framework, which is based on [OpenFisca](https://openfisca.org).
The model's elements are organized into folders; all the modelling happens within the `policyengine_uk` folder.
- The rates and other system parameters are in the `parameters` folder.
- The formulas and inputs are in the `variables` folder.
- The package also comes with reforms in the `reforms` folder.
The files outside the `policyengine_uk` folder are used to set up the development environment. Installation instructions are located, along with other documentation, in the `docs` folder.
The model supports multiple different input datasets provided by the user, one of which is the Family Resources Survey,[^1] containing microdata on household incomes across the UK.
PolicyEngine UK enhances this dataset by fusing it to other surveys and reweighting it to minimize a comprehensive loss metric that measures the difference from an array of administrative totals.
[^1]: Department for Work and Pensions, Office for National Statistics, NatCen Social Research. (2021). Family Resources Survey, 2019-2020. [data collection]. UK Data Service. SN: 8802, http://doi.org/10.5255/UKDA-SN-8802-1
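The reweighting step can be sketched as a small least-squares problem: choose household weights so that weighted survey aggregates match administrative totals. This toy uses plain `numpy.linalg.lstsq` with made-up numbers; PolicyEngine's actual loss metric and solver are considerably more comprehensive:

```python
import numpy as np

# A[i, j]: household j's contribution to administrative aggregate i
# (e.g. income reported, benefit received). All numbers are hypothetical.
A = np.array([[1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
targets = np.array([150.0, 220.0])  # hypothetical administrative totals

# Minimum-norm least-squares weights; the real model constrains and
# regularizes this far more carefully.
weights, *_ = np.linalg.lstsq(A, targets, rcond=None)
residual = A @ weights - targets  # difference from the administrative totals
```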
## Fast setup instructions
1. Run `pip install policyengine-uk`
2. Run `policyengine-uk` and go through the prompt to setup microdata.
## Contact
The primary maintainer for PolicyEngine UK is Nikhil Woodruff, co-founder and CTO of PolicyEngine (nikhil@policyengine.org).
## Citation
You may cite the source of your analysis as "PolicyEngine UK release #.#.#, author's calculations."
| text/markdown | null | PolicyEngine <nikhil@policyengine.org> | null | Nikhil Woodruff <nikhil@policyengine.org> | null | benefit, microsimulation, social, tax | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Operating System :: POSIX",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
... | [] | null | null | <3.14,>=3.13 | [] | [] | [] | [
"microdf-python>=1.2.1",
"policyengine-core>=3.23.6",
"pydantic>=2.11.7",
"tables>=3.10.2",
"black; extra == \"dev\"",
"coverage; extra == \"dev\"",
"furo<2023; extra == \"dev\"",
"jupyter-book>=2.0.0a0; extra == \"dev\"",
"linecheck; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-cov; e... | [] | [] | [] | [
"Homepage, https://github.com/PolicyEngine/policyengine-uk",
"Repository, https://github.com/PolicyEngine/policyengine-uk",
"Issues, https://github.com/PolicyEngine/policyengine-uk/issues",
"Changelog, https://github.com/PolicyEngine/policyengine-uk/blob/master/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T17:58:22.441902 | policyengine_uk-2.73.1.tar.gz | 1,098,408 | b4/3d/9ba9e4cfe934a2e70840506ee20d1aa4ec1901a4d50d6667b5d676d33219/policyengine_uk-2.73.1.tar.gz | source | sdist | null | false | bdc086c2925633910bcfddfb0add5561 | 08bd0b91df4aa42b8173ededf5a00525dcd943db239ee88d102eabaa0576633a | b43d9ba9e4cfe934a2e70840506ee20d1aa4ec1901a4d50d6667b5d676d33219 | AGPL-3.0 | [
"LICENSE"
] | 816 |
2.4 | pymdwizard | 2.1.1 | This package provides programatic access to the functionality of the USGS MetadataWizard: https://doi-usgs.github.io/fort-pymdwizard/ | [](https://travis-ci.org/talbertc-usgs/fort-pymdwizard)
[](https://coveralls.io/github/talbertc-usgs/fort-pymdwizard?branch=master)
<img width="250" align="right" src="https://upload.wikimedia.org/wikipedia/commons/thumb/1/1c/USGS_logo_green.svg/500px-USGS_logo_green.svg.png"/>
Metadata Wizard
===========================================================================================
The MetadataWizard is a tool designed to facilitate FGDC
metadata creation for spatial and non-spatial data sets. It is a cross-platform desktop application
built using an open-source Python architecture.
Complete user documentation available [here](https://doi-usgs.github.io/fort-pymdwizard).

It provides a user-friendly and efficient environment for metadata creation,
editing, preview, and validation. Built-in tools facilitate and automate the creation of high quality
metadata records.
* Auto-population of challenging metadata sections such as the spatial reference,
spatial organization, and entity and attributes, based on information contained in
the data (CSV, Excel, Shapefiles, etc.)<br>

* Auto-population of contact information for USGS affiliates,
taxonomic information from ITIS, or keywords from USGS controlled vocabularies.<br>

* Built-in FGDC validator that highlights any missing or error elements directly on the GUI and in a printable report suitable for metadata review.<br>

* Copy/Paste or Drag-and-Drop of entire sections, subsections, or individual content
between different records or other tools including XML-Notepad and text editors.
* Built-in help documentation that guides users through common and detailed questions about metadata.
This project is modeled off of the original [Metadata Wizard](https://github.com/dignizio-usgs/MDWizard_Source), which was designed as a toolbox in ArcMap and required an ESRI installation.
Recommended Citation:
----------------
Talbert, C.B., Ignizio, D.A., Norkin, T., and Enns, K.D., 2017, Metadata Wizard (ver. 2.1.1, June 2025): U.S. Geological Survey software release, https://doi.org/10.5066/F7V9870D.
Authors:
----------------
Colin B. Talbert -- https://orcid.org/0000-0002-9505-1876<br>
Drew A. Ignizio -- https://orcid.org/0000-0001-8054-5139<br>
Tamar Norkin -- https://orcid.org/0000-0003-0797-3940<br>
Kyle D. Enns -- https://orcid.org/0000-0001-7675-697X
Acknowledgements:
----------------
The Metadata Wizard was developed by the data management team at the USGS Fort Collins Science Center,<br>
with support from the USGS Science Analytics and Synthesis (SAS),
and the USGS Community for Data Integration (CDI).<br><br>
Ongoing support provided by the USGS Science Analytics and Synthesis (SAS)<br><br>
Disclaimer:
-----------
This software has been approved for release by the U.S. Geological Survey (USGS).
Although the software has been subjected to rigorous review, the USGS reserves
the right to update the software as needed pursuant to further analysis and
review. No warranty, expressed or implied, is made by the USGS or the
U.S. Government as to the functionality of the software and related material
nor shall the fact of release constitute any such warranty. Furthermore, the
software is released on condition that neither the USGS nor the U.S. Government
shall be held liable for any damages resulting from its authorized
or unauthorized use.
Contact:
-----------
ask-sdm@usgs.gov
| text/markdown | Colin B. Talbert | Colin Talbert <ctalbert@ios.doi.gov>, Kyle Enns <kenns@usgs.gov>, Tamar Norkin <tnorkin@usgs.gov> | null | null | MetadataWizard License
------------------------------------------------------------------------------------------------
Creative Commons Attribution 4.0 International (CC BY 4.0) URL:
<http://creativecommons.org/licenses/by/4.0/>
Creative Commons Corporation (“Creative Commons”) is not a law firm and does not
provide legal services or legal advice. Distribution of Creative Commons public
licenses does not create a lawyer-client or other relationship. Creative Commons
makes its licenses and related information available on an “as-is” basis.
Creative Commons gives no warranties regarding its licenses, any material
licensed under their terms and conditions, or any related information. Creative
Commons disclaims all liability for damages resulting from their use to the
fullest extent possible.
**Using Creative Commons Public Licenses:** Creative Commons public licenses
provide a standard set of terms and conditions that creators and other rights
holders may use to share original works of authorship and other material subject
to copyright and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.
**Considerations for licensors:** Our public licenses are intended for use by
those authorized to give the public permission to use material in ways otherwise
restricted by copyright and certain other rights. Our licenses are irrevocable.
Licensors should read and understand the terms and conditions of the license
they choose before applying it. Licensors should also secure all rights
necessary before applying our licenses so that the public can reuse the material
as expected. Licensors should clearly mark any material not subject to the
license. This includes other CC-licensed material, or material used under an
exception or limitation to copyright. More considerations for licensors.
**Considerations for the public:** By using one of our public licenses, a
licensor grants the public permission to use the licensed material under
specified terms and conditions. If the licensor’s permission is not necessary
for any reason–for example, because of any applicable exception or limitation to
copyright–then that use is not regulated by the license. Our licenses grant only
permissions under copyright and certain other rights that a licensor has
authority to grant. Use of the licensed material may still be restricted for
other reasons, including because others have copyright or other rights in the
material. A licensor may make special requests, such as asking that all changes
be marked or described. Although not required by our licenses, you are
encouraged to respect those requests where reasonable.
Creative Commons Attribution 4.0 International Public License
-------------------------------------------------------------
By exercising the Licensed Rights (defined below), You accept and agree to be
bound by the terms and conditions of this Creative Commons Attribution 4.0
International Public License ("Public License"). To the extent this Public
License may be interpreted as a contract, You are granted the Licensed Rights in
consideration of Your acceptance of these terms and conditions, and the Licensor
grants You such rights in consideration of benefits the Licensor receives from
making the Licensed Material available under these terms and conditions.
**Section 1 – Definitions.**
1. **Adapted Material** means material subject to Copyright and Similar Rights
that is derived from or based upon the Licensed Material and in which the
Licensed Material is translated, altered, arranged, transformed, or
otherwise modified in a manner requiring permission under the Copyright and
Similar Rights held by the Licensor. For purposes of this Public License,
where the Licensed Material is a musical work, performance, or sound
recording, Adapted Material is always produced where the Licensed Material
is synched in timed relation with a moving image.
2. **Adapter's License** means the license You apply to Your Copyright and
Similar Rights in Your contributions to Adapted Material in accordance with
the terms and conditions of this Public License.
3. **Copyright and Similar Rights** means copyright and/or similar rights
closely related to copyright including, without limitation, performance,
broadcast, sound recording, and Sui Generis Database Rights, without regard
to how the rights are labeled or categorized. For purposes of this Public
License, the rights specified in
Section [2(b)(1)-(2)](https://creativecommons.org/licenses/by/4.0/legalcode#s2b) are
not Copyright and Similar Rights.
4. **Effective Technological Measures** means those measures that, in the
absence of proper authority, may not be circumvented under laws fulfilling
obligations under Article 11 of the WIPO Copyright Treaty adopted on
December 20, 1996, and/or similar international agreements.
5. **Exceptions and Limitations** means fair use, fair dealing, and/or any
other exception or limitation to Copyright and Similar Rights that applies
to Your use of the Licensed Material.
6. **Licensed Material** means the artistic or literary work, database, or
other material to which the Licensor applied this Public License.
7. **Licensed Rights** means the rights granted to You subject to the terms and
conditions of this Public License, which are limited to all Copyright and
Similar Rights that apply to Your use of the Licensed Material and that the
Licensor has authority to license.
8. **Licensor** means the individual(s) or entity(ies) granting rights under
this Public License.
9. **Share** means to provide material to the public by any means or process
that requires permission under the Licensed Rights, such as reproduction,
public display, public performance, distribution, dissemination,
communication, or importation, and to make material available to the public
including in ways that members of the public may access the material from a
place and at a time individually chosen by them.
10. **Sui Generis Database Rights** means rights other than copyright resulting
from Directive 96/9/EC of the European Parliament and of the Council of 11
March 1996 on the legal protection of databases, as amended and/or
succeeded, as well as other essentially equivalent rights anywhere in the
world.
11. **You** means the individual or entity exercising the Licensed Rights under
this Public License. **Your** has a corresponding meaning.
**Section 2 – Scope.**
1. **License grant**.
1. Subject to the terms and conditions of this Public License, the Licensor
hereby grants You a worldwide, royalty-free, non-sublicensable,
non-exclusive, irrevocable license to exercise the Licensed Rights in
the Licensed Material to:
1. reproduce and Share the Licensed Material, in whole or in part; and
2. produce, reproduce, and Share Adapted Material.
2. Exceptions and Limitations. For the avoidance of doubt, where Exceptions
and Limitations apply to Your use, this Public License does not apply,
and You do not need to comply with its terms and conditions.
3. Term. The term of this Public License is specified in
Section [6(a)](https://creativecommons.org/licenses/by/4.0/legalcode#s6a).
4. Media and formats; technical modifications allowed. The Licensor
authorizes You to exercise the Licensed Rights in all media and formats
whether now known or hereafter created, and to make technical
modifications necessary to do so. The Licensor waives and/or agrees not
to assert any right or authority to forbid You from making technical
modifications necessary to exercise the Licensed Rights, including
technical modifications necessary to circumvent Effective Technological
Measures. For purposes of this Public License, simply making
modifications authorized by this
Section [2(a)(4)](https://creativecommons.org/licenses/by/4.0/legalcode#s2a4) never
produces Adapted Material.
5. Downstream recipients.
1. Offer from the Licensor – Licensed Material. Every recipient of the
Licensed Material automatically receives an offer from the Licensor
to exercise the Licensed Rights under the terms and conditions of
this Public License.
2. No downstream restrictions. You may not offer or impose any
additional or different terms or conditions on, or apply any
Effective Technological Measures to, the Licensed Material if doing
so restricts exercise of the Licensed Rights by any recipient of the
Licensed Material.
6. No endorsement. Nothing in this Public License constitutes or may be
construed as permission to assert or imply that You are, or that Your
use of the Licensed Material is, connected with, or sponsored, endorsed,
or granted official status by, the Licensor or others designated to
receive attribution as provided in
Section [3(a)(1)(A)(i)](https://creativecommons.org/licenses/by/4.0/legalcode#s3a1Ai).
2. **Other rights**.
1. Moral rights, such as the right of integrity, are not licensed under
this Public License, nor are publicity, privacy, and/or other similar
personality rights; however, to the extent possible, the Licensor waives
and/or agrees not to assert any such rights held by the Licensor to the
limited extent necessary to allow You to exercise the Licensed Rights,
but not otherwise.
2. Patent and trademark rights are not licensed under this Public License.
3. To the extent possible, the Licensor waives any right to collect
royalties from You for the exercise of the Licensed Rights, whether
directly or through a collecting society under any voluntary or waivable
statutory or compulsory licensing scheme. In all other cases the
Licensor expressly reserves any right to collect such royalties.
**Section 3 – License Conditions.**
Your exercise of the Licensed Rights is expressly made subject to the following
conditions.
1. **Attribution**.
1. If You Share the Licensed Material (including in modified form), You
must:
1. retain the following if it is supplied by the Licensor with the
Licensed Material:
1. identification of the creator(s) of the Licensed Material and
any others designated to receive attribution, in any reasonable
manner requested by the Licensor (including by pseudonym if
designated);
2. a copyright notice;
3. a notice that refers to this Public License;
4. a notice that refers to the disclaimer of warranties;
5. a URI or hyperlink to the Licensed Material to the extent
reasonably practicable;
2. indicate if You modified the Licensed Material and retain an
indication of any previous modifications; and
3. indicate the Licensed Material is licensed under this Public
License, and include the text of, or the URI or hyperlink to, this
Public License.
2. You may satisfy the conditions in
Section [3(a)(1)](https://creativecommons.org/licenses/by/4.0/legalcode#s3a1) in
any reasonable manner based on the medium, means, and context in which
You Share the Licensed Material. For example, it may be reasonable to
satisfy the conditions by providing a URI or hyperlink to a resource
that includes the required information.
3. If requested by the Licensor, You must remove any of the information
required by
Section [3(a)(1)(A)](https://creativecommons.org/licenses/by/4.0/legalcode#s3a1A) to
the extent reasonably practicable.
4. If You Share Adapted Material You produce, the Adapter's License You
apply must not prevent recipients of the Adapted Material from complying
with this Public License.
**Section 4 – Sui Generis Database Rights.**
> Where the Licensed Rights include Sui Generis Database Rights that apply to
> Your use of the Licensed Material:
1. for the avoidance of doubt,
Section [2(a)(1)](https://creativecommons.org/licenses/by/4.0/legalcode#s2a1) grants
You the right to extract, reuse, reproduce, and Share all or a substantial
portion of the contents of the database;
2. if You include all or a substantial portion of the database contents in a
database in which You have Sui Generis Database Rights, then the database in
which You have Sui Generis Database Rights (but not its individual contents)
is Adapted Material; and
3. You must comply with the conditions in
Section [3(a)](https://creativecommons.org/licenses/by/4.0/legalcode#s3a) if
You Share all or a substantial portion of the contents of the database.
For the avoidance of doubt, this
Section [4](https://creativecommons.org/licenses/by/4.0/legalcode#s4) supplements
and does not replace Your obligations under this Public License where the
Licensed Rights include other Copyright and Similar Rights.
**Section 5 – Disclaimer of Warranties and Limitation of Liability.**
1. **Unless otherwise separately undertaken by the Licensor, to the extent
possible, the Licensor offers the Licensed Material as-is and as-available,
and makes no representations or warranties of any kind concerning the
Licensed Material, whether express, implied, statutory, or other. This
includes, without limitation, warranties of title, merchantability, fitness
for a particular purpose, non-infringement, absence of latent or other
defects, accuracy, or the presence or absence of errors, whether or not
known or discoverable. Where disclaimers of warranties are not allowed in
full or in part, this disclaimer may not apply to You.**
2. **To the extent possible, in no event will the Licensor be liable to You on
any legal theory (including, without limitation, negligence) or otherwise
for any direct, special, indirect, incidental, consequential, punitive,
exemplary, or other losses, costs, expenses, or damages arising out of this
Public License or use of the Licensed Material, even if the Licensor has
been advised of the possibility of such losses, costs, expenses, or damages.
Where a limitation of liability is not allowed in full or in part, this
limitation may not apply to You.**
3. The disclaimer of warranties and limitation of liability provided above
shall be interpreted in a manner that, to the extent possible, most closely
approximates an absolute disclaimer and waiver of all liability.
**Section 6 – Term and Termination.**
1. This Public License applies for the term of the Copyright and Similar Rights
licensed here. However, if You fail to comply with this Public License, then
Your rights under this Public License terminate automatically.
2. Where Your right to use the Licensed Material has terminated under
Section [6(a)](https://creativecommons.org/licenses/by/4.0/legalcode#s6a),
it reinstates:
1. automatically as of the date the violation is cured, provided it is
cured within 30 days of Your discovery of the violation; or
2. upon express reinstatement by the Licensor.
> For the avoidance of doubt, this
> Section [6(b)](https://creativecommons.org/licenses/by/4.0/legalcode#s6b) does
> not affect any right the Licensor may have to seek remedies for Your
> violations of this Public License.
3. For the avoidance of doubt, the Licensor may also offer the Licensed
Material under separate terms or conditions or stop distributing the
Licensed Material at any time; however, doing so will not terminate this
Public License.
4. Sections [1](https://creativecommons.org/licenses/by/4.0/legalcode#s1), [5](https://creativecommons.org/licenses/by/4.0/legalcode#s5), [6](https://creativecommons.org/licenses/by/4.0/legalcode#s6), [7](https://creativecommons.org/licenses/by/4.0/legalcode#s7),
and [8](https://creativecommons.org/licenses/by/4.0/legalcode#s8) survive
termination of this Public License.
**Section 7 – Other Terms and Conditions.**
1. The Licensor shall not be bound by any additional or different terms or
conditions communicated by You unless expressly agreed.
2. Any arrangements, understandings, or agreements regarding the Licensed
Material not stated herein are separate from and independent of the terms
and conditions of this Public License.
**Section 8 – Interpretation.**
1. For the avoidance of doubt, this Public License does not, and shall not be
interpreted to, reduce, limit, restrict, or impose conditions on any use of
the Licensed Material that could lawfully be made without permission under
this Public License.
2. To the extent possible, if any provision of this Public License is deemed
unenforceable, it shall be automatically reformed to the minimum extent
necessary to make it enforceable. If the provision cannot be reformed, it
shall be severed from this Public License without affecting the
enforceability of the remaining terms and conditions.
3. No term or condition of this Public License will be waived and no failure to
comply consented to unless expressly agreed to by the Licensor.
4. Nothing in this Public License constitutes or may be interpreted as a
limitation upon, or waiver of, any privileges and immunities that apply to
the Licensor or You, including from the legal processes of any jurisdiction
or authority.
Creative Commons is not a party to its public licenses. Notwithstanding,
Creative Commons may elect to apply one of its public licenses to material it
publishes and in those instances will be considered the “Licensor.” The text of
the Creative Commons public licenses is dedicated to the public domain under the
CC0 Public Domain Dedication. Except for the limited purpose of indicating that
material is shared under a Creative Commons public license or as otherwise
permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the use of the
trademark “Creative Commons” or any other trademark or logo of Creative Commons
without its prior written consent including, without limitation, in connection
with any unauthorized modifications to any of its public licenses or any other
arrangements, understandings, or agreements concerning use of licensed material.
For the avoidance of doubt, this paragraph does not form part of the public
licenses.
Creative Commons may be contacted at creativecommons.org.
Python Libraries, Software Dependencies and Licensing
-----------------------------------------------------
The Geospatial Route Interface Tool (GRIT) software includes Python scripts
developed by the United States Geological Survey (USGS) and available through a
software repository. The GRIT installer is a complete deployment package of the
GRIT software and all third-party Python libraries. The GRIT installer does not
include the ESRI desktop software, which is required and must be obtained
separately. Please review and understand the licensing associated with all
software related to the GRIT package before use. In addition to the listing of
library dependencies below, see Appendix 5 for all software and library details.
| **Library Name** | **License** | **Description/Use** |
|------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Python 3.6 | Python software foundation license agreement for Python 3.6.5 | Many of the Python base packages are used throughout the Metadata Wizard software. |
| PyQt5 | General Public License (GPL) | PyQt5 is used for the user interface (UI) development. |
| lxml | BSD | XML reading, modification, and writing |
| defusedxml | Python Software Foundation License (PSFL) | XML reading, modification, and writing |
| gdal/osgeo | Massachusetts Institute of Technology (MIT) | Geospatial Data Abstract Library used for manipulating geospatial data sets. This includes the GDAL (raster) and ogr (vector) references. Dependencies for this library include libgdal, proj4 and numpy. |
| requests | Apache Software License (Apache 2.0) | Web service requests |
| beautifulsoup4 | Massachusetts Institute of Technology (MIT) | HTML parsing |
| fiona | BSD 3-Clause License | Vector spatial data read and introspection. |
| rasterio | BSD 3-Clause License | Raster spatial data read and introspection. |
| leaflet | BSD | Interactive HTML/JavaScript Map generation. |
| habanero | Massachusetts Institute of Technology (MIT) | Obtain citation information from digital object identifiers. |
| folium | Massachusetts Institute of Technology (MIT) | Used by the GRIT to provide mapping capabilities in the UI and for reports generated by the software. |
| matplotlib | Berkeley Software Distribution (BSD) based on the Python Software Foundation (PSF) license and Non-BSD compatible licenses (for example, LGPL) for matplotlib toolkits (for example, basemap). | Used by the GRIT to provide mapping capabilities in the UI and for reports generated by the software. |
| pyproj | Internet Systems Consortium (ISC), functionally equivalent to the BSD-2-Clause and MIT | Used by the GRIT to work with map projections. |
| docx | Massachusetts Institute of Technology (MIT) | Create Microsoft Word metadata review documents |
| pandas | BSD 3-Clause License | Tabular data reading and manipulation |
| jupyter | BSD 3-Clause License | Notebook based scripting and automation |
| pysb | This USGS product is considered to be in the U.S. public domain, and is licensed under CC0 1.0. | Included for scripting of USGS ScienceBase interaction |
| bokeh | Freely Distributable, OSI Approved (New BSD) | Included for scripting of data visualization |
| seaborn | BSD 3-Clause License | Included for scripting of statistical data visualization |
| metadata, USGS, FGDC, CSGDM | [
"Programming Language :: Python :: 3",
"License :: CC0 1.0 Universal (CC0 1.0) Public Domain Dedication",
"Operating System :: OS Independent"
] | [] | https://github.com/usgs/fort-pymdwizard | null | >=3.8 | [] | [] | [] | [
"numpy>=1.22.0",
"pandas>=1.4.0",
"requests>=2.27.0",
"geopandas>=0.10.0",
"fiona>=1.10.0",
"shapely>=1.8.0",
"GDAL>=3.4.0",
"matplotlib>=3.5.0"
] | [] | [] | [] | [
"Homepage, https://www.usgs.gov/software/metadata-wizard",
"Documentation, https://doi-usgs.github.io/fort-pymdwizard",
"Source, https://github.com/DOI-USGS/fort-pymdwizard",
"Tracker, https://github.com/DOI-USGS/fort-pymdwizard/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-19T17:58:12.821181 | pymdwizard-2.1.1.tar.gz | 2,961,483 | fe/30/f22a175404402bb2e754e3009ab80c07e04b4a00cd2fb9c7e7e1b98d5810/pymdwizard-2.1.1.tar.gz | source | sdist | null | false | 231d67c31b471d07d86b92b6dbb03566 | c6f1ddb21ebe6cbad9f038deea2671d0428e8fa6ca676a631f841e0f6b5c8c04 | fe30f22a175404402bb2e754e3009ab80c07e04b4a00cd2fb9c7e7e1b98d5810 | null | [
"LICENSE.md"
] | 244 |
2.3 | soma-models | 0.1.1 | Python model implementations for the Soma network, matching the Rust runtime exactly | # Soma Models
Python implementations of Soma network models. These implementations are **numerically identical** to the Rust runtime — weights trained in Python produce the same outputs when evaluated on-chain.
Both [PyTorch](https://pytorch.org/) and [Flax](https://flax.readthedocs.io/) (JAX) are supported as first-class frameworks. Models are serialized to [safetensors](https://huggingface.co/docs/safetensors/) format, which is the canonical weight exchange format between Python and the Rust runtime.
## Install
```bash
# PyTorch
uv add "soma-models[torch]"
# Flax / JAX
uv add "soma-models[flax]"
# Both
uv add "soma-models[all]"
```
Or with pip:
```bash
pip install "soma-models[torch]"  # PyTorch
pip install "soma-models[flax]"   # Flax / JAX
pip install "soma-models[all]"    # Both
```
## Versioning
Model architectures are **versioned**. Each version defines a fixed architecture, hyperparameters, data contract, and scoring function. The on-chain runtime selects the architecture version when evaluating a model, so your weights must match the version you registered with.
New versions may be introduced via protocol upgrades. Previous versions continue to work for models registered under them.
---
## V1
V1 is a **pre-norm byte-level transformer**. It operates directly on raw bytes — no external tokenizer is needed. The model uses rotary positional embeddings (RoPE), GELU activations, and a next-token prediction objective with a Gaussian uniformity regularizer (SIGReg) to prevent embedding collapse.
### Architecture
```
Input bytes (0–255)
│
▼
Embedding (vocab_size → embedding_dim)
│
▼
Encoder (num_layers × TransformerBlock)
│ ┌─────────────────────────────────┐
│ │ Pre-Norm (LayerNorm) │
│ │ Multi-Head Attention (RoPE) │
│ │ Dropout + Residual │
│ │ Pre-Norm (LayerNorm) │
│ │ Feed-Forward (GELU) │
│ │ Dropout + Residual │
│ └─────────────────────────────────┘
│
▼
Final LayerNorm → representations (used for embedding + loss)
│
▼
Linear predictor → logits (used for cross-entropy loss)
```
### Hyperparameters
| Parameter | Value | Description |
|-----------|-------|-------------|
| `EMBEDDING_DIM` | 2048 | Dimension of token embeddings and hidden states |
| `NUM_HEADS` | 8 | Number of attention heads (head_dim = 256) |
| `NUM_LAYERS` | 32 | Number of transformer blocks |
| `MAX_SEQ_LEN` | 8192 | Maximum sequence length during on-chain evaluation |
| `PWFF_HIDDEN_DIM` | 8192 | Feed-forward inner dimension (4 × embedding_dim) |
| `VOCAB_SIZE` | 264 | 256 byte tokens + 8 special tokens |
| `MAX_WAVELENGTH` | 10,000 | RoPE positional encoding wavelength |
| `SCALE_FACTOR` | 1.0 | RoPE scale factor |
| `BATCH_SIZE` | 32 | Batch size during on-chain evaluation |
### Data Contract
The model operates on **raw bytes**. During on-chain evaluation, data is processed as follows:
- Each byte (0–255) is its own token ID
- Special tokens: **PAD = 256**, **EOS = 257**
- Data is chunked into non-overlapping sequences of `MAX_SEQ_LEN` (8192) bytes
- EOS is only placed on the **final chunk** and only if it is shorter than `MAX_SEQ_LEN` — it occupies the position immediately after the last data byte. If data length is an exact multiple of `MAX_SEQ_LEN`, no EOS is appended
- Any remaining positions after EOS (or after data if no EOS) are filled with PAD
- **Targets** are the input token IDs shifted left by 1 (next-token prediction), with PAD appended as the final target
- **Position IDs** are global byte offsets for data positions. PAD and EOS positions are clamped to the offset of the last data byte + 1 (they do not continue incrementing)
- Sequences are batched in groups of `BATCH_SIZE` (32)
You are free to prepare your training data however you want — different sequence lengths, different batching, different shuffling. But your model will be **scored** using the contract above, so your training should produce weights that perform well under these conditions.
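The chunking rules above can be sketched in plain Python. This is an illustrative reimplementation of the contract, not the packaged tokenizer (shown further below); the function name `chunk_bytes` is invented here, and the PAD/EOS values come from the tables above:

```python
PAD, EOS = 256, 257

def chunk_bytes(data: bytes, max_seq_len: int = 8192):
    """Sketch of the V1 data contract: chunk into non-overlapping sequences,
    append EOS only on a short final chunk, pad with PAD, and build shifted
    targets plus clamped position ids."""
    sequences = []
    for start in range(0, len(data), max_seq_len):
        chunk = list(data[start:start + max_seq_len])
        final_and_short = start + len(chunk) == len(data) and len(chunk) < max_seq_len
        token_ids = chunk + ([EOS] if final_and_short else [])
        # Position ids are global byte offsets; EOS and PAD positions are
        # clamped to the offset of the last data byte + 1.
        pos_ids = list(range(start, start + len(chunk)))
        clamp = start + len(chunk)
        token_ids += [PAD] * (max_seq_len - len(token_ids))
        pos_ids += [clamp] * (max_seq_len - len(pos_ids))
        # Targets are the inputs shifted left by one, with PAD as final target.
        targets = token_ids[1:] + [PAD]
        sequences.append((token_ids, targets, pos_ids))
    return sequences
```

For example, six bytes with `max_seq_len=4` produce one full chunk (no EOS, no PAD) and one short final chunk ending in EOS then PAD.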
### Scoring (Loss Function)
Models are scored on-chain by the following loss:
```
loss = cross_entropy + sig_reg_loss
```
The model with the **lowest loss** wins. The two components are:
1. **Cross-entropy loss**: Standard next-token prediction loss over the vocabulary. PAD tokens (256) are masked out and do not contribute to the loss.
2. **SIGReg loss**: A Gaussian uniformity regularizer ([LeJEPA](https://arxiv.org/pdf/2511.08544)) that penalizes embedding collapse. It measures how far the embedding distribution deviates from a standard Gaussian by comparing the characteristic function of projected representations against the Gaussian characteristic function.
| SIGReg Parameter | Value |
|------------------|-------|
| `SIG_REG_T_MAX` | 3.0 |
| `SIG_REG_SLICES` | 1024 |
| `SIG_REG_POINTS` | 17 |
| `SIG_REG_COEFFICIENT` | 0.02 |
SIGReg noise is generated using each framework's native RNG (`jax.random` via `nnx.Rngs` for Flax, `torch.randn` for PyTorch).
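To make the PAD masking in the cross-entropy term concrete, here is a minimal pure-Python sketch (illustrative only; the packaged `compute_loss` operates on batched framework tensors and also adds the SIGReg term):

```python
import math

PAD = 256

def masked_cross_entropy(logits, targets):
    """Mean next-token cross-entropy over one sequence, skipping positions
    whose target is PAD, per the scoring contract above.

    logits: list of per-position score lists; targets: list of token ids.
    """
    total, count = 0.0, 0
    for scores, target in zip(logits, targets):
        if target == PAD:
            continue  # PAD targets are masked out of the loss
        # Numerically stable log-sum-exp for the softmax normalizer.
        m = max(scores)
        log_z = m + math.log(sum(math.exp(s - m) for s in scores))
        total += log_z - scores[target]
        count += 1
    return total / count if count else 0.0
```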
### Tokenizer
The tokenizer implements the on-chain data contract as a framework-agnostic Python module. It converts raw bytes into `token_ids`, `targets`, and `pos_ids` that can be wrapped with any framework's tensor constructor (`torch.tensor()`, `jnp.array()`, `tf.constant()`, etc.).
```python
from soma_models.v1.tokenizer import tokenize
batches = tokenize(raw_bytes)
for batch in batches:
batch.token_ids # [batch, seq_len] nested list of ints
batch.targets # [batch, seq_len] nested list of ints
batch.pos_ids # [batch, seq_len] nested list of ints
```
The default `max_seq_len` and `batch_size` match the on-chain evaluation parameters. You can override them for training:
```python
batches = tokenize(raw_bytes, max_seq_len=2048, batch_size=8)
```
The final batch may contain fewer than `batch_size` sequences (matching the Rust DataLoader behaviour).
### Usage
Both frameworks expose the same API: a `Model`, a `SIGReg` regularizer, and a `compute_loss` function.
#### PyTorch
```python
import torch
from soma_models.v1.configs import ModelConfig, SIGRegConfig
from soma_models.v1.tokenizer import tokenize
from soma_models.v1.torch.modules.model import Model
from soma_models.v1.torch.modules.sig_reg import SIGReg
from soma_models.v1.torch.loss import compute_loss
# Initialize
model = Model(ModelConfig(dropout_rate=0.1))
sig_reg = SIGReg(SIGRegConfig())
# Tokenize raw bytes
batches = tokenize(raw_bytes)
# Forward + loss (differentiable)
for batch in batches:
loss, embedding = compute_loss(
model, sig_reg,
token_ids=torch.tensor(batch.token_ids),
targets=torch.tensor(batch.targets),
)
loss.backward()
# Save / load weights
model.save("weights.safetensors")
model = Model.load("weights.safetensors", ModelConfig(dropout_rate=0.0))
```
#### Flax
```python
import jax.numpy as jnp
from flax import nnx
from soma_models.v1.configs import ModelConfig, SIGRegConfig
from soma_models.v1.tokenizer import tokenize
from soma_models.v1.flax.modules.model import Model
from soma_models.v1.flax.modules.sig_reg import SIGReg
from soma_models.v1.flax.loss import compute_loss
# Initialize
rngs = nnx.Rngs(0)
model = Model(ModelConfig(dropout_rate=0.1), rngs=rngs)
sig_reg = SIGReg(SIGRegConfig(), rngs=rngs)
# Tokenize raw bytes
batches = tokenize(raw_bytes)
# Forward + loss (differentiable via jax.grad)
for batch in batches:
loss, embedding = compute_loss(
model, sig_reg,
token_ids=jnp.array(batch.token_ids),
targets=jnp.array(batch.targets),
)
# Save / load weights
model.save("weights.safetensors")
model = Model.load("weights.safetensors", ModelConfig(dropout_rate=0.0), rngs=rngs)
```
### Weight Serialization
Weights are stored in safetensors format with a canonical key layout. The serde layer handles all framework-specific transformations automatically:
- **LayerNorm**: `weight`/`bias` (torch) ↔ `gamma`/`beta` (safetensors) ↔ `scale`/`bias` (flax)
- **Linear**: Row-major (torch) ↔ column-major (safetensors/flax)
- **Attention**: Split-head (flax) ↔ flat (safetensors/torch)
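The Linear transformation in that list is just a transpose between the two weight layouts; a minimal sketch with nested lists (illustrative only, the actual serde layer in this package operates on real tensors):

```python
def to_column_major(weight_rows):
    """Row-major [out_dim, in_dim] (torch) -> column-major [in_dim, out_dim]
    (safetensors/flax). Applying it twice round-trips back to row-major."""
    return [list(col) for col in zip(*weight_rows)]
```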
#### PyTorch
```python
from soma_models.v1.configs import ModelConfig
from soma_models.v1.torch.modules.model import Model
# Save
model.save("weights.safetensors")
# Load
model = Model.load("weights.safetensors", ModelConfig(dropout_rate=0.0))
```
#### Flax
```python
from soma_models.v1.configs import ModelConfig
from soma_models.v1.flax.modules.model import Model
from flax import nnx
# Save
model.save("weights.safetensors")
# Load
model = Model.load("weights.safetensors", ModelConfig(dropout_rate=0.0), rngs=nnx.Rngs(0))
```
Weights are cross-compatible — you can save from one framework and load into the other.
| text/markdown | Soma Contributors | null | null | null | Apache-2.0 | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"soma-arrgen==0.1.0",
"safetensors>=0.6.2",
"soma-models[flax,torch]; extra == \"all\"",
"flax>=0.12.3; extra == \"flax\"",
"torch>=2.10.0; extra == \"torch\""
] | [] | [] | [] | [
"Homepage, https://github.com/soma-org/soma",
"Repository, https://github.com/soma-org/soma"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T17:58:10.073396 | soma_models-0.1.1.tar.gz | 15,038 | 95/e2/96a95360289761531dd2c7329c9c1090e6b8db5adaa90f47ade394715d1f/soma_models-0.1.1.tar.gz | source | sdist | null | false | 474edc7138c098efc384dd915d623dd6 | 6cadc689e3e14afdf3e14f05cd3aa946242caf739cdc720742a148c2e7532131 | 95e296a95360289761531dd2c7329c9c1090e6b8db5adaa90f47ade394715d1f | null | [] | 221 |
2.4 | lm-mcp | 1.8.0 | MCP server for LogicMonitor platform API integration | # LogicMonitor MCP Server
[](https://pypi.org/project/lm-mcp/)
[](https://pypi.org/project/lm-mcp/)
[](https://opensource.org/licenses/MIT)
<!-- mcp-name: io.github.ryanmat/logicmonitor -->
Model Context Protocol (MCP) server for LogicMonitor REST API v3 integration. Enables AI assistants to interact with LogicMonitor monitoring data through 198 structured tools, 14 workflow prompts, and 24 resources.
Works with any MCP-compatible client: Claude Desktop, Claude Code, Cursor, Continue, Cline, and more.
## Quick Start
**1. Get your LogicMonitor Bearer Token:**
- Log into your LogicMonitor portal
- Go to **Settings** → **Users and Roles** → **API Tokens**
- Create a new API-only user or add a token to an existing user
- Copy the Bearer token
**2. Configure your MCP client:**
For **Claude Code** (CLI):
```bash
claude mcp add logicmonitor \
-e LM_PORTAL=yourcompany.logicmonitor.com \
-e LM_BEARER_TOKEN=your-bearer-token \
-- uvx --from lm-mcp lm-mcp-server
```
For **Claude Desktop**, add to your config file (see [MCP Client Configuration](#mcp-client-configuration) below).
**3. Verify it's working:**
```bash
claude mcp list
```
You should see: `logicmonitor: uvx --from lm-mcp lm-mcp-server - ✓ Connected`
**4. Test with a prompt:**
```
"Show me all critical alerts in LogicMonitor"
```
## Features
**198 Tools** across comprehensive LogicMonitor API coverage (180 LM + 18 AAP):
### AI Analysis Tools
Server-side intelligence that transforms raw monitoring data into actionable insights:
- **Alert Correlation**: Automatically clusters related alerts by device, datasource, and temporal proximity — replaces dozens of manual API calls with a single aggregated view
- **Alert Statistics**: Aggregated alert counts by severity, top-10 devices and datasources, time-bucketed distributions for trend analysis
- **Metric Anomaly Detection**: Z-score based anomaly detection on any metric datapoint with configurable thresholds and IQR fallback
- **Metric Baselines**: Save baseline snapshots of metric behavior, then compare current performance against the baseline to detect drift
- **Scheduled Analysis**: HTTP API endpoints for triggering analysis workflows (alert correlation, RCA, top talkers, health checks) from external schedulers and webhooks
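The z-score method with an IQR fallback can be sketched in a few lines of standard-library Python. This is an illustration of the idea, not the server's implementation; `find_anomalies` and the fallback condition are invented here:

```python
import statistics

def find_anomalies(values, z_threshold=3.0, min_samples=8):
    """Return indices of anomalous points. Uses z-scores for larger samples
    and falls back to a 1.5x IQR fence when the sample is too small for
    stable z-scores (fallback rule is an assumption for this sketch)."""
    if len(values) >= min_samples:
        mean = statistics.fmean(values)
        stdev = statistics.pstdev(values)
        if stdev == 0:
            return []  # all points identical, nothing to flag
        return [i for i, v in enumerate(values)
                if abs(v - mean) / stdev > z_threshold]
    if len(values) < 4:
        return []  # too few points for any method
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [i for i, v in enumerate(values) if v < lo or v > hi]
```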
### ML/Statistical Analysis Tools
Pure-Python statistical methods for capacity planning, trend analysis, and operational scoring:
- **Metric Forecasting**: Linear regression to predict threshold breach timing with trend direction and confidence
- **Metric Correlation**: Pearson correlation matrix across multiple metric series with strong-correlation highlighting
- **Change Point Detection**: CUSUM algorithm for identifying regime shifts and mean-level changes
- **Alert Noise Scoring**: Shannon entropy and flap detection to quantify alert noise (0-100) with tuning recommendations
- **Seasonality Detection**: Autocorrelation-based periodicity detection at standard intervals with peak-hour identification
- **Availability Calculation**: SLA-style uptime percentage from alert history with MTTR, incident counts, and per-device breakdown
- **Blast Radius Analysis**: Topology-based downstream impact scoring for device failure scenarios
- **Change Correlation**: Cross-references alert spikes with audit/change logs to identify change-induced incidents
- **Trend Classification**: Categorizes metrics as stable, increasing, decreasing, cyclic, or volatile
- **Device Health Scoring**: Multi-metric composite health score (0-100) using z-score analysis with configurable weights
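The forecasting idea, fitting a line to the series and solving for when it crosses a threshold, can be sketched as follows (illustrative only; `forecast_breach` is an invented name, not the server's tool):

```python
def forecast_breach(values, threshold):
    """Least-squares line over sample indices. Returns (slope, steps) where
    steps is how many samples ahead the fitted line crosses the threshold,
    or None when the trend is flat or decreasing."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    sxx = sum((x - mean_x) ** 2 for x in range(n))
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    slope = sxy / sxx
    if slope <= 0:
        return slope, None
    intercept = mean_y - slope * mean_x
    # Solve intercept + slope * x = threshold, relative to the last sample.
    steps = (threshold - intercept) / slope - (n - 1)
    return slope, max(0.0, steps)
```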
### APM Trace Tools
Service discovery and RED metrics for LogicMonitor APM (Application Performance Monitoring):
- **Service Discovery**: List all traced services, inspect individual service details and properties
- **Operation Listing**: Discover endpoints/routes monitored within each service
- **RED Metrics**: Duration, error count, and operation count at both service and per-operation level
- **Alert Integration**: View active alerts for any traced service
- **Property Inspection**: OTel attributes, namespace info, and auto-discovered metadata
### Core Monitoring
- **Alert Management**: Query, acknowledge, bulk acknowledge, add notes, view rules
- **Device Management**: Full CRUD - list, create, update, delete devices and groups
- **Metrics & Data**: Query datasources, instances, metric data, and graphs
- **Dashboard Management**: Full CRUD for dashboards, widgets, and groups
- **SDT Management**: Create, list, bulk create/delete Scheduled Downtime
- **Collector Management**: List collectors and collector groups
### Extended Features
- **Website Monitoring**: Full CRUD for synthetic checks and website groups
- **Report Management**: List, view, run reports, manage schedules
- **Escalation Management**: Full CRUD for escalation chains and recipient groups
- **Alert Rules**: Full CRUD for alert routing rules
- **User & Role Management**: View users, roles, access groups, API tokens
- **Ops Management**: Audit logs, ops notes, login/change audits
### LogicModules
- **DataSources**: Query and export datasource definitions
- **ConfigSources**: Query and export configuration collection modules
- **EventSources**: Query and export event detection modules
- **PropertySources**: Query and export property collection modules
- **TopologySources**: Query and export topology mapping modules
- **LogSources**: Query and export log collection modules
- **Import Support**: Import LogicModules from JSON definitions
### Advanced Capabilities
- **Cost Optimization**: Cloud cost analysis, recommendations, idle resources (LM Envision)
- **Network Topology**: Device neighbors, interfaces, flows, connections
- **Batch Jobs**: View and manage batch job execution history
- **Log/Metric Ingestion**: Push logs and metrics via LMv1 authentication
### MCP Protocol Features
- **Resources**: 24 schema/enum/filter/guide resources for API reference
- **Prompts**: 14 workflow templates (incident triage, RCA, capacity forecasting, remediation, etc.)
- **Completions**: Auto-complete for tool arguments
### Claude Code Skills
Pre-built slash-command workflows for Claude Code that orchestrate multiple tools into guided operational runbooks:
| Skill | Command | Description |
|-------|---------|-------------|
| Alert Triage | `/lm-triage` | Investigate active alerts, score noise, correlate clusters, assess blast radius, take action |
| Device Health | `/lm-health <device>` | Comprehensive health check — metrics, anomalies, health score, availability, topology |
| Portal Overview | `/lm-portal` | Portal-wide snapshot for shift handoff — alerts, collectors, SDTs, down devices |
| Capacity Planning | `/lm-capacity <device>` | Trend analysis, seasonality detection, breach forecasting, right-sizing |
| APM Investigation | `/lm-apm [service]` | Service discovery, operation-level RED metrics, alert correlation |
| Remediation | `/lm-remediate` | Diagnose alert, find/generate playbook, launch AAP job, verify fix |
Skills ship with the repo — clone it and invoke `/lm-triage` in Claude Code to get started.
### Ansible Automation Platform Integration
18 tools for observability-driven remediation via Ansible Automation Platform (AAP). Connects LogicMonitor alerts to automated remediation playbooks.
- **Job Templates**: List, inspect, and launch job templates with extra variables and host limits
- **Job Execution**: Launch jobs, check status, view output, cancel or relaunch runs
- **Workflows**: Launch workflow templates, monitor multi-step automation sequences
- **Inventories & Hosts**: List inventories, inspect hosts for targeted remediation
- **Projects & Credentials**: Browse available projects and credentials (secrets never exposed)
- **Write Protection**: `launch_job`, `launch_workflow`, `cancel_job`, and `relaunch_job` require `LM_ENABLE_WRITE_OPERATIONS=true`
- **Jinja2 Safety**: All extra_vars inputs are validated to prevent template injection
AAP tools are optional — they only appear when `AWX_URL` and `AWX_TOKEN` are configured. See [Example Playbooks](examples/playbooks/) for remediation templates.
### Operational Features
- **Security-First**: Read-only by default, write operations require explicit opt-in
- **Rate Limit Handling**: Automatic retry with exponential backoff and jitter
- **Server Error Recovery**: Automatic retry on 5xx server errors
- **Pagination Support**: Handle large result sets with offset-based pagination
- **Session Persistence**: Optional file-backed session variables that survive restarts
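Exponential backoff with jitter is a standard pattern; a minimal sketch of the general technique (illustrative only, not this server's internal code) looks like:

```python
import random

def backoff_delays(max_retries: int = 3, base: float = 1.0, cap: float = 30.0):
    """Yield one retry delay per attempt: the ceiling doubles each attempt
    (capped), and "full jitter" picks a random delay under that ceiling so
    many clients hitting the same rate limit don't retry in lockstep."""
    for attempt in range(max_retries):
        ceiling = min(cap, base * (2 ** attempt))
        yield random.uniform(0, ceiling)

# With LM_MAX_RETRIES=3, delay ceilings grow toward 1s, 2s, 4s.
delays = list(backoff_delays())
```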
## Installation
### Via PyPI (Recommended)
```bash
# Using uvx (no install needed)
uvx --from lm-mcp lm-mcp-server
# Using pip
pip install lm-mcp
```
### From Source
```bash
git clone https://github.com/ryanmat/mcp-server-logicmonitor.git
cd mcp-server-logicmonitor
uv sync
```
### Docker Deployment
For remote/shared deployments using HTTP transport:
```bash
cd deploy
cp .env.example .env
# Edit .env with your credentials
# Run with Docker Compose
docker compose up -d
# With TLS via Caddy
docker compose --profile tls up -d
```
The server exposes health endpoints for container orchestration:
- `GET /health` - Detailed health check with all component statuses
- `GET /healthz` - Liveness probe (200 OK or 503)
- `GET /readyz` - Readiness probe (includes connectivity check if enabled)
## Configuration
### Environment Variables
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `LM_PORTAL` | Yes | - | LogicMonitor portal hostname (e.g., `company.logicmonitor.com`) |
| `LM_BEARER_TOKEN` | Yes* | - | API Bearer token (min 10 characters) |
| `LM_ACCESS_ID` | No | - | LMv1 API access ID (for ingestion APIs) |
| `LM_ACCESS_KEY` | No | - | LMv1 API access key (for ingestion APIs) |
| `LM_ENABLE_WRITE_OPERATIONS` | No | `false` | Enable write operations (create, update, delete) |
| `LM_API_VERSION` | No | `3` | API version |
| `LM_TIMEOUT` | No | `30` | Request timeout in seconds (range: 5-300) |
| `LM_MAX_RETRIES` | No | `3` | Max retries for rate-limited/server error requests (range: 0-10) |
| `LM_TRANSPORT` | No | `stdio` | Transport mode: `stdio` (local) or `http` (remote) |
| `LM_HTTP_HOST` | No | `0.0.0.0` | HTTP server bind address |
| `LM_HTTP_PORT` | No | `8080` | HTTP server port |
| `LM_CORS_ORIGINS` | No | `*` | Comma-separated CORS origins |
| `LM_SESSION_ENABLED` | No | `true` | Enable session context tracking |
| `LM_SESSION_HISTORY_SIZE` | No | `50` | Number of tool calls to keep in history |
| `LM_LOG_LEVEL` | No | `warning` | Logging level: `debug`, `info`, `warning`, or `error` |
| `LM_FIELD_VALIDATION` | No | `warn` | Field validation: `off`, `warn`, or `error` |
| `LM_HEALTH_CHECK_CONNECTIVITY` | No | `false` | Include LM API ping in health checks |
| `LM_SESSION_PERSIST_PATH` | No | - | File path for persistent session variables (survives restarts) |
| `LM_ANALYSIS_TTL_MINUTES` | No | `60` | TTL for scheduled analysis results (1-1440 minutes) |
| `AWX_URL` | No | - | Ansible Automation Platform controller URL (e.g., `https://aap.example.com`) |
| `AWX_TOKEN` | No | - | AAP personal access token |
| `AWX_VERIFY_SSL` | No | `true` | Verify SSL certificates for AAP connections |
| `AWX_TIMEOUT` | No | `30` | Request timeout in seconds for AAP API calls |
| `AWX_MAX_RETRIES` | No | `3` | Max retries for failed AAP API requests |
*Either `LM_BEARER_TOKEN` or both `LM_ACCESS_ID` and `LM_ACCESS_KEY` are required.
### Authentication Methods
**Bearer Token (Recommended):**
- Simpler setup, works for most operations
- Set `LM_BEARER_TOKEN`
**LMv1 HMAC (Required for Ingestion):**
- Required for `ingest_logs` and `push_metrics` tools
- Set both `LM_ACCESS_ID` and `LM_ACCESS_KEY`
- Can be used alongside Bearer token
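For reference, LMv1 request signing generally works as sketched below, per LogicMonitor's REST API documentation: an HMAC-SHA256 over the verb, millisecond epoch, body, and resource path, hex-encoded and then base64-encoded. This is an illustration of the scheme, not this server's internal code, and the credentials shown are placeholders:

```python
import base64
import hashlib
import hmac
import time

def lmv1_header(access_id: str, access_key: str,
                http_verb: str, resource_path: str, body: str = "") -> str:
    """Build an LMv1 Authorization header value:
    LMv1 <AccessId>:<base64(hex(HMAC-SHA256(key, verb+epoch+body+path)))>:<epoch>"""
    epoch_ms = str(int(time.time() * 1000))
    message = http_verb + epoch_ms + body + resource_path
    digest = hmac.new(access_key.encode(), message.encode(),
                      hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    return f"LMv1 {access_id}:{signature}:{epoch_ms}"

# Placeholder credentials; real values come from LM_ACCESS_ID / LM_ACCESS_KEY.
header = lmv1_header("example-id", "example-key", "POST", "/log/ingest", '{"msg":"hi"}')
```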
### Getting API Credentials
**Bearer Token:**
1. Log into your LogicMonitor portal
2. Go to **Settings** → **Users and Roles** → **API Tokens**
3. Create a new API-only user or add a token to an existing user
4. Copy the Bearer token
**LMv1 Credentials:**
1. Go to **Settings** → **Users and Roles** → **Users**
2. Select a user → **API Tokens** tab
3. Create or view the Access ID and Access Key
## MCP Client Configuration
### Claude Code
```bash
claude mcp add logicmonitor \
-e LM_PORTAL=yourcompany.logicmonitor.com \
-e LM_BEARER_TOKEN=your-bearer-token \
-e LM_ENABLE_WRITE_OPERATIONS=true \
-- uvx --from lm-mcp lm-mcp-server
```
> **Note:** Remove `-e LM_ENABLE_WRITE_OPERATIONS=true` if you want read-only access.
Verify the connection:
```bash
claude mcp list
```
To update an existing configuration, remove and re-add:
```bash
claude mcp remove logicmonitor
claude mcp add logicmonitor -e LM_PORTAL=... -e LM_BEARER_TOKEN=... -- uvx --from lm-mcp lm-mcp-server
```
### Cursor
Add to `~/.cursor/mcp.json` (global) or `.cursor/mcp.json` (project):
```json
{
"mcpServers": {
"logicmonitor": {
"command": "uvx",
"args": ["--from", "lm-mcp", "lm-mcp-server"],
"env": {
"LM_PORTAL": "yourcompany.logicmonitor.com",
"LM_BEARER_TOKEN": "your-bearer-token"
}
}
}
}
```
To enable write operations and ingestion APIs:
```json
{
"mcpServers": {
"logicmonitor": {
"command": "uvx",
"args": ["--from", "lm-mcp", "lm-mcp-server"],
"env": {
"LM_PORTAL": "yourcompany.logicmonitor.com",
"LM_BEARER_TOKEN": "your-bearer-token",
"LM_ACCESS_ID": "your-access-id",
"LM_ACCESS_KEY": "your-access-key",
"LM_ENABLE_WRITE_OPERATIONS": "true"
}
}
}
}
```
Then restart Cursor or enable the server in **Cursor Settings** → **MCP**.
### Claude Desktop
Add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):
```json
{
"mcpServers": {
"logicmonitor": {
"command": "uvx",
"args": ["--from", "lm-mcp", "lm-mcp-server"],
"env": {
"LM_PORTAL": "yourcompany.logicmonitor.com",
"LM_BEARER_TOKEN": "your-bearer-token"
}
}
}
}
```
To enable write operations and ingestion APIs:
```json
{
"mcpServers": {
"logicmonitor": {
"command": "uvx",
"args": ["--from", "lm-mcp", "lm-mcp-server"],
"env": {
"LM_PORTAL": "yourcompany.logicmonitor.com",
"LM_BEARER_TOKEN": "your-bearer-token",
"LM_ACCESS_ID": "your-access-id",
"LM_ACCESS_KEY": "your-access-key",
"LM_ENABLE_WRITE_OPERATIONS": "true"
}
}
}
}
```
### OpenAI Codex CLI
```bash
codex mcp add logicmonitor \
--env LM_PORTAL=yourcompany.logicmonitor.com \
--env LM_BEARER_TOKEN=your-bearer-token \
-- uvx --from lm-mcp lm-mcp-server
```
Or add directly to `~/.codex/config.toml`:
```toml
[mcp_servers.logicmonitor]
command = "uvx"
args = ["--from", "lm-mcp", "lm-mcp-server"]
[mcp_servers.logicmonitor.env]
LM_PORTAL = "yourcompany.logicmonitor.com"
LM_BEARER_TOKEN = "your-bearer-token"
```
### Cline (VS Code Extension)
Add to Cline's MCP settings file:
**macOS**: `~/Library/Application Support/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json`
**Windows**: `%APPDATA%\Code\User\globalStorage\saoudrizwan.claude-dev\settings\cline_mcp_settings.json`
**Linux**: `~/.config/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json`
```json
{
"mcpServers": {
"logicmonitor": {
"command": "uvx",
"args": ["--from", "lm-mcp", "lm-mcp-server"],
"env": {
"LM_PORTAL": "yourcompany.logicmonitor.com",
"LM_BEARER_TOKEN": "your-bearer-token"
}
}
}
}
```
### GitHub Copilot (VS Code 1.99+)
Add to your VS Code settings (`settings.json`) or project-level `.vscode/mcp.json`:
```json
{
"mcp": {
"servers": {
"logicmonitor": {
"command": "uvx",
"args": ["--from", "lm-mcp", "lm-mcp-server"],
"env": {
"LM_PORTAL": "yourcompany.logicmonitor.com",
"LM_BEARER_TOKEN": "your-bearer-token"
}
}
}
}
}
```
Enable MCP in VS Code settings: `"chat.mcp.enabled": true`
### Gemini CLI
Gemini CLI supports MCP servers. Configure in `~/.gemini/settings.json`:
```json
{
"mcpServers": {
"logicmonitor": {
"command": "uvx",
"args": ["--from", "lm-mcp", "lm-mcp-server"],
"env": {
"LM_PORTAL": "yourcompany.logicmonitor.com",
"LM_BEARER_TOKEN": "your-bearer-token"
}
}
}
}
```
### Other Clients
**Aider**: Does not currently have native MCP support. Track progress at [aider issue #3314](https://github.com/Aider-AI/aider/issues/3314).
**Continue**: Uses similar JSON configuration. See [Continue MCP docs](https://docs.continue.dev/customize/model-providers/mcp).
### Enabling Write Operations
For any JSON-based configuration, add `LM_ENABLE_WRITE_OPERATIONS` to the `env` section:
```json
"env": {
"LM_PORTAL": "yourcompany.logicmonitor.com",
"LM_BEARER_TOKEN": "your-bearer-token",
"LM_ENABLE_WRITE_OPERATIONS": "true"
}
```
This enables tools like `acknowledge_alert`, `create_sdt`, `create_device`, etc.
## Available Tools
### Alert Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_alerts` | List alerts with optional severity/status filters | No |
| `get_alert_details` | Get detailed information about a specific alert | No |
| `acknowledge_alert` | Acknowledge an alert with optional note | Yes |
| `add_alert_note` | Add a note to an alert | Yes |
| `bulk_acknowledge_alerts` | Acknowledge multiple alerts at once (max 100) | Yes |
### Alert Rule Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_alert_rules` | List alert rules | No |
| `get_alert_rule` | Get detailed alert rule information | No |
| `create_alert_rule` | Create a new alert rule | Yes |
| `update_alert_rule` | Update an existing alert rule | Yes |
| `delete_alert_rule` | Delete an alert rule | Yes |
| `export_alert_rule` | Export alert rule as JSON | No |
### Device Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_devices` | List devices with optional group/name filters | No |
| `get_device` | Get detailed information about a specific device | No |
| `get_device_groups` | List device groups | No |
| `create_device` | Create a new device | Yes |
| `update_device` | Update an existing device | Yes |
| `delete_device` | Delete a device | Yes |
| `create_device_group` | Create a new device group | Yes |
| `delete_device_group` | Delete a device group | Yes |
### Metrics Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_device_datasources` | List DataSources applied to a device | No |
| `get_device_instances` | List instances for a DataSource on a device | No |
| `get_device_data` | Get metric data for a specific instance | No |
| `get_graph_data` | Get graph data for visualization | No |
### APM Trace Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_trace_services` | List APM trace services (deviceType:6) | No |
| `get_trace_service` | Get detailed APM service information | No |
| `get_trace_service_alerts` | Get alerts for an APM service | No |
| `get_trace_service_datasources` | List datasources applied to an APM service | No |
| `get_trace_operations` | List operations (endpoints/routes) for an APM service | No |
| `get_trace_service_metrics` | Get service-level RED metrics (Duration, ErrorOperationCount, OperationCount) | No |
| `get_trace_operation_metrics` | Get per-operation RED metrics | No |
| `get_trace_service_properties` | Get APM service properties (OTel attributes, metadata) | No |
### Dashboard Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_dashboards` | List dashboards with optional filters | No |
| `get_dashboard` | Get detailed dashboard information | No |
| `get_dashboard_widgets` | Get widgets for a specific dashboard | No |
| `get_widget` | Get detailed widget information | No |
| `get_dashboard_groups` | List dashboard groups | No |
| `get_dashboard_group` | Get dashboard group details | No |
| `create_dashboard` | Create a new dashboard | Yes |
| `update_dashboard` | Update an existing dashboard | Yes |
| `delete_dashboard` | Delete a dashboard | Yes |
| `add_widget` | Add a widget to a dashboard | Yes |
| `update_widget` | Update a widget | Yes |
| `delete_widget` | Delete a widget from a dashboard | Yes |
| `export_dashboard` | Export dashboard as JSON | No |
| `create_dashboard_group` | Create a dashboard group | Yes |
| `delete_dashboard_group` | Delete a dashboard group | Yes |
### SDT Tools
| Tool | Description | Write |
|------|-------------|-------|
| `list_sdts` | List Scheduled Downtime entries | No |
| `get_active_sdts` | Get currently active SDTs | No |
| `get_upcoming_sdts` | Get SDTs scheduled within a time window | No |
| `create_sdt` | Create a new SDT for a device or group | Yes |
| `delete_sdt` | Delete an existing SDT | Yes |
| `bulk_create_device_sdt` | Create SDT for multiple devices (max 100) | Yes |
| `bulk_delete_sdt` | Delete multiple SDTs at once (max 100) | Yes |
### Collector Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_collectors` | List all collectors | No |
| `get_collector` | Get detailed information about a specific collector | No |
| `get_collector_groups` | List collector groups | No |
| `get_collector_group` | Get detailed collector group info | No |
### Website Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_websites` | List websites/synthetic checks | No |
| `get_website` | Get detailed website information | No |
| `get_website_groups` | List website groups | No |
| `get_website_data` | Get monitoring data for a website | No |
| `create_website` | Create a new website check | Yes |
| `update_website` | Update a website check | Yes |
| `delete_website` | Delete a website check | Yes |
| `create_website_group` | Create a website group | Yes |
| `delete_website_group` | Delete a website group | Yes |
### Escalation Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_escalation_chains` | List escalation chains | No |
| `get_escalation_chain` | Get detailed escalation chain info | No |
| `create_escalation_chain` | Create a new escalation chain | Yes |
| `update_escalation_chain` | Update an escalation chain | Yes |
| `delete_escalation_chain` | Delete an escalation chain | Yes |
| `export_escalation_chain` | Export escalation chain as JSON | No |
| `get_recipient_groups` | List recipient groups | No |
| `get_recipient_group` | Get detailed recipient group info | No |
| `create_recipient_group` | Create a new recipient group | Yes |
| `update_recipient_group` | Update a recipient group | Yes |
| `delete_recipient_group` | Delete a recipient group | Yes |
### Resource Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_device_properties` | List all properties for a device | No |
| `get_device_property` | Get a specific device property | No |
| `update_device_property` | Update or create a custom device property | Yes |
### Report Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_reports` | List reports with optional filters | No |
| `get_report` | Get detailed report information | No |
| `get_report_groups` | List report groups | No |
| `get_scheduled_reports` | Get reports with schedules configured | No |
| `run_report` | Execute/run a report | Yes |
| `create_report` | Create a new report | Yes |
| `update_report_schedule` | Update a report's schedule | Yes |
| `delete_report` | Delete a report | Yes |
### DataSource Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_datasources` | List all DataSources | No |
| `get_datasource` | Get DataSource details | No |
| `export_datasource` | Export DataSource as JSON | No |
| `import_datasource` | Import DataSource from JSON | Yes |
| `create_datasource` | Create DataSource via REST API format (supports overwrite) | Yes |
| `update_datasource` | Update existing DataSource definition | Yes |
| `delete_datasource` | Delete a DataSource definition | Yes |
### LogicModule Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_configsources` | List ConfigSources | No |
| `get_configsource` | Get ConfigSource details | No |
| `export_configsource` | Export ConfigSource as JSON | No |
| `import_configsource` | Import ConfigSource from JSON | Yes |
| `get_eventsources` | List EventSources | No |
| `get_eventsource` | Get EventSource details | No |
| `export_eventsource` | Export EventSource as JSON | No |
| `import_eventsource` | Import EventSource from JSON | Yes |
| `get_propertysources` | List PropertySources | No |
| `get_propertysource` | Get PropertySource details | No |
| `export_propertysource` | Export PropertySource as JSON | No |
| `import_propertysource` | Import PropertySource from JSON | Yes |
| `get_topologysources` | List TopologySources | No |
| `get_topologysource` | Get TopologySource details | No |
| `import_topologysource` | Import TopologySource from JSON | Yes |
| `get_logsources` | List LogSources | No |
| `get_logsource` | Get LogSource details | No |
| `get_device_logsources` | Get LogSources applied to a device | No |
| `export_logsource` | Export LogSource as JSON | No |
| `import_logsource` | Import LogSource from JSON | Yes |
| `import_jobmonitor` | Import JobMonitor from JSON | Yes |
| `import_appliesto_function` | Import AppliesTo function from JSON | Yes |
### Cost Optimization Tools (LM Envision)
| Tool | Description | Write |
|------|-------------|-------|
| `get_cost_summary` | Get cloud cost summary | No |
| `get_resource_cost` | Get cost data for a specific resource | No |
| `get_cost_recommendations` | Get cost optimization recommendations | No |
| `get_cost_recommendation_categories` | Get recommendation categories with counts | No |
| `get_cost_recommendation` | Get specific recommendation by ID | No |
| `get_idle_resources` | Get idle/underutilized resources | No |
| `get_cloud_cost_accounts` | Get cloud accounts with cost data | No |
### Ingestion Tools (Requires LMv1 Auth)
| Tool | Description | Write |
|------|-------------|-------|
| `ingest_logs` | Push log entries to LogicMonitor | Yes |
| `push_metrics` | Push custom metrics to LogicMonitor | Yes |
### Network & Topology Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_topology_map` | Get network topology map data | No |
| `get_device_neighbors` | Get neighboring devices based on topology | No |
| `get_device_interfaces` | Get network interfaces for a device | No |
| `get_network_flows` | Get network flow data (NetFlow/sFlow) | No |
| `get_device_connections` | Get device relationships/connections | No |
### Batch Job Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_batchjobs` | List batch jobs | No |
| `get_batchjob` | Get batch job details | No |
| `get_batchjob_history` | Get execution history for a batch job | No |
| `get_device_batchjobs` | Get batch jobs for a specific device | No |
| `get_scheduled_downtime_jobs` | Get batch jobs related to SDT automation | No |
### Ops & Audit Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_audit_logs` | Get audit log entries | No |
| `get_api_token_audit` | Get API token usage audit logs | No |
| `get_login_audit` | Get login/authentication audit logs | No |
| `get_change_audit` | Get configuration change audit logs | No |
| `get_ops_notes` | List ops notes | No |
| `get_ops_note` | Get detailed ops note information | No |
| `add_ops_note` | Add a new ops note | Yes |
### User & Access Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_users` | List users | No |
| `get_user` | Get detailed user information | No |
| `get_roles` | List roles | No |
| `get_role` | Get detailed role information | No |
| `get_access_groups` | List access groups (RBAC) | No |
| `get_access_group` | Get access group details | No |
| `get_api_tokens` | List API tokens | No |
| `get_api_token` | Get API token details | No |
### Service Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_services` | List services (LM Service Insight) | No |
| `get_service` | Get detailed service information | No |
| `get_service_groups` | List service groups | No |
### Netscan Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_netscans` | List network discovery scans | No |
| `get_netscan` | Get detailed netscan information | No |
| `run_netscan` | Execute a netscan immediately | Yes |
### OID Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_oids` | List SNMP OIDs | No |
| `get_oid` | Get detailed OID information | No |
### Session Tools
| Tool | Description | Write |
|------|-------------|-------|
| `get_session_context` | Get current session state (last results, variables, history) | No |
| `set_session_variable` | Store a named variable in the session | No |
| `get_session_variable` | Retrieve a session variable | No |
| `delete_session_variable` | Delete a session variable | No |
| `clear_session_context` | Reset all session state | No |
| `list_session_history` | List recent tool call history | No |
### Correlation & Analysis Tools
| Tool | Description | Write |
|------|-------------|-------|
| `correlate_alerts` | Cluster related alerts by device, datasource, and temporal proximity | No |
| `get_alert_statistics` | Aggregated alert counts by severity, top devices/datasources, time buckets | No |
| `get_metric_anomalies` | Z-score based anomaly detection on metric datapoints | No |
### Baseline Tools
| Tool | Description | Write |
|------|-------------|-------|
| `save_baseline` | Save a metric baseline snapshot to session for later comparison | No |
| `compare_to_baseline` | Compare current metrics against a saved baseline | No |
### ML/Statistical Analysis Tools
| Tool | Description | Write |
|------|-------------|-------|
| `forecast_metric` | Linear regression forecasting with threshold breach prediction | No |
| `correlate_metrics` | Pearson correlation matrix across multiple metric series (max 10) | No |
| `detect_change_points` | CUSUM-based regime shift detection with configurable sensitivity | No |
| `score_alert_noise` | Shannon entropy + flap detection to score alert noise (0-100) | No |
| `detect_seasonality` | Autocorrelation-based periodicity detection at standard intervals | No |
| `calculate_availability` | SLA-style uptime % from alert history with MTTR and incident counts | No |
| `analyze_blast_radius` | Topology-based downstream impact scoring for device failures | No |
| `correlate_changes` | Cross-reference alert spikes with audit/change logs | No |
| `classify_trend` | Categorize metric behavior: stable, increasing, decreasing, cyclic, volatile | No |
| `score_device_health` | Composite health score (0-100) from multi-metric z-score analysis | No |
### Ansible Automation Platform Tools
These tools are only available when `AWX_URL` and `AWX_TOKEN` are configured.
| Tool | Description | Write |
|------|-------------|-------|
| `test_awx_connection` | Test connectivity to Ansible Automation Platform controller | No |
| `get_job_templates` | List job templates with optional name/project filters | No |
| `get_job_template` | Get details of a specific job template | No |
| `launch_job` | Launch a job template with extra variables, host limits, and check mode | Yes |
| `get_job_status` | Get the status of a running or completed job | No |
| `get_job_output` | Get the stdout output of a job | No |
| `cancel_job` | Cancel a running job | Yes |
| `relaunch_job` | Relaunch a previously run job with optional variable overrides | Yes |
| `get_inventories` | List inventories with optional name filter | No |
| `get_inventory_hosts` | List hosts in a specific inventory | No |
| `launch_workflow` | Launch a workflow job template | Yes |
| `get_workflow_status` | Get the status of a workflow job | No |
| `get_workflow_templates` | List workflow job templates | No |
| `get_projects` | List projects from Ansible Automation Platform | No |
| `get_credentials` | List credentials (secrets not exposed) | No |
| `get_organizations` | List organizations from Ansible Automation Platform | No |
| `get_job_events` | Get events from a specific job run | No |
| `get_hosts` | List hosts with optional name/inventory filters | No |
#### ML Tool Usage Guide
These tools use pure-Python statistical methods (no external ML libraries). They all operate on data fetched from the LM API at query time. Most metric-based tools share the same core parameters: `device_id`, `device_datasource_id`, `instance_id` (find these using `get_device_datasources` and `get_device_instances`).
**Capacity forecasting** — predict when a metric will breach a threshold:
```
"Forecast when memory usage on device 150098 will exceed 90%"
```
Uses `forecast_metric` with `threshold=90`. Returns days until breach, trend direction, and R-squared confidence. Use `hours_back=168` (1 week) for meaningful regression, or `hours_back=24` if the device has limited history.
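The least-squares idea behind that kind of forecast can be sketched in pure Python (the function name and data here are hypothetical, not the tool's internals):

```python
def forecast_breach(samples, threshold):
    """Fit y = a + b*x by ordinary least squares over (hour, value) samples
    and return the x (hours from the first sample) at which the fitted line
    crosses the threshold, or None if the trend is flat/decreasing."""
    n = len(samples)
    xs = [s[0] for s in samples]
    ys = [s[1] for s in samples]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # no upward trend: no breach predicted
    return (threshold - intercept) / slope

# Memory climbing 1%/hour from 60%: a 90% breach lands at hour 30.
hours_to_breach = forecast_breach([(h, 60 + 1.0 * h) for h in range(24)], 90)
```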
**Metric correlation** — find relationships between metrics across devices:
```
"Correlate CPU usage on server A with memory usage on server B over the last 24 hours"
```
Uses `correlate_metrics` with a `sources` array. Each source requires `device_id`, `device_datasource_id`, `instance_id`, and `datapoint` name. Returns an NxN Pearson correlation matrix and highlights strong correlations (|r| > 0.7). Maximum 10 sources per call.
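Pearson's r, the statistic behind the matrix, is simple to compute in pure Python; a sketch (illustrative, not the tool's code):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series:
    covariance divided by the product of standard deviations, in [-1, 1]."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two series moving in lockstep correlate perfectly (r = 1.0).
r = pearson([1, 2, 3, 4], [10, 20, 30, 40])
```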
**Change point detection** — find when metric behavior shifted:
```
"Detect any regime shifts in CPU metrics on device 150098 in the last 24 hours"
```
Uses `detect_change_points` with CUSUM algorithm. The `sensitivity` parameter (default 1.0) controls detection threshold — lower values detect smaller shifts. Returns timestamps and direction of each detected change.
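A minimal two-sided CUSUM sketch shows how sensitivity interacts with detection (this is the general algorithm against a warmup baseline, not the tool's exact implementation):

```python
import statistics

def cusum_change_points(values, sensitivity=1.0, warmup=5):
    """Two-sided CUSUM against a baseline estimated from the first `warmup`
    samples: accumulate deviations from the baseline mean (minus a drift
    allowance) and flag indices where either accumulator exceeds
    sensitivity * baseline stddev."""
    mean = statistics.mean(values[:warmup])
    std = statistics.pstdev(values[:warmup]) or 1.0
    threshold = sensitivity * std
    drift = 0.5 * std
    pos = neg = 0.0
    points = []
    for i, v in enumerate(values):
        pos = max(0.0, pos + (v - mean) - drift)  # upward-shift accumulator
        neg = max(0.0, neg - (v - mean) - drift)  # downward-shift accumulator
        if pos > threshold or neg > threshold:
            points.append(i)
            pos = neg = 0.0
    return points

# A series that steps from 1.0 up to 5.0 at index 10.
shifts = cusum_change_points([1.0] * 10 + [5.0] * 10)
```

Note that with a fixed baseline the detector keeps re-firing while the level stays elevated; the first flagged index marks the shift itself.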
**Alert noise scoring** — identify tuning opportunities:
```
"Score the alert noise across all devices over the last 24 hours"
```
Uses `score_alert_noise`. Returns a 0-100 noise score combining Shannon entropy, flap detection (alerts that clear and re-fire within 30 minutes), and repeat ratio. Includes top noisy devices/datasources and tuning recommendations.
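The Shannon-entropy component can be illustrated with a pure-Python sketch (function name and data are hypothetical):

```python
import math
from collections import Counter

def noise_entropy(alert_sources):
    """Normalized Shannon entropy (0-1) of alerts-per-source counts.
    Low entropy = alerts concentrated on a few chatty sources (good tuning
    candidates); high entropy = alerts spread evenly across sources."""
    counts = Counter(alert_sources)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    h = -sum(p * math.log2(p) for p in probs)
    max_h = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h

# One chatty device producing 7 of 10 alerts yields lower entropy
# than the same alert count spread evenly across four devices.
concentrated = noise_entropy(["dev-a"] * 7 + ["dev-b", "dev-c", "dev-d"])
uniform = noise_entropy(["dev-a", "dev-b", "dev-c", "dev-d"])
```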
**Device health scoring** — aggregate health into a single number:
```
"Give me a health score for the stress-demo pod"
```
Uses `score_device_health`. Computes z-scores for each datapoint's latest value against its historical window, then produces a weighted composite score (0-100). Status: healthy (80+), degraded (50-79), critical (<50). Use the `weights` parameter to emphasize specific datapoints.
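A toy version of that z-score-to-composite pipeline (the penalty curve and names here are illustrative assumptions, not the tool's actual scoring function):

```python
import statistics

def zscore(history, latest):
    """Standard score of the latest value against its historical window."""
    mean = statistics.mean(history)
    std = statistics.pstdev(history) or 1.0
    return (latest - mean) / std

def health_score(datapoints, weights=None):
    """Map each datapoint's |z| to a penalty (0 at |z|<=1, full at |z|>=3)
    and combine the per-datapoint scores as a weighted average on 0-100."""
    weights = weights or {name: 1.0 for name in datapoints}
    total = sum(weights.values())
    score = 0.0
    for name, (history, latest) in datapoints.items():
        z = abs(zscore(history, latest))
        penalty = max(0.0, min(1.0, (z - 1.0) / 2.0))
        score += weights[name] * (1.0 - penalty) * 100.0
    return score / total

# CPU sitting at its historical mean, memory 3 sigmas above it.
mem_history = [60, 62, 58, 60]
score = health_score({
    "cpu": ([50, 52, 48, 50], 50),
    "memory": (mem_history, 60 + 3 * statistics.pstdev(mem_history)),
})
```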
**Availability calculation** — SLA reporting from alert data:
```
"Calculate 30-day availability across all devices at error severity or above"
```
Uses `calculate_availability` with `hours_back=720` and `severity_threshold="error"`. Merges overlapping alert windows and returns availability %, MTTR, incident count, longest incident, and per-device breakdown.
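The window-merging step is the interesting part: overlapping outages must be counted once. A sketch of that interval merge (illustrative, with hypothetical times in hours):

```python
def availability(window_start, window_end, outages):
    """Clamp (start, end) outage intervals to the reporting window, merge
    overlapping intervals so concurrent alerts aren't double-counted, and
    return uptime as a percentage of the window."""
    clamped = sorted(
        (max(s, window_start), min(e, window_end))
        for s, e in outages if e > window_start and s < window_end
    )
    merged = []
    for s, e in clamped:
        if merged and s <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))  # extend
        else:
            merged.append((s, e))
    downtime = sum(e - s for s, e in merged)
    return 100.0 * (1 - downtime / (window_end - window_start))

# Two overlapping outages merge into 1.5h, plus a 0.5h blip, over 100h: 98%.
pct = availability(0, 100, [(10, 11), (10.5, 11.5), (40, 40.5)])
```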
## MCP Resources
The server exposes 24 resources for API reference:
### Schema Resources
| URI | Description |
|-----|-------------|
| `lm://schema/alerts` | Alert object fields, types, and descriptions |
| `lm://schema/devices` | Device object fields and types |
| `lm://schema/sdts` | SDT (Scheduled Downtime) object fields |
| `lm://schema/dashboards` | Dashboard object fields |
| `lm://schema/collectors` | Collector object fields |
| `lm://schema/escalations` | Escalation chain object fields |
| `lm://schema/reports` | Report object fields |
| `lm://schema/websites` | Website check object fields |
| `lm://schema/datasources` | DataSource definition fields |
| `lm://schema/users` | User object fields |
| `lm://schema/audit` | Audit log entry fields |
### Enum Resources
| URI | Description |
|-----|-------------|
| `lm://enums/severity` | Alert severity levels: critical(4), error(3), warning(2), info(1) |
| `lm://enums/device-status` | Device status values: normal(0), dead(1), etc. |
| `lm://enums/sdt-type` | SDT types: DeviceSDT, DeviceGroupSDT, etc. |
| `lm://enums/alert-cleared` | Alert cleared status: true, false |
| `lm://enums/alert-acked` | Alert acknowledgment status: true, false |
| `lm://enums/collector-build` | Collector build types: EA, GD, MGD |
### Filter Resources
| URI | Description |
|-----|-------------|
| `lm://filters/alerts` | Filter fields and operators for alert queries |
| `lm://filters/devices` | Filter fields and operators for device queries |
| `lm://filters/sdts` | Filter fields and operators for SDT queries |
| `lm://syntax/operators` | Filter operators: `:`, `~`, `>`, `<`, `!:`, `!~`, `>:`, `<:` |
### Guide Resources
| URI | Description |
|-----|-------------|
| `lm://guide/tool-categories` | All 198 tools organized by domain category |
| `lm://guide/examples` | Common filter patterns and query examples |
| `lm://guide/mcp-orchestration` | Patterns for combining LogicMonitor with other MCP servers |
## MCP Prompts
Pre-built workflow templates for common tasks:
| Prompt | Description | Arguments |
|--------|-------------|-----------|
| `incident_triage` | Analyze active alerts, identify patterns, suggest root cause | `severity`, `time_window_hours` |
| `capacity_review` | Review resource utilization and identify capacity concerns | `group_id`, `threshold_percent` |
| `health_check` | Generate environment health summary with key metrics | `include_collectors` |
| `alert_summary` | Generate alert digest grouped by severity or resource | `group_by`, `hours_back` |
| `sdt_planning` | Plan scheduled downtime for maintenance windows | `device_ids`, `group_id` |
| `cost_optimization` | Analyze cloud costs, find savings opportunities | `provider`, `threshold_percent` |
| `audit_review` | Review recent changes, logins, and security events | `hours_back`, `username` |
| `alert_correlation` | Correlate alerts across devices to find common root causes | `severity`, `hours_back`, `device_id`, `group_id` |
| `collector_health` | Assess collector load balancing, versions, and failover readiness | `group_id` |
| `troubleshoot_device` | Guided troubleshooting for a specific device | `device_id` |
| `top_talkers` | Identify noisiest devices and datasources generating the most alerts | `hours_back`, `limit`, `group_by` |
| `rca_workflow` | Guided root cause analysis combining alerts, topology, and change history | `device_id`, `alert_id`, `hours_back` |
| `capacity_forecast` | Forecast capacity trends and predict threshold breaches | `device_id`, `group_id`, `datasource`, `hours_back`, `threshold` |
| `remediate_workflow` | Diagnose a LogicMonitor alert and remediate via Ansible Automation Platform | `alert_id`, `device_id` |
## Example Usage
Once configured, you can ask your AI assistant natural language questions. Here are prompts to test different capabilities:
### Quick Verification Prompts
Start with these to verify the connection is working:
- "List the first 5 devices in LogicMonitor"
- "How many collectors do I have?"
- "Show me active alerts"
### Alert Management
- "Show me all critical alerts"
- "What alerts fired in the last hour?"
- "Get details on alert LMA12345"
- "Acknowledge alert LMA12345 with note 'Investigating disk issue'"
- "Bulk acknowledge all warning alerts from the last hour"
- "Add a note to alert LMA67890: 'Escalated to storage team'"
- "What alert rules route to the Primary On-Call escalation chain?"
### Device Operations
- "What devices are in the Production group?"
- "Find all devices with 'web' in the name"
- "Show me details for device I | text/markdown | null | Ryan Matuszewski <ryan.matuszewski@logicmonitor.com> | null | null | MIT | api, logicmonitor, mcp, model-context-protocol, monitoring | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming ... | [] | null | null | >=3.11 | [] | [] | [] | [
"httpx>=0.27.0",
"mcp<2,>=1.0.0",
"pydantic-settings>=2.0.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"starlette>=0.40.0; extra == \"http\"",
"uvicorn[standard]>=0.30.0; extra == \"http\""
] | [] | [] | [] | [
"Homepage, https://github.com/ryanmat/mcp-server-logicmonitor",
"Repository, https://github.com/ryanmat/mcp-server-logicmonitor",
"Issues, https://github.com/ryanmat/mcp-server-logicmonitor/issues"
] | uv/0.9.27 {"installer":{"name":"uv","version":"0.9.27","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T17:58:04.285210 | lm_mcp-1.8.0-py3-none-any.whl | 183,058 | 41/db/0cb53891515dadea2a1ba20618473e0192bdc35b93a5e24caa25105b52c8/lm_mcp-1.8.0-py3-none-any.whl | py3 | bdist_wheel | null | false | a9e73efb59357f516d416dc033c7a35e | dc0361aa0c0976091650783a133038aa9f13686949dd6b7f0a6b4ac099adb369 | 41db0cb53891515dadea2a1ba20618473e0192bdc35b93a5e24caa25105b52c8 | null | [
"LICENSE"
] | 231 |
2.4 | sminter | 0.0.1 | Reserved package name | This package name is reserved. | text/plain | null | null | null | null | null | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [] | [] | [] | [] | [] | uv/0.9.21 {"installer":{"name":"uv","version":"0.9.21","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T17:57:40.245886 | sminter-0.0.1-py3-none-any.whl | 1,026 | 7d/89/82ca402d7d28230ce2cf52fd5a89e5faa986273e500bc95775c9308222a6/sminter-0.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 816ed3a73d1be6e979308b576dba8f09 | 5f7a75094d1a828987c6250e63708e5c237ed7cce2d1f4cdbdcba4ff515b9f35 | 7d8982ca402d7d28230ce2cf52fd5a89e5faa986273e500bc95775c9308222a6 | null | [] | 239 |
2.4 | qonnx | 1.0.0 | Frontend and utilities for QONNX | # QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX
[](http://qonnx.readthedocs.io/)
[](https://github.com/fastmachinelearning/qonnx/discussions)


[](https://doi.org/10.5281/zenodo.7622236)
[](https://badge.fury.io/py/qonnx)
[](https://pepy.tech/project/qonnx)
<img align="left" src="https://xilinx.github.io/finn/img/TFC_1W2A.onnx.png" alt="QONNX example" style="margin-right: 20px" width="200"/>
QONNX (Quantized ONNX) introduces several [custom operators](docs/qonnx-custom-ops/overview.md) -- `IntQuant`, `FloatQuant`, `BipolarQuant`, and `Trunc` -- in order to represent arbitrary-precision integer and minifloat quantization in ONNX. This enables:
* Representation of binary, ternary, 3-bit, 4-bit, 6-bit or any other integer/fixed-point quantization.
* Representation of minifloat quantization with configurable exponent and mantissa bits.
* Quantization is an operator itself, and can be applied to any parameter or layer input.
* Flexible choices for scaling factor and zero-point granularity, also enabling [OCP MX datatypes](https://www.opencompute.org/documents/ocp-microscaling-formats-mx-v1-0-spec-final-pdf).
* Quantized values are carried using standard `float` datatypes to remain ONNX protobuf-compatible.
This repository contains a set of Python utilities to work with QONNX models, including but not limited to:
* executing QONNX models for (slow) functional verification
* shape inference, constant folding and other basic optimizations
* summarizing the inference cost of a QONNX model in terms of mixed-precision MACs, parameter and activation volume
* Python infrastructure for writing transformations and defining executable, shape-inferencable custom ops
* (experimental) data layout conversion from standard ONNX NCHW to custom QONNX NHWC ops
## Quickstart
### Operator definitions
Please see the [custom operator overview](docs/qonnx-custom-ops/overview.md) table for more details.
### Installation
`pip install qonnx`
### Export, Import and Model Zoo
The following quantization-aware training (QAT) frameworks support exporting to QONNX:
* [Brevitas](https://github.com/Xilinx/brevitas)
* [QKeras](https://github.com/google/qkeras) - note: QKeras to QONNX conversion will be moved to another repository. Please use the older version `qonnx==0.4` until this is done.
* [HAWQ](https://github.com/Zhen-Dong/HAWQ/tree/main/utils/export)
* [<your NN quantization framework here? please get in touch!>](https://github.com/fastmachinelearning/qonnx/discussions)
The following NN inference frameworks support importing QONNX models for deployment:
* [FINN](https://github.com/Xilinx/finn) (FPGA dataflow-style)
* [hls4ml](https://github.com/fastmachinelearning/hls4ml) (FPGA dataflow-style)
* [<your NN deployment framework here? please get in touch!>](https://github.com/fastmachinelearning/qonnx/discussions)
Head to the [QONNX model zoo](https://github.com/fastmachinelearning/QONNX_model_zoo) to download pre-trained QONNX models on various datasets.
### Model Visualization
We recommend [Netron](https://netron.app/) for visualizing QONNX models.
### Executing ONNX graph with QONNX custom nodes
Using the `qonnx-exec` command line utility, with top-level inputs supplied from `in0.npy` and `in1.npy`:
`qonnx-exec my-qonnx-model.onnx in0.npy in1.npy`
Using the Python API:
```
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.core.onnx_exec import execute_onnx
import numpy as np
model = ModelWrapper("my-qonnx-model.onnx")
idict = {"in0": np.load("in0.npy"), "in1": np.load("in1.npy")}
odict = execute_onnx(model, idict)
```
### Calculate inference cost for QONNX model
Using the `qonnx-inference-cost` command line utility for the [CNV_2W2A example](https://github.com/fastmachinelearning/qonnx_model_zoo/tree/main/models/CIFAR10/Brevitas_FINN_CNV):
`qonnx-inference-cost CNV_2W2A.onnx`
This will print an inference cost dictionary like the following:
```
Inference cost for CNV_2W2A.onnx
{
"discount_sparsity": true, # discount MAC counts by layer sparsity (disregard zero-valued MACs and params)
# mem_o_X: number of layer outputs with datatype X
"mem_o_INT32": 142602.0, # number of INT32 output elements
# mem_w_X: number of layer parameters (weights) with datatype X
"mem_w_INT2": 908033.0, # number of INT2 parameters (weights)
# op_mac_X_Y: number of MAC operations, datatype X by datatype Y
# scaled integer datatypes have a tensor- or channelwise scale factor
"op_mac_SCALEDINT<8>_INT2": 1345500.0, # number of scaled int8 x int2 MACs
"op_mac_INT2_INT2": 35615771.0, # number of int2 x int2 MACs
"total_bops": 163991084.0, # total number of MACs normalized to bit-ops (BOPS)
"total_mem_o_bits": 4563264.0, # total number of bits for layer outputs
"total_mem_w_bits": 1816066.0, # total number of bits for layer parameters
"unsupported": "set()"
}
```
You can use the `--cost-breakdown` option to generate a more detailed report that covers per-node (by name) and per-op-type information.
You can read more about the BOPS metric in [this paper](https://www.frontiersin.org/articles/10.3389/frai.2021.676564/full), Section 4.2 Bit Operations.
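As a sanity check, the `total_bops` figure above can be reproduced by weighting each MAC count by the product of its operand bitwidths (a simplified reading of the metric; this snippet only re-derives the numbers already printed above):

```python
# Re-derive total_bops from the per-op MAC counts in the report above:
# each MAC is weighted by the product of its operand bitwidths.
mac_counts = {
    ("SCALEDINT8", "INT2"): 1_345_500,   # scaled int8 x int2 MACs
    ("INT2", "INT2"): 35_615_771,        # int2 x int2 MACs
}
bits = {"SCALEDINT8": 8, "INT2": 2}

total_bops = sum(
    count * bits[a] * bits[b] for (a, b), count in mac_counts.items()
)
print(total_bops)  # 163991084, matching the report's total_bops

# Weight memory works the same way: element count x bitwidth.
print(908_033 * 2)  # 1816066, matching total_mem_w_bits
```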
### Convert between different quantization representations
Using the `qonnx-convert` command line utility you can convert from QONNX to QCDQ-style quantization:
`qonnx-convert CNV_2W2A.onnx`
This will convert `Quant` nodes to `QuantizeLinear -> Clip -> DequantizeLinear` nodes where possible.
Please see the documentation of the `QuantToQCDQ` transformation to learn more about the limitations.
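For intuition, the `QuantizeLinear -> Clip -> DequantizeLinear` chain performs the following elementwise arithmetic (a plain-Python sketch with made-up scale and range values, not the transformation's actual code):

```python
def quantize_clip_dequantize(x, scale, zero_point, qmin, qmax):
    """Elementwise QCDQ: snap to an integer grid, clip to the
    target integer range, then map back to float."""
    q = round(x / scale) + zero_point  # QuantizeLinear
    q = max(qmin, min(qmax, q))        # Clip (e.g. to the signed 4-bit range)
    return (q - zero_point) * scale    # DequantizeLinear

# Example: symmetric 4-bit quantization (integer range -8..7), scale 0.5
print(quantize_clip_dequantize(1.3, 0.5, 0, -8, 7))   # 1.5
print(quantize_clip_dequantize(99.0, 0.5, 0, -8, 7))  # 3.5 (clipped at q=7)
```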
## Development
Install in editable mode in a Python virtual environment:
```
git clone https://github.com/fastmachinelearning/qonnx
cd qonnx
virtualenv -p python3.10 venv
source venv/bin/activate
pip install --upgrade pip
pip install -e .[testing]
```
### Running tests
Run entire test suite, parallelized across CPU cores:
```
pytest -n auto --verbose
```
Run a particular test and fall into pdb if it fails:
```
pytest --pdb -k "test_extend_partition.py::test_extend_partition[extend_id1-2]"
```
### Linting
If you plan to make pull requests to the qonnx repo, linting will be required.
We use a pre-commit hook to auto-format Python code and check for issues. See https://pre-commit.com/ for installation. Once you have `pre-commit`,
you can install the hooks into your local clone of the qonnx repo:
```
cd qonnx
source venv/bin/activate
pip install pre-commit
pre-commit install
```
Every time you commit some code, the pre-commit hooks will first run, performing various checks and fixes. In some cases pre-commit won't be able to
fix the issues and you may have to fix them manually, then run `git commit` once again. The checks are configured in `.pre-commit-config.yaml` under the repo root.
## Why QONNX?
The QONNX representation has several advantages compared to other alternatives, as summarized in the table below.
These include a compact but flexible, single-node quantization representation that avoids operator duplication
and can support arbitrary precision up to the container datatype limit.
<img align="left" src="https://raw.githubusercontent.com/fastmachinelearning/qonnx/main/docs/qonnx-comparison.png" alt="QONNX comparison table" style="margin-right: 20px" />
## Community
The QONNX efforts were started by the FINN and hls4ml communities working together to create a common, arbitrary-precision representation that both frameworks could ingest. However, QONNX aims to build an open-source community for practitioners and researchers working with mixed-precision quantized neural networks by providing useful tools and a [discussion forum](https://github.com/fastmachinelearning/qonnx/discussions).
<div>
<img src=https://raw.githubusercontent.com/Xilinx/finn/github-pages/docs/img/finn-logo.png height=100/>
<img src="https://fastmachinelearning.github.io/hls4ml/img/logo.jpg" alt="hls4ml" height="128"/>
</div>
## Resources
You can read more about QONNX in [this paper](https://arxiv.org/abs/2206.07527). If you find QONNX useful in your work, please consider citing:
```bibtex
@inproceedings{Pappalardo:2022nxk,
author = "Pappalardo, Alessandro and Umuroglu, Yaman and Blott, Michaela and Mitrevski, Jovan and Hawks, Ben and Tran, Nhan and Loncar, Vladimir and Summers, Sioni and Borras, Hendrik and Muhizi, Jules and Trahms, Matthew and Hsu, Shih-Chieh and Hauck, Scott and Duarte, Javier",
title = "{QONNX: Representing Arbitrary-Precision Quantized Neural Networks}",
booktitle = "{4th Workshop on Accelerated Machine Learning (AccML) at HiPEAC 2022 Conference}",
eprint = "2206.07527",
archivePrefix = "arXiv",
primaryClass = "cs.LG",
reportNumber = "FERMILAB-CONF-22-471-SCD",
month = "6",
year = "2022",
url = "https://accml.dcs.gla.ac.uk/papers/2022/4thAccML_paper_1(12).pdf"
}
@software{yaman_umuroglu_2023_7622236,
author = "Umuroglu, Yaman and Borras, Hendrik and Loncar, Vladimir and Summers, Sioni and Duarte, Javier",
title = "fastmachinelearning/qonnx",
month = {06},
year = 2022,
publisher = {Zenodo},
doi = {10.5281/zenodo.7622236},
url = {https://github.com/fastmachinelearning/qonnx}
}
```
| text/markdown; charset=UTF-8 | null | null | null | null | Apache-2.0 | null | [
"Development Status :: 4 - Beta",
"Programming Language :: Python"
] | [
"any"
] | https://github.com/fastmachinelearning/qonnx | null | null | [] | [] | [] | [
"importlib-metadata",
"attrs>=22.2.0",
"clize>=5.0.1",
"protobuf>=3.20.3",
"bitstring>=3.1.7",
"numpy>=1.24.1",
"onnx; python_version >= \"3.11\"",
"onnx<=1.17; python_version < \"3.11\"",
"onnxruntime>=1.16.1",
"onnxscript>=0.1.0",
"sigtools>=4.0.1",
"toposort>=1.7.0",
"setuptools; extra ==... | [] | [] | [] | [
"Documentation, https://pyscaffold.org/"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-19T17:57:39.277561 | qonnx-1.0.0.tar.gz | 1,247,450 | cf/ee/1bb0b938bd1c42372c6a05caa4ffc5caba477e0f73305b99852806f52126/qonnx-1.0.0.tar.gz | source | sdist | null | false | 77740012a8089bba4100ca45e535297c | 02f240743999de0fef06bee94c4891240324b8f48453848eb1583395c5963401 | cfee1bb0b938bd1c42372c6a05caa4ffc5caba477e0f73305b99852806f52126 | null | [
"LICENSE",
"AUTHORS.rst"
] | 7,200 |
2.1 | odoo-addon-pms | 16.0.3.3.0 | A property management system | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
================================
PMS (Property Management System)
================================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:ba14e18e17e5bfd4a0c935eefb3048361d2f99f70cceed99de18d36e74b49251
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fpms-lightgray.png?logo=github
:target: https://github.com/OCA/pms/tree/16.0/pms
:alt: OCA/pms
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/pms-16-0/pms-16-0-pms
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/pms&target_branch=16.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module is an all-in-one property management system (PMS) focused on medium-sized properties
for managing every aspect of your property's daily operations.
You can manage properties with multi-property and multi-company support, including your rooms inventory,
reservations, check-in, daily reports, board services, rate and availability plans among other property functionalities.
**Table of contents**
.. contents::
:local:
Installation
============
This module depends on the modules ``base``, ``mail``, ``sale`` and ``multi_pms_properties``.
Make sure they are all present in your addons list.
Configuration
=============
You will find the hotel settings in PMS Management > Configuration > Properties > Your Property.
This module requires additional configuration for company, accounting, invoicing and user privileges.
Usage
=====
To use this module, please read the complete user guide at `<roomdoo.com>`_.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/pms/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/pms/issues/new?body=module:%20pms%0Aversion:%2016.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
~~~~~~~
* Commit [Sun]
Contributors
~~~~~~~~~~~~
* Alexandre Díaz
* Pablo Quesada
* Jose Luis Algara
* `Commit [Sun] <https://www.commitsun.com>`_:
* Dario Lodeiros
* Eric Antones
* Sara Lago
* Brais Abeijon
* Miguel Padin
* Omar Castiñeira <omar@comunitea.com>
Maintainers
~~~~~~~~~~~
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
This module is part of the `OCA/pms <https://github.com/OCA/pms/tree/16.0/pms>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| null | Commit [Sun], Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 16.0",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Development Status :: 4 - Beta"
] | [] | https://github.com/OCA/pms | null | >=3.10 | [] | [] | [] | [
"odoo-addon-multi-pms-properties<16.1dev,>=16.0dev",
"odoo-addon-partner-contact-birthdate<16.1dev,>=16.0dev",
"odoo-addon-partner-contact-gender<16.1dev,>=16.0dev",
"odoo-addon-partner-contact-nationality<16.1dev,>=16.0dev",
"odoo-addon-partner-firstname<16.1dev,>=16.0dev",
"odoo-addon-queue-job<16.1dev,... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T17:57:11.921608 | odoo_addon_pms-16.0.3.3.0-py3-none-any.whl | 724,953 | d8/e5/22c17d5f97a38ba58803a48875afd94e5923f24f4fc1f8a1b5ad946e5646/odoo_addon_pms-16.0.3.3.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 2c2da299c27a9ad6769f59ca9bed00da | 4e5255f40a21a86ab3dd2255bab854ee3a24d9652f7e2db256119f3e4ebc7c5c | d8e522c17d5f97a38ba58803a48875afd94e5923f24f4fc1f8a1b5ad946e5646 | null | [] | 94 |
2.4 | smint | 0.0.1 | Reserved package name | This package name is reserved. | text/plain | null | null | null | null | null | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [] | [] | [] | [] | [] | uv/0.9.21 {"installer":{"name":"uv","version":"0.9.21","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T17:57:00.446979 | smint-0.0.1.tar.gz | 478 | c1/29/e4c7160cd2da0d5d60bb2dee163d33b021ee1041839f44cd9dfc90c6dfd2/smint-0.0.1.tar.gz | source | sdist | null | false | b31d8c06cb30d08974d255697c1c5124 | 2c353c7aa6a761553344c0cac07e5f50988131aa567a5a041281540652be1a6a | c129e4c7160cd2da0d5d60bb2dee163d33b021ee1041839f44cd9dfc90c6dfd2 | null | [] | 241 |
2.4 | thryve | 0.2.0 | A modular framework for multi-LLM agents, tools, and workflows | # Thryve
[English](README_en.md) | Chinese
A modular Python framework for multi-LLM agents, tool calling, context management, and DAG workflows. Thryve unifies multiple backends (OpenAI, Ollama, etc.), tools, memory, and agents under a single API.
## Features
- **Multiple LLM/VLM backends**: OpenAI (chat, vision, function calling, embeddings) and Ollama; llama.cpp and Transformers are placeholders
- **Unified message format**: text + images (multimodal), tool calls and results
- **Tool system**: define tools with `@tool` or `Tool`, register and execute them; OpenAI/Anthropic format conversion
- **Agent loop**: LLM → tool call → observation → loop, with doom-loop detection
- **Context management**: message history, token budgets, checkpoints, truncation/summary compression
- **Memory**: dual writes to a SQLite index and Markdown memory shards, short-term memory sharded by date, hand-rolled keyword retrieval (FTS + rule-based reranking)
- **DAG workflows**: graph topological sorting, `LLMNode` / `AgentNode` / `ToolNode`, layer-parallel execution
## Architecture
```
Config (YAML/env) ──► Thryve ──────────────────────────────────────────────► chat / stream / agent / graph
│
┌───────────────────────┼───────────────────────┐
▼ ▼ ▼
AgentLoop MemoryManager GraphExecutor
│ │ │
             ├──► ContextManager      ├──► HybridSearcher   (DAG topological execution)
├──► ToolRegistry └──► SQLiteStorage
│
└───────────────────────┬───────────────────────┘
▼
ProviderAdapter
│
┌─────────────────┼─────────────────┐
▼ ▼ ▼
OpenAI Ollama transformers / llama_cpp
```
## Installation
```bash
pip install thryve
```
Optional dependencies:
```bash
pip install thryve[dev] # pytest, pytest-asyncio
pip install thryve[all] # aiosqlite, tiktoken
pip install thryve[transformers] # HuggingFace Transformers backend
pip install thryve[llama] # llama-cpp-python backend
```
## Quick Start
### 1. Basic chat
Set your API key and use the default configuration:
```bash
export OPENAI_API_KEY=sk-...
```
```python
from thryve import Thryve, ThryveConfig
config = ThryveConfig.from_env() # reads OPENAI_API_KEY, THRYVE_*, etc.
thryve = Thryve(config)
reply = thryve.chat("What is 2 + 2?") # synchronous, returns a str directly
print(reply)
```
### 2. Chat with explicit configuration
```python
from thryve import Thryve, ThryveConfig, LLMConfig
config = ThryveConfig(
llm=LLMConfig(
backend="openai",
model="kimi-k2-turbo-preview",
api_key="sk-ffzyxxx...",
base_url="https://api.moonshot.cn/v1",
temperature=0.7,
)
)
thryve = Thryve(config)
reply = thryve.chat("Hello!")
```
### 3. Agent with tools
Register a tool and let the agent call it:
```python
from thryve import Thryve, ThryveConfig, tool
@tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
return a + b
thryve = Thryve(ThryveConfig.from_env())
thryve.register_tool(add)
result = thryve.chat_with_agent("What is 3 + 5?")
print(result.final_response)
print(result.stop_reason) # e.g. COMPLETED
```
### 4. DAG workflows
Build a graph and execute it:
```python
from thryve import Thryve, ThryveConfig, Graph, FunctionNode
async def step_a(inputs):
return inputs.get("x", 0) + 1
async def step_b(inputs):
return inputs.get("step_a", 0) * 2
thryve = Thryve(ThryveConfig.from_env())
g = Graph()
g.chain(
FunctionNode("step_a", step_a),
FunctionNode("step_b", step_b),
)
outputs = thryve.execute_graph(g, {"x": 10})
print(outputs["step_a"]) # 11
print(outputs["step_b"]) # 22
```
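The layer-parallel DAG scheduling described above can be sketched with the standard library's `graphlib` (an illustration of topological layering only, not Thryve's actual executor):

```python
from graphlib import TopologicalSorter

# Dependency map: each node lists the nodes it depends on.
deps = {
    "step_a": set(),
    "step_b": {"step_a"},
    "step_c": {"step_a"},
    "out": {"step_b", "step_c"},
}

ts = TopologicalSorter(deps)
ts.prepare()
layers = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # all nodes whose dependencies are satisfied
    layers.append(ready)            # nodes in one layer can run in parallel
    ts.done(*ready)

print(layers)  # [['step_a'], ['step_b', 'step_c'], ['out']]
```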
### 5. Memory and info
```python
thryve.add_to_memory("The user prefers dark mode.", permanent=False)
chunks = thryve.search_memory("dark", top_k=5)
info = thryve.get_llm_info() # provider, model, supports_vision, supports_tools
```
Example memory configuration (short-term memory defaults to 7 days, sharded by date into markdown files):
```yaml
memory:
storage_path: "./data/memory.db"
markdown_path: "./data/memory"
short_term_retention_days: 7
enable_fts: true
```
## Sync and Async
All public methods are **synchronous by default** and can be used directly in a REPL or plain script. Async variants use the `_async` suffix.
| Sync (default) | Async | Description |
|---|---|---|
| `chat(message)` | `chat_async(message)` | Chat |
| `chat_stream(message, callback=...)` | `chat_stream_async(message)` | Streaming chat |
| `chat_with_agent(message)` | `chat_with_agent_async(message)` | Agent chat |
| `execute_graph(graph, inputs)` | `execute_graph_async(graph, inputs)` | DAG workflow |
```python
# Synchronous (default, call directly)
reply = thryve.chat("Hello")
# Asynchronous (inside an async def)
reply = await thryve.chat_async("Hello")
```
## Streaming
`chat()` / `chat_async()` wait for the full reply before returning. Use the streaming methods to print chunks as they arrive:
**Synchronous streaming** (default):
```python
reply = thryve.chat_stream(
    "What is a large language model?",
callback=lambda c: print(c, end="", flush=True),
)
print() # newline
# reply is still the full response string
```
**Asynchronous streaming**:
```python
async for chunk in thryve.chat_stream_async("What is a large language model?"):
print(chunk, end="", flush=True)
```
## Configuration
- **Environment variables**: `ThryveConfig.from_env()` uses `OPENAI_API_KEY`, `THRYVE_LLM_MODEL`, `THRYVE_LLM_BACKEND`, `THRYVE_MEMORY_PATH`, etc.
- **File**: `ThryveConfig.from_file("config.json")` reads a JSON config file.
- **Merging**: `config.merge(other)` overrides values with another config.
### Memory retrieval notes
Memory uses **keyword retrieval** (FTS + rule-based reranking) by default and no longer depends on an embedding model.
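A minimal sketch of keyword retrieval with rule-based reranking (illustrative only; Thryve's `HybridSearcher` uses SQLite FTS plus its own reranking rules):

```python
def keyword_search(query, chunks, top_k=5):
    """Score chunks by keyword overlap, then rerank: exact-phrase
    hits win, and shorter chunks break ties (a stand-in for FTS + rules)."""
    terms = query.lower().split()
    scored = []
    for chunk in chunks:
        text = chunk.lower()
        overlap = sum(text.count(t) for t in terms)
        if overlap == 0:
            continue  # no keyword hit at all: drop the chunk
        phrase_bonus = 2 if query.lower() in text else 0
        scored.append((overlap + phrase_bonus, -len(chunk), chunk))
    scored.sort(reverse=True)
    return [c for _, _, c in scored[:top_k]]

chunks = [
    "The user prefers dark mode.",
    "Meeting at 10am.",
    "Dark mode enabled on mobile.",
]
print(keyword_search("dark mode", chunks))
```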
## Project layout
```
src/thryve/
  thryve.py # Thryve main entry point
  llm.py # LLM facade
  config.py # ThryveConfig, LLMConfig, EmbeddingConfig, MemoryConfig, AgentConfig
  core/
    backends/ # OpenAI, Ollama, llama_cpp (placeholder), transformers (placeholder)
    tools/ # Tool, ToolRegistry, ToolExecutor, @tool
    agent/ # Agent, AgentLoop, MultiAgentOrchestrator
    context/ # ContextManager, checkpoints, compression
    memory/ # MemoryManager, SQLiteStorage, HybridSearcher
    graph/ # Graph, Node, GraphExecutor
```
## License
MIT License
| text/markdown | SyJarvis | jarvisshangye@gmail.com | null | null | null | llm, agent, tool-calling, framework, ai | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"aiosqlite>=0.19; extra == \"all\"",
"httpx>=0.25",
"llama-cpp-python>=0.2; extra == \"llama\"",
"openai>=1.0",
"pydantic>=2.0",
"pydantic-settings>=2.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"pyyaml>=6.0",
"tiktoken>=0.5; extra == \"all\"",
"torch>=2.0; ext... | [] | [] | [] | [
"Homepage, https://github.com/SyJarvis/thryve",
"Repository, https://github.com/SyJarvis/thryve"
] | poetry/2.3.0 CPython/3.12.12 Darwin/25.2.0 | 2026-02-19T17:56:46.238157 | thryve-0.2.0-py3-none-any.whl | 80,325 | 50/52/59bcc7e554f784cca4064df55863e00073477d9820b6d9162b6dd52b6ded/thryve-0.2.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 6936fd6ffbfebdd9194bfa5bd6899e82 | 7ffe34d9064d7a753b7ac6cc03899a4db97b12633903ff31accdbd57312ca459 | 505259bcc7e554f784cca4064df55863e00073477d9820b6d9162b6dd52b6ded | MIT | [
"LICENSE"
] | 249 |
2.1 | pyCGNS | 6.3.5 | pyCGNS - Python package for CGNS (CFD General Notation System) | 
pyCGNS is a set of Python modules implementing the
[CFD General Notation System standard](https://cgns.github.io),
the standard of the CFD data representation.
The [user documentation](http://pycgns.github.io) is available online, it
contains the releases, the installation requirements and process, the usage docs
and the reference docs.
For more information concerning the CGNS standard please refer to cgns.github.io
For MS-Windows users, an unofficial version can be found on https://anaconda.org/conda-forge/pycgns
## CGNS Modules
- `CGNS.MAP` implements CGNS/Python physical representation of CGNS/SIDS
- `CGNS.PAT` has a large set of functions for CGNS/Python tree handling
- `CGNS.NAV` is a CGNS/Python tree browser
- `CGNS.VAL` checks CGNS/SIDS compliance of CGNS/Python trees
- `CGNS.APP` is a set of all-purpose utilities
- `CGNS.DAT` is not maintained today
## Bugs/Feature and issue tracking
Please use the [issue-tracker](https://github.com/pycgns/pycgns/issues) at github
to report bugs, evolution proposals and submit patches.
## License
The distribution and use of the pyCGNS software is covered by the LGPL v2.1 license.
| text/markdown | null | "Marc Poinot et al." <marc.poinot@safrangroup.com> | null | Mickael Philit <mickey.phy@gmail.com>, Marc Poinot <marc.poinot@safrangroup.com> | LGPL 2 | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU Lesser General Public License v2 (LGPLv2)",
"Operating System :: Unix",
"Operating System :: POSIX :: Linux",
"O... | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.23.5",
"qtpy; extra == \"gui\"",
"pyside6; extra == \"gui\"",
"vtk; extra == \"gui\"",
"unittest; extra == \"test\""
] | [] | [] | [] | [
"homepage, https://pycgns.github.io/",
"source, https://github.com/pyCGNS/pyCGNS",
"documentation, https://pycgns.github.io/",
"tracker, https://github.com/pyCGNS/pyCGNS/issues"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-19T17:56:16.688258 | pycgns-6.3.5.tar.gz | 11,611,368 | 32/6a/e2739e98b5541629ac2e0d142fd17b090686f5deb3f6c21b57e10a4f3c3c/pycgns-6.3.5.tar.gz | source | sdist | null | false | fa9715ab4d839e931709d5c4110a7574 | 0414362305e7831c5719ccedfbec2c477bd345a6ff426d2a0601de727c5d74c3 | 326ae2739e98b5541629ac2e0d142fd17b090686f5deb3f6c21b57e10a4f3c3c | null | [] | 0 |
2.4 | py-browser-automation | 0.3.4 | Automate online browsing using python and AI | <h1 align="center">PyBA</h1>
<p align="center">
<strong>Tell the AI what to do once. Get a Python script you can run forever.</strong>
</p>
<p align="center">
PyBA uses LLMs to autonomously navigate any website, then exports the session as a standalone Playwright script - no API costs on repeat runs.
</p>
<p align="center">
<a href="https://pepy.tech/projects/py-browser-automation">
<img height="28px" src="https://static.pepy.tech/personalized-badge/py-browser-automation?period=total&units=INTERNATIONAL_SYSTEM&left_color=BLACK&right_color=GREEN&left_text=downloads" />
</a>
<a href="https://badge.socket.dev/pypi/package/py-browser-automation/0.2.8?artifact_id=tar-gz">
<img height="28px" src="https://badge.socket.dev/pypi/package/py-browser-automation/0.2.8?artifact_id=tar-gz" />
</a>
</p>
<p align="center">
<a href="https://pypi.org/project/py-browser-automation/"><b>PyPI</b></a> •
<a href="https://pyba.readthedocs.io/"><b>Documentation</b></a> •
<a href="https://openhub.net/p/pyba"><b>OpenHub</b></a>
</p>
---
## The Problem with AI Browser Agents
Every AI browser agent has the same issue: **you pay for every single run.**
- Run it 100 times? Pay for 100 LLM calls.
- Same task every day? Pay every day.
- The AI figures out the same clicks over and over.
**PyBA is different.** Let the AI figure it out once, then export a deterministic script you own forever.
```python
from pyba import Engine
engine = Engine(openai_api_key="sk-...")
# Step 1: AI navigates autonomously
engine.sync_run(
prompt="Go to Hacker News, click the top story, extract all comments"
)
# Step 2: Export as a standalone Playwright script
engine.generate_code(output_path="hacker_news_scraper.py")
```
Now run `python hacker_news_scraper.py` forever. No AI. No API costs. Just Playwright.
---
## Installation
```sh
pip install py-browser-automation
```
---
## What Can You Do?
### Automate Repetitive Browser Tasks
```python
engine.sync_run(
prompt="Login to my bank, download this month's statement as PDF",
automated_login_sites=["swissbank"]
)
engine.generate_code("download_statement.py")
```
### OSINT & Reconnaissance
```python
from pyba import DFS
dfs = DFS(openai_api_key="sk-...")
dfs.sync_run(
prompt="Find all social media accounts linked to username 'targetuser123'"
)
```
### Structured Data Extraction
```python
from pydantic import BaseModel
class Product(BaseModel):
name: str
price: float
rating: float
engine.sync_run(
prompt="Scrape all products from the first 3 pages",
extraction_format=Product
)
# Data is extracted DURING navigation, stored in your database
```
### Authenticated Workflows
```python
engine.sync_run(
    prompt="Go to my Instagram DMs and message John Paula 'Running 10 mins late'",
automated_login_sites=["instagram"]
)
# Credentials come from env vars - never exposed to the LLM
```
---
## Four Exploration Modes
| Mode | Use Case | Example |
|------|----------|---------|
| **Normal** | Direct task execution | "Fill out this form and submit" |
| **Step** | Interactive, step-by-step control | "Click here" → "Now search for X" → "Extract that" |
| **DFS** | Deep investigation | "Analyze this GitHub user's contribution patterns" |
| **BFS** | Wide discovery | "Map all pages linked from this homepage" |
```python
from pyba import Engine, Step, DFS, BFS
# Normal mode (default)
engine = Engine(openai_api_key="...")
# Step-by-step interactive mode
step = Step(openai_api_key="...")
# Deep-first exploration
dfs = DFS(openai_api_key="...")
# Breadth-first discovery
bfs = BFS(openai_api_key="...")
```
### Interactive Step-by-Step Automation
```python
from pyba import Step
step = Step(openai_api_key="sk-...")
await step.start()
await step.step("Go to google.com and search for 'playwright python'")
await step.step("Click the first result")
output = await step.step("Extract the installation instructions")
await step.stop()
```
---
## Key Features
### Code Generation
Export any successful run as a standalone Python script. Run it forever without AI.
### Trace Files
Every run generates a Playwright trace.zip — replay exactly what happened in [Trace Viewer](https://trace.playwright.dev/).
### Low Memory Mode
Saves ~120MB of idle RAM by lazy-loading heavy Python dependencies (oxymouse, google-genai, openai). Chromium flags improve container stability. Built for CI servers, containers, and low-spec machines.
```python
engine = Engine(openai_api_key="sk-...", low_memory=True)
```
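The lazy-loading idea behind low memory mode can be reproduced with the standard library's `importlib.util.LazyLoader` (a generic sketch of the technique, not PyBA's implementation):

```python
import importlib.util
import sys

def lazy_import(name):
    """Register a module whose body only executes on first attribute
    access, keeping heavy dependencies out of startup cost."""
    spec = importlib.util.find_spec(name)
    spec.loader = importlib.util.LazyLoader(spec.loader)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    spec.loader.exec_module(module)  # defers real execution until first use
    return module

json_mod = lazy_import("json")           # nothing heavy has run yet
print(json_mod.dumps({"saved": "ram"}))  # first access triggers the real import
```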
### Stealth Mode
Anti-fingerprinting, random mouse movements, human-like delays. Bypass common bot detection.
### Multi-Provider
Works with OpenAI, Google VertexAI, or Gemini.
### Database Logging
Store every action in SQLite, PostgreSQL, or MySQL. Audit trails and replay capability.
### Platform Logins
Built-in login handlers for Instagram, Gmail, Facebook. Credentials stay in env vars.
---
## Quick Examples
### Extract YouTube Video Metadata
```python
engine.sync_run(
prompt="Go to this YouTube video and extract: title, view count, like count, channel name, upload date"
)
```
### Fill a Multi-Page Form
```python
engine.sync_run(
prompt="Fill out the job application: Name='John Doe', Email='john@email.com', upload resume from ~/resume.pdf, submit"
)
engine.generate_code("job_application.py") # Replay anytime
```
### Research a Company
```python
dfs = DFS(openai_api_key="...")
dfs.sync_run(
    prompt="Find the leadership team, recent news, and funding history for Acme Corp"
)
```
---
## Configuration
```python
from pyba import Engine, Database
# With database logging
db = Database(engine="sqlite", name="runs.db")
engine = Engine(
    openai_api_key="sk-...",
    headless=False,       # Watch it work
    enable_tracing=True,  # Generate trace.zip
    max_depth=20,         # Max actions per run
    database=db,          # Log everything
)
```
See [full configuration options](https://pyba.readthedocs.io/) in the docs.
---
## Origin
PyBA was built for automated intelligence and OSINT — replicating everything a human analyst can do in a browser, but with reproducibility and speed.
If you're doing security research, competitive intelligence, or just automating tedious browser tasks, this is for you.
---
## Status
> **v0.3.0** - Active development. First stable release: December 18, 2025.
> Breaking changes may occur. Pin your version in production.
---
<p align="center">
<b>If PyBA saved you time, consider giving it a ⭐</b>
</p>
| text/markdown | pUrGe12 | achintya.jai@owasp.org | null | null | MIT | browser-automations, AI | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming La... | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"bs4<0.0.3,>=0.0.2",
"colorama<0.5.0,>=0.4.6",
"google-genai<2.0.0,>=1.45.0",
"openai<3.0.0,>=2.6.0",
"oxymouse<2.0.0,>=1.1.0",
"playwright<2.0.0,>=1.55.0",
"playwright-stealth<3.0.0,>=2.0.0",
"pydantic<3.0.0,>=2.12.0",
"python-dotenv<2.0.0,>=1.1.1",
"pyyaml<7.0.0,>=6.0.3",
"requests<3.0.0,>=2.3... | [] | [] | [] | [
"Documentation, https://pyba.readthedocs.io/"
] | twine/6.2.0 CPython/3.10.12 | 2026-02-19T17:54:41.412261 | py_browser_automation-0.3.4.tar.gz | 64,057 | f8/33/cb03741f0d2d12db8bc42c1236e89ddac3da089092680f3c42859d90b840/py_browser_automation-0.3.4.tar.gz | source | sdist | null | false | 2b093db7c75508d550ef9dcc6ff36abb | 47e11235b3328ec3da3bb5ae78094a0e793b9c98a6004b7e4aa2c9a192aedb60 | f833cb03741f0d2d12db8bc42c1236e89ddac3da089092680f3c42859d90b840 | null | [
"LICENSE"
] | 219 |
2.4 | device-frames-core | 0.1.2 | Core library for applying device frames to screenshots. | device-frames-core
==================
Core library for applying device frames to screenshots.
Install
-------
```bash
pip install device-frames-core
```
Quick Start
-----------
```python
from pathlib import Path
from device_frames_core import apply_frame, list_devices
# List all iOS device variations
devices = list_devices(category="iOS")
print(f"Found {len(devices)} iOS device variations")
apply_frame(
    screenshot_path=Path("input.png"),
    device="16 Pro Max",
    variation="Black Titanium",
    output_path=Path("output/framed.png"),
    category="iOS",
)
```
API
---
- `list_devices(category=None, device=None)` returns a list of available devices and variations, optionally filtered.
- `apply_frame(...)` applies a frame using bundled assets and writes an output image.
- `find_template(device, variation, category=None)` returns the template data as a dict.
- `get_frame_image(device, variation, category=None)` returns the frame image as a PIL Image.
- `get_mask_image(device, variation, category=None)` returns the mask image as a PIL Image.
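Conceptually, framing is mask compositing with Pillow (which the package depends on). A minimal sketch of the idea using stand-in images rather than the bundled assets — the actual mask semantics inside device-frames-core may differ:

```python
from PIL import Image

# Stand-ins for the bundled assets; in real use these would come from
# get_frame_image() and get_mask_image().
frame = Image.new("RGBA", (100, 200), (0, 0, 0, 255))         # opaque bezel
screenshot = Image.new("RGBA", (100, 200), (255, 0, 0, 255))  # screen content
mask = Image.new("L", (100, 200), 0)                          # black = keep frame
mask.paste(255, (10, 20, 90, 180))                            # white = screen area

# Where the mask is white, take the screenshot; elsewhere, keep the frame.
framed = Image.composite(screenshot, frame, mask)
```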
Notes
-----
- Assets are bundled in the package under `device_frames_core/assets`.
- The package depends on Pillow.
| text/markdown | Jonny Jackson | null | null | null | GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>. | null | [
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: ... | [] | null | null | >=3.9 | [] | [] | [] | [
"Pillow>=10.0.0"
] | [] | [] | [] | [
"Homepage, https://jonny-jackson.com/posts/device-frames/",
"Repository, https://github.com/jonnyjackson26/device-frames-core"
] | twine/6.2.0 CPython/3.12.1 | 2026-02-19T17:54:06.145394 | device_frames_core-0.1.2.tar.gz | 20,484,310 | 44/d9/a42b3ab677161e80c0bdac1fecff727fce0a53a562798a5f64899bf598d6/device_frames_core-0.1.2.tar.gz | source | sdist | null | false | 73404627336ec520abed6f91374665bd | 72b6b705cfef7b9a203921cbad6b3e0c58dea9e73816c09c4ac9646ca6857f98 | 44d9a42b3ab677161e80c0bdac1fecff727fce0a53a562798a5f64899bf598d6 | null | [
"LICENSE"
] | 259 |
2.4 | VocabMaster | 0.3.0 | Master new languages with this CLI tool, designed to help you record vocabulary and create Anki flashcards without the need to manually input translations or example sentences. | # VocabMaster
CLI tool to record vocabulary and create Anki flashcards. Translations and example sentences are generated automatically.

<!-- TOC -->
## Table of Contents
1. [Features](#features)
1. [Installation](#installation)
1. [Prerequisites](#prerequisites)
1. [Install via `pip`](#install-via-pip)
1. [Install via `uv` (recommended)](#install-via-uv-recommended)
1. [OpenAI API key](#openai-api-key)
1. [Shell Completion](#shell-completion)
1. [Usage](#usage)
1. [Add a new language pair](#add-a-new-language-pair)
1. [Add words to your vocabulary list](#add-words-to-your-vocabulary-list)
1. [Manage language pairs](#manage-language-pairs)
1. [Generate an Anki deck from your vocabulary list](#generate-an-anki-deck-from-your-vocabulary-list)
1. [Choose where your files live](#choose-where-your-files-live)
1. [Recover from backups](#recover-from-backups)
1. [For detailed help on each command, run](#for-detailed-help-on-each-command-run)
1. [Importing into Anki](#importing-into-anki)
1. [Licence](#licence)
<!-- /TOC -->
## Features
* Record vocabulary words
* Automatic translation and usage examples via OpenAI GPT
* Definition mode: same-language pairs (e.g., french:french) for definitions instead of translations
* Custom Anki deck names
* Backup and recovery
* Multiple languages
## Installation
### Prerequisites
* Python 3.10+
* Compatible with Windows, Linux, and macOS
### Install via `pip`
```
python3 -m pip install vocabmaster
```
### Install via `uv` (recommended)
```
uv tool install vocabmaster
```
### OpenAI API key
VocabMaster requires an OpenAI API key to function. You can obtain one by signing up for an account on [OpenAI's website](https://platform.openai.com/settings/organization/api-keys).
Once you have your API key, store it in `~/.config/lmt/key.env` (preferred) or set it as an environment variable:
* On macOS and Linux:
```bash
mkdir -p ~/.config/lmt
cat << 'EOF' > ~/.config/lmt/key.env
OPENAI_API_KEY="your-api-key-here"
EOF
chmod 600 ~/.config/lmt/key.env
```
The key file accepts `OPENAI_API_KEY=...`, `export OPENAI_API_KEY=...`, or a single bare key on its own line.
To use an environment variable instead, add this to your shell configuration file (`.bashrc`, `.zshrc`, etc.):
```bash
export OPENAI_API_KEY="your-api-key-here"
```
* On Windows:
```
setx OPENAI_API_KEY your_key
```
### Shell Completion
To enable shell completion for bash or zsh, source the completion file for your shell (see the [`completion`](https://github.com/sderev/vocabmaster/tree/main/completion) folder) by adding the following line to your `.bashrc` or `.zshrc` file:
#### For bash
```
source /path/to/vocabmaster/completion/_complete_vocabmaster.bash
```
#### For zsh
```
source /path/to/vocabmaster/completion/_complete_vocabmaster.zsh
```
Remember to replace `/path/to/vocabmaster` with the actual path where the completion file is located.
## Usage
### Add a new language pair
```
vocabmaster pairs add
```

#### Definition mode for same-language pairs
VocabMaster supports same-language pairs for getting definitions instead of translations.
For example, to create a French vocabulary list with definitions in French:
```
vocabmaster pairs add
# When prompted, enter: french (language to learn) and french (mother tongue)
```
When using same-language pairs:
* The LLM provides concise definitions (2-3 words) instead of translations
* Example sentences are in the target language
* Anki decks are named "{Language} definitions" instead of "{Language} vocabulary"
### Add words to your vocabulary list
```
vocabmaster add la casa
```

### Manage language pairs
```
vocabmaster pairs list
vocabmaster pairs set-default
vocabmaster pairs remove
vocabmaster pairs rename
vocabmaster pairs inspect --pair english:french
```
`inspect` shows file locations, translation counts, and the estimated API cost (input tokens only) for a specific pair.
#### Custom deck names
Set a custom name for your Anki deck instead of using auto-generated names:
```
# Set a custom deck name
vocabmaster pairs set-deck-name --pair english:french --name "Business English"
# Interactive mode (prompts for pair selection and name)
vocabmaster pairs set-deck-name
# Remove custom name (revert to auto-generation)
vocabmaster pairs set-deck-name --pair english:french --remove
```
Once set, the custom deck name will be used automatically when generating Anki decks. You can also override it temporarily:
```
# Use custom name from config
vocabmaster anki --pair english:french
# Override with a different name for this generation only
vocabmaster anki --pair english:french --deck-name "Temporary Name"
```
The same `--deck-name` option works with the `translate` command.
### Generate an Anki deck from your vocabulary list
```
vocabmaster translate
```

Generate a deck for a specific pair with:
```
vocabmaster anki --pair spanish:english
```
### Choose where your files live
```
vocabmaster config dir --show
vocabmaster config dir ~/Documents/vocabmaster
```
Use `--show` to print your current storage directory. Vocabulary CSV and Anki decks default to `~/.vocabmaster`, but you can relocate them anywhere under your home directory. The configuration file itself always stays under `~/.config/vocabmaster/config.json`.
### Recover from backups
VocabMaster automatically creates backups before modifying your vocabulary files. Use the `recover` command group to list, validate, or restore from these backups.
```
# List available backups
vocabmaster recover list
vocabmaster recover list --pair spanish:english
# Restore from the most recent backup
vocabmaster recover restore --latest
# Restore a specific backup (use the ID from 'recover list')
vocabmaster recover restore --backup-id 3
# Validate backup integrity
vocabmaster recover validate
```
### For detailed help on each command, run
```
vocabmaster <command> --help
```
## Importing into Anki
To import the vocabulary deck into Anki, follow the steps below:
1. Launch Anki.
1. Click on the `Import File` button. This will open a file picker dialog.
1. In the file picker, locate and select the `anki_deck_language1-language2.csv` file.
1. Ensure the `Existing notes` field is set to *Update*. This will prevent the creation of duplicate cards if the same note already exists in your deck.
## Licence
VocabMaster is released under the [Apache Licence version 2](LICENSE).
___
<https://github.com/sderev/vocabmaster>
| text/markdown | Sébastien De Revière | null | null | null | Apache-2.0 | vocabulary, language-learning, anki, flashcards, cli, openai, gpt | [
"Development Status :: 4 - Beta",
"Intended Audience :: Education",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Langua... | [] | null | null | >=3.10 | [] | [] | [] | [
"click~=8.3",
"httpx>=0.28.1",
"openai<2.0,>=1.66.0",
"tiktoken~=0.12",
"urllib3>=2.6.3"
] | [] | [] | [] | [
"Homepage, https://github.com/sderev/vocabmaster",
"Repository, https://github.com/sderev/vocabmaster",
"Issues, https://github.com/sderev/vocabmaster/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T17:54:06.128199 | vocabmaster-0.3.0-py3-none-any.whl | 45,526 | 03/24/ff9ad5b660bc17b6b7755dac0c0ae3ca9c85463da8781ec07054463f545b/vocabmaster-0.3.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 39b0f9d5bf73366d16c2b122cf17a667 | f0d3aa8096dfca4c4f04ed5d7c583ba16a5c818d97361504cf7d3298777f483f | 0324ff9ad5b660bc17b6b7755dac0c0ae3ca9c85463da8781ec07054463f545b | null | [
"LICENSE"
] | 0 |
2.4 | naas-abi | 1.8.1 | Multi-agent orchestrator and knowledge graph management system for AI orchestration, providing comprehensive coordination of specialized AI agents and semantic data management capabilities | # naas-abi
Multi-agent orchestrator and knowledge graph management system for AI orchestration, providing comprehensive coordination of specialized AI agents and semantic data management capabilities.
## Overview
`naas-abi` is the central coordination hub for the ABI (Agentic Brain Infrastructure) ecosystem. It provides:
- **Multi-Agent Orchestration**: Central coordinator managing specialized AI agents (ChatGPT, Claude, Mistral, Gemini, Grok, Llama, Perplexity, Qwen, DeepSeek, Gemma)
- **Knowledge Graph Operations**: Complete CRUD operations for semantic data management
- **Ontology Engineering**: BFO-compliant entity extraction and SPARQL generation
- **Intelligent Routing**: Weighted decision hierarchy with context preservation
- **Multilingual Support**: Native French/English interactions with cultural awareness
- **Production Integration**: Event-driven triggers and YAML ontology publishing
## Installation
```bash
pip install naas-abi
```
## Core Components
### ABIModule
The `ABIModule` is the main module that orchestrates the entire ABI system. It automatically loads marketplace modules (AI agents, applications, domain experts) and provides the core infrastructure.
**Configuration:**
```yaml
modules:
- module: naas_abi
enabled: true
config:
datastore_path: "abi"
workspace_id: "{{ secret.WORKSPACE_ID }}"
storage_name: "{{ secret.STORAGE_NAME }}"
```
**Dependencies:**
The module automatically loads marketplace modules as soft dependencies (optional):
- AI Agents: ChatGPT, Claude, Mistral, Gemini, Grok, Llama, Perplexity, Qwen, DeepSeek, Gemma
- Applications: GitHub, LinkedIn, Google services, and 50+ other integrations
- Domain Experts: Support, Software Engineering, Data Analysis, and more
**Required Services:**
- `Secret`: For credential management
- `TripleStoreService`: For knowledge graph operations
- `ObjectStorageService`: For file storage
### Agents
#### AbiAgent
The main multi-agent orchestrator that coordinates specialized agents.
**Features:**
- Intelligent routing based on request type and context
- Context preservation across conversations
- Multilingual support (French/English)
- Weighted decision hierarchy for optimal agent selection
- Strategic advisory capabilities
**Usage:**
```python
from naas_abi.agents.AbiAgent import create_agent
agent = create_agent()
response = agent.invoke("Route this to the best AI for code generation")
```
**Routing Priorities:**
- Context Preservation (0.99): Maintains active conversations
- Identity/Strategic (0.95): Direct Abi responses
- Web Search (0.90): Routes to Perplexity/ChatGPT
- Creative/Multimodal (0.85): Routes to Gemini
- Truth Seeking (0.80): Routes to Grok
- Advanced Reasoning (0.75): Routes to Claude
- Code & Math (0.70): Routes to Mistral
- Knowledge Graph (0.68): Opens KG Explorer
- Internal Knowledge (0.65): Uses ontology agent
- Platform Operations (0.45): Routes to Naas agent
- Issue Management (0.25): Routes to Support agent
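The weighted hierarchy above can be pictured as picking the highest-weighted route whose trigger matches the request. The sketch below is only an illustration of that idea; the keyword triggers and `pick_route` function are assumptions, not AbiAgent's actual matching logic:

```python
# Illustrative weighted-routing sketch -- the real AbiAgent's matching is
# more sophisticated; the keyword triggers here are made-up assumptions.
ROUTES = [
    (0.90, "perplexity", {"search", "web", "news"}),
    (0.75, "claude", {"reason", "analyze", "plan"}),
    (0.70, "mistral", {"code", "math", "function"}),
]

def pick_route(request: str, default: str = "abi") -> str:
    """Return the highest-weighted route whose keywords match the request."""
    words = set(request.lower().split())
    # Walk routes from highest weight to lowest; first match wins.
    for weight, agent, keywords in sorted(ROUTES, reverse=True):
        if words & keywords:
            return agent
    return default  # fall back to Abi itself
```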
#### EntitytoSPARQLAgent
Extracts entities from natural language and generates SPARQL queries.
**Capabilities:**
- BFO-compliant entity extraction
- Automatic SPARQL query generation
- Entity relationship mapping
- Ontology-aware query construction
**Usage:**
```python
from naas_abi.agents.EntitytoSPARQLAgent import create_agent
agent = create_agent()
response = agent.invoke(
"Extract entities from: 'John works at Microsoft as a software engineer'"
)
```
#### KnowledgeGraphBuilderAgent
Provides complete CRUD operations for the knowledge graph.
**Capabilities:**
- Add individuals (entities) to the knowledge graph
- Update properties and relationships
- Remove individuals
- Merge duplicate entities
- Query and explore the graph
**Usage:**
```python
from naas_abi.agents.KnowledgeGraphBuilderAgent import create_agent
agent = create_agent()
response = agent.invoke(
"Add a new organization called 'NaasAI' to the knowledge graph"
)
```
#### OntologyEngineerAgent
Specialized agent for BFO ontology engineering and management.
**Capabilities:**
- Ontology creation and modification
- BFO-compliant structure validation
- Class and property definition
- Ontology publishing to YAML
**Usage:**
```python
from naas_abi.agents.OntologyEngineerAgent import create_agent
agent = create_agent()
response = agent.invoke(
"Create a new ontology class for 'SoftwareProject' with properties"
)
```
### Workflows
#### AgentRecommendationWorkflow
Recommends the best AI agent for a given intent using SPARQL queries.
**Features:**
- Intent-to-query matching
- SPARQL template parameterization
- Weighted recommendation scoring
- Provider preference support
**Usage:**
```python
from naas_abi.workflows.AgentRecommendationWorkflow import (
AgentRecommendationWorkflow,
AgentRecommendationConfiguration,
AgentRecommendationParameters
)
workflow = AgentRecommendationWorkflow(
AgentRecommendationConfiguration(queries_file_path="path/to/queries.ttl")
)
result = workflow.run(AgentRecommendationParameters(
intent_description="I need help with code generation",
provider_preference="openai"
))
```
#### ArtificialAnalysisWorkflow
Fetches and stores AI model data from the Artificial Analysis API.
**Features:**
- Fetches model performance data
- Filters for modules with active agents
- Saves timestamped JSON files
- Supports multiple endpoints (models, providers, categories)
**Usage:**
```python
from naas_abi.workflows.ArtificialAnalysisWorkflow import (
ArtificialAnalysisWorkflow,
ArtificialAnalysisWorkflowConfiguration,
ArtificialAnalysisWorkflowParameters
)
workflow = ArtificialAnalysisWorkflow(
ArtificialAnalysisWorkflowConfiguration(
api_key="your_api_key",
base_url="https://artificialanalysis.ai/api/v2"
)
)
result = workflow.run(ArtificialAnalysisWorkflowParameters(
endpoint="models",
validate_agents_only=True
))
```
#### SearchIndividualWorkflow
Searches for individuals (entities) in the knowledge graph.
**Features:**
- Semantic search across entities
- Fuzzy matching support
- Property-based filtering
- Result ranking
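As a rough sketch of what fuzzy matching over entity labels can look like (an illustration using the standard library's `difflib` as a stand-in; the module's actual implementation relies on `thefuzz`, and this helper is hypothetical):

```python
from difflib import SequenceMatcher

def fuzzy_match(query: str, labels: list[str], cutoff: float = 0.6) -> list[str]:
    """Rank labels by similarity to the query.

    Hypothetical stdlib stand-in for thefuzz-style matching -- not the
    module's actual code.
    """
    scored = [
        (SequenceMatcher(None, query.lower(), label.lower()).ratio(), label)
        for label in labels
    ]
    # Best matches first; drop anything below the similarity cutoff.
    return [label for score, label in sorted(scored, reverse=True) if score >= cutoff]
```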
#### GetSubjectGraphWorkflow
Retrieves the graph structure for a specific subject (entity).
**Features:**
- Entity relationship exploration
- Configurable depth traversal
- Graph visualization data
- Property and relationship extraction
#### GetObjectPropertiesFromClassWorkflow
Retrieves object properties for a given ontology class.
**Features:**
- Class property discovery
- BFO-compliant property extraction
- Relationship type identification
- Ontology hierarchy traversal
### Pipelines
#### AddIndividualPipeline
Adds new individuals (entities) to the knowledge graph.
**Features:**
- Duplicate detection
- Automatic URI generation
- Property assignment
- Relationship creation
#### AIAgentOntologyGenerationPipeline
Generates AI agent ontologies from Artificial Analysis data.
**Features:**
- BFO-structured ontology generation
- Model-to-agent mapping
- Timestamped audit trails
- Automatic deployment to module folders
**Execution Steps:**
1. Loads Artificial Analysis data
2. Groups models by AI agent
3. Generates ontologies in timestamped folders
4. Deploys current versions to module folders
5. Creates audit trail and summary
#### InsertDataSPARQLPipeline
Inserts data into the knowledge graph using SPARQL INSERT queries.
**Features:**
- SPARQL query execution
- Batch insert operations
- Validation and error handling
- Transaction support
#### MergeIndividualsPipeline
Merges duplicate individuals in the knowledge graph.
**Features:**
- Duplicate detection
- Property merging
- Relationship consolidation
- Audit logging
#### RemoveIndividualPipeline
Removes individuals from the knowledge graph.
**Features:**
- Safe deletion with validation
- Relationship cleanup
- Audit trail creation
- Backup generation
#### Update Pipelines
Specialized pipelines for updating specific entity types:
- `UpdateDataPropertyPipeline`: Updates data properties
- `UpdatePersonPipeline`: Updates person entities
- `UpdateCommercialOrganizationPipeline`: Updates organization entities
- `UpdateSkillPipeline`: Updates skill entities
- `UpdateLinkedInPagePipeline`: Updates LinkedIn page data
- `UpdateTickerPipeline`: Updates stock ticker information
- `UpdateWebsitePipeline`: Updates website information
- `UpdateLegalNamePipeline`: Updates legal names
### Ontologies
The module includes a comprehensive ontology structure organized in a 4-level hierarchy:
1. **Top-level**: BFO foundational ontologies
2. **Mid-level**: Common Core Ontologies (CCO)
3. **Domain-level**: Domain-specific ontologies
4. **Application-level**: Use-case specific ontologies
**Location:** `naas_abi/ontologies/`
### Models
The module supports multiple AI model configurations:
#### Cloud Mode (Default)
- **Model**: `gpt-4.1-mini`
- **Provider**: OpenAI
- **Temperature**: 0 (precise orchestration)
- **Requires**: `OPENAI_API_KEY`
#### Airgap Mode
- **Qwen3** (default): Temperature 0.7, 8K context
- **Gemma3** (alternative): Temperature 0.2, 8K context
- **Requires**: Docker Model Runner on `localhost:12434`
**Configuration:**
```bash
# Set the deployment mode via an environment variable
export AI_MODE=cloud  # or "airgap" or "local"
```
## CLI Tools
The module provides CLI commands for creating new components:
```bash
# Create a new module
python -m naas_abi.cli create-module
# Create a new agent
python -m naas_abi.cli create-agent
# Create a new integration
python -m naas_abi.cli create-integration
# Create a new workflow
python -m naas_abi.cli create-workflow
# Create a new pipeline
python -m naas_abi.cli create-pipeline
# Create a new ontology
python -m naas_abi.cli create-ontology
```
Each command provides an interactive wizard to guide you through the creation process.
## Usage Examples
### Basic Agent Interaction
```python
from naas_abi_core.engine.Engine import Engine
# Initialize engine
engine = Engine()
engine.load(module_names=["naas_abi"])
# Get AbiAgent
from naas_abi.agents.AbiAgent import create_agent
agent = create_agent()
# Interact with agent
response = agent.invoke("What agents are available?")
print(response)
```
### Knowledge Graph Operations
```python
from naas_abi.agents.KnowledgeGraphBuilderAgent import create_agent
kg_agent = create_agent()
# Add an organization
kg_agent.invoke("Add organization 'NaasAI' with website 'https://naas.ai'")
# Search for entities
from naas_abi.workflows.SearchIndividualWorkflow import (
SearchIndividualWorkflow,
SearchIndividualWorkflowConfiguration,
SearchIndividualWorkflowParameters
)
workflow = SearchIndividualWorkflow(
SearchIndividualWorkflowConfiguration(
triple_store=engine.services.triple_store
)
)
results = workflow.run(SearchIndividualWorkflowParameters(
search_term="NaasAI"
))
```
### Workflow Execution
```python
from naas_abi.workflows.AgentRecommendationWorkflow import (
AgentRecommendationWorkflow,
AgentRecommendationConfiguration,
AgentRecommendationParameters
)
workflow = AgentRecommendationWorkflow(
AgentRecommendationConfiguration(
queries_file_path="path/to/queries.ttl"
)
)
recommendations = workflow.run(AgentRecommendationParameters(
intent_description="I need help with data analysis",
provider_preference="anthropic"
))
```
### Pipeline Execution
```python
from naas_abi.pipelines.AddIndividualPipeline import (
AddIndividualPipeline,
AddIndividualPipelineConfiguration,
AddIndividualPipelineParameters
)
pipeline = AddIndividualPipeline(
AddIndividualPipelineConfiguration(
triple_store=engine.services.triple_store,
search_individual_configuration=SearchIndividualWorkflowConfiguration(
triple_store=engine.services.triple_store
)
)
)
graph = pipeline.run(AddIndividualPipelineParameters(
individual_label="New Company",
individual_type="CommercialOrganization"
))
```
## Configuration
### Environment Variables
| Variable | Values | Default | Description |
|----------|--------|---------|-------------|
| `AI_MODE` | `cloud` \| `airgap` \| `local` | `cloud` | Model deployment mode |
| `OPENAI_API_KEY` | API key | Required (cloud) | For cloud models |
| `NAAS_API_KEY` | API key | Optional | For production triggers |
| `ENV` | `dev` \| `prod` | `dev` | Environment mode |
### Module Configuration
```yaml
modules:
- module: naas_abi
enabled: true
config:
datastore_path: "abi"
workspace_id: "{{ secret.WORKSPACE_ID }}"
storage_name: "{{ secret.STORAGE_NAME }}"
```
## Key Features
### 🔄 Context-Aware Orchestration
Preserves active conversations while enabling intelligent agent transitions.
### 🌍 Multilingual Support
Native French/English code-switching with cultural awareness.
### 🎯 Weighted Decision Routing
Sophisticated hierarchy for optimal agent selection based on request type.
### 🔍 Knowledge Graph Integration
Direct access to SPARQL querying and semantic data exploration.
### 🔒 Deployment Flexibility
Choice between cloud (OpenAI) and airgap (Docker Model Runner) models.
### 📊 Strategic Advisory
Direct consultation capabilities for business and technical guidance.
### 🛡️ Production Ready
Event-driven triggers, comprehensive testing, and error resilience.
## Dependencies
- `naas-abi-core>=1.0.0`: Core ABI framework
- `naas-abi-marketplace>=1.0.0`: Marketplace modules and agents
- `thefuzz>=0.22.1`: Fuzzy string matching
## Architecture
### Module Structure
```
naas_abi/
├── agents/ # Agent implementations
│ ├── AbiAgent.py
│ ├── EntitytoSPARQLAgent.py
│ ├── KnowledgeGraphBuilderAgent.py
│ └── OntologyEngineerAgent.py
├── workflows/ # Business logic workflows
│ ├── AgentRecommendationWorkflow.py
│ ├── ArtificialAnalysisWorkflow.py
│ ├── SearchIndividualWorkflow.py
│ ├── GetSubjectGraphWorkflow.py
│ └── GetObjectPropertiesFromClassWorkflow.py
├── pipelines/ # Data processing pipelines
│ ├── AddIndividualPipeline.py
│ ├── AIAgentOntologyGenerationPipeline.py
│ ├── InsertDataSPARQLPipeline.py
│ ├── MergeIndividualsPipeline.py
│ └── Update*Pipeline.py
├── ontologies/ # Ontology definitions
├── models/ # Model configurations
├── cli.py # CLI commands
└── __init__.py # Module initialization
```
## Testing
```bash
# Run all tests
pytest naas_abi/ -v
# Test specific agent
pytest naas_abi/agents/AbiAgent_test.py -v
# Test workflows
pytest naas_abi/workflows/ -v
# Test pipelines
pytest naas_abi/pipelines/ -v
```
## See Also
- [ABI Main README](../../README.md) - Complete ABI framework documentation
- [naas-abi-core](../naas-abi-core/) - Core engine documentation
- [naas-abi-cli](../naas-abi-cli/) - CLI tool documentation
- [naas-abi-marketplace](../naas-abi-marketplace/) - Marketplace modules
## License
MIT License
| text/markdown | null | Maxime Jublou <maxime@naas.ai>, Florent Ravenel <florent@naas.ai>, Jeremy Ravenel <jeremy@naas.ai> | null | null | MIT License | null | [] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"naas-abi-core[dagster]>=1.4.0",
"naas-abi-marketplace>=1.3.3",
"fastapi>=0.109.0",
"uvicorn[standard]>=0.27.0",
"python-dotenv>=1.0.0",
"pydantic>=2.5.0",
"pydantic-settings>=2.1.0",
"sqlalchemy>=2.0.0",
"greenlet>=3.0.0",
"asyncpg>=0.29.0",
"psycopg2-binary>=2.9.9",
"alembic>=1.13.0",
"red... | [] | [] | [] | [
"Homepage, https://github.com/jupyter-naas/abi",
"Repository, https://github.com/jupyter-naas/abi/tree/main/libs/naas-abi"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T17:53:38.061685 | naas_abi-1.8.1.tar.gz | 20,861,871 | 76/4d/ca80798d0018364cdba03f61e0ea7b7e012238d0208478cd2b22cb151fe4/naas_abi-1.8.1.tar.gz | source | sdist | null | false | 9d58a3368cce5a98d809f54e67e718ec | e0c1d786cb83010a142bc37e647c947e0beb615ddd814980dc4593ad3503daed | 764dca80798d0018364cdba03f61e0ea7b7e012238d0208478cd2b22cb151fe4 | null | [] | 253 |
2.4 | apple-mail-mcp | 0.1.2 | Fast MCP server for Apple Mail with FTS5 search index | # Apple Mail MCP
<!-- mcp-name: io.github.imdinu/apple-mail-mcp -->
[](https://www.python.org/downloads/)
[](https://www.gnu.org/licenses/gpl-3.0)
[](https://www.apple.com/macos/)
[](https://modelcontextprotocol.io/)
[](https://github.com/astral-sh/ruff)
[](https://github.com/imdinu/apple-mail-mcp/actions/workflows/lint.yml)
A fast MCP server for Apple Mail — **87x faster** email fetching via batch JXA, plus an FTS5 search index for **700–3500x faster** body search (~2ms vs ~7s).
**[Read the docs](https://imdinu.github.io/apple-mail-mcp/)** for the full guide.
## Quick Start
```bash
pipx install apple-mail-mcp
```
Add to your MCP client:
```json
{
"mcpServers": {
"mail": {
"command": "apple-mail-mcp"
}
}
}
```
### Build the Search Index (Recommended)
```bash
# Requires Full Disk Access for Terminal
# System Settings → Privacy & Security → Full Disk Access → Add Terminal
apple-mail-mcp index --verbose
```
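The body-search speedup comes from SQLite's FTS5 full-text index, which lets a `MATCH` query hit an inverted index instead of scanning every message body. The core idea can be sketched in a few lines (a toy illustration; the schema and data here are made up, and apple-mail-mcp's real index layout may differ):

```python
import sqlite3

# Toy FTS5 illustration -- schema and rows are invented for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE emails USING fts5(subject, body)")
conn.executemany(
    "INSERT INTO emails VALUES (?, ?)",
    [
        ("Quarterly report", "The Q3 numbers are attached."),
        ("Lunch?", "Want to grab tacos on Friday?"),
    ],
)
# MATCH consults the inverted index rather than scanning each body.
rows = conn.execute(
    "SELECT subject FROM emails WHERE emails MATCH ?", ("tacos",)
).fetchall()
```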
## Tools
| Tool | Purpose |
|------|---------|
| `list_accounts()` | List email accounts |
| `list_mailboxes(account?)` | List mailboxes |
| `get_emails(filter?, limit?)` | Get emails — all, unread, flagged, today, this_week |
| `get_email(message_id)` | Get single email with full content |
| `search(query, scope?)` | Search — all, subject, sender, body |
## Performance
| Scenario | Apple Mail MCP | Best alternative | Speedup |
|----------|---------------|-----------------|---------|
| Fetch 50 emails | 529ms | 15,288ms | **29x** |
| Body search | ~2ms | ~7,000ms (or unsupported) | **3500x** |
| List accounts | 108ms | 146ms | Fastest |
> Benchmarked against [7 other Apple Mail MCP servers](https://imdinu.github.io/apple-mail-mcp/benchmarks/) at the MCP protocol level.
## Configuration
| Variable | Default | Description |
|----------|---------|-------------|
| `APPLE_MAIL_DEFAULT_ACCOUNT` | First account | Default email account |
| `APPLE_MAIL_DEFAULT_MAILBOX` | `INBOX` | Default mailbox |
| `APPLE_MAIL_INDEX_PATH` | `~/.apple-mail-mcp/index.db` | Index location |
```json
{
"mcpServers": {
"mail": {
"command": "apple-mail-mcp",
"args": ["--watch"],
"env": {
"APPLE_MAIL_DEFAULT_ACCOUNT": "Work"
}
}
}
}
```
## Development
```bash
git clone https://github.com/imdinu/apple-mail-mcp
cd apple-mail-mcp
uv sync
uv run ruff check src/
uv run pytest
```
## License
GPL-3.0-or-later
| text/markdown | null | Ioan-Mihail Dinu <iodinu@icloud.com> | null | null | null | apple-mail, apple-mail-mcp, automation, email, fts5, macos, mcp, model-context-protocol | [
"Development Status :: 3 - Alpha",
"Environment :: MacOS X",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: MacOS",
"Programming Language :: JavaScript",
"Programming Language :: Python :: 3",
"Programming Language :: Python ... | [] | null | null | >=3.11 | [] | [] | [] | [
"beautifulsoup4>=4.12",
"cyclopts>=5.0.0a1",
"fastmcp<4,>=3.0.0b1",
"watchfiles>=1.0; extra == \"watch\""
] | [] | [] | [] | [
"Homepage, https://github.com/imdinu/apple-mail-mcp",
"Documentation, https://imdinu.github.io/apple-mail-mcp/",
"Repository, https://github.com/imdinu/apple-mail-mcp",
"Issues, https://github.com/imdinu/apple-mail-mcp/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T17:53:22.064336 | apple_mail_mcp-0.1.2.tar.gz | 44,753 | 47/4b/112f3ee66d5fb488fd4790e411f3a12ffce92220916d8698a6dfecb56dee/apple_mail_mcp-0.1.2.tar.gz | source | sdist | null | false | 00280a49441e6d49053a737d37a2b65d | dec7d8f0dd07fa22b5b8850eb3c5f05671b587df933849cd45d995ba4387524d | 474b112f3ee66d5fb488fd4790e411f3a12ffce92220916d8698a6dfecb56dee | GPL-3.0-or-later | [
"LICENSE"
] | 262 |
2.4 | nrcan-etl-toolbox | 0.2.26 | Package for logging and database interfacing using SQLAlchemy and SQLModels |
# NRCAN ETL Toolbox
[](https://codecov.io/github/xmalet-nrcan/etl-toolbox)
[](https://github.com/xmalet-nrcan/etl-toolbox/actions/workflows/ci-release.yml)
For the French version of this document, see [README-fr.md](README-fr.md).
`etl-toolbox` is a Python toolkit designed to simplify Extract, Transform, and Load (ETL) data processes. This modular toolkit offers several specialized components for different aspects of ETL workflows.
## Components
### etl_logging
Specialized logging module for ETL processes, allowing simple configuration and efficient log analysis.
### etl_toolbox
Collection of tools for reading data from various sources. It includes readers for different file formats and databases, facilitating data integration in ETL processes:
- **Data Readers**: CSV, Excel, GeoPackage, JSON, PostGIS, Shapefile
### database
Interfaces and ORM for interacting with different database systems:
- **Database Interfaces**: Abstract object handlers for database interactions
- **ORM**: Object-relational mappings to simplify data access
## Installation
Install the package via Poetry:
```bash
poetry install
```
Or by creating a distribution:
```bash
poetry build
pip install dist/nrcan_etl_toolbox-*.whl
```
## Usage
### Logging Module (etl_logging)
```python
from nrcan_etl_toolbox.etl_logging import CustomLogger
logger = CustomLogger(name="Test Logger",
                      level='INFO',
                      logger_type='verbose',
                      logger_file_name='test_logger.log')
# Logging messages
logger.info("Starting ETL process")
logger.debug("Technical details", extra={"data": {"items": 100}})
logger.error("Processing error", exc_info=True)
```
### Data Readers (etl_toolbox)
```python
from nrcan_etl_toolbox.etl_toolbox.reader import ReaderFactory
from nrcan_etl_toolbox.etl_toolbox.reader.source_readers import ExcelReader
# Creating a CSV reader
csv_reader = ReaderFactory(input_source="data.csv")
data = csv_reader.data
# Creating a Shapefile reader
shp_reader = ReaderFactory(input_source="data.shp")
geo_data = shp_reader.data
# Creating a PostGIS reader
postgis_reader = ReaderFactory(
    input_source="postgresql://user:password@host:port/database",  # connection string for your database
    table_name="table_name",
    schema="schema_name",
)
geo_data = postgis_reader.data
# Creating an Excel reader
reader = ReaderFactory(input_source="data.xlsx")
# Get the Reader object
excel_reader: ExcelReader = reader.reader
# If excel file contains multiple sheets,
# data will be a dictionary with sheet names as keys and dataframes as values
data = excel_reader.dataframe
# data = {'Sheet1': df1, 'Sheet2': df2}
# To read a specific sheet, use the sheet_name parameter
data = excel_reader.read_sheet('Sheet1')
# data = df1
```
### Database Interface
```python
# TODO: Complete documentation.
from nrcan_etl_toolbox.database.interface import AbstractDatabaseHandler
# Usage example to be documented
```
## Development
To contribute to the project, install development dependencies:
```bash
poetry install --with dev
```
Run tests with:
```bash
pytest
```
## Project Structure
```
nrcan_etl_toolbox/
├── database/ # Database interactions
│ ├── interface/ # Abstract interfaces for databases
│ └── orm/ # Object-relational mappings
├── etl_logging/ # ETL logging module
└── etl_toolbox/ # Main ETL tools
└── reader/ # Data source readers
└── source_readers/ # Specific reader implementations
```
[//]: # (## License)
[//]: # ()
[//]: # (This project is distributed under the MIT license. See the [LICENSE](LICENSE) file for more information.)
## Authors
- NRCAN (Natural Resources Canada)
- [Xavier Malet](mailto:xavier.malet@nrcan-rncan.gc.ca)
For questions or suggestions, please use the project's GitHub issues.
| text/markdown | Xavier Malet | xavier.malet@nrcan-rncan.gc.ca | null | null | MIT License
Copyright 2025, (c) Her Majesty the Queen in Right of Canada, as represented by the Minister of Natural Resources
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"SQLAlchemy>2.0.40",
"geoalchemy2>0.17.1",
"geopandas",
"omegaconf>=2.3.0",
"openpyxl>=3.1.5",
"pandas",
"paramiko>=3.5.1",
"psycopg2-binary>2.9.10",
"pyodbc; sys_platform == \"win32\"",
"pytest>=8.0.0",
"python-dotenv",
"ruff>=0.11.10",
"shapely",
"sqlalchemy-access; sys_platform == \"win... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T17:52:37.866613 | nrcan_etl_toolbox-0.2.26.tar.gz | 21,403 | c6/86/d43aec2e4681df589f6a79d74f3a23187ff2e8ef0ad6c687e7e41fcb1b9b/nrcan_etl_toolbox-0.2.26.tar.gz | source | sdist | null | false | fdfc7ec3b1136cc5b63c88128ac5fc88 | 222dd04724618ba6547a2acfa6395f363df39e3960ff3912553a8d34033aff2b | c686d43aec2e4681df589f6a79d74f3a23187ff2e8ef0ad6c687e7e41fcb1b9b | null | [
"LICENSE"
] | 228 |
2.4 | hydra-sweeper-explicit | 0.0.1 | Hydra sweeper for explicit parameter combinations without Cartesian product | # hydra-sweeper-explicit
[![Tests][badge-tests]][tests]
[![PyPI][badge-pypi]][pypi]
[badge-tests]: https://img.shields.io/github/actions/workflow/status/quadbio/hydra-sweeper-explicit/test.yaml?branch=main&label=tests
[badge-pypi]: https://img.shields.io/pypi/v/hydra-sweeper-explicit
[tests]: https://github.com/quadbio/hydra-sweeper-explicit/actions/workflows/test.yaml
[pypi]: https://pypi.org/project/hydra-sweeper-explicit
A Hydra sweeper for running explicit parameter combinations without Cartesian product.
## Installation
```bash
pip install hydra-sweeper-explicit
```
## Usage
```yaml
hydra:
sweeper:
_target_: hydra_sweeper_explicit.ExplicitSweeper
combinations:
- {model: small, lr: 0.01}
- {model: large, lr: 0.001}
- {model: large, lr: 0.0001, dropout: 0.5}
```
```bash
python train.py --multirun
```
Runs exactly 3 jobs—no Cartesian product.
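Conceptually, each mapping in `combinations` becomes one job's list of Hydra overrides. A minimal sketch of that expansion (illustrative only, not the sweeper's actual implementation):

```python
# Each dict maps directly to one job's override list -- no cross product.
combinations = [
    {"model": "small", "lr": 0.01},
    {"model": "large", "lr": 0.001},
    {"model": "large", "lr": 0.0001, "dropout": 0.5},
]
jobs = [[f"{k}={v}" for k, v in combo.items()] for combo in combinations]
# jobs[0] == ["model=small", "lr=0.01"]
```

A Cartesian-product sweeper over the same keys would instead launch every `model` × `lr` × `dropout` combination.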
| text/markdown | Marius Lange | null | Marius Lange | null | null | hydra, hyperparameter, machine-learning, sweeper | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.12",
"Programming Language :... | [] | null | null | >=3.12 | [] | [] | [] | [
"hydra-core>=1.3",
"omegaconf>=2.3",
"pre-commit; extra == \"dev\"",
"coverage>=7.10; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"pytest>=8; extra == \"test\""
] | [] | [] | [] | [
"Documentation, https://github.com/quadbio/hydra-sweeper-explicit#readme",
"Issues, https://github.com/quadbio/hydra-sweeper-explicit/issues",
"Source, https://github.com/quadbio/hydra-sweeper-explicit"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T17:52:37.099611 | hydra_sweeper_explicit-0.0.1.tar.gz | 24,996 | 77/11/7df1878db2a11727d6134343d3710c06bbfd2726225ec701736a4fe55856/hydra_sweeper_explicit-0.0.1.tar.gz | source | sdist | null | false | aa110db5ebceaa982034fe1cae824875 | f372187c29001c98d01fa0c54b1d1fcc7d1ad018184c8f5bde5347a4dfaae3bb | 77117df1878db2a11727d6134343d3710c06bbfd2726225ec701736a4fe55856 | MIT | [
"LICENSE"
] | 229 |
2.4 | strands-agents-mcp-server | 0.2.6 | A Model Context Protocol server that provides knowledge about building AI agents with Strands Agents | <div align="center">
<div>
<a href="https://strandsagents.com">
<img src="https://strandsagents.com/latest/assets/logo-github.svg" alt="Strands Agents" width="55px" height="105px">
</a>
</div>
<h1>
Strands Agents MCP Server
</h1>
<h2>
A model-driven approach to building AI agents in just a few lines of code.
</h2>
<div align="center">
<a href="https://github.com/strands-agents/mcp-server/graphs/commit-activity"><img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/strands-agents/mcp-server"/></a>
<a href="https://github.com/strands-agents/mcp-server/issues"><img alt="GitHub open issues" src="https://img.shields.io/github/issues/strands-agents/mcp-server"/></a>
<a href="https://github.com/strands-agents/mcp-server/pulls"><img alt="GitHub open pull requests" src="https://img.shields.io/github/issues-pr/strands-agents/mcp-server"/></a>
<a href="https://github.com/strands-agents/mcp-server/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/github/license/strands-agents/mcp-server"/></a>
<a href="https://pypi.org/project/strands-agents-mcp-server/"><img alt="PyPI version" src="https://img.shields.io/pypi/v/strands-agents-mcp-server"/></a>
<a href="https://python.org"><img alt="Python versions" src="https://img.shields.io/pypi/pyversions/strands-agents-mcp-server"/></a>
</div>
<p>
<a href="https://strandsagents.com/">Documentation</a>
◆ <a href="https://github.com/strands-agents/samples">Samples</a>
◆ <a href="https://github.com/strands-agents/sdk-python">Python SDK</a>
◆ <a href="https://github.com/strands-agents/tools">Tools</a>
◆ <a href="https://github.com/strands-agents/agent-builder">Agent Builder</a>
◆ <a href="https://github.com/strands-agents/mcp-server">MCP Server</a>
</p>
</div>
This MCP server provides curated documentation access to your GenAI tools via llms.txt files, enabling AI coding assistants to search and retrieve relevant documentation with intelligent ranking.
## Features
- **Smart Document Search**: TF-IDF based search with Markdown-aware scoring that prioritizes titles, headers, and code blocks
- **Curated Content**: Indexes documentation from llms.txt files with clean, human-readable titles
- **On-Demand Fetching**: Lazy-loads full document content only when needed for optimal performance
- **Snippet Generation**: Provides contextual snippets with relevance scoring for quick overview
- **Real URL Support**: Works with actual HTTPS URLs while maintaining backward compatibility
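To make the "Markdown-aware scoring" idea concrete, here is a toy TF-IDF ranker that weights title tokens more heavily than body tokens. It is a sketch of the general technique under assumed weights, not this server's actual code:

```python
# Toy TF-IDF ranking with a boost for title tokens (assumed boost factor).
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def rank(query, docs, title_boost=2.0):
    """Return doc indices sorted by relevance; docs is a list of (title, body)."""
    counts = []
    for title, body in docs:
        c = Counter(tokenize(body))
        for tok in tokenize(title):
            c[tok] += title_boost  # title terms count extra
        counts.append(c)
    n = len(docs)
    df = Counter()
    for c in counts:
        df.update(c.keys())  # document frequency per term
    scores = []
    for i, c in enumerate(counts):
        total = sum(c.values()) or 1
        score = 0.0
        for q in tokenize(query):
            if c[q] > 0:
                tf = c[q] / total
                idf = math.log((n + 1) / (df[q] + 1)) + 1  # smoothed IDF
                score += tf * idf
        scores.append((score, i))
    return [i for _, i in sorted(scores, reverse=True)]

docs = [
    ("Agent loops", "how an agent calls tools in a loop"),
    ("Installing", "pip install instructions and setup"),
]
```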
## Prerequisites
The usage methods below require [uv](https://github.com/astral-sh/uv) to be installed on your system. You can install it by following the [official installation instructions](https://github.com/astral-sh/uv#installation).
## Installation
You can use the Strands Agents MCP server with
[40+ applications that support MCP servers](https://modelcontextprotocol.io/clients),
including Amazon Q Developer CLI, Anthropic Claude Code, Cline, and Cursor.
Get started quickly with one-click installation buttons for popular MCP clients. Click the buttons below to install servers directly in your IDE:
[](https://kiro.dev/launch/mcp/add?name=strands-agents&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22strands-agents-mcp-server%22%5D%2C%22disabled%22%3Afalse%2C%22autoApprove%22%3A%5B%22search_docs%22%2C%22fetch_doc%22%5D%7D)
[](https://cursor.com/en-US/install-mcp?name=strands-agents&config=eyJjb21tYW5kIjoidXZ4IHN0cmFuZHMtYWdlbnRzLW1jcC1zZXJ2ZXIifQ%3D%3D)
[](https://vscode.dev/redirect?url=vscode:mcp/install?%7B%22name%22%3A%22strands-agents%22%2C%22type%22%3A%22stdio%22%2C%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22strands-agents-mcp-server%22%5D%7D)
### Kiro example
See the [Kiro documentation](https://kiro.dev/docs/mcp/configuration/)
for instructions on managing MCP configuration.
In `~/.kiro/settings/mcp.json`:
```json
{
"mcpServers": {
"strands-agents": {
"command": "uvx",
"args": ["strands-agents-mcp-server"],
"env": {
"FASTMCP_LOG_LEVEL": "INFO"
},
"disabled": false,
"autoApprove": ["search_docs", "fetch_doc"]
}
}
}
```
### Q Developer CLI example
See the [Q Developer CLI documentation](https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/command-line-mcp-configuration.html)
for instructions on managing MCP configuration.
In `~/.aws/amazonq/mcp.json`:
```json
{
"mcpServers": {
"strands-agents": {
"command": "uvx",
"args": ["strands-agents-mcp-server"],
"env": {
"FASTMCP_LOG_LEVEL": "INFO"
},
"disabled": false,
"autoApprove": ["search_docs", "fetch_doc"]
}
}
}
```
### Claude Code example
See the [Claude Code documentation](https://docs.anthropic.com/en/docs/claude-code/tutorials#configure-mcp-servers)
for instructions on managing MCP servers.
```bash
claude mcp add strands uvx strands-agents-mcp-server
```
### Cline example
See the [Cline documentation](https://docs.cline.bot/mcp-servers/configuring-mcp-servers#editing-mcp-settings-files)
for instructions on managing MCP configuration.
Provide Cline with the following information:
```
I want to add the MCP server for Strands Agents.
Here's the GitHub link: @https://github.com/strands-agents/mcp-server
Can you add it?
```
### Cursor example
See the [Cursor documentation](https://docs.cursor.com/context/model-context-protocol#configuring-mcp-servers)
for instructions on managing MCP configuration.
In `~/.cursor/mcp.json`:
```json
{
"mcpServers": {
"strands-agents": {
"command": "uvx",
"args": ["strands-agents-mcp-server"],
"env": {
"FASTMCP_LOG_LEVEL": "INFO"
},
"disabled": false,
"autoApprove": ["search_docs", "fetch_doc"]
}
}
}
```
### VS Code example
See the [VS Code documentation](https://code.visualstudio.com/docs/copilot/customization/mcp-servers)
for instructions on managing MCP configuration.
In your `mcp.json` file:
```json
{
"servers": {
"strands-agents": {
"command": "uvx",
"args": ["strands-agents-mcp-server"]
}
}
}
```
## Quick Testing
You can quickly test the MCP server using the MCP Inspector:
```bash
# For published package
npx @modelcontextprotocol/inspector uvx strands-agents-mcp-server
# For local development
npx @modelcontextprotocol/inspector python -m strands_mcp_server
```
Note: This requires [npx](https://docs.npmjs.com/cli/v11/commands/npx) to be installed on your system. It comes bundled with [Node.js](https://nodejs.org/).
The Inspector is also useful for troubleshooting MCP server issues as it provides detailed connection and protocol information. For an in-depth guide, have a look at the [MCP Inspector documentation](https://modelcontextprotocol.io/docs/tools/inspector).
## Getting Started
1. **Install prerequisites**:
- Install [uv](https://github.com/astral-sh/uv) following the [official installation instructions](https://github.com/astral-sh/uv#installation)
- Make sure you have [Node.js](https://nodejs.org/) installed for npx commands
2. **Configure your MCP client**:
- Choose your preferred MCP client from the installation examples above
- Add the Strands Agents MCP server configuration to your client
3. **Test the connection**:
```bash
# For published package
npx @modelcontextprotocol/inspector uvx strands-agents-mcp-server
# For local development
npx @modelcontextprotocol/inspector python -m strands_mcp_server
```
4. **Start using the documentation tools**:
- Use `search_docs` to find relevant documentation with intelligent ranking
- Use `fetch_doc` to retrieve full content from specific URLs
- The server automatically indexes curated content from llms.txt files
## Server Development
```bash
git clone https://github.com/strands-agents/mcp-server.git
cd mcp-server
python3 -m venv venv
source venv/bin/activate
pip3 install -e .
npx @modelcontextprotocol/inspector python -m strands_mcp_server
```
## Contributing ❤️
We welcome contributions! See our [Contributing Guide](CONTRIBUTING.md) for details on:
- Reporting bugs & features
- Development setup
- Contributing via Pull Requests
- Code of Conduct
- Reporting of security issues
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
## Security
See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
| text/markdown | null | AWS <opensource@amazon.com> | null | null | Apache-2.0 | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Langu... | [] | null | null | >=3.10 | [] | [] | [] | [
"mcp>=1.1.3",
"pydantic>=2.0.0",
"commitizen>=4.4.0; extra == \"dev\"",
"hatch>=1.0.0; extra == \"dev\"",
"pre-commit>=2.20.0; extra == \"dev\"",
"ruff>=0.4.4; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/strands-agents/mcp-server",
"Bug Tracker, https://github.com/strands-agents/mcp-server",
"Documentation, https://strandsagents.com"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T17:52:31.594679 | strands_agents_mcp_server-0.2.6.tar.gz | 16,452 | 3e/c3/3c8246bb3950902ad450d80841b97cd91a351b8e153f81beab3f826f679e/strands_agents_mcp_server-0.2.6.tar.gz | source | sdist | null | false | 9b1e572ee46d14cc7ea8976eb15752ad | c064ee2f41e7927561da3564da13eaabaa7a83d2bfbe35ea5f5c324b7f259d08 | 3ec33c8246bb3950902ad450d80841b97cd91a351b8e153f81beab3f826f679e | null | [
"LICENSE",
"NOTICE"
] | 9,514 |
2.4 | pwa-launcher | 1.2.3 | Cross-platform PWA launcher using Chromium | # py-pwa-launcher
Launch Progressive Web Apps from Python
A Python library for launching Progressive Web Apps (PWAs) using Chromium-based browsers. Automatically detects installed Chromium browsers and launches PWAs in app mode with all necessary flags.
PyPI stats:


## Features
- 🚀 **Launch PWAs with a single function call**
- 🔍 **Auto-detect** system Chromium-based browsers (Chrome, Edge, Brave, Vivaldi, Opera, Arc)
- ⚙️ **PWA-optimized flags** for installation and features
- 🔒 **Custom profiles** for isolated PWA data
- ✅ **Check PWA support** before launching
- 🧪 **Fully tested** with comprehensive test suite
- 🌍 **Cross-platform** support (Windows, macOS, Linux)
## Installation
```bash
pip install pwa-launcher
```
**Requirements**: You need a Chromium-based browser installed on your system:
- Google Chrome
- Microsoft Edge
- Brave
- Vivaldi
- Opera
- Arc (macOS)
- Chromium
The library will automatically detect any of these browsers.
## Quick Start
### Launch a PWA
```python
from pwa_launcher import open_pwa
# Launch a PWA - that's it!
open_pwa("https://weatherlite.app")
```
### Check PWA Support
```python
from pwa_launcher import check_pwa_support
# Check if a URL supports PWA
result = check_pwa_support("https://weatherlite.app")
if result.is_pwa_supported:
print(f"✓ {result.url} is PWA-ready!")
print(f" Manifest: {result.manifest_url}")
print(f" Service Worker: {result.service_worker_url}")
else:
print(f"✗ Not a PWA")
for error in result.errors:
print(f" - {error}")
```
### Launch with Custom Options
```python
from pwa_launcher import open_pwa
from pathlib import Path
# Launch with custom profile and flags
process = open_pwa(
"https://excalidraw.com",
user_data_dir=Path("./my_pwa_profile"),
additional_flags=["--start-maximized"]
)
print(f"Launched PWA (PID: {process.pid})")
```
### Keep Process Alive
By default, each PWA runs in an **isolated profile** to keep the process alive:
```python
from pwa_launcher import open_pwa
# Auto-generates isolated profile - process stays alive!
process = open_pwa("https://example.com")
print(f"PID: {process.pid}") # Process won't exit immediately
# To disable auto-profile (may cause process to exit if Chrome is already running):
process = open_pwa("https://example.com", auto_profile=False)
```
**Why this matters:** When Chrome reuses an existing profile, it hands off to an already-running Chrome instance and the new process exits immediately. With `auto_profile=True` (default), each PWA gets its own isolated profile, keeping the process running.
## API Reference
### `open_pwa(url, **kwargs)`
Launch a PWA using Chromium browser.
**Parameters:**
- `url` (str): URL to open as PWA (required)
- `chromium_path` (Path, optional): Path to Chromium executable (auto-detected if None)
- `user_data_dir` (Path, optional): Custom browser profile directory
- `additional_flags` (List[str], optional): Extra Chromium flags
- `wait` (bool, default=False): Wait for browser to exit
- `auto_profile` (bool, default=True): Auto-generate isolated profile (keeps process alive)
**Returns:** `subprocess.Popen` - Browser process
**Raises:**
- `ChromiumNotFoundError`: No browser found
- `ValueError`: Invalid URL
**Note:** When `auto_profile=True`, each PWA gets its own isolated profile based on the URL hostname. This prevents Chrome from handing off to an existing instance and keeps your process alive.
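A hostname-derived profile path can be sketched as follows. The helper `profile_dir_for` and the base directory are hypothetical names for illustration, not part of the library's API:

```python
# Sketch: deriving an isolated profile directory from the URL hostname,
# so each PWA gets its own profile and Chrome doesn't hand off to a
# running instance. Names here are assumptions, not pwa-launcher's API.
from pathlib import Path
from urllib.parse import urlparse

def profile_dir_for(url, base=Path.home() / ".pwa_profiles"):
    host = urlparse(url).hostname or "default"
    return base / host

profile_dir_for("https://excalidraw.com")  # .../.pwa_profiles/excalidraw.com
```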
### `check_pwa_support(url, timeout=10)`
Check if a URL supports PWA features.
**Parameters:**
- `url` (str): URL to check
- `timeout` (int): Request timeout in seconds
**Returns:** `PWACheckResult` with:
- `is_pwa_supported` (bool): Whether PWA is supported
- `has_manifest` (bool): Has web manifest
- `manifest_url` (str): URL of manifest file
- `manifest_data` (dict): Parsed manifest data
- `has_service_worker` (bool): Has service worker
- `service_worker_url` (str): URL of service worker
- `has_https` (bool): Uses HTTPS
- `errors` (list): List of error messages
- `warnings` (list): List of warnings
### `get_chromium_install()`
Get a Chromium browser executable path from system-installed browsers.
**Returns:** `Path` - Path to Chromium executable
**Raises:** `ChromiumNotFoundError` - No browser found
### `get_chromium_installs()`
Get all available Chromium browser executable paths from system.
**Returns:** `List[Path]` - List of paths to Chromium executables
## Examples
See the `examples/` directory for more examples:
- `examples/check_pwa.py` - Check PWA support
## Command Line Usage
### Launch a PWA
```bash
python -m pwa_launcher.open_pwa https://weatherlite.app
```
### Check PWA Support
```bash
python -m pwa_launcher.pwa_support https://weatherlite.app
```
## How It Works
1. **Detect Browser**: Searches for installed Chromium-based browsers on your system
2. **Build Command**: Creates command with `--app={url}` and PWA flags
3. **Launch**: Starts browser in app mode with PWA features enabled
### PWA Flags Included
- `--app={url}`: Launch in app mode (no browser UI)
- `--enable-features=WebAppInstallation`: Enable PWA installation
- `--enable-features=DesktopPWAsTabStrip`: Enable tab strip in PWAs
- `--enable-features=FileSystemAccessAPI`: Enable file system access
- `--enable-features=NotificationTriggers`: Enable notifications
- `--no-default-browser-check`: Skip default browser check
- `--no-first-run`: Skip first run experience
- `--disable-infobars`: Remove automation banners
- **Linux only**: `--no-sandbox`, `--disable-gpu`, `--disable-dev-shm-usage`
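Putting the steps above together, the launch command assembly can be sketched like this (a simplified illustration of the documented flags, not the package's exact source):

```python
# Sketch: building a Chromium app-mode command line from the flags
# listed above. Simplified; the real library adds more feature flags.
import sys

def build_command(chromium, url, extra_flags=()):
    cmd = [
        str(chromium),
        f"--app={url}",              # app mode, no browser UI
        "--no-first-run",
        "--no-default-browser-check",
        "--disable-infobars",
    ]
    if sys.platform.startswith("linux"):
        cmd += ["--no-sandbox", "--disable-gpu", "--disable-dev-shm-usage"]
    return cmd + list(extra_flags)
```

The resulting list is what gets handed to `subprocess.Popen`.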
## Development
### Setup
```bash
# Clone the repository
git clone https://github.com/yourusername/py-pwa-launcher.git
cd py-pwa-launcher
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements-dev.txt
```
### Run Tests
```bash
# Run all tests
pytest
# Run with coverage
pytest --cov=pwa_launcher
# Run specific test file
pytest tests/test_open_pwa.py -v
```
## License
MIT License - see LICENSE file for details.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
| text/markdown | null | Michael Dennis <michael@dipduo.com> | null | null | MIT | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-mock>=3.10.0; extra == \"dev\"",
"autope8>=2.0.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T17:51:42.907706 | pwa_launcher-1.2.3.tar.gz | 17,925 | 46/61/ba60605e1b87629d038c709c091524fb01f4d57e2b9ce3390352f45eb46d/pwa_launcher-1.2.3.tar.gz | source | sdist | null | false | 90609af43f65a7fb33fc70acf577721f | e79df40bbf7868218047d60d03f89a7ba958cfe5ff8b24c63842696c8e6a3019 | 4661ba60605e1b87629d038c709c091524fb01f4d57e2b9ce3390352f45eb46d | null | [
"LICENSE"
] | 540 |
2.4 | calced | 0.1.0 | A notepad calculator that evaluates expressions in text files | # calced
A notepad calculator that evaluates math expressions in plain text files. Available as a **CLI tool** and a **web app** — both use the same syntax and are validated against the same test suite.
## Web
Open the web app in a browser — no install required.
## CLI
```
calced <file> # evaluate and update file in place
calced -s <file> # print result to stdout (don't modify file)
calced -w <file> # watch for changes and auto-update
calced -w -s <file> # watch and print (clears screen on change)
```
### Installation
Requires Python 3.9+.
```sh
# With uv (recommended)
uv tool install ./python
# Or just run the script directly
python python/calced.py <file>
```
## How it works
Write math anywhere in a plain text file. Results are appended inline as `# => result` comments. Non-math lines are left untouched.
<!-- [[[cog
import subprocess, tempfile, os
def run_calced(text):
text = text.lstrip('\n')
with tempfile.NamedTemporaryFile(mode='w', suffix='.txt', delete=False) as f:
f.write(text)
fname = f.name
env = {**os.environ, 'NO_COLOR': '1'}
result = subprocess.run(
['python', 'python/calced.py', '-s', fname],
capture_output=True, text=True, env=env
)
os.unlink(fname)
cog.out('```\n' + result.stdout + '```\n')
run_calced("""
rent 1500
groceries 200 + 150
utilities 80 + 45 + 30
total
""")
]]] -->
```
rent 1500 # => 1_500
groceries 200 + 150 # => 350
utilities 80 + 45 + 30 # => 155
total # => 2_005
```
<!-- [[[end]]] -->
Results are aligned and updated in place each time you run the CLI (or automatically in watch mode), or live as you type in the web app.
## Features
### Basic arithmetic
<!-- [[[cog
run_calced("""
2 + 3
10 * (4 + 6)
2 ^ 10
17 % 5
""")
]]] -->
```
2 + 3 # => 5
10 * (4 + 6) # => 100
2 ^ 10 # => 1_024
17 % 5 # => 2
```
<!-- [[[end]]] -->
### Variables
<!-- [[[cog
run_calced("""
income = 5000
tax_rate = 22%
tax = income * tax_rate
after_tax = income - tax
""")
]]] -->
```
income = 5000 # => 5_000
tax_rate = 22% # => 0.22
tax = income * tax_rate # => 1_100
after_tax = income - tax # => 3_900
```
<!-- [[[end]]] -->
### Percentages
<!-- [[[cog
run_calced("""
50% of 300
200 + 15%
200 - 10%
""")
]]] -->
```
50% of 300 # => 150
200 + 15% # => 230
200 - 10% # => 180
```
<!-- [[[end]]] -->
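The three percentage forms follow simple arithmetic rules. A sketch of those rules in plain Python (hypothetical helper names, not calced's internals):

```python
# How calced's percentage expressions evaluate, as plain arithmetic.
# Function names are illustrative only.
def percent_of(pct, base):   # "50% of 300"
    return base * pct / 100

def add_percent(base, pct):  # "200 + 15%" increases base by pct
    return base * (1 + pct / 100)

def sub_percent(base, pct):  # "200 - 10%" decreases base by pct
    return base * (1 - pct / 100)
```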
### SI prefixes
<!-- [[[cog
run_calced("""
1k
1M
1.5G
500n * 2
""")
]]] -->
```
1k # => 1_000
1M # => 1_000_000
1.5G # => 1_500_000_000
500n * 2 # => 0.000001
```
<!-- [[[end]]] -->
Supported: `k`/`K` (kilo), `M` (mega), `G` (giga), `T` (tera), `P` (peta), `E` (exa), `m` (milli), `u`/`μ` (micro), `n` (nano), `p` (pico), `f` (femto), and more.
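Prefix expansion amounts to a multiplier lookup. A minimal sketch (partial table, illustrative names):

```python
# SI prefix expansion as a multiplier table (subset of the supported set).
SI_PREFIXES = {
    "k": 1e3, "K": 1e3, "M": 1e6, "G": 1e9, "T": 1e12,
    "m": 1e-3, "u": 1e-6, "n": 1e-9, "p": 1e-12,
}

def expand(value, prefix):
    return value * SI_PREFIXES[prefix]

expand(1.5, "G")  # 1_500_000_000.0
```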
### Unit conversions
<!-- [[[cog
run_calced("""
5 km in miles
100 C in F
1 gib in mib
60 min in hr
1 gal in l
""")
]]] -->
```
5 km in miles # => 3.11
100 C in F # => 212
1 gib in mib # => 1_024
60 min in hr # => 1
1 gal in l # => 3.79
```
<!-- [[[end]]] -->
Supported dimensions: length, mass, temperature, data, time, volume. Use `in` or `to`.
### Functions
<!-- [[[cog
run_calced("""
sqrt(16)
round(3.14159, 2)
min(5, 2, 8)
max(1, 9, 3)
log10(1000)
sin(0)
""")
]]] -->
```
sqrt(16) # => 4
round(3.14159, 2) # => 3.14
min(5, 2, 8) # => 2
max(1, 9, 3) # => 9
log10(1000) # => 3
sin(0) # => 0
```
<!-- [[[end]]] -->
Available: `sqrt`, `abs`, `floor`, `ceil`, `round`, `log`, `log2`, `log10`, `sin`, `cos`, `tan`, `asin`, `acos`, `atan`, `exp`, `min`, `max`
### Constants
<!-- [[[cog
run_calced("""
pi * 2
e ^ 1
""")
]]] -->
```
pi * 2 # => 6.28
e ^ 1 # => 2.72
```
<!-- [[[end]]] -->
### Totals
The `total` (or `sum`) keyword sums all numeric results since the last `#` heading or start of file.
<!-- [[[cog
run_calced("""
rent 1500
groceries 350
utilities 155
total
""")
]]] -->
```
rent 1500 # => 1_500
groceries 350 # => 350
utilities 155 # => 155
total # => 2_005
```
<!-- [[[end]]] -->
Blank lines are ignored in the total; headings reset it.
### Number formats
Numbers can be written with commas or underscores as separators (`1,000` or `1_000`), in hex/binary/octal (`0xFF`, `0b1010`, `0o77`), or in scientific notation (`1.5e3`).
### Trailing annotations
Parenthetical notes after an expression are ignored:
<!-- [[[cog
run_calced("""
celo_price = 0.08 (see http://coinmarketcap.com)
""")
]]] -->
```
celo_price = 0.08 (see http://coinmarketcap.com) # => 0.08
```
<!-- [[[end]]] -->
## Format directives
Control output formatting with `@format` and `@separator` directives. These apply to all subsequent lines until changed.
<!-- [[[cog
run_calced("""
1000000
@format = fixed(2)
1000000
@format = scientific
1000000
@separator = comma
@format = minSig(3)
1000000
""")
]]] -->
```
1000000 # => 1_000_000
@format = fixed(2)
1000000 # => 1_000_000.00
@format = scientific
1000000 # => 1.00e+06
@separator = comma
@format = minSig(3)
1000000 # => 1,000,000
```
<!-- [[[end]]] -->
| text/markdown | null | Karl Bartel <karl@karl.berlin> | null | null | null | calculator, cli, math, notepad | [
"Environment :: Console",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Utilities"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://karlb.github.io/calced/",
"Repository, https://github.com/karlb/calced",
"Issues, https://github.com/karlb/calced/issues"
] | uv/0.9.26 {"installer":{"name":"uv","version":"0.9.26","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"13","id":"trixie","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T17:50:42.324874 | calced-0.1.0.tar.gz | 10,637 | 13/e2/6aa37807142223ded952c69f9bbd18a07aec1a617b357b250ccbc27b417a/calced-0.1.0.tar.gz | source | sdist | null | false | b0cf8abd63632da248da1b3b698b250a | aaf0f509703621a72a9c9b8d63b6fd1926b8a3e00fa2af26223076ec46a3d985 | 13e26aa37807142223ded952c69f9bbd18a07aec1a617b357b250ccbc27b417a | MIT | [
"LICENSE"
] | 246 |
2.4 | small-py | 0.2.4 | Utility library for various uses | # Smallpy Utility Functions
A collection of reusable Python utility functions and classes for:
- console output control
- JSON "memory" file management
- Excel exporting
- basic image recognition and screen navigation
- progression tracking with time estimation
This module is intended to be imported and reused across automation and data-processing scripts.
---
## Features
- Enable ANSI color / cursor control on Windows terminals
- Clear previously printed terminal lines
- Write Pandas DataFrames to Excel
- Persist structured data to JSON "memory"
- Check for existing entries in "memory"
- Wait for UI images to appear or disappear (via PyAutoGUI)
- Click UI elements based on image matching
- Track progress of iterations with dynamic formatting options
- Get the most recently created file in a directory
## Installation
```bash
pip install small-py
```
**Dependencies:**
```bash
pip install pandas pyautogui
```
---
## Usage
Import the functions or classes you need:
```python
from utils import (
enable_virtual_terminal,
clear_terminal,
write_to_excel,
add_to_memory,
is_in_memory,
wait_for_image,
find_and_click,
Counter,
get_most_recent_file,
)
```
## Console Utilities
### Enable ANSI / Virtual Terminal Support (Windows)
```python
enable_virtual_terminal()
```
### Clear Previously Printed Lines
```python
clear_terminal(lines=2)
```
## Excel Output
### Write a DataFrame to Excel
```python
import pandas as pd
df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
write_to_excel(df, "output.xlsx")
```
## Persistent Memory (JSON)
Stored in `./memory/memory.json` by default.
### Memory Structure
```python
{"key":"entry"}
```
```json
{
"key1":{
"id":"entry1",
"optional_field_1":"value",
...
"optional_field_n":"value"
},
...
}
```
- "Key" identifies a shared entry structure for a particular purpose
- All entries under the same key must share the same structure
- A "memory" file can have multiple "keys"
- Entries **must** contain an "id" field
- New entries replace existing entries with the same id
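The replace-by-id rule above can be sketched with plain `json` logic. This is a hypothetical re-implementation, not the package's actual code; in particular, it models the entries under each key as a list, since the exact on-disk layout is only loosely specified:

```python
import json
import os

def add_to_memory(memory_key, new_entry, path="memory/memory.json"):
    """Sketch of the documented semantics: entries under a key are unique by "id"."""
    memory = {}
    if os.path.exists(path):
        with open(path) as f:
            memory = json.load(f)
    entries = memory.setdefault(memory_key, [])
    # a new entry replaces any existing entry with the same id
    entries[:] = [e for e in entries if e.get("id") != new_entry["id"]]
    entries.append(new_entry)
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    with open(path, "w") as f:
        json.dump(memory, f, indent=2)
```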
### Add or Update an Entry
```python
entry = {"id": 1, "status": "done"}
add_to_memory(
memory_key = "tasks",
new_entry = entry
)
```
### Check if an Entry Exists
```python
exists = is_in_memory(
memory_key="tasks",
new_entry={"id": 1, "status": "done"},
comparison_field="status"
)
```
## Screen Automation (PyAutoGUI)
### Wait for an Image
```python
coord = wait_for_image("button.png", timeout=10)
```
- Accepts a single image path or a list of paths
- Can optionally wait for an image to **disappear**:
```python
wait_for_image("loading.png", invert_search=True)
```
### Find and Click an Image
Clicks on the center of the found image
```python
find_and_click("submit.png")
```
- Optionally offset the click location, measured in pixels from the center of the reference image
- offset=(x_offset,y_offset)
- increasing x offsets to right, increasing y offsets down
```python
find_and_click("submit.png",offset=(5,5))
```
## Progress Tracking
### Counter Class
Tracks progress and estimates remaining time.
```python
counter = Counter(
count=10
)
for _ in range(10):
# do work
counter.display()
```
- Output of `counter.display()` will default to `n/N` where `n` is the iteration number and `N` is the total count
- A custom format can be passed upon initialization
```python
counter = Counter(
count=10,
format = "Iteration %n/%N"
)
for _ in range(10):
# do work
counter.display()
```
- Or by changing the `formatter` attribute to utilize dynamic formatting with f-strings
```python
counter = Counter(
count=10
)
for item in ['foo','bar','baz','qux']:
# do work
counter.formatter = f"Iteration %n/%N - {item}"
counter.display()
```
#### Format tokens:
- `%n` — iteration number
- `%N` — total count
- `%T` — estimated completion time (e.g. "02:04 PM")
- `%t` — raw seconds remaining as float
- `%f` — time remaining split by unit (e.g. "2h 4m 8s")
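The token substitution can be illustrated with a small standalone function. This is a hypothetical sketch of the behavior described above, not `small-py`'s actual implementation:

```python
import datetime

def render_format(fmt, n, N, seconds_left):
    """Substitute the documented format tokens into a format string."""
    eta = datetime.datetime.now() + datetime.timedelta(seconds=seconds_left)
    h, rem = divmod(int(seconds_left), 3600)
    m, s = divmod(rem, 60)
    # replacements are case-sensitive, so %n and %N do not collide
    return (fmt.replace("%n", str(n))
               .replace("%N", str(N))
               .replace("%T", eta.strftime("%I:%M %p"))
               .replace("%t", str(seconds_left))
               .replace("%f", f"{h}h {m}m {s}s"))
```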
## File Utilities
### Get Most Recent File in a Directory
```python
latest = get_most_recent_file("downloads")
```
Returns the path to the most recently created file, or "No files found".
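Under the hood, such a helper can be approximated with the standard library alone (a sketch of the documented behavior, not the package's actual code):

```python
import os

def most_recent_file(directory):
    """Return the path of the newest file in `directory`, or "No files found"."""
    paths = [os.path.join(directory, name) for name in os.listdir(directory)]
    files = [p for p in paths if os.path.isfile(p)]
    if not files:
        return "No files found"
    # getctime: creation time on Windows, metadata-change time on Unix
    return max(files, key=os.path.getctime)
```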
## Notes & Limitations
- Screen automation relies on image matching and screen resolution
- JSON memory assumes consistent dictionary structure per key
- Designed for scripting and automation, not as a full framework
## License
MIT License
| text/markdown | null | Nathan Smalley <nathansmalley2@gmail.com> | null | null | MIT | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"pandas",
"pyautogui"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T17:50:26.630555 | small_py-0.2.4.tar.gz | 10,646 | 29/c8/e6768eba34493154d4eb4d06975f798847f4f7af0ef600547bc7a8dab518/small_py-0.2.4.tar.gz | source | sdist | null | false | 5f749f5dce1dfc96894650be0baa4dfc | 5e7fe5890b279bd20f1a44ec53742cbb39f0b795c6324e4fcb644884ef892b61 | 29c8e6768eba34493154d4eb4d06975f798847f4f7af0ef600547bc7a8dab518 | null | [
"LICENSE"
] | 218 |
2.4 | onecode | 1.2.1 | Python skeleton and library for OneCode projects | 







[](https://codeclimate.com/github/deeplime-io/onecode/maintainability)
[](https://codecov.io/gh/deeplime-io/onecode)
---
OneCode, your gateway to Python application deployment on the Cloud!
Pssst, if you're not into rolling out your App but simply using them, check out the [OneCode Cloud user doc](https://deeplime-io.github.io/onecode/1.0.0/user_doc).
* [OneCode in One Minute](#onecode-in-one-minute)
* [Deploy on OneCode Cloud](#deploy-on-onecode-cloud)
* [Getting Started with the OneCode API](#getting-started-with-the-onecode-api)
* [Upgrading from 0.x](#upgrading-from-0x)
* [Work in Progress](#work-in-progress)
* [Getting Help](#getting-help)
## :snake: OneCode in One Minute
### Install OneCode
```bash
pip install onecode
```
### Create your first OneCode project
```bash
onecode-create
# then follow the prompts
? Enter the path where to create OneCode project: ~/
? Enter your OneCode project name: HelloWorld
⠋ Creating new OneCode project
✅ Created HelloWorld OneCode project
```
### Add your first OneCode Element
Edit the file `HelloWorld/flows/helloworld.py` such as
```python
from onecode import Logger, text_input
def run():
Logger.info(f"Hello {text_input('your name', 'OneCoder')}!")
```
### Running your OneCode project
```bash
cd HelloWorld
python main.py
# You should see the following printed
[INFO] helloworld - |OneCode|.helloworld.py:5 - Hello OneCoder!
```
By default, the OneCode text input is `OneCoder`, but it can now take any other value without changing the code.
:tada: Congratulations, you are now a OneCoder! :tada:
## :volcano: Deploy on OneCode Cloud
The following steps will show you how to get set up for the first time:
1. Ensure you install at least `onecode >= 1.0.0` and have a [GitHub](https://github.com) account
* If you have an app with a previous `onecode` version, [upgrade from 0.x](#upgrading-from-0x).
* [Create](#onecode-in-one-minute) your OneCode App (or use an existing one) and [push it to your GitHub account](https://docs.github.com/en/migrations/importing-source-code/using-the-command-line-to-import-source-code/adding-locally-hosted-code-to-github).
2. Request a beta-tester access [here](https://tally.so/r/mVJbWN).
3. Once you receive your confirmation email, log in on [onecode.rocks](https://www.onecode.rocks/login).
4. Register your first app
* From the dashboard, navigate to **Apps** in the top menubar.

* Click on **Register New App**.

* On your first visit, you'll need to **Link GitHub Account** to your OneCode account.

* As you are redirected to GitHub, login to your GitHub account.

* **Authorize OneCode**.

* Upon authorization, you will be redirected back to OneCode with your GitHub identity.
You now need to decide which repositories OneCode may access in order to build your app by
clicking on **GitHub App**.

* Choose which repositories should be accessible by OneCode.
Note that you can change these permissions at anytime.

* Select the repository and the branch corresponding to the OneCode App you want to deploy.
Choose if needed a different image and Python version than the default one.

5. The App will then appear in your personal Apps Workspace and be automatically built.
Each new commit that you push to the registered branch will automatically trigger a new build.
:tada: :tada: :tada: Congratulations, you are now a Cloud OneCoder! :tada: :tada: :tada:
## :rocket: Getting Started with the OneCode API
OneCode relies on the following principles:
* **no-disruption**: OneCode doesn't force you to change the way you code. Whatever your code structure and
Python file hierarchy, OneCode integrates with it seamlessly.
* **controllable input parameters**: simply replace your hard-coded parameters with OneCode functions
(called **Elements**) so that their value can change without having to change the code. One Code, many ways to run!
* **automated interface**: once you push to the cloud, the interface is automatically generated from the OneCode
Elements.
* **easy deployment**: no need to change the code between your local machine and the cloud. Simply push your code
as-is on your synchronized GitHub account and your App (environment and UI!) will build automatically!
The most important parts of the API are Input and Output Elements. They can be inlined within your code
or not; that's up to you (no-disruption!). See the examples below:
* use [Input Elements](https://deeplime-io.github.io/onecode/1.0.0/reference/elements/element_list/#input-elements) whenever you need to expose a parameter
with a specific widget. For example:
```python
# instead of: df = pd.read_csv('test.csv')
df = csv_reader('your df', 'test.csv')
# instead of: for i in range(5):
for i in range(slider('N', 5, min=0, max=10)): # inlined
# do stuff
# instead of: choice = 'cat'
choice = dropdown('your choice', 'cat', options=['dog', 'cat', 'fish']) # not inlined
Logger.info(f'Your choice is {choice}')
```
* use [Output Elements](https://deeplime-io.github.io/onecode/1.0.0/reference/elements/element_list/#output-elements) whenever an output should be returned. For example:
```python
# instead of: plt.savefig('stuff.png')
plt.savefig(file_output('stuff', 'stuff.png')) # inlined
# instead of: filepath = 'test.txt'
filepath = file_output('test', 'test.txt') # not inlined
with open(filepath, 'w') as f:
# do stuff
```
Check out the full API documentation [here](https://deeplime-io.github.io/onecode/1.0.0/reference/elements/input_elements_api)!
## :arrow_up: Upgrading from 0.x
* Ensure there is `requirements.txt` file at the root of your App and that it contains at least `onecode>=1,<2`.
* Change all Output Elements (e.g. `image_output()`, `text_output()`, etc.) to simply `file_output()`.
* Remove any `section_header()` element.
* Check out the [work in progress section](#work-in-progress) in case you were using advanced features.
## :construction: Work in Progress
As `onecode` is still transitioning to OneCode Cloud, early versions of the OneCode Cloud don't yet
fully support the following features:
* **Multi-steps**: adding more than one flow to your App will eventually be supported. In the meantime,
either split your app (one app per step) or merge all steps under a single one
(you may directly update the `.onecode.json` file or create a new app and move the code to it).
* **Folder Inputs**: as the cloud doesn't really have directory structures, it needs some special work.
In the meantime, replace with multiple selection `file_input` instead.
* **Custom Elements** (in custom plugin or `onecode_ext`): extra security precautions must be taken
to allow custom UI on the Cloud. It has therefore been disabled for now. Replace them with regular elements until the Cloud is ready for them.
* **Dynamic `options`**: dynamic expressions in the `options` of the `dropdown` element are not fully
supported yet. You can still use them; in that case, the element will ask the user to fill out values
as regular text input (e.g. CSV column names, etc.).
* **Dynamic `optional`**: `optional` as `True/False` (static) works as expected; however, dynamic
expressions will be ignored for now. As a consequence, the `hide_when_disabled` attribute is obsolete
until dynamic `optional` is supported again.
* **Attribute `count`**: we have gone back and forth on bringing this one to the Cloud. In the meantime, switch back to non-dynamic elements, e.g. multiple dropdowns or a text input collecting a list of values.
* **Running `onecode-start`**: getting a local UI is in the works; it's a pretty big feature, so thanks
for your patience on that one.
## :wave: Getting Help
If you are a OneCode customer, you may email our support team directly.
Feel free as well to browse the [GitHub Issues](https://github.com/deeplime-io/onecode/issues)
and reach out to the community by posting bug reports, questions and suggestions.
| text/markdown | DeepLime | contact@deeplime.io | null | null | MIT | onecode, share, deploy, cloud | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: Microsoft :: Windows",
"Operating System :: Unix",
"Programming Language :: Python",... | [] | null | null | <3.15,>=3.8 | [] | [] | [] | [
"datatest<1,>=0.11.1; extra == \"developer\"",
"flufl.lock<9,>=7.1.1",
"griffe<1,>=0; extra == \"docs\"",
"inquirerpy<1,>=0.3.3",
"mike<1.2,>=1.1; extra == \"docs\"",
"mkdocs<2.0,>=1.5; extra == \"docs\"",
"mkdocs-material<10.0,>=9.5; extra == \"docs\"",
"mkdocstrings<1,>=0; extra == \"docs\"",
"mkd... | [] | [] | [] | [
"Documentation, https://deeplime-io.github.io/onecode",
"Homepage, https://github.com/deeplime-io/onecode",
"Repository, https://github.com/deeplime-io/onecode"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Pop!_OS","version":"22.04","id":"jammy","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T17:50:24.201948 | onecode-1.2.1.tar.gz | 41,897 | e3/65/2a143d8c201fadc55ed758695e8b25267aafffc276e9dc83214d0389e5f4/onecode-1.2.1.tar.gz | source | sdist | null | false | b7158b429bc71c795c0373e71d9ff318 | 92dc06cee249dea74b759ba11dd5f01f684fd7eda9fffe5f561eaffb84722fa3 | e3652a143d8c201fadc55ed758695e8b25267aafffc276e9dc83214d0389e5f4 | null | [
"LICENSE"
] | 227 |
2.4 | snapy | 1.3.4 | Compressible Finite Volume Solver for Atmospheric Dynamics, Chemistry and Thermodynamics | # Snapy
**Compressible Finite Volume Solver for Atmospheric Dynamics, Chemistry and Thermodynamics**
Snapy is the dynamic core for simulating atmospheric and planetary dynamics using PyTorch tensors and GPU acceleration.
[](https://badge.fury.io/py/snapy)
[](https://opensource.org/licenses/MIT)
## Features
- **GPU-Accelerated**: Built on PyTorch for efficient GPU computation
- **Flexible Interfaces**: Both Python and C++ APIs available
- **Compressible Flow**: Finite volume solver for atmospheric dynamics
- **Multi-platform**: Support for Linux and macOS
- **NetCDF Output**: Standard output format for scientific data
## Installation
### Quick Install (Python Interface)
The easiest way to get started is to install via pip:
```bash
pip install snapy
```
This will install the Python interface with pre-built binaries for Python 3.9-3.13 on Linux (x86_64) and macOS (ARM64).
### Parallel run
```bash
pd-run 6 ./test_exchange.release
```
### List the listening port
```bash
lsof -i:29500
```
### Kill the process listening on the port
```bash
pkill -9 XXXXX
```
**Requirements:**
- Python 3.9 or higher
- PyTorch 2.7.x
- NumPy
- kintera >= 1.3.1
### Build from Source (Advanced)
Building from source is recommended only for advanced users who need to:
- Modify the C++ core
- Use custom PyTorch versions
- Access the C++ interface directly
- Develop new features
**Prerequisites:**
- CMake 3.20+
- C++17 compatible compiler
- PyTorch 2.7.x with C++ libraries
- NetCDF C library
- kintera >= 1.3.1
**Build steps:**
1. Clone the repository:
```bash
git clone https://github.com/chengcli/snapy.git
cd snapy
```
2. Install dependencies:
```bash
pip install numpy kintera torch==2.7.1
```
3. Install NetCDF:
- **Linux (Ubuntu/Debian):**
```bash
sudo apt-get install libnetcdf-dev
```
- **macOS:**
```bash
brew install netcdf
```
4. Install NCCL (if GPU is enabled)
- **Linux (Ubuntu/Debian):**
```bash
sudo apt-get install libnccl2 libnccl-dev
```
- **Linux (CentOS/RHEL):**
```bash
sudo yum install libnccl libnccl-devel libnccl-static
```
5. Configure and build:
```bash
cmake -B build -DCMAKE_BUILD_TYPE=Release -DNETCDF=ON
cmake --build build --parallel 3
```
6. Install the Python package:
```bash
pip install .
```
## Examples
The `examples/` directory contains several working examples:
**Python Examples:**
- `shock.py` - Sod shock tube with internal boundary
- `straka.py` - Straka cold bubble convection test
- `robert.py` - Robert warm bubble convection test
**C++ Examples:**
- `shock.cpp` - Sod shock tube (C++)
- `straka.cpp` - Straka cold bubble (C++)
Run a Python example:
```bash
cd examples
python shock.py
```
Run a C++ example (after building):
```bash
cd build/examples
./shock
```
See `examples/README` for detailed documentation on the code structure and available examples.
## Configuration
Simulations are configured using YAML files that specify:
- Grid dimensions and domain size
- Time integration settings (RK stages, CFL number)
- Boundary conditions
- Output settings (frequency, variables, format)
- Equation of state and thermodynamics
Example configuration files (`.yaml`) are provided alongside the examples.
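The structure of such a file might look like the following. This is a hypothetical sketch only; the actual key names are defined by the example `.yaml` files shipped with snapy:

```yaml
# hypothetical sketch; consult the shipped example .yaml files for the real keys
mesh:
  nx1: 200          # grid cells along x1
  x1min: 0.0
  x1max: 1.0
integration:
  cfl: 0.9          # CFL number
  rk_stages: 3      # Runge-Kutta stages
boundary:
  ix1: reflecting
  ox1: reflecting
output:
  format: netcdf
  frequency: 100    # write every 100 steps
```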
## Development
### Testing
Run tests after building:
```bash
cd build/tests
ctest --output-on-failure
```
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Contact
- **Author**: Cheng Li
- **Email**: chengcli@umich.edu
- **GitHub**: [https://github.com/chengcli/snapy](https://github.com/chengcli/snapy)
| text/markdown | null | Cheng Li <chengcli@umich.edu> | null | null | LICENSE | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Programming Language :: C",
"Programming Language :: C++",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Program... | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"kintera>=1.3.1"
] | [] | [] | [] | [
"Homepage, https://github.com/chengcli/snapy",
"Documentation, https://snapy.readthedocs.io"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T17:50:23.858382 | snapy-1.3.4-cp39-cp39-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl | 40,733,894 | 16/79/74200efaebe9fe6ab3566b5f3a51f9026d39aa003d7f3664b2f90d6af393/snapy-1.3.4-cp39-cp39-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl | cp39 | bdist_wheel | null | false | cfb0f375caa9e30e50fb8c811b5eee91 | 830afacecf7c7bc4de4f4079cebc5646351a38d53b2a2638a15b2b161a789305 | 167974200efaebe9fe6ab3566b5f3a51f9026d39aa003d7f3664b2f90d6af393 | null | [
"LICENSE"
] | 988 |
2.4 | lpcf | 0.1.1 | Learning parametric convex functions | # LPCF
LPCF stands for *learning parametrized convex functions*.
A parametrized convex function, or PCF, depends on a variable and a parameter,
and is convex in the variable for any valid value of the parameter.
LPCF is a framework for fitting a parametrized convex function to some given data
that is compatible with *disciplined convex programming*.
This makes it possible to fit a function directly to observed or simulated data
and then use it in a convex optimization formulation.
The PCF is represented as a simple
neural network whose architecture is designed
to ensure disciplined convexity in the variable, for any valid
parameter value. After fitting this neural network to triplets
of observed (or simulated) values of the function, the variable,
and the parameter, the learned PCF can be exported for use in optimization
frameworks like [CVXPY](https://www.cvxpy.org) or [JAX](https://docs.jax.dev/en/latest/index.html).
LPCF supports learning vector functions that depend on multiple variables and parameters.
An overview of LPCF can be found in our [manuscript](https://stanford.edu/~boyd/papers/lpcf.html).
## Installation
LPCF is available on PyPI, and can be installed with
```
pip install lpcf
```
LPCF has the following dependencies:
- Python >= 3.10, <3.13
- jax-sysid >= 1.0.6
- CVXPY >= 1.6.0
- NumPy >= 1.21.6
## Example
The following code fits a PCF to observed function values `Y`,
variable values `X`, and parameter values `Theta`, and
exports the result to CVXPY.
```python
import cvxpy as cp

from lpcf.pcf import PCF
# observed data
Y = ... # shape (N, d)
X = ... # shape (N, n)
Theta = ... # shape (N, p)
# fit PCF to data
pcf = PCF()
pcf.fit(Y, X, Theta)
# export PCF to CVXPY
x = cp.Variable((n, 1))
theta = cp.Parameter((p, 1))
pcf_cvxpy = pcf.tocvxpy(x=x, theta=theta)
```
The CVXPY expression `pcf_cvxpy`
might appear in the objective or the constraints of a CVXPY problem.
## Settings
### Neural network architecture
The function is approximated as an input-convex *main network* mapping variables to function values.
The weights of the main network are generated by another *parameter network*, whose inputs are the parameters.
When constructing the `PCF` object, we allow for a number of
customizations to the neural network architecture:
| Argument | Description | Type | Default |
| ---------------- | ---------------------------------------------------------------------- | ---------- | --------------- |
| `widths`         | widths of the main network's hidden layers                             | array-like | `[2*((n+d)//2), 2*((n+d)//2)]` |
| `widths_psi`     | widths of the parameter network's hidden layers                        | array-like | `[2*((p+m)//2), 2*((p+m)//2)]` |
| `activation` | activation function used in the main network | str | `'relu'` |
| `activation_psi` | activation function used in the parameter network | str | `'relu'` |
| `nonneg` | Force the PCF to be nonnegative | Bool | `False` |
| `increasing` | Force the PCF to be increasing | Bool | `False` |
| `decreasing` | Force the PCF to be decreasing | Bool | `False` |
| `quadratic` | Include a convex quadratic term in the PCF | Bool | `False` |
| `quadratic_r` | Include a quadratic term with low-rank + diagonal structure | Bool | `False` |
| `classification` | Use the PCF to solve a classification problem | Bool | `False` |
Note that `d` is the number of components of the function, `n` the number of variables, `p` the
number of parameters, and `m` the number of outputs of the parameter network, i.e., the number of weights
of the main network.
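The input-convexity of the main network comes from a standard construction: nonnegative weights on the previous hidden layer combined with convex, nondecreasing activations. A minimal NumPy sketch of the idea (a simplification for illustration, not the LPCF implementation):

```python
import numpy as np

def icnn(x, Ws, Us, bs):
    """Input-convex network sketch: z_{k+1} = relu(|W_k| z_k + U_k x + b_k).

    Elementwise-nonnegative W_k and the convex nondecreasing ReLU keep each
    layer, and hence the output, convex in x; the U_k act directly on x and
    are unconstrained."""
    relu = lambda v: np.maximum(v, 0.0)
    z = relu(Us[0] @ x + bs[0])
    for W, U, b in zip(Ws, Us[1:], bs[1:]):
        z = relu(np.abs(W) @ z + U @ x + b)  # abs() enforces nonnegative W
    return z
```

A quick numerical check of the midpoint inequality `f((x+y)/2) <= (f(x)+f(y))/2` confirms convexity in `x` for random weights.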
### Learning configuration
When fitting the `PCF` to data with its `.fit()` method, we provide
the following options:
| Argument | Description | Type | Default |
| ---------------- | ---------------------------------------------------------------------- | ---------- | --------------- |
| `rho_th` | regularization on the sum of squared weights of the parameter network | float | `1e-8` |
| `tau_th` | regularization on the sum of absolute weights of the parameter network | float | `0` |
| `zero_coeff` | entries smaller (in abs value) than `zero_coeff` are zeroed | float | `1e-4` |
| `cores` | number of cores used for parallel training | int | `4` |
| `seeds` | random seeds for training from multiple initial guesses | array-like | `max(10, cores)`|
| `adam_epochs` | number of epochs for running ADAM | int | `200` |
| `lbfgs_epochs` | number of epochs for running L-BFGS-B | int | `2000` |
| `tune` | auto-tune `tau_th`? | Bool | `False` |
| `n_folds` | number of cross-validation folds when auto-tuning `tau_th` | int | `5` |
| `warm_start` | warm-start training? | Bool | `False` |
## Citing LPCF
<a name="ref1"></a>
Please cite the following paper if you use this software:
```
@article{SBB25,
author={Maximilian Schaller and Alberto Bemporad and Stephen Boyd},
title={Learning Parametric Convex Functions},
note = {available on arXiv at \url{https://arxiv.org/pdf/2506.04183}},
year=2025
}
```
| text/markdown | null | Maximilian Schaller <mschall@stanford.edu>, Alberto Bemporad <alberto.bemporad@imtlucca.it> | null | null | Apache 2.0 | null | [] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"jax-sysid>=1.0.6",
"cvxpy>=1.6.0",
"numpy>=1.21.6"
] | [] | [] | [] | [
"Homepage, https://github.com/cvxgrp/lpcf"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-19T17:49:49.667082 | lpcf-0.1.1.tar.gz | 17,157 | 2c/39/2af24df4796af40079c86d0ae216ea6955ca379d1ac0d820b6e8d3de9e12/lpcf-0.1.1.tar.gz | source | sdist | null | false | a9aba8e8b1813b5ce0f6d1e30573ca98 | 17c857d893884295c35d6c4a78959c539b8709227847d978d86ed21828062033 | 2c392af24df4796af40079c86d0ae216ea6955ca379d1ac0d820b6e8d3de9e12 | null | [
"LICENSE"
] | 223 |
2.4 | wisefood | 0.0.11 | A small client for accessing and managing resources in the WiseFood platform. | # wisefood-client
A small client for accessing and populating the data infrastructure of the WiseFood platform.
| text/markdown | null | dpetrou <dpetrou@athenarc.gr> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.28"
] | [] | [] | [] | [
"Homepage, https://github.com/wisefood/wisefood-client",
"Issues, https://github.com/wisefood/wisefood-client/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T17:49:35.708152 | wisefood-0.0.11.tar.gz | 31,452 | ca/87/3cef2db3a21277a909de03c12b01489573f576be448d8aadadc1cf4946a3/wisefood-0.0.11.tar.gz | source | sdist | null | false | 0debe22637ebb7242165e0a53a0146cd | f27704c49cd71946647cb84aa6915721e6b4818a098d8f838eae0210260ee2cd | ca873cef2db3a21277a909de03c12b01489573f576be448d8aadadc1cf4946a3 | null | [
"LICENSE"
] | 227 |
2.4 | strands-agents-evals | 0.1.7 | Evaluation framework for Strands | <div align="center">
<div>
<a href="https://strandsagents.com">
<img src="https://strandsagents.com/latest/assets/logo-github.svg" alt="Strands Agents" width="55px" height="105px">
</a>
</div>
<h1>
Strands Evals SDK
</h1>
<h2>
A comprehensive evaluation framework for AI agents and LLM applications.
</h2>
<div align="center">
<a href="https://github.com/strands-agents/evals/graphs/commit-activity"><img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/strands-agents/evals"/></a>
<a href="https://github.com/strands-agents/evals/issues"><img alt="GitHub open issues" src="https://img.shields.io/github/issues/strands-agents/evals"/></a>
<a href="https://github.com/strands-agents/evals/pulls"><img alt="GitHub open pull requests" src="https://img.shields.io/github/issues-pr/strands-agents/evals"/></a>
<a href="https://github.com/strands-agents/evals/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/github/license/strands-agents/evals"/></a>
<a href="https://pypi.org/project/strands-agents-evals/"><img alt="PyPI version" src="https://img.shields.io/pypi/v/strands-agents-evals"/></a>
<a href="https://python.org"><img alt="Python versions" src="https://img.shields.io/pypi/pyversions/strands-agents-evals"/></a>
</div>
<p>
<a href="https://strandsagents.com/">Documentation</a>
◆ <a href="https://github.com/strands-agents/samples">Samples</a>
◆ <a href="https://github.com/strands-agents/sdk-python">Python SDK</a>
◆ <a href="https://github.com/strands-agents/sdk-typescript">Typescript SDK</a>
◆ <a href="https://github.com/strands-agents/tools">Tools</a>
◆ <a href="https://github.com/strands-agents/evals">Evaluations</a>
</p>
</div>
Strands Evaluation is a powerful framework for evaluating AI agents and LLM applications. From simple output validation to complex multi-agent interaction analysis, trajectory evaluation, and automated experiment generation, Strands Evaluation provides comprehensive tools to measure and improve your AI systems.
## Feature Overview
- **Multiple Evaluation Types**: Output evaluation, trajectory analysis, tool usage assessment, and interaction evaluation
- **Dynamic Simulators**: Multi-turn conversation simulation with realistic user behavior and goal-oriented interactions
- **LLM-as-a-Judge**: Built-in evaluators using language models for sophisticated assessment with structured scoring
- **Trace-based Evaluation**: Analyze agent behavior through OpenTelemetry execution traces
- **Automated Experiment Generation**: Generate comprehensive test suites from context descriptions
- **Custom Evaluators**: Extensible framework for domain-specific evaluation logic
- **Experiment Management**: Save, load, and version your evaluation experiments with JSON serialization
- **Built-in Scoring Tools**: Helper functions for exact, in-order, and any-order trajectory matching
## Quick Start
```bash
# Install Strands Evals SDK
pip install strands-agents-evals
```
```python
from strands import Agent
from strands_evals import Case, Experiment
from strands_evals.evaluators import OutputEvaluator
# Create test cases
test_cases = [
Case[str, str](
name="knowledge-1",
input="What is the capital of France?",
expected_output="The capital of France is Paris.",
metadata={"category": "knowledge"}
)
]
# Create evaluators with custom rubric
evaluators = [
OutputEvaluator(
rubric="""
Evaluate based on:
1. Accuracy - Is the information correct?
2. Completeness - Does it fully answer the question?
3. Clarity - Is it easy to understand?
Score 1.0 if all criteria are met excellently.
Score 0.5 if some criteria are partially met.
Score 0.0 if the response is inadequate.
"""
)
]
# Create experiment and run evaluation
experiment = Experiment[str, str](cases=test_cases, evaluators=evaluators)
def get_response(case: Case) -> str:
agent = Agent(callback_handler=None)
return str(agent(case.input))
# Run evaluations
reports = experiment.run_evaluations(get_response)
reports[0].run_display()
```
## Installation
Ensure you have Python 3.10+ installed, then:
```bash
# Create and activate virtual environment
python -m venv .venv
source .venv/bin/activate # On Windows use: .venv\Scripts\activate
# Install in development mode
pip install -e .
# Install with test dependencies
pip install -e ".[test]"
# Install with both test and dev dependencies
pip install -e ".[test,dev]"
```
## Features at a Glance
### Output Evaluation with Custom Rubrics
Evaluate agent responses using LLM-as-a-judge with flexible scoring criteria:
```python
from strands_evals.evaluators import OutputEvaluator
evaluator = OutputEvaluator(
rubric="Score 1.0 for accurate, complete responses. Score 0.5 for partial answers. Score 0.0 for incorrect or unhelpful responses.",
include_inputs=True, # Include context in evaluation
model="us.anthropic.claude-sonnet-4-20250514-v1:0" # Custom judge model
)
```
### Trajectory Evaluation with Built-in Scoring
Analyze agent tool usage and action sequences with helper scoring functions:
```python
from strands_evals.evaluators import TrajectoryEvaluator
from strands_evals.extractors import tools_use_extractor
from strands_tools import calculator
def get_response_with_tools(case: Case) -> dict:
agent = Agent(tools=[calculator])
response = agent(case.input)
# Extract trajectory efficiently to prevent context overflow
trajectory = tools_use_extractor.extract_agent_tools_used_from_messages(agent.messages)
# Update evaluator with tool descriptions
evaluator.update_trajectory_description(
tools_use_extractor.extract_tools_description(agent, is_short=True)
)
return {"output": str(response), "trajectory": trajectory}
# Evaluator includes built-in scoring tools: exact_match_scorer, in_order_match_scorer, any_order_match_scorer
evaluator = TrajectoryEvaluator(
rubric="Score 1.0 if correct tools used in proper sequence. Use scoring tools to verify trajectory matches."
)
```
### Trace-based Helpfulness Evaluation
Evaluate agent helpfulness using OpenTelemetry traces with seven-level scoring:
```python
from strands_evals.evaluators import HelpfulnessEvaluator
from strands_evals.telemetry import StrandsEvalsTelemetry
from strands_evals.mappers import StrandsInMemorySessionMapper
# Setup telemetry for trace capture
telemetry = StrandsEvalsTelemetry().setup_in_memory_exporter()
def user_task_function(case: Case) -> dict:
telemetry.memory_exporter.clear()
agent = Agent(
trace_attributes={"session.id": case.session_id},
callback_handler=None
)
response = agent(case.input)
# Map spans to session for evaluation
spans = telemetry.memory_exporter.get_finished_spans()
mapper = StrandsInMemorySessionMapper()
session = mapper.map_to_session(spans, session_id=case.session_id)
return {"output": str(response), "trajectory": session}
# Seven-level scoring: Not helpful (0.0) to Above and beyond (1.0)
evaluators = [HelpfulnessEvaluator()]
experiment = Experiment[str, str](cases=test_cases, evaluators=evaluators)
# Run evaluations
reports = experiment.run_evaluations(user_task_function)
reports[0].run_display()
```
### Multi-turn Conversation Simulation
Simulate realistic user interactions with dynamic, goal-oriented conversations using ActorSimulator:
```python
from strands import Agent
from strands_evals import Case, Experiment, ActorSimulator
from strands_evals.evaluators import HelpfulnessEvaluator, GoalSuccessRateEvaluator
from strands_evals.mappers import StrandsInMemorySessionMapper
from strands_evals.telemetry import StrandsEvalsTelemetry
# Setup telemetry
telemetry = StrandsEvalsTelemetry().setup_in_memory_exporter()
memory_exporter = telemetry.in_memory_exporter
def task_function(case: Case) -> dict:
# Create simulator to drive conversation
simulator = ActorSimulator.from_case_for_user_simulator(
case=case,
max_turns=10
)
# Create agent to evaluate
agent = Agent(
trace_attributes={
"gen_ai.conversation.id": case.session_id,
"session.id": case.session_id
},
callback_handler=None
)
# Run multi-turn conversation
all_spans = []
user_message = case.input
while simulator.has_next():
memory_exporter.clear()
agent_response = agent(user_message)
turn_spans = list(memory_exporter.get_finished_spans())
all_spans.extend(turn_spans)
user_result = simulator.act(str(agent_response))
user_message = str(user_result.structured_output.message)
# Map to session for evaluation
mapper = StrandsInMemorySessionMapper()
session = mapper.map_to_session(all_spans, session_id=case.session_id)
return {"output": str(agent_response), "trajectory": session}
# Use evaluators to assess simulated conversations
evaluators = [
HelpfulnessEvaluator(),
GoalSuccessRateEvaluator()
]
experiment = Experiment(cases=test_cases, evaluators=evaluators)
reports = experiment.run_evaluations(task_function)
```
**Key Benefits:**
- **Dynamic Interactions**: Simulator adapts responses based on agent behavior
- **Goal-Oriented Testing**: Verify agents can complete user objectives through dialogue
- **Realistic Conversations**: Generate authentic multi-turn interaction patterns
- **No Predefined Scripts**: Test agents without hardcoded conversation paths
- **Comprehensive Evaluation**: Combine with trace-based evaluators for full assessment
### Automated Experiment Generation
Generate comprehensive test suites automatically from context descriptions:
```python
from strands_evals.generators import ExperimentGenerator
from strands_evals.evaluators import TrajectoryEvaluator
# Define available tools and context
tool_context = """
Available tools:
- calculator(expression: str) -> float: Evaluate mathematical expressions
- web_search(query: str) -> str: Search the web for information
- file_read(path: str) -> str: Read file contents
"""
# Generate experiment with multiple test cases
generator = ExperimentGenerator[str, str](str, str)
experiment = await generator.from_context_async(
context=tool_context,
num_cases=10,
evaluator=TrajectoryEvaluator,
task_description="Math and research assistant with tool usage",
num_topics=3 # Distribute cases across multiple topics
)
# Save generated experiment
experiment.to_file("generated_experiment", "json")
```
### Custom Evaluators with Structured Output
Create domain-specific evaluation logic with standardized output format:
```python
from strands_evals.evaluators import Evaluator
from strands_evals.types import EvaluationData, EvaluationOutput
class PolicyComplianceEvaluator(Evaluator[str, str]):
def evaluate(self, evaluation_case: EvaluationData[str, str]) -> EvaluationOutput:
# Custom evaluation logic
response = evaluation_case.actual_output
# Check for policy violations
violations = self._check_policy_violations(response)
if not violations:
return EvaluationOutput(
score=1.0,
test_pass=True,
reason="Response complies with all policies",
label="compliant"
)
else:
return EvaluationOutput(
score=0.0,
test_pass=False,
reason=f"Policy violations: {', '.join(violations)}",
label="non_compliant"
)
def _check_policy_violations(self, response: str) -> list[str]:
# Implementation details...
return []
```
### Tool Usage and Parameter Evaluation
Evaluate specific aspects of tool usage with specialized evaluators:
```python
from strands_evals.evaluators import ToolSelectionAccuracyEvaluator, ToolParameterAccuracyEvaluator
# Evaluate if correct tools were selected
tool_selection_evaluator = ToolSelectionAccuracyEvaluator(
rubric="Score 1.0 if optimal tools selected, 0.5 if suboptimal but functional, 0.0 if wrong tools"
)
# Evaluate if tool parameters were correct
tool_parameter_evaluator = ToolParameterAccuracyEvaluator(
rubric="Score based on parameter accuracy and appropriateness for the task"
)
```
## Available Evaluators
### Output-Based Evaluators
These evaluators work directly with inputs and outputs without requiring OpenTelemetry traces:
- **OutputEvaluator**: Flexible LLM-based evaluation with custom rubrics
- **TrajectoryEvaluator**: Action sequence evaluation with built-in scoring tools (supports both list-based trajectories and Session traces via extractors)
- **InteractionsEvaluator**: Multi-agent interaction and handoff evaluation
- **Custom Evaluators**: Extensible base class for domain-specific logic
### Trace-Based Evaluators
These evaluators require OpenTelemetry traces (Session objects) to analyze agent behavior:
#### Tool-Level Evaluators
Evaluate individual tool calls within a conversation:
- **ToolSelectionAccuracyEvaluator**: Evaluates appropriateness of tool choices at specific points
- **ToolParameterAccuracyEvaluator**: Evaluates correctness of tool parameters based on context
#### Trace-Level Evaluators
Evaluate the most recent turn in a conversation:
- **HelpfulnessEvaluator**: Seven-level helpfulness assessment from user perspective
- **FaithfulnessEvaluator**: Evaluates if responses are grounded in conversation history
- **CoherenceEvaluator**: Assesses logical cohesion and reasoning quality with five-level scoring
- **ConcisenessEvaluator**: Evaluates response brevity with three-level scoring
- **ResponseRelevanceEvaluator**: Evaluates relevance of responses to user questions
- **HarmfulnessEvaluator**: Binary evaluation for harmful content detection
#### Session-Level Evaluators
Evaluate entire conversation sessions:
- **GoalSuccessRateEvaluator**: Measures if user goals were achieved across the full conversation
## Experiment Management and Serialization
Save, load, and version experiments for reproducibility:
```python
# Save experiment with metadata
experiment.to_file("customer_service_eval", "json")
# Load experiment from file
loaded_experiment = Experiment.from_file("./experiment_files/customer_service_eval.json", "json")
# Experiment files include:
# - Test cases with metadata
# - Evaluator configuration
# - Expected outputs and trajectories
# - Versioning information
```
## Evaluation Metrics and Analysis
Track comprehensive metrics across multiple dimensions:
```python
# Built-in metrics to consider:
metrics = {
"accuracy": "Factual correctness of responses",
"task_completion": "Whether agent completed the task",
"tool_selection": "Appropriateness of tool choices",
"response_time": "Agent response latency",
"hallucination_rate": "Frequency of fabricated information",
"token_usage": "Efficiency of token consumption",
"user_satisfaction": "Subjective helpfulness ratings"
}
# Generate analysis reports
reports = experiment.run_evaluations(task_function)
reports[0].run_display() # Interactive display with metrics breakdown
```
## Best Practices
### Evaluation Strategy
1. **Diversify Test Cases**: Cover knowledge, reasoning, tool usage, conversation, edge cases, and safety scenarios
2. **Use Statistical Baselines**: Run multiple evaluations to account for LLM non-determinism
3. **Combine Multiple Evaluators**: Use output, trajectory, and helpfulness evaluators together
4. **Regular Evaluation Cadence**: Implement consistent evaluation schedules for continuous improvement
### Performance Optimization
1. **Use Extractors**: Always use `tools_use_extractor` functions to prevent context overflow
2. **Update Descriptions Dynamically**: Call `update_trajectory_description()` with tool descriptions
3. **Choose Appropriate Judge Models**: Use stronger models for complex evaluations
4. **Batch Evaluations**: Process multiple test cases efficiently
### Experiment Design
1. **Write Clear Rubrics**: Include explicit scoring criteria and examples
2. **Include Expected Trajectories**: Define exact sequences for trajectory evaluation
3. **Use Appropriate Matching**: Choose between exact, in-order, or any-order matching
4. **Version Control**: Track agent configurations alongside evaluation results
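The "in-order" matching mentioned above can be illustrated with a small standalone function. This is a hypothetical sketch of the matching semantics, not the SDK's `in_order_match_scorer` implementation: it checks that the expected tool names appear in the actual trajectory in the same relative order, allowing extra calls in between.

```python
def in_order_match(expected: list[str], actual: list[str]) -> bool:
    """Return True if `expected` appears as a subsequence of `actual`.

    Extra tool calls in `actual` are allowed; only relative order matters.
    """
    it = iter(actual)
    # `step in it` advances the iterator past the match, so order is enforced
    return all(step in it for step in expected)


# Exact sequences and sequences with interleaved extra calls both pass:
assert in_order_match(["search", "calculator"], ["search", "calculator"])
assert in_order_match(["search", "calculator"], ["search", "file_read", "calculator"])
# Order violations fail:
assert not in_order_match(["search", "calculator"], ["calculator", "search"])
```

Exact matching would additionally require `expected == actual`, and any-order matching would compare the two as multisets; pick the variant that matches how strictly your rubric constrains tool sequencing.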
## Documentation
For detailed guidance & examples, explore our documentation:
- [User Guide](https://strandsagents.com/latest/documentation/docs/user-guide/evals-sdk/quickstart/)
- [Evaluator Reference](https://strandsagents.com/latest/documentation/docs/user-guide/evals-sdk/evaluators/)
- [Simulators Guide](https://strandsagents.com/latest/documentation/docs/user-guide/evals-sdk/simulators/)
## Contributing ❤️
We welcome contributions! See our [Contributing Guide](CONTRIBUTING.md) for details on:
- Development setup
- Contributing via Pull Requests
- Code of Conduct
- Reporting of security issues
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
## Security
See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
| text/markdown | null | AWS <opensource@amazon.com> | null | null | Apache-2.0 | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"boto3>=1.26.0",
"opentelemetry-api>=1.20.0",
"opentelemetry-instrumentation-threading<1.00b0,>=0.51b0",
"opentelemetry-sdk>=1.20.0",
"pydantic<3.0.0,>=2.0.0",
"rich<15.0.0,>=14.0.0",
"strands-agents-tools<1.0.0,>=0.1.0",
"strands-agents>=1.0.0",
"tenacity<10.0.0,>=8.0.0",
"typing-extensions>=4.0"... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T17:49:33.635538 | strands_agents_evals-0.1.7.tar.gz | 115,067 | e8/11/3f6c278bf978c6a9f2054c65f561b0499055c204e886e6a90e81a78fdfe5/strands_agents_evals-0.1.7.tar.gz | source | sdist | null | false | 325eb484350cb4ab04dc8714203a0ece | d085c5c1e0d41a3e1d534f1d4ebafbd94badffbc1dada4c5562c7e2a396dc2b3 | e8113f6c278bf978c6a9f2054c65f561b0499055c204e886e6a90e81a78fdfe5 | null | [
"LICENSE",
"NOTICE"
] | 1,050 |
2.4 | hermes-client-python | 1.7.59 | Async Python client for Hermes search server | # Hermes Client
Async Python client for [Hermes](https://github.com/SpaceFrontiers/hermes) search server.
## Installation
```bash
pip install hermes-client-python
```
## Quick Start
```python
import asyncio
from hermes_client_python import HermesClient
async def main():
async with HermesClient("localhost:50051") as client:
# Create index with SDL schema
await client.create_index("articles", '''
index articles {
field title: text [indexed, stored]
field body: text [indexed, stored]
field score: f64 [stored]
}
''')
# Index documents
await client.index_documents("articles", [
{"title": "Hello World", "body": "First article", "score": 1.5},
{"title": "Goodbye World", "body": "Last article", "score": 2.0},
])
# Commit changes
await client.commit("articles")
# Search
results = await client.search("articles", term=("title", "hello"), limit=10)
for hit in results.hits:
print(f"Doc {hit.doc_id}: score={hit.score}, fields={hit.fields}")
# Get document by ID
doc = await client.get_document("articles", 0)
print(doc.fields)
# Delete index
await client.delete_index("articles")
asyncio.run(main())
```
## API Reference
### HermesClient
```python
client = HermesClient(address="localhost:50051")
```
#### Connection
```python
# Using context manager (recommended)
async with HermesClient("localhost:50051") as client:
...
# Manual connection
client = HermesClient("localhost:50051")
await client.connect()
# ... use client ...
await client.close()
```
#### Index Management
```python
# Create index with SDL schema
await client.create_index("myindex", '''
index myindex {
field title: text [indexed, stored]
field body: text [indexed, stored]
}
''')
# Create index with JSON schema
await client.create_index("myindex", '''
{
"fields": [
{"name": "title", "type": "text", "indexed": true, "stored": true},
{"name": "body", "type": "text", "indexed": true, "stored": true}
]
}
''')
# Get index info
info = await client.get_index_info("myindex")
print(f"Documents: {info.num_docs}, Segments: {info.num_segments}")
# Delete index
await client.delete_index("myindex")
```
#### Document Indexing
```python
# Index multiple documents (batch)
indexed, errors = await client.index_documents("myindex", [
{"title": "Doc 1", "body": "Content 1"},
{"title": "Doc 2", "body": "Content 2"},
])
# Index single document
await client.index_document("myindex", {"title": "Doc", "body": "Content"})
# Stream documents (for large datasets)
async def doc_generator():
for i in range(10000):
yield {"title": f"Doc {i}", "body": f"Content {i}"}
count = await client.index_documents_stream("myindex", doc_generator())
# Commit changes (required to make documents searchable)
num_docs = await client.commit("myindex")
# Force merge segments (for optimization)
num_segments = await client.force_merge("myindex")
```
#### Searching
```python
# Term query
results = await client.search("myindex", term=("title", "hello"), limit=10)
# Boolean query
results = await client.search("myindex", boolean={
"must": [("title", "hello")],
"should": [("body", "world")],
"must_not": [("title", "spam")],
})
# With pagination
results = await client.search("myindex", term=("title", "hello"), limit=10, offset=20)
# With field loading
results = await client.search(
"myindex",
term=("title", "hello"),
fields_to_load=["title", "body"]
)
# Access results
for hit in results.hits:
print(f"Doc {hit.doc_id}: {hit.score}")
print(f" Title: {hit.fields.get('title')}")
print(f"Total hits: {results.total_hits}")
print(f"Took: {results.took_ms}ms")
```
#### Document Retrieval
```python
# Get document by ID
doc = await client.get_document("myindex", doc_id=42)
if doc:
print(doc.fields["title"])
```
## Field Types
| Type | Python Type | Description |
| --------------- | --------------- | ------------------------------------- |
| `text` | `str` | Full-text searchable string |
| `u64` | `int` (>= 0) | Unsigned 64-bit integer |
| `i64` | `int` | Signed 64-bit integer |
| `f64` | `float` | 64-bit floating point |
| `bytes` | `bytes` | Binary data |
| `json` | `dict` / `list` | JSON object (auto-serialized) |
| `dense_vector` | `list[float]` | Dense vector for semantic search |
| `sparse_vector` | `dict` | Sparse vector with indices and values |
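Putting the table together, a single document can mix several of these types. The field names below are illustrative and must match fields declared in your index schema; the `{"indices": ..., "values": ...}` shape for the sparse vector is an assumption based on the table's description, not a confirmed wire format.

```python
# Illustrative document showing a Python value for each Hermes field type.
# Field names are hypothetical; align them with your own schema.
doc = {
    "title": "Vector search primer",          # text  -> str
    "views": 1024,                            # u64   -> int (>= 0)
    "delta": -7,                              # i64   -> int
    "score": 3.14,                            # f64   -> float
    "thumb": b"\x89PNG...",                   # bytes -> bytes
    "meta": {"tags": ["search", "vectors"]},  # json  -> dict/list (auto-serialized)
    "embedding": [0.12, -0.04, 0.33],         # dense_vector  -> list[float]
    "sparse": {"indices": [3, 17], "values": [0.5, 0.2]},  # sparse_vector (assumed shape)
}
# Indexed exactly like any other document:
# await client.index_document("myindex", doc)
```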
## Error Handling
```python
import grpc
try:
await client.search("nonexistent", term=("field", "value"))
except grpc.RpcError as e:
if e.code() == grpc.StatusCode.NOT_FOUND:
print("Index not found")
else:
raise
```
## Development
Generate protobuf stubs:
```bash
pip install grpcio-tools
python generate_proto.py
```
## License
MIT
| text/markdown | izihawa | null | null | null | null | async, full-text-search, grpc, search | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: P... | [] | null | null | >=3.10 | [] | [] | [] | [
"grpcio>=1.76.0",
"protobuf>=6.33.4"
] | [] | [] | [] | [
"Homepage, https://github.com/SpaceFrontiers/hermes",
"Repository, https://github.com/SpaceFrontiers/hermes"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T17:49:19.568042 | hermes_client_python-1.7.59.tar.gz | 15,606 | e7/3c/e8c98ca08200646e779f04cbb2ba03887a5b1774804f2d1c9de25317bc96/hermes_client_python-1.7.59.tar.gz | source | sdist | null | false | d3dce7d5a69567f7c5ed4b6a36a766d3 | c77c31b7cf3b7029b4f0776b38f7e6284c24cf91e6e828f4943cfca2f8510434 | e73ce8c98ca08200646e779f04cbb2ba03887a5b1774804f2d1c9de25317bc96 | MIT | [] | 228 |
2.4 | ttnn-visualizer | 0.73.1 | TT-NN Visualizer |
<div align="center">
<h1>TT-NN Visualizer</h1>
<div align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/tenstorrent/ttnn-visualizer/refs/heads/main/src/assets/tt-logo-dark.svg">
<img alt="" src="https://raw.githubusercontent.com/tenstorrent/ttnn-visualizer/refs/heads/main/src/assets/tt-logo.svg">
</picture>
A tool for visualizing the Tenstorrent Neural Network model (TT-NN)
</div>
<h2>
[Buy Hardware](https://tenstorrent.com/cards/) | [Install TT-NN](https://docs.tenstorrent.com/tt-metal/latest/ttnn/ttnn/installing.html) | [Discord](https://discord.gg/tvhGzHQwaj) | [Join Us](https://boards.greenhouse.io/tenstorrent/jobs/4155609007)
</h2>
</div>
## Quick Start
TT-NN Visualizer can be installed from PyPI:
`pip install ttnn-visualizer`
After installation run `ttnn-visualizer` to start the application.
It is recommended to do this within a virtual environment. The minimum Python version is **3.10**.
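The recommended setup can be scripted as follows. This assumes a POSIX shell with a `python3` interpreter of version 3.10 or newer on the path; on Windows, activate the environment with `.venv\Scripts\activate` instead.

```shell
# Create and activate an isolated environment (Python 3.10+ required)
python3 -m venv .venv
source .venv/bin/activate

# Install from PyPI and launch the visualizer
pip install ttnn-visualizer
ttnn-visualizer
```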
Please see the [install guide](https://docs.tenstorrent.com/ttnn-visualizer/src/installing.html) for further information on getting up and running with TT-NN Visualizer.
If you want to test out TT-NN Visualizer, you can try some of the [sample data](https://github.com/tenstorrent/ttnn-visualizer/tree/main?tab=readme-ov-file#sample-reports). See [loading data](https://docs.tenstorrent.com/ttnn-visualizer/src/installing.html#loading-data) for instructions on how to use this.
## Features
For the latest updates and features, please see [releases](https://github.com/tenstorrent/ttnn-visualizer/releases).
### Reports
- Upload reports from the local file system or sync remotely via SSH
- Switch seamlessly between previously uploaded or synced reports
- Run multiple instances of the application concurrently with different data
- Set data ranges for both memory and performance traces
- Display physical topology and configuration of Tenstorrent chip clusters
### Operations
- Filterable list of all operations in the model
- Interactive memory and tensor visualizations, including per core allocations, memory layout, allocation over time
- Input/output tensors details per operation including allocation details per core
- Navigable device operation tree with associated buffers and circular buffers
### Tensors
- List of tensor details filterable by buffer type
- Flagging of high consumer or late deallocated tensors
### Buffers
- Visual overview of all buffers for the entire model run by L1 or DRAM memory
- Toggle additional overlays such as memory layouts or late deallocated tensors
- Ease of navigation to the relevant operation
- Track a specific buffer in the data across the application
- Filterable table view for a more schematic look at buffers
### Graph
- Interactive model graph view showing all operations and connecting tensors
- Filter out deallocated operations
- Find all operations by name
### Performance
- Integration with tt-perf-report and rendering of performance analysis
- Interactive charts and tables
- Multiple filtering options for performance data
- Compare multiple performance traces
### NPE
- Network-on-chip performance estimator (NPE) for Tenstorrent Tensix-based devices
- Dedicated NPE visualizations: zones, transfers, congestion, and timelines with extensive filtering capabilities
### Demo
#### Application demo
https://github.com/user-attachments/assets/4e51a636-c6d6-46df-bf34-a06bca13c0b3
| L1 Summary with Tensor highlight | Operation inputs and outputs |
|-----------------------------------------------|------------------------------------------|
| <img width="400" alt="L1 Summary with Tensor highlight" src="https://github.com/user-attachments/assets/7c6a3558-1084-492b-ac0b-f5f910487c8f" /> | <img width="400" alt="Operation inputs and outputs" src="https://github.com/user-attachments/assets/48197e65-4831-4005-9da8-99574c47d5c7" /> |
| Device operations with memory consumption | DRAM memory allocation |
|-----------------------------------------------|------------------------------------------|
| <img width="400" alt="Device operations with memory consumption" src="https://github.com/user-attachments/assets/4b8cefb9-fd75-4291-9e64-ab2f2c866c51" />| <img width="400" alt="DRAM memory allocations" src="https://github.com/user-attachments/assets/a9ad8b1d-200c-4c10-b1d8-5d76900c688c" /> |
| Operation graph view | Model buffer summary |
|-----------------------------------------------|------------------------------------------|
| <img width="400" alt="Operation graph view" src="https://github.com/user-attachments/assets/422f1591-4232-4d16-a783-726960261443" /> | <img width="400" alt="Model buffer summary" src="https://github.com/user-attachments/assets/9afa48b2-628d-4dad-ac89-42fda762aee6" /> |
| Per core allocation details | Per core allocation details for individual tensors |
|-----------------------------------------------|------------------------------------------|
| <img width="400" alt="Per core allocation details" src="https://github.com/user-attachments/assets/681c8d0e-c628-4839-afca-f31ff9d53f73" /> | <img width="400" alt="Per core allocation details for individual tensor" src="https://github.com/user-attachments/assets/a9d66f2d-2457-4ced-b777-6e8f0c54eb86" /> |
| Tensor details list | Performance report |
|-----------------------------------------------|------------------------------------------|
| <img width="400" alt="Tensor details list" src="https://github.com/user-attachments/assets/315089ff-ae75-4615-87b9-19c45431871c" /> | <img width="400" alt="Performance analysis" src="https://github.com/user-attachments/assets/468b0acb-733e-4891-8e16-781c47889017" /> |
| Performance charts | |
|-----------------------------------------------|------------------------------------------|
| <img width="400" alt="Performance charts" src="https://github.com/user-attachments/assets/19f6bd6f-8f48-48dd-b9ee-726b1a1e40e3" /> | <img width="400" alt="Performance charts" src="https://github.com/user-attachments/assets/bc6ae03b-f143-4ee5-9f14-834ddf8b0cde" /> |
| NPE | |
|-----------------------------------------------|------------------------------------------|
| <img width="400" alt="NPE" src="https://github.com/user-attachments/assets/5f45c1bf-565d-4003-b3b7-0ddd90cbdeca" /> | <img width="400" alt="NPE" src="https://github.com/user-attachments/assets/8a3e9a09-4c86-45a6-9916-52fba16debc6" /> |
## Sample reports
You may test the application using the following sample reports.
Unzip the files into their own directories and select them with the local folder selector, or load the NPE data on the `/npe` route.
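The unzip-into-its-own-directory step can be scripted; here is a minimal sketch (naming the target directory after the archive is an assumption, not a tool requirement):

```python
import zipfile
from pathlib import Path

def extract_report(zip_path: str) -> Path:
    """Unzip a downloaded report archive into a sibling directory named after it."""
    archive = Path(zip_path)
    target = archive.with_suffix("")  # e.g. segformer_encoder.zip -> segformer_encoder/
    target.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(target)
    return target
```

The resulting directory is what the local folder selector expects.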
**Segformer encoder**
[memory report](https://github.com/user-attachments/files/17996493/segformer_encoder.zip)
**Segformer decoder**
[memory report](https://github.com/user-attachments/files/17996491/segformer_decoder_good.zip)
**Llama mlp**
[memory + performance report](https://github.com/user-attachments/files/18770763/llama_attn_32l_10iter_30jan.zip)
**N300 llama**
[memory + performance report with NPE data + cluster description](https://github.com/user-attachments/files/21496609/n300.zip)
### NPE report
**T3K synthetic**
[synthetic_t3k_small.json.zip](https://github.com/user-attachments/files/20491459/synthetic_t3k_small.json.zip)
## Contributing
How to run [TT-NN Visualizer](https://docs.tenstorrent.com/ttnn-visualizer/src/running-from-source.html) from source.
| text/markdown | null | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"build",
"Flask-Cors==6.0.0",
"flask-socketio==5.4.1",
"flask-sqlalchemy==3.1.1",
"Flask-Static-Digest==0.4.1",
"Flask==3.1.1",
"gevent==24.10.2",
"gunicorn~=23.0.0",
"orjson>=3.9.0",
"pandas==2.2.3",
"pydantic_core==2.27.1",
"pydantic==2.10.3",
"python-dotenv==1.0.1",
"PyYAML==6.0.2",
"... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T17:49:10.662832 | ttnn_visualizer-0.73.1-py3-none-any.whl | 2,599,378 | db/c0/5d064550d317b4df95bc6b6d739120a69e5a2ed96658eac40897809e520e/ttnn_visualizer-0.73.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 7ab71d3269b5c94eb21998c7c511cb43 | 7995345ab1006b286f6cf7f3ccf6f987ab397ad982b7e9b214a7893b57d79ea8 | dbc05d064550d317b4df95bc6b6d739120a69e5a2ed96658eac40897809e520e | null | [
"LICENSE",
"LICENSE_understanding.txt"
] | 108 |
2.4 | duplocloud-client | 0.4.1 | Command line Client for interacting with Duplocloud portals. | # Duplocloud Py Client
[](https://github.com/duplocloud/duploctl/actions/workflows/test_unit.yml) [](https://pypi.org/project/duplocloud-client/) [](https://hub.docker.com/r/duplocloud/duploctl) [](https://github.com/duplocloud/duploctl) [](https://cli.duplocloud.com/)
```duploctl``` is a CLI and Python package for working with a Duplocloud portal. It interacts with Duplocloud resources, such as Tenants, is designed to work seamlessly within CLI-based CI/CD pipelines, and is fully extensible as both a Python module and a CLI.
## Installation
From PyPi:
```sh
pip install duplocloud-client
```
From Homebrew:
```sh
brew install duplocloud/tap/duploctl
```
## Usage
Use ```duploctl``` as a CLI or as a standalone Python module called by your custom script.
### Configuration
Use the following syntax for these global arguments:
| Arg | Env Var | Description | Default | Required |
| --- | ------- | ----------- | ------- | -------- |
| --host, -H | DUPLO_HOST | The host to connect to | | Yes |
| --token, -T | DUPLO_TOKEN | The token to use for auth | | Yes |
| --tenant, -t | DUPLO_TENANT | The tenant to use for auth | default | No |
### CLI
CLI command syntax for invoking ```duploctl```
```sh
duploctl <resource> <command> <args...>
```
### Example Usages
Full documentation is in the Wiki section.
Configure `duploctl` access with environment variables:
```sh
export DUPLO_HOST=https://example.duplocloud.net
export DUPLO_TOKEN=AQAAA...
export DUPLO_TENANT=dev01
```
List the services in a tenant:
```sh
duploctl service list
```
Register Profile for AWS:
```sh
duploctl jit update_aws_config myportal
```
Open AWS Web Console:
```sh
duploctl jit web
```
Get Kubernetes config:
```sh
duploctl jit update_kubeconfig myinfra
```
### Python Module
Spawn your client from a Python script using the ```DuploClient.from_env()``` method. The second return value is the list of unparsed arguments from the command line. This example uses the client as a callable with command-like syntax.
```python
from duplocloud.client import DuploClient  # assumed import path

duplo, args = DuploClient.from_env()
t = duplo("tenant", "find", "mytenant")
print(t)
```
Spawn a client with a custom host and token from a Python script. This example loads a resource and runs a method manually.
```python
from duplocloud.client import DuploClient  # assumed import path

duplo = DuploClient.from_creds(host="https://example.duplocloud.net", token="mytoken")
tenants = duplo.load("tenant")
t = tenants.find("mytenant")
print(t)
```
| text/markdown | null | Kelly <kelly@duplocloud.net> | null | Kelly <kelly@duplocloud.net> | null | duplocloud, duplo, duploctl, duplo-client | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10.0 | [] | [] | [] | [
"requests>=2.22.0",
"cachetools>=5.2.0",
"jmespath>=1.0.1",
"pyyaml>=6.0.1",
"jsonpatch>=1.33",
"pyjwt>=2.8.0",
"jsonpointer>=2.4",
"invoke; extra == \"build\"",
"setuptools_scm; extra == \"build\"",
"build; extra == \"build\"",
"wheel; extra == \"build\"",
"twine; extra == \"build\"",
"pyin... | [] | [] | [] | [
"Homepage, https://duplocloud.com/",
"Documentation, https://cli.duplocloud.com/",
"Repository, https://github.com/duplocloud/duploctl",
"Issues, https://github.com/duplocloud/duploctl/issues",
"Changelog, https://cli.duplocloud.com/Changelog",
"LatestRelease, https://github.com/duplocloud/duploctl/releas... | twine/6.1.0 CPython/3.13.7 | 2026-02-19T17:48:45.332892 | duplocloud_client-0.4.1.tar.gz | 150,374 | 2c/18/683ec5c9b52447c09c0c4c7e29f87c087d01b52b4ce8bfeaa91b09369667/duplocloud_client-0.4.1.tar.gz | source | sdist | null | false | ae644accc5c60c349e974236bdd6bf22 | 35456548411cd75793beedc9ef1f1021769d565cf5f5c8f0f1995b5a2b0dee15 | 2c18683ec5c9b52447c09c0c4c7e29f87c087d01b52b4ce8bfeaa91b09369667 | null | [
"LICENSE"
] | 13,408 |
2.4 | robot-framework-reporter | 0.2.2 | Robot Framework plugin for sync and report test to testomat.io | [](https://github.com/support-ukraine/support-ukraine)
# Testomat.io plugin for Robot Framework
A powerful plugin that integrates your tests with the [Testomat.io](https://testomat.io) platform for test management, reporting, and analytics.
## Features
- ✅ Sync tests with testomat.io
- 📊 Real-time test execution reporting
## Uses testomat.io API:
- https://testomatio.github.io/check-tests/ - for sync
- https://testomatio.github.io/reporter/ - for reporting
## Table of Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Advanced Usage](#advanced-usage)
- [Basic Configuration](#basic-configuration)
- [Import Listener](#testomatioimport-listener)
- [Listener Configuration](#import-listener-configuration)
- [Clean Test IDs](#clean-test-ids)
- [Disable Detach Test](#detaching-tests)
- [Remove Empty Suites](#removing-empty-suites)
- [Keep Test IDs Between Projects](#keep-test-ids-between-projects)
- [Keep structure](#keep-structure)
- [Report Listener](#testomatioreport-listener)
- [Listener Configuration](#report-listener-configuration)
## Installation
Prerequisites:
- Python 3.10+
- Robot Framework 4.0+
- Active [testomat.io](https://testomat.io) account
Install via pip:
```bash
pip install robot-framework-reporter
```
If you have Python 2.x and Python 3.x in your system:
```bash
pip3 install robot-framework-reporter
```
## Quick Start
### Get your API token
1. Login to [Testomat.io](https://testomat.io)
2. Create project or go to existing project
3. Click on "Import Tests from Source Code"
4. Copy your project token (it starts with `tstmt_`)
### Sync tests
Synchronize tests to Testomat.io using **Testomatio.Import** listener:
```bash
TESTOMATIO=your_token robot --listener Testomatio.Import path/to/tests
```
### Report tests
Execute tests and send results to Testomat.io using **Testomatio.Report** listener:
```bash
TESTOMATIO=your_token robot --listener Testomatio.Report path/to/tests
```
### Example of test
After importing tests to Testomat.io, each test is automatically assigned a unique Test ID.
A Testomat.io Test ID is a string that starts with `@T` followed by 8 characters. The Test ID is appended to the test name.
**Before import** (original test):
```robotframework
*** Test Cases ***
Test Addition
[Documentation] Check addition of two numbers
[Tags] math positive
${result}= Evaluate 10 + 5
Should Be Equal As Numbers ${result} 15
```
**After import** (with Test ID):
```robotframework
*** Test Cases ***
Test Addition @T96c700e6
[Documentation] Check addition of two numbers
[Tags] math positive
${result}= Evaluate 10 + 5
Should Be Equal As Numbers ${result} 15
```
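As a sketch, IDs in this format can be picked out of test names with a short regular expression (the exact character set is not documented; alphanumeric is assumed here based on the example ID above):

```python
import re

# Matches an appended Testomat.io Test ID: "@T" plus 8 characters.
# The character class is an assumption based on the example ID above.
TEST_ID_RE = re.compile(r"@T[A-Za-z0-9]{8}\b")

def extract_test_id(test_name):
    """Return the Test ID embedded in a test name, or None if absent."""
    match = TEST_ID_RE.search(test_name)
    return match.group(0) if match else None
```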
## Advanced Usage
Testomat.io integration with Robot Framework is implemented through the Listener Interface. Currently, two Listeners are available:
- **Testomatio.Import**. Used for synchronizing tests with Testomat.io
- **Testomatio.Report**. Used for reporting test results to Testomat.io
### Basic Configuration
Listeners can be configured through parameters or environment variables. Each Listener has its own configuration options, which are described in the corresponding sections.
> 💡 **Note:** Parameters and environment variables configure different aspects of the Listener's behavior. Each configuration option is available through only one method - either as a parameter or as an environment variable, not both.
#### Common Environment Variables
Both Listeners use the following environment variables:
| Variable | Description | Required | Default |
|---------------|---------------------------------------------|----------|---------------------------|
| `TESTOMATIO` | API key for accessing Testomat.io | ✅ Yes | - |
| `TESTOMATIO_URL` | Testomat.io server URL | ➖ No | `https://app.testomat.io` |
| `TESTOMATIO_REQUEST_INTERVAL` | Interval between requests to Testomat.io in seconds | ➖ No | `5` |
| `TESTOMATIO_MAX_REQUEST_FAILURES`| Max attempts to send request to Testomat.io | ➖ No | `5` |
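How a listener might resolve these settings can be sketched as follows (illustrative only — this is not the plugin's actual code; the names and defaults are taken from the table above):

```python
import os

def resolve_common_config() -> dict:
    """Resolve the common Testomat.io settings, applying the documented defaults."""
    api_key = os.environ.get("TESTOMATIO")
    if not api_key:
        raise RuntimeError("TESTOMATIO environment variable is required")
    return {
        "api_key": api_key,
        "url": os.environ.get("TESTOMATIO_URL", "https://app.testomat.io"),
        "request_interval": int(os.environ.get("TESTOMATIO_REQUEST_INTERVAL", "5")),
        "max_request_failures": int(os.environ.get("TESTOMATIO_MAX_REQUEST_FAILURES", "5")),
    }
```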
### Testomatio.Import Listener
Used for importing tests to Testomat.io.
#### Import Listener Configuration
###### Environment variables
| Variable | Description | Required | Default |
|-------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|---------|
| `TESTOMATIO_IMPORT_DIRECTORY` | Specifies directory where tests will be imported | ➖ No | `None` |
| `TESTOMATIO_SYNC_LABELS` | Assign labels to a test case when you import test to Testomat.io. <br/>Labels must exist in project and their scope must be enabled for tests. To pass multiple labels, separate them by a comma | ➖ No | `None` |
###### Listener Parameters
| Parameter | Description | Required | Type | Default |
|------------|----------------------------------------------------------------|------|-------|---------|
| remove_ids | Remove all test ids from source code |➖ No|`bool`|`False`|
| no_detach | Disables detaching tests on Testomat.io | ➖ No |`bool`|`False`|
| no_empty | Removes empty suites on Testomat.io | ➖ No |`bool`|`False`|
| create | Use to import Test ids set in source code into another project | ➖ No |`bool`|`False`|
| structure | Force to keep original file structure | ➖ No |`bool`|`False`|
#### Clean Test IDs
If you want to import an already-synced project as a new project, you first have to clean the test IDs. To clean them up, use the **remove_ids** parameter:
```bash
TESTOMATIO=your_key robot --listener Testomatio.Import:remove_ids=1 path/to/tests
```
This method may be unsafe, as it removes all `@T*` tags from test names. If you have a tag like `@Test1234` in a test name, it may also be removed. If you use this option, make sure all the test titles are correct before committing the tests to Git.
#### Detaching tests
If a test from a previous import is not found on the next import, it is marked as "detached". This ensures that tests deleted from the codebase do not linger in Testomat.io.
To disable this behavior and avoid marking anything as detached on import, use the **no_detach** parameter:
```bash
TESTOMATIO=your_key robot --listener Testomatio.Import:no_detach=1 path/to/tests
```
#### Removing empty suites
If tests were marked with IDs and imported into already-created suites in Testomat.io, newly imported suites may end up empty. Use the **no_empty** parameter to clean them up after import.
```bash
TESTOMATIO=your_key robot --listener Testomatio.Import:no_empty=1 path/to/tests
```
This prevents use of the **structure** parameter.
#### Keep Test IDs between projects
To import tests with Test IDs set in source code into another project, use the **create** parameter. The new project will then be populated with the same Test IDs.
```bash
TESTOMATIO=your_key robot --listener Testomatio.Import:create=1 path/to/tests
```
#### Keep structure
When tests in source code have IDs assigned and are imported, Testomat.io uses the project's current structure to place them. If folders in the source code don't match folders in the Testomat.io project, the structure in the source code is ignored. To force using the structure from the source code, use the **structure** parameter on import:
```bash
TESTOMATIO=your_key robot --listener Testomatio.Import:structure=1 path/to/tests
```
### Testomatio.Report Listener
Used for reporting test results to Testomat.io. By default, it sends test results in batches after each test suite completes.
#### Report Listener Configuration
###### Environment variables
| Variable | Description | Required | Default |
|-----------------------------------|----------------------------------------------------------------------------------------|----------|---------|
| `TESTOMATIO_DISABLE_BATCH_UPLOAD` | Disables batch uploading and uploads each test result one by one | ➖ No | `False` |
| `TESTOMATIO_BATCH_SIZE` | Changes size of batch for batch uploading. Maximum is 100. | ➖ No | `50` |
| `TESTOMATIO_RUN` | Id of existing test run to use for sending test results to | ➖ No | `None` |
| `TESTOMATIO_PUBLISH` | Publish run after reporting and provide a public URL | ➖ No | `False` |
| `TESTOMATIO_TITLE` | Name of a test run to create on Testomat.io | ➖ No | `None` |
| `TESTOMATIO_RUNGROUP_TITLE` | Create a group (folder) for a test run. If group already exists, attach test run to it | ➖ No | `None` |
###### Listener Parameters
Currently, this listener has no parameters.
| text/markdown | null | Vladyslav Krutko <krutkovladyslav@gmail.com> | null | null | MIT | robotframework, testing, reporter, plugin | [
"Framework :: Robot Framework",
"Framework :: Robot Framework :: Library",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"robotframework>=4.0",
"requests>=2.25.0",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [
"Testomat.io, https://testomat.io/",
"Homepage, https://github.com/testomatio/robot-framework-reporter",
"Bug Tracker, https://github.com/testomatio/robot-framework-reporter/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T17:48:24.975747 | robot_framework_reporter-0.2.2.tar.gz | 14,671 | 4b/39/95c203dab05528198c7b317123c4e4d68dbd72ec8c9faa3a17b5fe8508c5/robot_framework_reporter-0.2.2.tar.gz | source | sdist | null | false | 6b3935d0233caf525b3ca6181bcf8de7 | daf25e5d7235811356f2d86a739380c9963eb4cc0319ef4a4ece9e968dadf3ae | 4b3995c203dab05528198c7b317123c4e4d68dbd72ec8c9faa3a17b5fe8508c5 | null | [] | 212 |
2.4 | api-aggregator | 0.1.3 | API aggregator core runtime with registry, scheduler, local persistence, and dashboard. | # api-aggregator
<p align="center">
<img src="./src/api_aggregator/dashboard/assets/images/logo.png" alt="api-aggregator logo" width="160" />
</p>
<p align="center">
<a href="https://github.com/Zhalslar/api-aggregator"><img alt="repo" src="https://img.shields.io/badge/repo-GitHub-181717?logo=github"></a>
<img alt="python" src="https://img.shields.io/badge/python-3.10%2B-3776AB?logo=python&logoColor=white">
<img alt="license" src="https://img.shields.io/badge/license-GPL--3.0--only-blue">
</p>
<p align="center">
中文 | <a href="README.en.md">English</a>
</p>
An API aggregation runtime for bot/automation scenarios, providing API pool and site pool management, remote fetching and parsing, local deduplicated persistence, scheduled (cron) triggering, and a web dashboard for administration.
## Table of Contents
- [Core Capabilities](#core-capabilities)
- [Installation](#installation)
- [Docker Deployment](#docker-deployment)
- [Quick Start](#quick-start)
- [Runtime and Configuration](#runtime-and-configuration)
- [Project Structure](#project-structure)
- [Dashboard and HTTP API Docs](#dashboard-and-http-api-docs)
- [Bot Framework Integration](#bot-framework-integration)
- [Development and Release](#development-and-release)
## Core Capabilities
- API/site pool management
  - Unified persistence in SQLite, with create/read/update/delete, sorting, filtering, and pagination.
- Batch testing with visible results
  - Streams test progress in real time via NDJSON and automatically writes back each API's `valid` status.
- Remote data fetching and parsing
  - Supports four result types (`text/image/video/audio`), JSON-path extraction, and HTML plain-text extraction.
- Local deduplication and fallback
  - On remote success, data is written locally (text/binary deduplicated); on remote failure, a random piece of local historical data can be served as a fallback.
- Dashboard operations
  - Pool import/export, bulk deletion, local data browsing and deletion, system restart, and code updates.
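The text/binary deduplication idea can be illustrated as content-addressed storage — hash the payload and skip writes for content already seen (a sketch of the concept, not this project's actual code):

```python
import hashlib

class LocalStore:
    """Content-addressed store: identical payloads are persisted only once."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, payload: bytes) -> tuple[str, bool]:
        """Store payload; return (digest, True) if new, (digest, False) if duplicate."""
        digest = hashlib.sha256(payload).hexdigest()
        is_new = digest not in self._blobs
        if is_new:
            self._blobs[digest] = payload
        return digest, is_new
```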
## Installation
Python 3.10+ is recommended.
```bash
pip install -r requirements.txt
```
Or install the current project as a package:
```bash
pip install .
```
Notes:
- Distribution name: `api-aggregator`
- Import name: `api_aggregator`
## Docker Deployment
For full instructions see: `docs/zh-CN/docker.md`
### Option 1: Plain Docker
Build the image:
```bash
docker build -t api-aggregator:latest .
```
Run the container:
```bash
docker run -d \
--name api-aggregator \
-p 4141:4141 \
-v "$(pwd)/data:/app/data" \
-v "$(pwd)/pool_files:/app/pool_files" \
--restart unless-stopped \
api-aggregator:latest
```
### Option 2: Docker Compose
```bash
docker compose up -d --build
```
Stop and remove the containers:
```bash
docker compose down
```
Notes:
- Dashboard address: `http://127.0.0.1:4141`
- Persisted directories:
  - `./data` -> `/app/data`
  - `./pool_files` -> `/app/pool_files`
## Quick Start
Start it directly:
```bash
python start.py
```
Common flags:
```bash
python start.py --dashboard-host 127.0.0.1 --dashboard-port 4141
python start.py --no-dashboard
```
Embedding in code:
```python
import asyncio
from api_aggregator import APICoreApp
async def main() -> None:
app = APICoreApp()
await app.start()
try:
await asyncio.Event().wait()
finally:
await app.stop()
asyncio.run(main())
```
## Runtime and Configuration
Default runtime directories (relative to the repository root):
- `data/api_aggregator.db`: site pool / API pool persistence database
- `data/local/`: locally cached data (text/image/video/audio)
- `pool_files/`: default directory for pool import/export
Default configuration (built into the code):
- Dashboard: `0.0.0.0:4141`
- Default request timeout: `60s`
- Default request headers: `User-Agent` + `Accept: */*`
Note: in the current version, `APIConfig` relies primarily on in-code defaults; `data/app_config.json` is not the main source of configuration.
## Project Structure
```text
api-aggregator/
  src/api_aggregator/
    dashboard/      # web UI and HTTP API
    data_service/   # remote requests, local cache, aggregated data service
    entry/          # API/site entities and managers
    service/        # testing, import/export, restart and update services
    database.py     # SQLite persistence
    main.py         # APICoreApp lifecycle
  pool_files/       # default directory for pool import/export
  data/             # runtime data directory
  docs/
```
## Dashboard and HTTP API Docs
- Chinese: `docs/zh-CN/dashboard-http-api.md`
- English: `docs/en/dashboard-http-api.md`
- Data schema reference: `docs/zh-CN/api-data-schema.md`
- Docker deployment: `docs/zh-CN/docker.md`
Default dashboard address: `http://127.0.0.1:4141`
## Bot Framework Integration
The recommended integration has three layers:
1. Lifecycle: call `await app.start()` when the framework starts and `await app.stop()` when it shuts down.
2. Message matching: use `api_mgr.match_entries(...)` to find matching APIs, then fetch data with `data_service.fetch(...)`.
3. Scheduled triggering: register a callback with `set_cron_entry_handler(...)` and call `fetch_cron_data(...)` inside it.
Minimal adapter:
```python
from api_aggregator import APICoreApp, APIEntry
class BotFrameworkAdapter:
def __init__(self) -> None:
self.app = APICoreApp()
self.app.set_cron_entry_handler(self.on_cron_entry)
async def on_framework_start(self) -> None:
await self.app.start()
async def on_framework_stop(self) -> None:
await self.app.stop()
async def on_message(self, text: str) -> list[str]:
replies: list[str] = []
matched = self.app.api_mgr.match_entries(text, only_enabled=True)
for entry in matched:
data = await self.app.data_service.fetch(entry, use_local=True)
if data and data.final_text:
replies.append(data.final_text)
return replies
async def on_cron_entry(self, entry: APIEntry) -> None:
data = await self.app.fetch_cron_data(entry, use_local=True)
if data and data.final_text:
print(f"[cron] {entry.name}: {data.final_text}")
```
## Development and Release
Local checks:
```bash
python -m compileall src tests
python -m unittest discover -s tests -p "test_*.py"
uv run ruff check .
```
Build:
```bash
uv build
```
## Star History
[](https://star-history.com/#Zhalslar/api-aggregator&Date)
| text/markdown | Zhalslar | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp<4,>=3.9",
"beautifulsoup4<5,>=4.12",
"APScheduler<4,>=3.10"
] | [] | [] | [] | [
"Homepage, https://github.com/Zhalslar/api-aggregator",
"Repository, https://github.com/Zhalslar/api-aggregator",
"Issues, https://github.com/Zhalslar/api-aggregator/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T17:47:38.254197 | api_aggregator-0.1.3.tar.gz | 212,115 | cf/f9/67102f4022212cd13ffdae1177ff57f7771d2ba269c1bba2ffc9f296c36b/api_aggregator-0.1.3.tar.gz | source | sdist | null | false | 99d6866cbb37b853902ff5bef29afe5a | c14807236638a4b823bf9f1381ccac6c03439c6e1d5f30a9db9a22d2e71a3c0c | cff967102f4022212cd13ffdae1177ff57f7771d2ba269c1bba2ffc9f296c36b | GPL-3.0-only | [
"LICENSE"
] | 240 |
2.4 | Cirilla | 1.0.0 | Crilla is a simple way to introduce optimized single-GPU training into your project | > [!IMPORTANT]
> For a much nicer README visit [Cirilla](https://anthonyp57.github.io/Cirilla---a-LLM-made-on-a-budget/)
>
> *(Note: the site is made for 16:9 1080p displays — I’m not a web developer, so it may look a bit rough on other screen sizes.)*

*Ciri from The Witcher 4 trailer*
# Cirilla
Cirilla is an open-source learning project aimed at implementing various LLMs.
It focuses mainly on showing how to build, train, run inference with, and deploy an LLM from scratch using PyTorch and a budget-friendly GPU (RTX 4060 Ti 16 GiB, ~$500).
- [About Cirilla](#who-is-cirilla)
- [Repo organization](#repo-organization)
- [Getting started](#getting-started)
- [Why Cirilla](#why-cirilla)
## Who is Cirilla
**Cirilla Fiona Elen Riannon**, known as *Ciri*, is one of the central characters in
*The Witcher* saga by Andrzej Sapkowski and its adaptations.
She is the princess of Cintra, granddaughter of Queen Calanthe, and the sole heir
to a powerful lineage marked by the mysterious Elder Blood.
Ciri is defined by her destiny, adaptability, and potential. Unlike kings who wield authority by birthright, her strength comes from surviving chaos, learning from mentors like Geralt and Yennefer, and unlocking extraordinary powers.
Her unique abilities make her one of the most pivotal figures in the saga. Known as the *Lady of Space and Time*, the *Lion Cub of Cintra*, and the *Child of the Elder Blood*, she can manipulate space and time, travel between worlds, and influence the course of events in ways few can.
<p align="center">
<img src="https://github.com/AnthonyP57/Radovid---a-LLM-made-on-a-budget/blob/master/img/fake_ciri.webp?raw=true" width="250"/>
</p>
<div align='center'>
<em>Fig.1 Ciri Gwent card by Bogna Gawrońska</em>
</div>
</br>
## Why name an LLM Cirilla
Unlike rulers who inherit authority, *Cirilla* embodies potential realized through learning, experience, and adaptability. She is resilient, capable of navigating complex and unpredictable worlds, and able to respond to challenges with skill and precision - qualities that mirror how a language model can shift between tasks, domains, and contexts.
Guided by mentors and shaped by hardships, Ciri develops her abilities quickly, mastering both strategy and instinct while remaining flexible in the face of unforeseen circumstances.
Her combination of innate talent, adaptability, and capacity for growth makes her a fitting symbol for a language model designed to acquire knowledge, evolve over time, and connect information across domains.
<p align="center">
<img src="https://github.com/AnthonyP57/Radovid---a-LLM-made-on-a-budget/blob/master/img/Ciri.webp?raw=true" width="220"/>
</p>
<div align='center'>
<em>Fig.2 Ciri Gwent card by Anna Podedworna</em>
</div>
</br>
## What is an LLM
On a high level: imagine a toddler with a huge amount of knowledge but still possessing a toddler-like way of reasoning and understanding.
On a lower level: an LLM is a neural network trained on so-called big data to recognize patterns, generate human-like responses, and predict the most likely next word in a given context. While it can process and recall information efficiently, it lacks true understanding, reasoning, or consciousness, relying only on statistical correlations rather than genuine comprehension. The reasoning of LLMs is being improved in projects like (most notably) DeepSeek, which focus on enhancing the ability to understand context and simulate human-like reasoning.
## Repo organization:
```bash
Cirilla - a LLM made on a budget/
│
├── BERT/ # overview of BERT
│ └── RAG/ # overview of RAG
│
├── cirilla/
│ ├── Cirilla_model/ # implementation of the Cirilla LLM
│ ├── Few_shot/ # Few-shot learning techniques
│ ├── LLM_pieces/ # building blocks of LLMs
│ └── synth_data/ # creating synthetic data
│
├── cirilla_training/ # proper LLM training with the Cirilla package
│
├── Decoder_only_architecture/ # overview of decoder only transformer architecture
│ ├── Llama2/ # implementation of Llama 2 inference loop
│ └── Mistral/ # overview of the Mistral 7B architecture and inference tricks
│
├── DPO/ # overview of Direct Preference Optimization (DPO)
│
├── examples/ # examples how to use this package
│
├── Few_shot/ # overview of Few-shot ML techniques
│
├── KAN/ # overview of Kolmogorov-Arnold Networks (KAN)
│
├── Multimodal/ # overview of Paligemma (VLM)
│
├── Tiny_recursive_model/ # overview of Tiny recursive model (TRM)
│
├── Training_optimizations/
│ ├── FlexAttention/ # overview of Pytorch's FlexAttention
│ ├── HF_kernels/ # overview of HF's kernel hub
│ ├── Mamba/ # overview of Mamba
│ ├── Multi_Token_Prediction/ # overview of MTP
│ └── Optimizer_dusion/ # fusing Pytorch optimizer into the backward pass
│
└── Transformer_from_scratch/ # transformer implementation
├── model.py # transformer model
├── dataset.py # dataset for MLM - masked language modelling
├── train.py # main transformer training loop
└── LongNet.py # LongNet - crude dilated attention implementation
```
## Getting started
### 1. Installing Cirilla
```bash
uv add Cirilla
#or
pip install Cirilla # that's it
```
### 2. Installing Mamba (not required, but recommended)
```bash
uv add Cirilla[mamba]
```
In case there is some problem, try:
```bash
uv pip install --no-cache-dir --no-binary :all: --no-build-isolation mamba-ssm[causal-conv1d]
```
and then
```bash
uv add Cirilla[mamba]
```
To verify that everything works you can try running: `./examples/cirilla_hybrid.py`
## Why Cirilla
Cirilla is a project focused on building **simple and optimized transformer models**. The goal is to give you access to all the modern bells and whistles, like Mixture of Experts (MoE) and [FlexAttention](https://pytorch.org/blog/flexattention/), without requiring you to implement or learn about them from scratch.
### Modular building blocks
Cirilla is organized around reusable transformer components. Each module is implemented in a clean and transparent way, making it easy to experiment, swap, or optimize parts of the model.
*Some highlights:*
- **Hybrid Architecture**: Transformer architecture containing Mamba blocks (similar to [IBM Granite 4.0](https://www.ibm.com/new/announcements/ibm-granite-4-0-hyper-efficient-high-performance-hybrid-models)).
- **Multimodal models**: Similar to [PaliGemma](https://arxiv.org/pdf/2407.07726).
- **Tiny Recursive Model (TRM)**: A simpler recursive reasoning approach to Hierarchical Reasoning Model (HRM).
- **Few-shot ML techniques**: like [ProtoNet](https://arxiv.org/pdf/1703.05175), [MAML](https://arxiv.org/pdf/1703.03400), [Setfit](https://arxiv.org/pdf/2209.11055)
- **Attention mechanisms**: sliding window attention with PyTorch FlexAttention, and non-causal “BERT-like” attention.
- **Rotary Positional Embeddings (RoPE)**: lightweight and efficient PyTorch implementation.
- **Muon optimizer**: optimizer for hidden layers
- **Accelerated Sparse Training**: available with [torchao](https://github.com/pytorch/ao/tree/main/torchao/sparsity/training)
- **From-scratch transformer**: complete implementations including dataset handling, model definition, training loops and checkpointing.
#### LLM blocks - learn where the magic happens
- You can learn about the RMS norm [here](https://github.com/AnthonyP57/Cirilla---a-LLM-made-on-a-budget/tree/master/Decoder_only_architecture#normalization-and-rms-norm)
- RoPE embeddings [here](https://github.com/AnthonyP57/Cirilla---a-LLM-made-on-a-budget/tree/master/Decoder_only_architecture/Llama2#rope)
- Grouped-Query Attention [here](https://github.com/AnthonyP57/Cirilla---a-LLM-made-on-a-budget/tree/master/Decoder_only_architecture#multi-query-attention---mqa)
- Sliding window attention [here](https://github.com/AnthonyP57/Cirilla---a-LLM-made-on-a-budget/tree/master/Decoder_only_architecture/Mistral#sliding-window-attention)
- Rolling buffer cache [here](https://github.com/AnthonyP57/Cirilla---a-LLM-made-on-a-budget/tree/master/Decoder_only_architecture/Mistral#kv-cache-with-rolling-buffer-cache)
- SwiGLU [here](https://github.com/AnthonyP57/Cirilla---a-LLM-made-on-a-budget/tree/master/Decoder_only_architecture#swiglu)
- Mixture of Experts [here](https://github.com/AnthonyP57/Cirilla---a-LLM-made-on-a-budget/tree/master/Decoder_only_architecture/Mistral#sparse-mixture-of-experts)
- BERT models [here](https://github.com/AnthonyP57/Cirilla---a-LLM-made-on-a-budget/tree/master/BERT)
- dropless-MoE (dMoE) [here](https://arxiv.org/abs/2211.15841)
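As a taste of one of these blocks, the core RoPE operation — rotating each 2-D feature pair by a position-dependent angle — can be sketched in plain Python (a conceptual illustration, independent of Cirilla's actual implementation):

```python
import math

def rope(x: list[float], pos: int, base: float = 10000.0) -> list[float]:
    """Apply rotary positional embedding to one token's feature vector.

    Consecutive feature pairs (x[2i], x[2i+1]) are rotated by pos * theta_i,
    where theta_i = base**(-2i/d), as in the RoPE paper.
    """
    d = len(x)
    out = []
    for i in range(0, d, 2):
        theta = pos * base ** (-i / d)  # i = 2 * pair_index, so exponent is -2k/d
        c, s = math.cos(theta), math.sin(theta)
        out.extend([x[i] * c - x[i + 1] * s, x[i] * s + x[i + 1] * c])
    return out
```

Because each pair is only rotated, the vector's norm is unchanged, and the dot product of two rotated vectors depends only on their relative position — the property that makes RoPE attractive for attention.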
### Focus on efficiency
- **Optimized kernels** from [HuggingFace kernel hub](https://huggingface.co/models?other=kernel).
- **Alternative attention mechanisms** for handling longer contexts and specialized training setups.
- **Sparse Mixture of Experts** to scale models without an increase in compute cost.
- **Fused optimizers** that reduce memory usage.
- **FlexAttention** for efficient and sparse attention computation.
### Research + Education
Cirilla explains and integrates ideas from notable papers. This makes it a great resource for:
- **Researchers**, who want to test new variations of transformer models quickly.
- **Practitioners**, who need efficient and flexible code for training on limited hardware.
- **Students and hobbyists**, who want to learn how modern LLMs are built.
### HuggingFace integration
Cirilla models can be easily pushed to and pulled from the HuggingFace Hub, making collaboration, sharing, and deployment straightforward.
### Data generation tools
The repository also provides scripts for **synthetic data generation**, including multi-turn dialogues, reasoning datasets, and domain-specific examples. This allows users to create datasets for fine-tuning and evaluation without relying solely on large, external corpora of questionable quality.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiofiles>=24.1.0",
"attn-gym>=0.0.4",
"bitsandbytes>=0.46.1",
"datasets>=4.0.0",
"ema-pytorch>=0.7.7",
"fandom-py>=0.2.1",
"fuzzywuzzy>=0.18.0",
"huggingface-hub>=0.33.4",
"kernels>=0.11.5",
"mistral-common>=1.8.8",
"mistralai>=1.10.0",
"ollama>=0.5.3",
"openai>=1.90.0",
"polars>=1.35.1",... | [] | [] | [] | [] | uv/0.9.26 {"installer":{"name":"uv","version":"0.9.26","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T17:47:15.783135 | cirilla-1.0.0-py3-none-any.whl | 74,868 | b2/eb/7d0d4a2035630f019f6f2c305fbca7ead02a48f0a8766aeb71082e60771e/cirilla-1.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | d1900c22873fe6fb840257a3f5adafbe | c35d3ffb27884051bbaa499325170e13b7b4528f3989ea12da518346848e8e9e | b2eb7d0d4a2035630f019f6f2c305fbca7ead02a48f0a8766aeb71082e60771e | null | [] | 0 |
2.4 | hindsight-embed | 0.4.13 | Hindsight embedded CLI - local memory operations without a server | # hindsight-embed
Hindsight embedded CLI - local memory operations with automatic daemon management.
This package provides a simple CLI for storing and recalling memories using Hindsight's memory engine. It automatically manages a background daemon for fast operations - no manual server setup required.
## How It Works
`hindsight-embed` uses a background daemon architecture for optimal performance:
1. **First command**: Automatically starts a local daemon (first run downloads dependencies and loads ML models - can take 1-3 minutes)
2. **Subsequent commands**: Near-instant responses (~1-2s) since daemon is already running
3. **Auto-shutdown**: Daemon automatically exits after 5 minutes of inactivity
The daemon runs on `localhost:8888` and uses an embedded PostgreSQL database (pg0) - everything stays local on your machine.
## Installation
```bash
pip install hindsight-embed
# or with uvx (no install needed)
uvx hindsight-embed --help
```
## Quick Start
```bash
# Interactive setup (configures default profile)
hindsight-embed configure
# Or set your LLM API key manually
export OPENAI_API_KEY=sk-...
# Store a memory (bank_id = "default")
hindsight-embed memory retain default "User prefers dark mode"
# Recall memories
hindsight-embed memory recall default "What are user preferences?"
```
All commands use the "default" profile unless you specify a different one with `--profile` or `HINDSIGHT_EMBED_PROFILE`.
## Commands
### configure
Configure the default profile or create/update named profiles:
```bash
# Interactive setup for default profile
hindsight-embed configure
# Create/update named profile with single command
hindsight-embed configure --profile my-app \
--env HINDSIGHT_EMBED_LLM_PROVIDER=openai \
--env HINDSIGHT_EMBED_LLM_API_KEY=sk-xxx
# Create/update named profile interactively
hindsight-embed configure --profile staging
```
This will:
- Let you choose an LLM provider (OpenAI, Groq, Google, Ollama)
- Configure your API key
- Set the model and memory bank ID
- Start the daemon with your configuration
### memory retain
Store a memory:
```bash
hindsight-embed memory retain default "User prefers dark mode"
hindsight-embed memory retain default "Meeting on Monday" --context work
hindsight-embed memory retain myproject "API uses JWT authentication"
```
### memory recall
Search memories:
```bash
hindsight-embed memory recall default "user preferences"
hindsight-embed memory recall default "upcoming events"
```
Use `-o json` for JSON output:
```bash
hindsight-embed memory recall default "user preferences" -o json
```
### memory reflect
Get contextual answers that synthesize multiple memories:
```bash
hindsight-embed memory reflect default "How should I set up the dev environment?"
```
### bank list
List all memory banks:
```bash
hindsight-embed bank list
```
### profile
Manage configuration profiles:
```bash
# List all profiles with status
hindsight-embed profile list
# Show current active profile
hindsight-embed profile show
# Set active profile (persists across commands)
hindsight-embed profile set-active my-app
# Clear active profile (revert to default)
hindsight-embed profile set-active --none
# Delete a profile
hindsight-embed profile delete my-app
```
### daemon
Manage the background daemon:
```bash
hindsight-embed daemon status # Check if daemon is running
hindsight-embed daemon start # Start the daemon
hindsight-embed daemon stop # Stop the daemon
hindsight-embed daemon logs # View last 50 lines of logs
hindsight-embed daemon logs -f # Follow logs in real-time
hindsight-embed daemon logs -n 100 # View last 100 lines
```
## Configuration
### Interactive Setup
Run `hindsight-embed configure` for a guided setup that saves to `~/.hindsight/embed`.
### Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `HINDSIGHT_EMBED_PROFILE` | Profile name to use (overrides active profile) | None (uses default profile) |
| `HINDSIGHT_EMBED_LLM_API_KEY` | LLM API key (or use `OPENAI_API_KEY`) | Required |
| `HINDSIGHT_EMBED_LLM_PROVIDER` | LLM provider (`openai`, `groq`, `google`, `ollama`) | `openai` |
| `HINDSIGHT_EMBED_LLM_MODEL` | LLM model | `gpt-4o-mini` |
| `HINDSIGHT_EMBED_BANK_ID` | Default memory bank ID (optional, used when not specified in CLI) | `default` |
| `HINDSIGHT_EMBED_API_URL` | Use external API server instead of starting local daemon | None (starts local daemon) |
| `HINDSIGHT_EMBED_API_TOKEN` | Authentication token for external API (sent as Bearer token) | None |
| `HINDSIGHT_EMBED_API_DATABASE_URL` | Database URL for daemon | `pg0://hindsight-embed` |
| `HINDSIGHT_EMBED_DAEMON_IDLE_TIMEOUT` | Seconds before daemon auto-exits when idle | `300` |
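For example, to keep the daemon alive longer between commands (the value is illustrative):

```shell
# Keep the daemon running for 30 minutes of inactivity instead of the default 5
export HINDSIGHT_EMBED_DAEMON_IDLE_TIMEOUT=1800
hindsight-embed daemon start
```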
**Using an External API Server:**
To connect to an existing Hindsight API server instead of starting the local daemon:
```bash
export HINDSIGHT_EMBED_API_URL=http://your-server:8000
export HINDSIGHT_EMBED_API_TOKEN=your-api-token # Optional, if API requires auth
hindsight-embed memory recall default "query"
```
**Custom Database:**
To use an external PostgreSQL database instead of the embedded pg0 database (useful when running as root or in containerized environments):
```bash
export HINDSIGHT_EMBED_API_DATABASE_URL=postgresql://user:password@localhost:5432/dbname
hindsight-embed daemon start
```
**Note:** All banks share a single database. Bank isolation happens within the database via the `bank_id` parameter passed to CLI commands.
### Configuration Profiles
Profiles let you maintain multiple independent configurations (e.g., different API endpoints, LLM providers, or projects). Each profile runs its own daemon on a unique port (8889-9888).
**The Default Profile:**
When you run `hindsight-embed configure` without specifying a profile, it configures the "default" profile. This uses the backward-compatible configuration at `~/.hindsight/embed` and runs on port 8888.
**Creating Named Profiles:**
```bash
# Create a profile with single command
hindsight-embed configure --profile my-app \
--env HINDSIGHT_EMBED_LLM_PROVIDER=openai \
--env HINDSIGHT_EMBED_LLM_API_KEY=sk-xxx \
--env HINDSIGHT_EMBED_LLM_MODEL=gpt-4o-mini
# Create a profile interactively
hindsight-embed configure --profile staging
```
**Using Profiles:**
```bash
# Option 1: Environment variable (recommended for apps)
HINDSIGHT_EMBED_PROFILE=my-app hindsight-embed memory retain default "text"
# Option 2: CLI flag
hindsight-embed --profile my-app memory recall default "query"
# Option 3: Set as active (persists across commands)
hindsight-embed profile set-active my-app
hindsight-embed memory recall default "query" # Uses my-app profile
# Clear active profile (revert to default)
hindsight-embed profile set-active --none
```
**Profile Management:**
```bash
# List all profiles with status
hindsight-embed profile list
# Show active profile
hindsight-embed profile show
# Delete a profile
hindsight-embed profile delete my-app
```
**Profile Resolution Priority:**
1. `HINDSIGHT_EMBED_PROFILE` environment variable (highest)
2. `--profile` CLI flag
3. Active profile from `~/.hindsight/active_profile` file
4. Default profile (lowest)
**Note:** If a profile is specified but doesn't exist, the command will fail with an error. Profiles must be explicitly created using `hindsight-embed configure --profile <name>`.
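The resolution order above amounts to a chain of fallbacks, which can be expressed as a tiny sketch (`resolve_profile` is a hypothetical helper for illustration, not part of the CLI's actual code):

```python
import os

def resolve_profile(cli_flag=None, active_file=None):
    # Mirrors the documented priority: env var > --profile flag >
    # ~/.hindsight/active_profile contents > "default"
    return (
        os.environ.get("HINDSIGHT_EMBED_PROFILE")
        or cli_flag
        or active_file
        or "default"
    )
```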
### Files
**Default Profile:**
| Path | Description |
|------|-------------|
| `~/.hindsight/embed` | Configuration file for default profile |
| `~/.hindsight/daemon.log` | Daemon logs for default profile |
| `~/.hindsight/daemon.lock` | Daemon lock file (PID) for default profile |
**Named Profiles:**
| Path | Description |
|------|-------------|
| `~/.hindsight/profiles/<name>.env` | Configuration file for profile |
| `~/.hindsight/profiles/<name>.log` | Daemon logs for profile |
| `~/.hindsight/profiles/<name>.lock` | Daemon lock file (PID) for profile |
| `~/.hindsight/profiles/metadata.json` | Profile metadata (ports, timestamps) |
| `~/.hindsight/active_profile` | Active profile name (when set with `profile set-active`) |
## Use with AI Coding Assistants
This CLI is designed to work with AI coding assistants like Claude Code, Cursor, and Windsurf. Install the Hindsight skill:
```bash
curl -fsSL https://hindsight.vectorize.io/get-skill | bash
```
This will configure the LLM provider and install the skill to your assistant's skills directory.
## Troubleshooting
**Daemon won't start:**
```bash
# Check logs for errors
hindsight-embed daemon logs
# Stop any stuck daemon and restart
hindsight-embed daemon stop
hindsight-embed daemon start
```
**Slow first command:**
This is expected - the first command needs to download dependencies, start the daemon, and load ML models. First run can take 1-3 minutes depending on network speed. Subsequent commands will be fast (~1-2s).
**Change configuration:**
```bash
# Re-run configure (automatically restarts daemon)
hindsight-embed configure
```
## License
Apache 2.0
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"httpx>=0.27.0",
"rich>=13.0.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T17:47:14.426545 | hindsight_embed-0.4.13.tar.gz | 32,766 | 17/c6/ff78c9168d899a8760c4b75c93ae625ab59e3211c8fa9dcbd58d313f5686/hindsight_embed-0.4.13.tar.gz | source | sdist | null | false | 1e370bd0ced97f682b6a1ea18e99ed8c | e5c3a14e9b4ddd2715dfeac74115a3db0268857a3ef7499814fdca5e0099333c | 17c6ff78c9168d899a8760c4b75c93ae625ab59e3211c8fa9dcbd58d313f5686 | null | [] | 423 |
2.4 | hindsight-litellm | 0.4.13 | Universal LLM memory integration via LiteLLM - works with 100+ providers | # hindsight-litellm
Universal LLM memory integration via LiteLLM. Add persistent memory to any LLM application with just a few lines of code.
## Features
- **Universal LLM Support** - Works with 100+ LLM providers via LiteLLM (OpenAI, Anthropic, Groq, Azure, AWS Bedrock, Google Vertex AI, and more)
- **Simple Integration** - Just configure, set defaults, enable, and use `hindsight_litellm.completion()`
- **Automatic Memory Injection** - Relevant memories are injected into prompts before LLM calls
- **Automatic Conversation Storage** - Conversations are stored to Hindsight for future recall (async by default for performance)
- **Two Memory Modes** - Choose between `reflect` (synthesized context) or `recall` (raw memory retrieval)
- **Direct Memory APIs** - Query, synthesize, and store memories manually
- **Native Client Wrappers** - Alternative wrappers for OpenAI and Anthropic SDKs
- **Debug Mode** - Inspect exactly what memories are being injected
- **Async Error Tracking** - Check for background operation failures with `get_pending_retain_errors()`
## Installation
```bash
pip install hindsight-litellm
```
## Quick Start
```python
import hindsight_litellm
# Step 1: Configure static settings
hindsight_litellm.configure(
hindsight_api_url="http://localhost:8888",
verbose=True,
)
# Step 2: Set defaults (bank_id is required)
hindsight_litellm.set_defaults(
bank_id="my-agent",
use_reflect=True, # Use reflect for synthesized context
)
# Step 3: Enable memory integration
hindsight_litellm.enable()
# Step 4: Use with explicit hindsight_query (required when inject_memories=True)
response = hindsight_litellm.completion(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "What did we discuss about AI?"}],
hindsight_query="What do I know about AI discussions?", # Required!
)
```
**Important:** When `inject_memories=True` (default), you must provide `hindsight_query` to specify what to search for in memory. This ensures intentional, focused memory queries.
## How It Works
Here's what happens under the hood when you call `completion()`:
```
┌─────────────────────────────────────────────────────────────────────────────┐
│ 1. YOUR CODE │
│ ───────────────────────────────────────────────────────────────────────── │
│ response = hindsight_litellm.completion( │
│ model="gpt-4o-mini", │
│ messages=[{"role": "user", "content": "Help me with my Python project"}]│
│ ) │
└─────────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ 2. MEMORY RETRIEVAL (before LLM call) │
│ ───────────────────────────────────────────────────────────────────────── │
│ # hindsight_litellm queries Hindsight for relevant memories │
│ │
│ # If use_reflect=False (default) - raw memories: │
│ memories = hindsight.recall(query="Help me with my Python project") │
│ # Returns: ["User prefers pytest", "User is building a FastAPI app", ...] │
│ │
│ # If use_reflect=True - synthesized context: │
│ context = hindsight.reflect(query="Help me with my Python project") │
│ # Returns: "The user is an experienced Python developer working on..." │
└─────────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ 3. PROMPT INJECTION │
│ ───────────────────────────────────────────────────────────────────────── │
│ # Memories are injected into the system message: │
│ │
│ messages = [ │
│ {"role": "system", "content": """ │
│ # Relevant Memories │
│ 1. [WORLD] User prefers pytest for testing │
│ 2. [WORLD] User is building a FastAPI app │
│ 3. [OPINION] User likes type hints │
│ """}, │
│ {"role": "user", "content": "Help me with my Python project"} │
│ ] │
└─────────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ 4. LLM CALL │
│ ───────────────────────────────────────────────────────────────────────── │
│ # The enriched prompt is sent to the LLM │
│ response = litellm.completion(model="gpt-4o-mini", messages=messages) │
│ │
│ # LLM now has context and can give personalized responses like: │
│ # "Since you're working on your FastAPI app, here's how to add tests │
│ # with pytest..." │
└─────────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ 5. CONVERSATION STORAGE (after LLM call) │
│ ───────────────────────────────────────────────────────────────────────── │
│ # The conversation is stored to Hindsight for future recall │
│ hindsight.retain( │
│ content="User: Help me with my Python project\n" │
│ "Assistant: Since you're working on FastAPI..." │
│ ) │
│ # Hindsight extracts facts: "User asked about Python project help" │
└─────────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ 6. RESPONSE RETURNED │
│ ───────────────────────────────────────────────────────────────────────── │
│ # You receive the response as normal │
│ print(response.choices[0].message.content) │
└─────────────────────────────────────────────────────────────────────────────┘
```
The memory injection and storage happen automatically - you just use `completion()` as normal.
## Configuration Options
The API is split into two functions for clarity:
### 1. `configure()` - Static Settings
Settings that typically don't change during a session:
```python
hindsight_litellm.configure(
# Required
hindsight_api_url="http://localhost:8888", # Hindsight API server URL
# Optional - Authentication
api_key="your-api-key", # API key for Hindsight authentication
# Optional - Memory behavior
store_conversations=True, # Store conversations after LLM calls
inject_memories=True, # Inject relevant memories into prompts
sync_storage=False, # False = async storage (default, better performance)
# True = sync storage (blocks, raises errors immediately)
# Optional - Advanced
injection_mode="system_message", # How to inject: "system_message" or "prepend_user"
excluded_models=["gpt-3.5*"], # Exclude certain models from interception
verbose=True, # Enable verbose logging and debug info
)
```
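The two `injection_mode` options can be sketched roughly as follows (behavior inferred from the docs; these helpers are hypothetical illustrations, not part of the package API):

```python
def inject_system_message(messages, memory_context):
    # Sketch of the "system_message" mode: memory context is merged into
    # (or becomes) the system message at the front of the conversation.
    header = "# Relevant Memories\n" + memory_context
    if messages and messages[0]["role"] == "system":
        merged = dict(messages[0], content=header + "\n\n" + messages[0]["content"])
        return [merged] + messages[1:]
    return [{"role": "system", "content": header}] + messages

def inject_prepend_user(messages, memory_context):
    # Sketch of the "prepend_user" mode: memory context is prefixed to the
    # first user message instead of touching the system message.
    out = [dict(m) for m in messages]
    for m in out:
        if m["role"] == "user":
            m["content"] = memory_context + "\n\n" + m["content"]
            break
    return out
```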
### 2. `set_defaults()` - Per-Call Defaults
Default values for per-call settings. These can be overridden on individual calls using `hindsight_*` kwargs:
```python
hindsight_litellm.set_defaults(
# Required
bank_id="my-agent", # Memory bank ID
# Optional - Memory retrieval
budget="mid", # Budget level: "low", "mid", "high"
fact_types=["world", "opinion"], # Filter fact types to retrieve
max_memories=10, # Maximum memories to inject (None = unlimited)
max_memory_tokens=4096, # Maximum tokens for memory context
include_entities=True, # Include entity observations in recall
# Optional - Reflect mode
use_reflect=True, # Use reflect API (synthesized) vs recall (raw memories)
reflect_include_facts=False, # Include source facts in debug info
reflect_context="I am a delivery agent finding recipients.", # Context for reflect reasoning
reflect_response_schema={...}, # JSON Schema for structured reflect output
# Optional - Debugging
trace=False, # Enable trace info for debugging
document_id="conversation-1", # Document ID for grouping conversations
)
```
### 3. Per-Call Overrides
Override any default on individual calls using `hindsight_*` kwargs:
```python
response = hindsight_litellm.completion(
model="gpt-4o-mini",
messages=[...],
hindsight_query="Where is Alice located?", # REQUIRED when inject_memories=True
hindsight_reflect_context="Currently on floor 3", # Per-call reflect context override
# hindsight_bank_id="other-bank", # Override bank_id for this call
)
```
### Bank Configuration: mission
Use `set_bank_mission()` to configure what the memory bank should learn and remember (used for mental models):
```python
hindsight_litellm.set_bank_mission(
mission="""This agent routes customer support requests to the appropriate team.
Remember which types of issues should go to which teams (billing, technical, sales).
Track customer preferences for communication channels and past issue resolutions.""",
name="Customer Support Router", # Optional display name
)
```
### Memory Modes: Reflect vs Recall
- **Recall mode** (`use_reflect=False`, default): Retrieves raw memory facts and injects them as a numbered list. Best when you need precise, individual memories.
- **Reflect mode** (`use_reflect=True`): Synthesizes memories into a coherent context paragraph. Best for natural, conversational memory context.
```python
# Recall mode - raw memories
hindsight_litellm.set_defaults(bank_id="my-agent", use_reflect=False)
# Injects: "1. [WORLD] User prefers Python\n2. [OPINION] User dislikes Java..."
# Reflect mode - synthesized context
hindsight_litellm.set_defaults(bank_id="my-agent", use_reflect=True)
# Injects: "Based on previous conversations, the user is a Python developer who..."
# Reflect with context - shapes LLM reasoning (not retrieval)
hindsight_litellm.set_defaults(
bank_id="my-agent",
use_reflect=True,
reflect_context="I am a delivery agent looking for package recipients.",
)
```
## Multi-Provider Support
Works with any LiteLLM-supported provider:
```python
import hindsight_litellm
hindsight_litellm.configure(hindsight_api_url="http://localhost:8888")
hindsight_litellm.set_defaults(bank_id="my-agent")
hindsight_litellm.enable()
messages = [{"role": "user", "content": "Hello!"}]
# OpenAI
hindsight_litellm.completion(model="gpt-4o", messages=messages, hindsight_query="greeting")
# Anthropic
hindsight_litellm.completion(model="claude-3-5-sonnet-20241022", messages=messages, hindsight_query="greeting")
# Groq
hindsight_litellm.completion(model="groq/llama-3.1-70b-versatile", messages=messages, hindsight_query="greeting")
# Azure OpenAI
hindsight_litellm.completion(model="azure/gpt-4", messages=messages, hindsight_query="greeting")
# AWS Bedrock
hindsight_litellm.completion(model="bedrock/anthropic.claude-3", messages=messages, hindsight_query="greeting")
# Google Vertex AI
hindsight_litellm.completion(model="vertex_ai/gemini-pro", messages=messages, hindsight_query="greeting")
```
## Direct Memory APIs
### Recall - Query raw memories
```python
from hindsight_litellm import configure, set_defaults, recall
configure(hindsight_api_url="http://localhost:8888")
set_defaults(bank_id="my-agent")
# Query memories
memories = recall("what projects am I working on?", budget="mid")
for m in memories:
print(f"- [{m.fact_type}] {m.text}")
# Output:
# - [world] User is building a FastAPI project
# - [opinion] User prefers Python over JavaScript
```
### Reflect - Get synthesized context
```python
from hindsight_litellm import configure, set_defaults, reflect
configure(hindsight_api_url="http://localhost:8888")
set_defaults(bank_id="my-agent")
# Get synthesized memory context
result = reflect("what do you know about the user's preferences?")
print(result.text)
# Output:
# "Based on our conversations, the user prefers Python for backend development..."
# With context to shape the response (doesn't affect retrieval)
result = reflect(
query="what do I know about Alice?",
context="I am a delivery agent looking for package recipients.",
)
```
### Retain - Store memories
```python
from hindsight_litellm import configure, set_defaults, retain, get_pending_retain_errors
configure(hindsight_api_url="http://localhost:8888")
set_defaults(bank_id="my-agent")
# Async retain (default) - fast, non-blocking
# Returns immediately; actual storage happens in background
result = retain(
content="User mentioned they're working on a machine learning project",
context="Discussion about current projects",
)
# result.success is True immediately (actual errors collected separately)
# Sync retain - blocks until complete, raises errors immediately
result = retain(
content="Critical information that must be stored",
context="Important data",
sync=True, # Block until storage completes
)
# Check for async retain errors (call periodically)
errors = get_pending_retain_errors()
if errors:
for e in errors:
print(f"Background retain failed: {e}")
```
### Async APIs
```python
from hindsight_litellm import arecall, areflect, aretain
# Async versions of all memory APIs
memories = await arecall("what do you know about me?")
context = await areflect("summarize user preferences")
result = await aretain(content="New information to remember")
```
## Native Client Wrappers
Alternative to LiteLLM callbacks for direct SDK integration:
### OpenAI Wrapper
```python
from openai import OpenAI
from hindsight_litellm import wrap_openai
client = OpenAI()
wrapped = wrap_openai(
client,
bank_id="my-agent",
hindsight_api_url="http://localhost:8888",
)
response = wrapped.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": "What do you know about me?"}]
)
```
### Anthropic Wrapper
```python
from anthropic import Anthropic
from hindsight_litellm import wrap_anthropic
client = Anthropic()
wrapped = wrap_anthropic(
client,
bank_id="my-agent",
hindsight_api_url="http://localhost:8888",
)
response = wrapped.messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
messages=[{"role": "user", "content": "Hello!"}]
)
```
## Debug Mode
When `verbose=True`, you can inspect exactly what memories are being injected:
```python
from hindsight_litellm import configure, set_defaults, enable, completion, get_last_injection_debug
configure(hindsight_api_url="http://localhost:8888", verbose=True)
set_defaults(bank_id="my-agent", use_reflect=True)
enable()
response = completion(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "What's my favorite color?"}],
hindsight_query="What is the user's favorite color?",
)
# Inspect what was injected
debug = get_last_injection_debug()
if debug:
print(f"Mode: {debug.mode}") # "reflect" or "recall"
print(f"Injected: {debug.injected}") # True/False
print(f"Results: {debug.results_count}")
print(f"Memory context:\n{debug.memory_context}")
if debug.error:
print(f"Error: {debug.error}")
```
## Context Manager
```python
from hindsight_litellm import hindsight_memory
import litellm
with hindsight_memory(bank_id="user-123"):
response = litellm.completion(
model="gpt-4",
messages=[{"role": "user", "content": "Hello!"}],
hindsight_query="greeting context",
)
# Memory integration automatically disabled after context
```
## Disabling and Cleanup
```python
from hindsight_litellm import disable, cleanup
# Temporarily disable memory integration
disable()
# Clean up all resources (call when shutting down)
cleanup()
```
## API Reference
### Main Functions
| Function | Description |
|----------|-------------|
| `configure(...)` | Configure static Hindsight settings (API URL, auth, storage options) |
| `set_defaults(...)` | Set defaults for per-call settings (bank_id, budget, reflect options) |
| `enable()` | Enable memory integration with LiteLLM |
| `disable()` | Disable memory integration |
| `is_enabled()` | Check if memory integration is enabled |
| `cleanup()` | Clean up all resources |
### Configuration Functions
| Function | Description |
|----------|-------------|
| `get_config()` | Get current static configuration |
| `get_defaults()` | Get current per-call defaults |
| `is_configured()` | Check if Hindsight is configured with a bank_id |
| `reset_config()` | Reset all configuration to defaults |
| `set_document_id(id)` | Convenience function to update document_id |
| `set_bank_mission(...)` | Set mission/instructions for a memory bank (for mental models) |
### Memory Functions
| Function | Description |
|----------|-------------|
| `recall(query, ...)` | Query raw memories (sync) |
| `arecall(query, ...)` | Query raw memories (async) |
| `reflect(query, ...)` | Get synthesized memory context (sync) |
| `areflect(query, ...)` | Get synthesized memory context (async) |
| `retain(content, sync=False, ...)` | Store a memory (async by default, use `sync=True` to block) |
| `aretain(content, ...)` | Store a memory (async) |
### Error Tracking Functions
| Function | Description |
|----------|-------------|
| `get_pending_retain_errors()` | Get and clear errors from background retain operations |
| `get_pending_storage_errors()` | Get and clear errors from background conversation storage |
### Debug Functions
| Function | Description |
|----------|-------------|
| `get_last_injection_debug()` | Get debug info from last memory injection |
| `clear_injection_debug()` | Clear stored debug info |
### Client Wrappers
| Function | Description |
|----------|-------------|
| `wrap_openai(client, ...)` | Wrap OpenAI client with memory |
| `wrap_anthropic(client, ...)` | Wrap Anthropic client with memory |
## Requirements
- Python >= 3.10
- litellm >= 1.40.0
- A running Hindsight API server
## License
MIT
| text/markdown | null | Vectorize <support@vectorize.io> | null | null | MIT | agents, ai, anthropic, groq, hindsight, langchain, litellm, llm, memory, openai | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engin... | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp>=3.13.3",
"filelock>=3.20.3",
"litellm>=1.40.0",
"urllib3>=2.6.3",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-mock>=3.10.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/vectorize-io/hindsight",
"Documentation, https://github.com/vectorize-io/hindsight/tree/main/hindsight-integrations/litellm",
"Repository, https://github.com/vectorize-io/hindsight"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T17:47:06.013129 | hindsight_litellm-0.4.13.tar.gz | 190,195 | 7f/24/13781e4da0199d48107f60d9a7d5e9862e277f64775ebbb249430278b337/hindsight_litellm-0.4.13.tar.gz | source | sdist | null | false | b7959eadeceab0840dae1b7947375d3b | b88b63b71cf44cdc6b691aa6d9313c155f0e1f99f3d269fd8639968d5bfa56ca | 7f2413781e4da0199d48107f60d9a7d5e9862e277f64775ebbb249430278b337 | null | [] | 235 |
2.4 | hindsight-all | 0.4.13 | Hindsight: Agent Memory That Works Like Human Memory - All-in-One Bundle | # hindsight-all
All-in-one package for Hindsight - Agent Memory That Works Like Human Memory
## Quick Start
```python
from hindsight import start_server, HindsightClient
# Start server with embedded PostgreSQL
server = start_server(
llm_provider="groq",
llm_api_key="your-api-key",
llm_model="openai/gpt-oss-120b"
)
# Create client
client = HindsightClient(base_url=server.url)
# Store memories
client.put(agent_id="assistant", content="User prefers Python for data analysis")
# Search memories
results = client.search(agent_id="assistant", query="programming preferences")
# Generate contextual response
response = client.think(agent_id="assistant", query="What languages should I recommend?")
# Stop server when done
server.stop()
```
## Using Context Manager
```python
from hindsight import HindsightServer, HindsightClient
with HindsightServer(llm_provider="groq", llm_api_key="...") as server:
client = HindsightClient(base_url=server.url)
# ... use client ...
# Server automatically stops
```
## Installation
```bash
pip install hindsight-all
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"hindsight-api>=0.0.7",
"hindsight-client>=0.0.7",
"hindsight-embed>=0.1.0",
"pytest-asyncio>=0.21.0; extra == \"test\"",
"pytest>=7.0.0; extra == \"test\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T17:46:57.873730 | hindsight_all-0.4.13.tar.gz | 15,472 | d0/b3/ed3a12b153b98f6b245eb0becff8adfa4e2f3bed7689433a8f11f8f0d643/hindsight_all-0.4.13.tar.gz | source | sdist | null | false | 10e708edbb24e2ef15a7179c602eccbc | 97307afffaf6296703151b1bd78554ed7c71f919b5c5f4305f8fd62bac5d0967 | d0b3ed3a12b153b98f6b245eb0becff8adfa4e2f3bed7689433a8f11f8f0d643 | null | [] | 244 |
2.4 | fw-storage | 3.8.6 | Unified storage interface. | # fw-storage
Unified file storage interface tuned for simple filtering, memory efficiency, and
performance to support processing large datasets in Flywheel imports and exports.
Supported storage backends:
- `fs://` - Local file-system
- `s3://` - Amazon S3
- `gs://` - Google Cloud Storage
- `az://` - Azure Blob Storage
## Installation
Add as a `poetry` dependency to your project:
```bash
poetry add fw-storage
```
## Usage
```python
from fw_storage import create_storage_client
# instantiate storage with URL
fs = create_storage_client("fs:///tmp")
# set objects from bytes, filepaths or open files
fs.set("test/file1.dat", b"content")
fs.set("test/file2.dat", "/tmp/test/file1.dat")
fs.set("test/file3.dat", open("/tmp/test/file2.dat", "rb"))
# list objects, filtering with expressions
files = list(fs.ls("test", include=["size<1kB"], exclude=["path!~file3"]))
len(files) == 2
# get object info with path, size, created and modified
info = fs.stat("test/file1.dat")
info.size == 7
# read object contents
file = fs.get("test/file1.dat")
file.read() == b"content"
# remove one or more objects
fs.rm("test", recurse=True)
```
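The `include`/`exclude` expressions above combine a field, an operator, and a value. A rough idea of how such an expression could be parsed, as a hypothetical sketch (fw-storage's actual filter grammar may differ):

```python
import re

# Hypothetical parser for expressions like "size<1kB" or "path!~file3";
# operators and units are assumptions, not fw-storage's real grammar.
UNITS = {"B": 1, "kB": 10**3, "MB": 10**6, "GB": 10**9}
EXPR = re.compile(r"^(\w+)(<=|>=|!~|=~|<|>|=)(.+)$")

def parse_filter(expr: str):
    m = EXPR.match(expr)
    if not m:
        raise ValueError(f"bad filter expression: {expr!r}")
    field, op, raw = m.groups()
    # treat values with a size suffix as byte counts
    size = re.fullmatch(r"(\d+)(B|kB|MB|GB)", raw)
    value = int(size.group(1)) * UNITS[size.group(2)] if size else raw
    return field, op, value

parse_filter("size<1kB")     # ("size", "<", 1000)
parse_filter("path!~file3")  # ("path", "!~", "file3")
```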
## Configuration
Credentials for cloud storage providers are loaded using the vendor SDKs to
support every standard config file location and environment variable recommended
by the provider:
| Storage | Config docs |
| ------- | --------------------- |
| `s3://` | [Boto][boto-docs] |
| `gs://` | [Google][google-docs] |
| `az://` | [Azure][azure-docs] |
In addition, `az://` can be configured with the envvar `AZ_ACCESS_KEY`.
[boto-docs]: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
[google-docs]: https://google-auth.readthedocs.io/en/latest/reference/google.auth.html
[azure-docs]: https://docs.microsoft.com/en-us/python/api/azure-identity/azure.identity.defaultazurecredential
## Development
Install the project using `poetry` and enable `pre-commit`:
```bash
poetry install -E all
pre-commit install
```
## License
[](LICENSE)
| text/markdown | null | Flywheel <support@flywheel.io> | null | null | null | Flywheel, file, object, storage | [
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"azure-identity<2,>=1.11.0",
"azure-storage-blob>12.22.0",
"boto3<2,>=1.17.7",
"fw-utils>=4.2.2",
"google-cloud-storage<4,>=3.1.0",
"pydantic<3,>=2.3.0",
"typing-extensions<5,>=4.9.0"
] | [] | [] | [] | [
"Repository, https://gitlab.com/flywheel-io/tools/lib/fw-storage",
"Documentation, https://gitlab.com/flywheel-io/tools/lib/fw-storage"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Alpine Linux","version":"3.24.0_alpha20260127","id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T17:46:53.161709 | fw_storage-3.8.6-py3-none-any.whl | 35,916 | e2/17/556d3b599f21ff1a0ac4b7b2258a89dc069b0e7d049a801198c51f244bc5/fw_storage-3.8.6-py3-none-any.whl | py3 | bdist_wheel | null | false | 262f2b479d41c5446e7c6604d79a08d4 | bd62b7e298d8cc6e9de94290ba41db878f0e33ab9b518dbea07b57479a4d2ff1 | e217556d3b599f21ff1a0ac4b7b2258a89dc069b0e7d049a801198c51f244bc5 | MIT | [
"LICENSE"
] | 262 |