Release History
===============
dev
---
- \[Short description of non-trivial change.\]
2.31.0 (2023-05-22)
-------------------
**Security**
- Versions of Requests between v2.3.0 and v2.30.0 are vulnerable to potential
forwarding of `Proxy-Authorization` headers to destination servers when
following HTTPS redirects.
When proxies are defined with user info (https://user:pass@proxy:8080), Requests
will construct a `Proxy-Authorization` header that is attached to the request to
authenticate with the proxy.
In cases where Requests receives a redirect response, it previously reattached
the `Proxy-Authorization` header incorrectly, resulting in the value being
sent through the tunneled connection to the destination server. Users who rely on
defining their proxy credentials in the URL are *strongly* encouraged to upgrade
to Requests 2.31.0+ to prevent unintentional leakage and rotate their proxy
credentials once the change has been fully deployed.
Users who do not use a proxy or do not supply their proxy credentials through
the user information portion of their proxy URL are not subject to this
vulnerability.
Full details can be read in our [Github Security Advisory](https://github.com/psf/requests/security/advisories/GHSA-j8r2-6x86-q33q)
and [CVE-2023-32681](https://nvd.nist.gov/vuln/detail/CVE-2023-32681).
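An affected configuration looks like the following minimal sketch (the proxy hostname and credentials are hypothetical placeholders):

```python
# A proxy URL carrying credentials in its user-info portion; Requests
# derives a Proxy-Authorization header from these credentials. Under the
# vulnerable versions, that header could be re-sent to the redirect
# target over the tunneled connection.
proxies = {
    "https": "https://user:pass@proxy.example.com:8080",
}

# Hypothetical usage (commented out to avoid a live network call):
# requests.get("https://example.org", proxies=proxies)
```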
2.30.0 (2023-05-03)
-------------------
**Dependencies**
- ⚠️ Added support for urllib3 2.0. ⚠️
This may contain minor breaking changes so we advise careful testing and
reviewing https://urllib3.readthedocs.io/en/latest/v2-migration-guide.html
prior to upgrading.
Users who wish to stay on urllib3 1.x can pin to `urllib3<2`.
2.29.0 (2023-04-26)
-------------------
**Improvements**
- Requests now defers chunked requests to the urllib3 implementation to improve
standardization. (#6226)
- Requests relaxes header component requirements to support bytes/str subclasses. (#6356)
2.28.2 (2023-01-12)
-------------------
**Dependencies**
- Requests now supports charset\_normalizer 3.x. (#6261)
**Bugfixes**
- Updated MissingSchema exception to suggest https scheme rather than http. (#6188)
2.28.1 (2022-06-29)
-------------------
**Improvements**
- Speed optimization in `iter_content` with transition to `yield from`. (#6170)
**Dependencies**
- Added support for chardet 5.0.0 (#6179)
- Added support for charset-normalizer 2.1.0 (#6169)
2.28.0 (2022-06-09)
-------------------
**Deprecations**
- ⚠️ Requests has officially dropped support for Python 2.7. ⚠️ (#6091)
- Requests has officially dropped support for Python 3.6 (including pypy3.6). (#6091)
**Improvements**
- Wrap JSON parsing issues in Requests' JSONDecodeError for payloads without
  an encoding to make the `json()` API consistent. (#6097)
- Parse header components consistently, raising an InvalidHeader error in
all invalid cases. (#6154)
- Added provisional 3.11 support with current beta build. (#6155)
- Requests got a makeover and we decided to paint it black. (#6095)
**Bugfixes**
- Fixed bug where setting `CURL_CA_BUNDLE` to an empty string would disable
cert verification. All Requests 2.x versions before 2.28.0 are affected. (#6074)
- Fixed urllib3 exception leak, wrapping `urllib3.exceptions.SSLError` with
`requests.exceptions.SSLError` for `content` and `iter_content`. (#6057)
- Fixed issue where invalid Windows registry entries caused proxy resolution
to raise an exception rather than ignoring the entry. (#6149)
- Fixed issue where entire payload could be included in the error message for
JSONDecodeError. (#6036)
2.27.1 (2022-01-05)
-------------------
**Bugfixes**
- Fixed parsing issue that resulted in the `auth` component being
dropped from proxy URLs. (#6028)
2.27.0 (2022-01-03)
-------------------
**Improvements**
- Officially added support for Python 3.10. (#5928)
- Added a `requests.exceptions.JSONDecodeError` to unify JSON exceptions between
Python 2 and 3. This gets raised in the `response.json()` method, and is
backwards compatible as it inherits from previously thrown exceptions.
Can be caught from `requests.exceptions.RequestException` as well. (#5856)
- Improved error text for misnamed `InvalidSchema` and `MissingSchema`
exceptions. This is a temporary fix until exceptions can be renamed
(Schema->Scheme). (#6017)
- Improved proxy parsing for proxy URLs missing a scheme. This will address
recent changes to `urlparse` in Python 3.9+. (#5917)
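The new `JSONDecodeError` above can be caught through any of the exceptions it inherits from, which is what makes it backwards compatible. A minimal check, assuming Requests 2.27+ is installed:

```python
from requests.exceptions import JSONDecodeError, RequestException

# JSONDecodeError inherits from RequestException and (via the stdlib
# compatibility class) from ValueError, so pre-existing except clauses
# written against either parent keep working unchanged.
compatible = (
    issubclass(JSONDecodeError, RequestException),
    issubclass(JSONDecodeError, ValueError),
)
print(compatible)  # (True, True)
```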
**Bugfixes**
- Fixed defect in `extract_zipped_paths` which could result in an infinite loop
for some paths. (#5851)
- Fixed handling for `AttributeError` when calculating length of files obtained
  by `TarFile.extractfile()`. (#5239)
- Fixed urllib3 exception leak, wrapping `urllib3.exceptions.InvalidHeader` with
`requests.exceptions.InvalidHeader`. (#5914)
- Fixed bug where two Host headers were sent for chunked requests. (#5391)
- Fixed regression in Requests 2.26.0 where `Proxy-Authorization` was
incorrectly stripped from all requests sent with `Session.send`. (#5924)
- Fixed performance regression in 2.26.0 for hosts with a large number of
proxies available in the environment. (#5924)
- Fixed idna exception leak, wrapping `UnicodeError` with
`requests.exceptions.InvalidURL` for URLs with a leading dot (.) in the
domain. (#5414)
**Deprecations**
- Requests support for Python 2.7 and 3.6 will be ending in 2022. While we
don't have exact dates, Requests 2.27.x is likely to be the last release
series providing support.
2.26.0 (2021-07-13)
-------------------
**Improvements**
- Requests now supports Brotli compression, if either the `brotli` or
`brotlicffi` package is installed. (#5783)
- `Session.send` now correctly resolves proxy configurations from both
the Session and Request. Behavior now matches `Session.request`. (#5681)
**Bugfixes**
- Fixed a race condition in zip extraction when using Requests in parallel
from zip archive. (#5707)
**Dependencies**
- Instead of `chardet`, use the MIT-licensed `charset_normalizer` for Python3
to remove license ambiguity for projects bundling requests. If `chardet`
is already installed on your machine it will be used instead of `charset_normalizer`
to keep backwards compatibility. (#5797)
You can also install `chardet` while installing requests by
specifying `[use_chardet_on_py3]` extra as follows:
```shell
pip install "requests[use_chardet_on_py3]"
```
Python2 still depends upon the `chardet` module.
- Requests now supports `idna` 3.x on Python 3. `idna` 2.x will continue to
be used on Python 2 installations. (#5711)
**Deprecations**
- The `requests[security]` extra has been converted to a no-op install.
PyOpenSSL is no longer the recommended secure option for Requests. (#5867)
- Requests has officially dropped support for Python 3.5. (#5867)
2.25.1 (2020-12-16)
-------------------
**Bugfixes**
- Requests now treats `application/json` as `utf8` by default, resolving
  inconsistencies between `r.text` and `r.json` output. (#5673)
**Dependencies**
- Requests now supports chardet v4.x.
2.25.0 (2020-11-11)
-------------------
**Improvements**
- Added support for NETRC environment variable. (#5643)
**Dependencies**
- Requests now supports urllib3 v1.26.
**Deprecations**
- Requests v2.25.x will be the last release series with support for Python 3.5.
- The `requests[security]` extra is officially deprecated and will be removed
in Requests v2.26.0.
2.24.0 (2020-06-17)
-------------------
**Improvements**
- pyOpenSSL TLS implementation is now only used if Python
either doesn't have an `ssl` module or doesn't support
SNI. Previously pyOpenSSL was unconditionally used if available.
This applies even if pyOpenSSL is installed via the
`requests[security]` extra (#5443)
- Redirect resolution should now only occur when
`allow_redirects` is True. (#5492)
- No longer perform unnecessary Content-Length calculation for
requests that won't use it. (#5496)
2.23.0 (2020-02-19)
-------------------
**Improvements**
- Remove defunct reference to `prefetch` in Session `__attrs__` (#5110)
**Bugfixes**
- Requests no longer outputs password in basic auth usage warning. (#5099)
**Dependencies**
- Pinning for `chardet` and `idna` now uses major version instead of minor.
This hopefully reduces the need for releases every time a dependency is updated.
2.22.0 (2019-05-15)
-------------------
**Dependencies**
- Requests now supports urllib3 v1.25.2.
(note: 1.25.0 and 1.25.1 are incompatible)
**Deprecations**
- Requests has officially stopped support for Python 3.4.
2.21.0 (2018-12-10)
-------------------
**Dependencies**
- Requests now supports idna v2.8.
2.20.1 (2018-11-08)
-------------------
**Bugfixes**
- Fixed bug with unintended Authorization header stripping for
redirects using default ports (http/80, https/443).
2.20.0 (2018-10-18)
-------------------
**Bugfixes**
- Content-Type header parsing is now case-insensitive (e.g.
  charset=utf8 vs. Charset=utf8).
- Fixed exception leak where certain redirect urls would raise
uncaught urllib3 exceptions.
- Requests removes Authorization header from requests redirected
from https to http on the same hostname. (CVE-2018-18074)
- `should_bypass_proxies` now handles URIs without hostnames (e.g.
files).
**Dependencies**
- Requests now supports urllib3 v1.24.
**Deprecations**
- Requests has officially stopped support for Python 2.6.
2.19.1 (2018-06-14)
-------------------
**Bugfixes**
- Fixed issue where status\_codes.py's `init` function failed trying
to append to a `__doc__` value of `None`.
2.19.0 (2018-06-12)
-------------------
**Improvements**
- Warn user about possible slowdown when using cryptography version
< 1.3.4
- Check for invalid host in proxy URL, before forwarding request to
adapter.
- Fragments are now properly maintained across redirects. (RFC7231
7.1.2)
- Removed use of cgi module to expedite library load time.
- Added support for SHA-256 and SHA-512 digest auth algorithms.
- Minor performance improvement to `Request.content`.
- Migrate to using collections.abc for 3.7 compatibility.
**Bugfixes**
- Parsing empty `Link` headers with `parse_header_links()` no longer
  returns one bogus entry.
- Fixed issue where loading the default certificate bundle from a zip
archive would raise an `IOError`.
- Fixed issue with unexpected `ImportError` on Windows systems which do
  not support the `winreg` module.
- DNS resolution in proxy bypass no longer includes the username and
password in the request. This also fixes the issue of DNS queries
failing on macOS.
- Properly normalize adapter prefixes for url comparison.
- Passing `None` as a file pointer to the `files` param no longer
raises an exception.
- Calling `copy` on a `RequestsCookieJar` will now preserve the cookie
policy correctly.
**Dependencies**
- We now support idna v2.7.
- We now support urllib3 v1.23.
2.18.4 (2017-08-15)
-------------------
**Improvements**
- Error messages for invalid headers now include the header name for
easier debugging
**Dependencies**
- We now support idna v2.6.
2.18.3 (2017-08-02)
-------------------
**Improvements**
- Running `$ python -m requests.help` now includes the installed
version of idna.
**Bugfixes**
- Fixed issue where Requests would raise `ConnectionError` instead of
`SSLError` when encountering SSL problems when using urllib3 v1.22.
2.18.2 (2017-07-25)
-------------------
**Bugfixes**
- `requests.help` no longer fails on Python 2.6 due to the absence of
`ssl.OPENSSL_VERSION_NUMBER`.
**Dependencies**
- We now support urllib3 v1.22.
2.18.1 (2017-06-14)
-------------------
**Bugfixes**
- Fix an error in the packaging whereby the `*.whl` contained
incorrect data that regressed the fix in v2.17.3.
2.18.0 (2017-06-14)
-------------------
**Improvements**
- `Response` is now a context manager, so can be used directly in a
`with` statement without first having to be wrapped by
`contextlib.closing()`.
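A quick sketch of the new usage, with the live request commented out (hypothetical URL) and the context-manager protocol verified directly:

```python
import requests

# Response now implements __enter__/__exit__, so it can be used directly:
#
#     with requests.get("https://example.org", stream=True) as r:
#         ...  # connection is released when the block exits
#
is_context_manager = hasattr(requests.Response, "__enter__") and hasattr(
    requests.Response, "__exit__"
)
print(is_context_manager)  # True
```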
**Bugfixes**
- Resolve installation failure if multiprocessing is not available
- Resolve tests crash if multiprocessing is not able to determine the
number of CPU cores
- Resolve error swallowing in utils set\_environ generator
2.17.3 (2017-05-29)
-------------------
**Improvements**
- Improved `packages` namespace identity support, for monkeypatching
libraries.
2.17.2 (2017-05-29)
-------------------
**Improvements**
- Improved `packages` namespace identity support, for monkeypatching
libraries.
2.17.1 (2017-05-29)
-------------------
**Improvements**
- Improved `packages` namespace identity support, for monkeypatching
libraries.
2.17.0 (2017-05-29)
-------------------
**Improvements**
- Removal of the 301 redirect cache. This improves thread-safety.
2.16.5 (2017-05-28)
-------------------
- Improvements to `$ python -m requests.help`.
2.16.4 (2017-05-27)
-------------------
- Introduction of the `$ python -m requests.help` command, for
debugging with maintainers!
2.16.3 (2017-05-27)
-------------------
- Further restored the `requests.packages` namespace for compatibility
reasons.
2.16.2 (2017-05-27)
-------------------
- Further restored the `requests.packages` namespace for compatibility
reasons.
No code modification (noted below) should be necessary any longer.
2.16.1 (2017-05-27)
-------------------
- Restored the `requests.packages` namespace for compatibility
reasons.
- Bugfix for `urllib3` version parsing.
**Note**: code that previously imported against the
`requests.packages` namespace will now have to import the code that
rests at module level. For example:

```python
from requests.packages.urllib3.poolmanager import PoolManager
```

will need to be rewritten as:

```python
from requests.packages import urllib3
urllib3.poolmanager.PoolManager
```

or, even better:

```python
from urllib3.poolmanager import PoolManager
```
2.16.0 (2017-05-26)
-------------------
- Unvendor ALL the things!
2.15.1 (2017-05-26)
-------------------
- Everyone makes mistakes.
2.15.0 (2017-05-26)
-------------------
**Improvements**
- Introduction of the `Response.next` property, for getting the next
  `PreparedRequest` from a redirect chain (when
  `allow_redirects=False`).
- Internal refactoring of `__version__` module.
**Bugfixes**
- Restored once-optional parameter for
`requests.utils.get_environ_proxies()`.
2.14.2 (2017-05-10)
-------------------
**Bugfixes**
- Changed a less-than to an equal-to and an or in the dependency
markers to widen compatibility with older setuptools releases.
2.14.1 (2017-05-09)
-------------------
**Bugfixes**
- Changed the dependency markers to widen compatibility with older pip
releases.
2.14.0 (2017-05-09)
-------------------
**Improvements**
- It is now possible to pass `no_proxy` as a key to the `proxies`
dictionary to provide handling similar to the `NO_PROXY` environment
variable.
- When users provide invalid paths to certificate bundle files or
directories Requests now raises `IOError`, rather than failing at
the time of the HTTPS request with a fairly inscrutable certificate
validation error.
- The behavior of `SessionRedirectMixin` was slightly altered.
`resolve_redirects` will now detect a redirect by calling
`get_redirect_target(response)` instead of directly querying
`Response.is_redirect` and `Response.headers['location']`. Advanced
users will be able to process malformed redirects more easily.
- Changed the internal calculation of elapsed request time to have
higher resolution on Windows.
- Added `win_inet_pton` as conditional dependency for the `[socks]`
extra on Windows with Python 2.7.
- Changed the proxy bypass implementation on Windows: the proxy bypass
  check no longer uses forward and reverse DNS requests.
- URLs with schemes that begin with `http` but are not `http` or
`https` no longer have their host parts forced to lowercase.
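The `no_proxy` key mentioned above can be sketched as follows (hostnames are hypothetical placeholders):

```python
# A proxies mapping using the 'no_proxy' key, which mirrors the NO_PROXY
# environment variable: hosts matching its comma-separated entries
# bypass the configured proxies entirely.
proxies = {
    "http": "http://proxy.example.com:3128",
    "https": "http://proxy.example.com:3128",
    "no_proxy": "localhost,127.0.0.1,.internal.example.com",
}
```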
**Bugfixes**
- Much improved handling of non-ASCII `Location` header values in
redirects. Fewer `UnicodeDecodeErrors` are encountered on Python 2,
and Python 3 now correctly understands that Latin-1 is unlikely to
be the correct encoding.
- If an attempt to `seek` file to find out its length fails, we now
appropriately handle that by aborting our content-length
calculations.
- Restricted `HTTPDigestAuth` to only respond to auth challenges made
on 4XX responses, rather than to all auth challenges.
- Fixed some code that was firing `DeprecationWarning` on Python 3.6.
- The dismayed person emoticon (`/o\\`) no longer has a big head. I'm
sure this is what you were all worrying about most.
**Miscellaneous**
- Updated bundled urllib3 to v1.21.1.
- Updated bundled chardet to v3.0.2.
- Updated bundled idna to v2.5.
- Updated bundled certifi to 2017.4.17.
2.13.0 (2017-01-24)
-------------------
**Features**
- Only load the `idna` library when we've determined we need it. This
will save some memory for users.
**Miscellaneous**
- Updated bundled urllib3 to 1.20.
- Updated bundled idna to 2.2.
2.12.5 (2017-01-18)
-------------------
**Bugfixes**
- Fixed an issue with JSON encoding detection, specifically detecting
big-endian UTF-32 with BOM.
2.12.4 (2016-12-14)
-------------------
**Bugfixes**
- Fixed regression from 2.12.2 where non-string types were rejected in
the basic auth parameters. While support for this behaviour has been
re-added, the behaviour is deprecated and will be removed in the
future.
2.12.3 (2016-12-01)
-------------------
**Bugfixes**
- Fixed regression from v2.12.1 for URLs with schemes that begin with
"http". These URLs have historically been processed as though they
were HTTP-schemed URLs, and so have had parameters added. This was
removed in v2.12.2 in an overzealous attempt to resolve problems
with IDNA-encoding those URLs. This change was reverted: the other
fixes for IDNA-encoding have been judged to be sufficient to return
to the behaviour Requests had before v2.12.0.
2.12.2 (2016-11-30)
-------------------
**Bugfixes**
- Fixed several issues with IDNA-encoding URLs that are technically
invalid but which are widely accepted. Requests will now attempt to
IDNA-encode a URL if it can but, if it fails, and the host contains
only ASCII characters, it will be passed through optimistically.
This will allow users to opt-in to using IDNA2003 themselves if they
want to, and will also allow technically invalid but still common
hostnames.
- Fixed an issue where URLs with leading whitespace would raise
`InvalidSchema` errors.
- Fixed an issue where some URLs without the HTTP or HTTPS schemes
would still have HTTP URL preparation applied to them.
- Fixed an issue where Unicode strings could not be used in basic
auth.
- Fixed an issue encountered by some Requests plugins where
constructing a Response object would cause `Response.content` to
raise an `AttributeError`.
2.12.1 (2016-11-16)
-------------------
**Bugfixes**
- Updated setuptools 'security' extra for the new PyOpenSSL backend in
urllib3.
**Miscellaneous**
- Updated bundled urllib3 to 1.19.1.
2.12.0 (2016-11-15)
-------------------
**Improvements**
- Updated support for internationalized domain names from IDNA2003 to
IDNA2008. This updated support is required for several forms of IDNs
and is mandatory for .de domains.
- Much improved heuristics for guessing content lengths: Requests will
no longer read an entire `StringIO` into memory.
- Much improved logic for recalculating `Content-Length` headers for
`PreparedRequest` objects.
- Improved tolerance for file-like objects that have no `tell` method
but do have a `seek` method.
- Anything that is a subclass of `Mapping` is now treated like a
dictionary by the `data=` keyword argument.
- Requests now tolerates empty passwords in proxy credentials, rather
than stripping the credentials.
- If a request is made with a file-like object as the body and that
request is redirected with a 307 or 308 status code, Requests will
now attempt to rewind the body object so it can be replayed.
**Bugfixes**
- When calling `response.close`, the call to `close` will be
propagated through to non-urllib3 backends.
- Fixed issue where the `ALL_PROXY` environment variable would be
preferred over scheme-specific variables like `HTTP_PROXY`.
- Fixed issue where non-UTF8 reason phrases got severely mangled by
falling back to decoding using ISO 8859-1 instead.
- Fixed a bug where Requests would not correctly correlate cookies set
when using custom Host headers if those Host headers did not use the
native string type for the platform.
**Miscellaneous**
- Updated bundled urllib3 to 1.19.
- Updated bundled certifi certs to 2016.09.26.
2.11.1 (2016-08-17)
-------------------
**Bugfixes**
- Fixed a bug where using `iter_content` with `decode_unicode=True` on
  streamed bodies would raise `AttributeError`. This bug was
  introduced in 2.11.
- Strip Content-Type and Transfer-Encoding headers from the header
block when following a redirect that transforms the verb from
POST/PUT to GET.
2.11.0 (2016-08-08)
-------------------
**Improvements**
- Added support for the `ALL_PROXY` environment variable.
- Reject header values that contain leading whitespace or newline
characters to reduce risk of header smuggling.
**Bugfixes**
- Fixed occasional `TypeError` when attempting to decode a JSON
  response that occurred in an error case. A `ValueError` is now
  correctly raised.
- Requests would incorrectly ignore a non-CIDR IP address in the
`NO_PROXY` environment variables: Requests now treats it as a
specific IP.
- Fixed a bug when sending JSON data that could cause us to encounter
obscure OpenSSL errors in certain network conditions (yes, really).
- Added type checks to ensure that `iter_content` only accepts
integers and `None` for chunk sizes.
- Fixed issue where responses whose body had not been fully consumed
would have the underlying connection closed but not returned to the
connection pool, which could cause Requests to hang in situations
where the `HTTPAdapter` had been configured to use a blocking
connection pool.
**Miscellaneous**
- Updated bundled urllib3 to 1.16.
- Some previous releases accidentally accepted non-strings as
acceptable header values. This release does not.
2.10.0 (2016-04-29)
-------------------
**New Features**
- SOCKS Proxy Support! (requires PySocks;
`$ pip install requests[socks]`)
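A minimal sketch of a SOCKS proxy configuration (addresses hypothetical; requires the PySocks package from the `requests[socks]` extra):

```python
# SOCKS proxies use a socks5:// (or socks4://) scheme in the standard
# proxies mapping; Requests hands these off to PySocks via urllib3.
proxies = {
    "http": "socks5://127.0.0.1:1080",
    "https": "socks5://127.0.0.1:1080",
}

# Hypothetical usage (commented out to avoid a live network call):
# requests.get("https://example.org", proxies=proxies)
```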
**Miscellaneous**
- Updated bundled urllib3 to 1.15.1.
2.9.2 (2016-04-29)
------------------
**Improvements**
- Change built-in CaseInsensitiveDict (used for headers) to use
OrderedDict as its underlying datastore.
**Bugfixes**
- Don't use redirect\_cache if allow\_redirects=False
- When passed objects that throw exceptions from `tell()`, send them
via chunked transfer encoding instead of failing.
- Raise a ProxyError for proxy related connection issues.
2.9.1 (2015-12-21)
------------------
**Bugfixes**
- Resolve regression introduced in 2.9.0 that made it impossible to
send binary strings as bodies in Python 3.
- Fixed errors when calculating cookie expiration dates in certain
locales.
**Miscellaneous**
- Updated bundled urllib3 to 1.13.1.
2.9.0 (2015-12-15)
------------------
**Minor Improvements** (Backwards compatible)
- The `verify` keyword argument now supports being passed a path to a
directory of CA certificates, not just a single-file bundle.
- Warnings are now emitted when sending files opened in text mode.
- Added the 511 Network Authentication Required status code to the
status code registry.
**Bugfixes**
- For file-like objects that are not sought to the very beginning, we
now send the content length for the number of bytes we will actually
read, rather than the total size of the file, allowing partial file
uploads.
- When uploading file-like objects, if they are empty or have no
obvious content length we set `Transfer-Encoding: chunked` rather
than `Content-Length: 0`.
- We correctly receive the response in buffered mode when uploading
chunked bodies.
- We now handle being passed a query string as a bytestring on Python
3, by decoding it as UTF-8.
- Sessions are now closed in all cases (exceptional and not) when
using the functional API rather than leaking and waiting for the
garbage collector to clean them up.
- Correctly handle digest auth headers with a malformed `qop`
directive that contains no token, by treating it the same as if no
`qop` directive was provided at all.
- Minor performance improvements when removing specific cookies by
name.
**Miscellaneous**
- Updated urllib3 to 1.13.
2.8.1 (2015-10-13)
------------------
**Bugfixes**
- Update certificate bundle to match `certifi` 2015.9.6.2's weak
certificate bundle.
- Fix a bug in 2.8.0 where requests would raise `ConnectTimeout`
instead of `ConnectionError`
- When using the PreparedRequest flow, requests will now correctly
respect the `json` parameter. Broken in 2.8.0.
- When using the PreparedRequest flow, requests will now correctly
handle a Unicode-string method name on Python 2. Broken in 2.8.0.
2.8.0 (2015-10-05)
------------------
**Minor Improvements** (Backwards Compatible)
- Requests now supports per-host proxies. This allows the `proxies`
dictionary to have entries of the form
`{'<scheme>://<hostname>': '<proxy>'}`. Host-specific proxies will
be used in preference to the previously-supported scheme-specific
ones, but the previous syntax will continue to work.
- `Response.raise_for_status` now prints the URL that failed as part
of the exception message.
- `requests.utils.get_netrc_auth` now takes a `raise_errors` kwarg,
  defaulting to `False`. When `True`, errors parsing `.netrc` files
  cause exceptions to be thrown.
- Change to bundled projects import logic to make it easier to
unbundle requests downstream.
- Changed the default User-Agent string to avoid leaking data on
Linux: now contains only the requests version.
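The per-host proxy form described above can be sketched like this (hostnames hypothetical):

```python
# Host-specific entries of the form '<scheme>://<hostname>' take
# precedence over the scheme-wide entry when both match a request.
proxies = {
    "http": "http://proxy.example.com:3128",                # scheme-wide fallback
    "http://api.example.com": "http://fast.example.com:3128",  # host-specific
}
```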
**Bugfixes**
- The `json` parameter to `post()` and friends will now only be used
if neither `data` nor `files` are present, consistent with the
documentation.
- We now ignore empty fields in the `NO_PROXY` environment variable.
- Fixed problem where `httplib.BadStatusLine` would get raised if
combining `stream=True` with `contextlib.closing`.
- Prevented bugs where we would attempt to return the same connection
back to the connection pool twice when sending a Chunked body.
- Miscellaneous minor internal changes.
- Digest Auth support is now thread safe.
**Updates**
- Updated urllib3 to 1.12.
2.7.0 (2015-05-03)
------------------
This is the first release that follows our new release process. For
more, see [our
documentation](https://requests.readthedocs.io/en/latest/community/release-process/).
**Bugfixes**
- Updated urllib3 to 1.10.4, resolving several bugs involving chunked
transfer encoding and response framing.
2.6.2 (2015-04-23)
------------------
**Bugfixes**
- Fix regression where compressed data that was sent as chunked data
was not properly decompressed. (\#2561)
2.6.1 (2015-04-22)
------------------
**Bugfixes**
- Remove VendorAlias import machinery introduced in v2.5.2.
- Simplify the PreparedRequest.prepare API: We no longer require the
user to pass an empty list to the hooks keyword argument. (c.f.
\#2552)
- Resolve redirects now receives and forwards all of the original
arguments to the adapter. (\#2503)
- Handle UnicodeDecodeErrors when trying to deal with a unicode URL
that cannot be encoded in ASCII. (\#2540)
- Populate the parsed path of the URI field when performing Digest
Authentication. (\#2426)
- Copy a PreparedRequest's CookieJar more reliably when it is not an
instance of RequestsCookieJar. (\#2527)
2.6.0 (2015-03-14)
------------------
**Bugfixes**
- CVE-2015-2296: Fix handling of cookies on redirect. Previously a
cookie without a host value set would use the hostname for the
redirected URL exposing requests users to session fixation attacks
and potentially cookie stealing. This was disclosed privately by
Matthew Daley of [BugFuzz](https://bugfuzz.com). This affects all
versions of requests from v2.1.0 to v2.5.3 (inclusive on both ends).
- Fix error when requests is an `install_requires` dependency and
`python setup.py test` is run. (\#2462)
- Fix error when urllib3 is unbundled and requests continues to use
the vendored import location.
- Include fixes to `urllib3`'s header handling.
- Requests' handling of unvendored dependencies is now more
restrictive.
**Features and Improvements**
- Support bytearrays when passed as parameters in the `files`
argument. (\#2468)
- Avoid data duplication when creating a request with `str`, `bytes`,
or `bytearray` input to the `files` argument.
2.5.3 (2015-02-24)
------------------
**Bugfixes**
- Revert changes to our vendored certificate bundle. For more context
see (\#2455, \#2456, and <https://bugs.python.org/issue23476>)
2.5.2 (2015-02-23)
------------------
**Features and Improvements**
- Add sha256 fingerprint support.
([shazow/urllib3\#540](https://github.com/shazow/urllib3/pull/540))
- Improve the performance of headers.
([shazow/urllib3\#544](https://github.com/shazow/urllib3/pull/544))
**Bugfixes**
- Copy pip's import machinery. When downstream redistributors remove
requests.packages.urllib3 the import machinery will continue to let
those same symbols work. Example usage in requests' documentation
and 3rd-party libraries relying on the vendored copies of urllib3
will work without having to fallback to the system urllib3.
- Attempt to quote parts of the URL on redirect if unquoting and then
quoting fails. (\#2356)
- Fix filename type check for multipart form-data uploads. (\#2411)
- Properly handle the case where a server issuing digest
authentication challenges provides both auth and auth-int
qop-values. (\#2408)
- Fix a socket leak.
([shazow/urllib3\#549](https://github.com/shazow/urllib3/pull/549))
- Fix multiple `Set-Cookie` headers properly.
([shazow/urllib3\#534](https://github.com/shazow/urllib3/pull/534))
- Disable the built-in hostname verification.
([shazow/urllib3\#526](https://github.com/shazow/urllib3/pull/526))
- Fix the behaviour of decoding an exhausted stream.
([shazow/urllib3\#535](https://github.com/shazow/urllib3/pull/535))
**Security**
- Pulled in an updated `cacert.pem`.
- Drop RC4 from the default cipher list.
([shazow/urllib3\#551](https://github.com/shazow/urllib3/pull/551))
2.5.1 (2014-12-23)
------------------
**Behavioural Changes**
- Only catch HTTPErrors in raise\_for\_status (\#2382)
**Bugfixes**
- Handle LocationParseError from urllib3 (\#2344)
- Handle file-like object filenames that are not strings (\#2379)
- Unbreak HTTPDigestAuth handler. Allow new nonces to be negotiated
(\#2389)
2.5.0 (2014-12-01)
------------------
**Improvements**
- Allow usage of urllib3's Retry object with HTTPAdapters (\#2216)
- The `iter_lines` method on a response now accepts a delimiter with
which to split the content (\#2295)
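The new `iter_lines` delimiter can be demonstrated with a hand-built `Response` (a test-only sketch; normally the object comes from `requests.get`):

```python
import io

from requests.models import Response

# Construct a Response around an in-memory body so no network is needed.
resp = Response()
resp.status_code = 200
resp.raw = io.BytesIO(b"alpha|beta|gamma")

# Split the content on a custom delimiter instead of on newlines.
lines = list(resp.iter_lines(delimiter=b"|"))
print(lines)  # [b'alpha', b'beta', b'gamma']
```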
**Behavioural Changes**
- Add deprecation warnings to functions in requests.utils that will be
removed in 3.0 (\#2309)
- Sessions used by the functional API are always closed (\#2326)
- Restrict requests to HTTP/1.1 and HTTP/1.0 (stop accepting HTTP/0.9)
(\#2323)
**Bugfixes**
- Only parse the URL once (\#2353)
- Allow Content-Length header to always be overridden (\#2332)
- Properly handle files in HTTPDigestAuth (\#2333)
- Cap redirect\_cache size to prevent memory abuse (\#2299)
- Fix HTTPDigestAuth handling of redirects after authenticating
successfully (\#2253)
- Fix crash with custom method parameter to Session.request (\#2317)
- Fix how Link headers are parsed using the regular expression library
(\#2271)
**Documentation**
- Add more references for interlinking (\#2348)
- Update CSS for theme (\#2290)
- Update width of buttons and sidebar (\#2289)
- Replace references of Gittip with Gratipay (\#2282)
- Add link to changelog in sidebar (\#2273)
2.4.3 (2014-10-06)
------------------
**Bugfixes**
- Unicode URL improvements for Python 2.
- Re-order JSON param for backwards compat.
- Automatically defrag authentication schemes from host/pass URIs.
([\#2249](https://github.com/psf/requests/issues/2249))
2.4.2 (2014-10-05)
------------------
**Improvements**
- FINALLY! Add json parameter for uploads!
([\#2258](https://github.com/psf/requests/pull/2258))
- Support for bytestring URLs on Python 3.x
([\#2238](https://github.com/psf/requests/pull/2238))
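  The new parameter can be seen without sending anything by preparing a
  request; the URL below is a placeholder:

  ```python
  import requests

  # json= serializes the payload and sets the Content-Type header.
  req = requests.Request("POST", "https://example.invalid/post", json={"key": "value"})
  prepared = req.prepare()
  assert prepared.headers["Content-Type"] == "application/json"
  assert '"key": "value"' in str(prepared.body)  # body may be str or bytes by version
  ```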
**Bugfixes**
- Avoid getting stuck in a loop
([\#2244](https://github.com/psf/requests/pull/2244))
- Multiple calls to iter\* fail with unhelpful error.
([\#2240](https://github.com/psf/requests/issues/2240),
[\#2241](https://github.com/psf/requests/issues/2241))
**Documentation**
- Correct redirection introduction
([\#2245](https://github.com/psf/requests/pull/2245/))
- Added example of how to send multiple files in one request.
([\#2227](https://github.com/psf/requests/pull/2227/))
- Clarify how to pass a custom set of CAs
([\#2248](https://github.com/psf/requests/pull/2248/))
2.4.1 (2014-09-09)
------------------
- Now has a "security" package extras set,
`$ pip install requests[security]`
- Requests will now use Certifi if it is available.
- Capture and re-raise urllib3 ProtocolError
- Bugfix for responses that attempt to redirect to themselves forever
(wtf?).
2.4.0 (2014-08-29)
------------------
**Behavioral Changes**
- `Connection: keep-alive` header is now sent automatically.
**Improvements**
- Support for connect timeouts! Timeout now accepts a tuple (connect,
read) which is used to set individual connect and read timeouts.
- Allow copying of PreparedRequests without headers/cookies.
- Updated bundled urllib3 version.
- Refactored settings loading from environment -- new
Session.merge\_environment\_settings.
- Handle socket errors in iter\_content.
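  A sketch of the tuple form (`192.0.2.1` is a reserved TEST-NET address
  used here so the connect phase fails quickly; the exact exception type
  may vary by environment):

  ```python
  import requests

  try:
      requests.get("https://192.0.2.1/", timeout=(0.2, 10))  # (connect, read) seconds
      outcome = "unexpected success"
  except requests.exceptions.RequestException as exc:
      outcome = type(exc).__name__  # e.g. ConnectTimeout or ConnectionError
  ```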
2.3.0 (2014-05-16)
------------------
**API Changes**
- New `Response` property `is_redirect`, which is true when the
library could have processed this response as a redirection (whether
or not it actually did).
- The `timeout` parameter now affects requests with both `stream=True`
and `stream=False` equally.
- The change in v2.0.0 to mandate explicit proxy schemes has been
reverted. Proxy schemes now default to `http://`.
- The `CaseInsensitiveDict` used for HTTP headers now behaves like a
  normal dictionary when referenced as a string or viewed in the
  interpreter.
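  The new property can be sketched on a hand-built `Response`
  (illustrative only; a real response would come from an actual request):

  ```python
  import requests

  # A 3xx status plus a Location header is what the library treats as a
  # processable redirect.
  resp = requests.models.Response()
  resp.status_code = 301
  resp.headers["location"] = "https://example.invalid/moved"
  assert resp.is_redirect

  resp.status_code = 200
  assert not resp.is_redirect
  ```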
**Bugfixes**
- No longer expose Authorization or Proxy-Authorization headers on
redirect. Fix CVE-2014-1829 and CVE-2014-1830 respectively.
- Authorization is re-evaluated each redirect.
- On redirect, pass url as native strings.
- Fall-back to autodetected encoding for JSON when Unicode detection
fails.
- Headers set to `None` on the `Session` are now correctly not sent.
- Correctly honor `decode_unicode` even if it wasn't used earlier in
the same response.
- Stop advertising `compress` as a supported Content-Encoding.
- The `Response.history` parameter is now always a list.
- Many, many `urllib3` bugfixes.
2.2.1 (2014-01-23)
------------------
**Bugfixes**
- Fixes incorrect parsing of proxy credentials that contain a literal
or encoded '\#' character.
- Assorted urllib3 fixes.
2.2.0 (2014-01-09)
------------------
**API Changes**
- New exception: `ContentDecodingError`. Raised instead of `urllib3`
`DecodeError` exceptions.
**Bugfixes**
- Avoid many many exceptions from the buggy implementation of
`proxy_bypass` on OS X in Python 2.6.
- Avoid crashing when attempting to get authentication credentials
from \~/.netrc when running as a user without a home directory.
- Use the correct pool size for pools of connections to proxies.
- Fix iteration of `CookieJar` objects.
- Ensure that cookies are persisted over redirect.
- Switch back to using chardet, since it has merged with charade.
2.1.0 (2013-12-05)
------------------
- Updated CA Bundle, of course.
- Cookies set on individual Requests through a `Session` (e.g. via
`Session.get()`) are no longer persisted to the `Session`.
- Clean up connections when we hit problems during chunked upload,
rather than leaking them.
- Return connections to the pool when a chunked upload is successful,
rather than leaking it.
- Match the HTTPbis recommendation for HTTP 301 redirects.
- Prevent hanging when using streaming uploads and Digest Auth when a
401 is received.
- Values of headers set by Requests are now always the native string
type.
- Fix previously broken SNI support.
- Fix accessing HTTP proxies using proxy authentication.
- Unencode HTTP Basic usernames and passwords extracted from URLs.
- Support for IP address ranges for no\_proxy environment variable
- Parse headers correctly when users override the default `Host:`
header.
- Avoid munging the URL in case of case-sensitive servers.
- Looser URL handling for non-HTTP/HTTPS urls.
- Accept unicode methods in Python 2.6 and 2.7.
- More resilient cookie handling.
- Make `Response` objects pickleable.
- Actually added MD5-sess to Digest Auth instead of pretending to like
last time.
- Updated internal urllib3.
- Fixed @Lukasa's lack of taste.
2.0.1 (2013-10-24)
------------------
- Updated included CA Bundle with new mistrusts and automated process
for the future
- Added MD5-sess to Digest Auth
- Accept per-file headers in multipart file POST messages.
- Fixed: Don't send the full URL on CONNECT messages.
- Fixed: Correctly lowercase a redirect scheme.
- Fixed: Cookies not persisted when set via functional API.
- Fixed: Translate urllib3 ProxyError into a requests ProxyError
derived from ConnectionError.
- Updated internal urllib3 and chardet.
2.0.0 (2013-09-24)
------------------
**API Changes:**
- Keys in the Headers dictionary are now native strings on all Python
versions, i.e. bytestrings on Python 2, unicode on Python 3.
- Proxy URLs now *must* have an explicit scheme. A `MissingSchema`
exception will be raised if they don't.
- Timeouts now apply to read time if `stream=False`.
- `RequestException` is now a subclass of `IOError`, not
`RuntimeError`.
- Added new method to `PreparedRequest` objects:
`PreparedRequest.copy()`.
- Added new method to `Session` objects: `Session.update_request()`.
This method updates a `Request` object with the data (e.g. cookies)
stored on the `Session`.
- Added new method to `Session` objects: `Session.prepare_request()`.
This method updates and prepares a `Request` object, and returns the
corresponding `PreparedRequest` object.
- Added new method to `HTTPAdapter` objects:
`HTTPAdapter.proxy_headers()`. This should not be called directly,
but improves the subclass interface.
- `httplib.IncompleteRead` exceptions caused by incorrect chunked
encoding will now raise a Requests `ChunkedEncodingError` instead.
- Invalid percent-escape sequences now cause a Requests `InvalidURL`
exception to be raised.
- HTTP 208 no longer uses reason phrase `"im_used"`. Correctly uses
`"already_reported"`.
- HTTP 226 reason added (`"im_used"`).
**Bugfixes:**
- Vastly improved proxy support, including the CONNECT verb. Special
thanks to the many contributors who worked towards this improvement.
- Cookies are now properly managed when 401 authentication responses
are received.
- Chunked encoding fixes.
- Support for mixed case schemes.
- Better handling of streaming downloads.
- Retrieve environment proxies from more locations.
- Minor cookies fixes.
- Improved redirect behaviour.
- Improved streaming behaviour, particularly for compressed data.
- Miscellaneous small Python 3 text encoding bugs.
- `.netrc` no longer overrides explicit auth.
- Cookies set by hooks are now correctly persisted on Sessions.
- Fix problem with cookies that specify port numbers in their host
field.
- `BytesIO` can be used to perform streaming uploads.
- More generous parsing of the `no_proxy` environment variable.
- Non-string objects can be passed in data values alongside files.
1.2.3 (2013-05-25)
------------------
- Simple packaging fix
1.2.2 (2013-05-23)
------------------
- Simple packaging fix
1.2.1 (2013-05-20)
------------------
- 301 and 302 redirects now change the verb to GET for all verbs, not
just POST, improving browser compatibility.
- Python 3.3.2 compatibility
- Always percent-encode location headers
- Fix connection adapter matching to be most-specific first
- new argument to the default connection adapter for passing a block
argument
- prevent a KeyError when there's no link headers
1.2.0 (2013-03-31)
------------------
- Fixed cookies on sessions and on requests
- Significantly change how hooks are dispatched - hooks now receive
all the arguments specified by the user when making a request so
hooks can make a secondary request with the same parameters. This is
especially necessary for authentication handler authors
- certifi support was removed
- Fixed bug where using OAuth 1 with body `signature_type` sent no
data
- Major proxy work thanks to @Lukasa including parsing of proxy
authentication from the proxy url
- Fix DigestAuth handling too many 401s
- Update vendored urllib3 to include SSL bug fixes
- Allow keyword arguments to be passed to `json.loads()` via the
`Response.json()` method
- Don't send `Content-Length` header by default on `GET` or `HEAD`
requests
- Add `elapsed` attribute to `Response` objects to time how long a
request took.
- Fix `RequestsCookieJar`
- Sessions and Adapters are now picklable, i.e., can be used with the
multiprocessing library
- Update charade to version 1.0.3
The change in how hooks are dispatched will likely cause a great deal of
issues.
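Of the changes above, the `Response.json()` keyword passthrough can be
sketched on a locally constructed response (illustrative only):

```python
import requests

# Keyword arguments are forwarded to json.loads, e.g. to keep floats as
# strings instead of converting them.
resp = requests.models.Response()
resp.status_code = 200
resp._content = b'{"value": 1.5}'
assert resp.json(parse_float=str) == {"value": "1.5"}
```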
1.1.0 (2013-01-10)
------------------
- CHUNKED REQUESTS
- Support for iterable response bodies
- Assume servers persist redirect params
- Allow explicit content types to be specified for file data
- Make merge\_kwargs case-insensitive when looking up keys
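Chunked requests can be sketched by preparing (not sending) a request
with a generator body; the URL is a placeholder:

```python
import requests

# An iterable body with no known length is prepared as a chunked upload:
# no Content-Length, Transfer-Encoding: chunked.
def body():
    yield b"part-1"
    yield b"part-2"

prepared = requests.Request("POST", "https://example.invalid/upload", data=body()).prepare()
assert prepared.headers.get("Transfer-Encoding") == "chunked"
assert "Content-Length" not in prepared.headers
```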
1.0.3 (2012-12-18)
------------------
- Fix file upload encoding bug
- Fix cookie behavior
1.0.2 (2012-12-17)
------------------
- Proxy fix for HTTPAdapter.
1.0.1 (2012-12-17)
------------------
- Cert verification exception bug.
- Proxy fix for HTTPAdapter.
1.0.0 (2012-12-17)
------------------
- Massive Refactor and Simplification
- Switch to Apache 2.0 license
- Swappable Connection Adapters
- Mountable Connection Adapters
- Mutable PreparedRequest chain
- /s/prefetch/stream
- Removal of all configuration
- Standard library logging
- Make Response.json() callable, not property.
- Usage of new charade project, which provides python 2 and 3
simultaneous chardet.
- Removal of all hooks except 'response'
- Removal of all authentication helpers (OAuth, Kerberos)
This is not a backwards compatible change.
0.14.2 (2012-10-27)
-------------------
- Improved mime-compatible JSON handling
- Proxy fixes
- Path hack fixes
- Case-Insensitive Content-Encoding headers
- Support for CJK parameters in form posts
0.14.1 (2012-10-01)
-------------------
- Python 3.3 Compatibility
- Simplify default accept-encoding
- Bugfixes
0.14.0 (2012-09-02)
-------------------
- No more iter\_content errors if already downloaded.
0.13.9 (2012-08-25)
-------------------
- Fix for OAuth + POSTs
- Remove exception eating from dispatch\_hook
- General bugfixes
0.13.8 (2012-08-21)
-------------------
- Incredible Link header support :)
0.13.7 (2012-08-19)
-------------------
- Support for (key, value) lists everywhere.
- Digest Authentication improvements.
- Ensure proxy exclusions work properly.
- Clearer UnicodeError exceptions.
- Automatic casting of URLs to strings (fURL and such)
- Bugfixes.
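A sketch of the `(key, value)` list form for repeated query parameters
(placeholder URL):

```python
import requests

# Lists of pairs preserve order and allow repeated keys, unlike a dict.
prepared = requests.Request(
    "GET", "https://example.invalid/search", params=[("tag", "a"), ("tag", "b")]
).prepare()
assert prepared.url == "https://example.invalid/search?tag=a&tag=b"
```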
0.13.6 (2012-08-06)
-------------------
- Long awaited fix for hanging connections!
0.13.5 (2012-07-27)
-------------------
- Packaging fix
0.13.4 (2012-07-27)
-------------------
- GSSAPI/Kerberos authentication!
- App Engine 2.7 Fixes!
- Fix leaking connections (from urllib3 update)
- OAuthlib path hack fix
- OAuthlib URL parameters fix.
0.13.3 (2012-07-12)
-------------------
- Use simplejson if available.
- Do not hide SSLErrors behind Timeouts.
- Fixed param handling with urls containing fragments.
- Significantly improved information in User Agent.
- client certificates are ignored when verify=False
0.13.2 (2012-06-28)
-------------------
- Zero dependencies (once again)!
- New: Response.reason
- Sign querystring parameters in OAuth 1.0
- Client certificates no longer ignored when verify=False
- Add openSUSE certificate support
0.13.1 (2012-06-07)
-------------------
- Allow passing a file or file-like object as data.
- Allow hooks to return responses that indicate errors.
- Fix Response.text and Response.json for body-less responses.
0.13.0 (2012-05-29)
-------------------
- Removal of Requests.async in favor of
[grequests](https://github.com/kennethreitz/grequests)
- Allow disabling of cookie persistence.
- New implementation of safe\_mode
- cookies.get now supports default argument
- Session cookies not saved when Session.request is called with
return\_response=False
- Env: no\_proxy support.
- RequestsCookieJar improvements.
- Various bug fixes.
0.12.1 (2012-05-08)
-------------------
- New `Response.json` property.
- Ability to add string file uploads.
- Fix out-of-range issue with iter\_lines.
- Fix iter\_content default size.
- Fix POST redirects containing files.
0.12.0 (2012-05-02)
-------------------
- EXPERIMENTAL OAUTH SUPPORT!
- Proper CookieJar-backed cookies interface with awesome dict-like
interface.
- Speed fix for non-iterated content chunks.
- Move `pre_request` to a more usable place.
- New `pre_send` hook.
- Lazily encode data, params, files.
- Load system Certificate Bundle if `certify` isn't available.
- Cleanups, fixes.
0.11.2 (2012-04-22)
-------------------
- Attempt to use the OS's certificate bundle if `certifi` isn't
available.
- Infinite digest auth redirect fix.
- Multi-part file upload improvements.
- Fix decoding of invalid %encodings in URLs.
- If there is no content in a response don't throw an error the second
time that content is attempted to be read.
- Upload data on redirects.
0.11.1 (2012-03-30)
-------------------
- POST redirects now break RFC to do what browsers do: Follow up with
a GET.
- New `strict_mode` configuration to disable new redirect behavior.
0.11.0 (2012-03-14)
-------------------
- Private SSL Certificate support
- Remove select.poll from Gevent monkeypatching
- Remove redundant generator for chunked transfer encoding
- Fix: Response.ok raises Timeout Exception in safe\_mode
0.10.8 (2012-03-09)
-------------------
- Generate chunked ValueError fix
- Proxy configuration by environment variables
- Simplification of iter\_lines.
- New trust\_env configuration for disabling system/environment hints.
- Suppress cookie errors.
0.10.7 (2012-03-07)
-------------------
- encode\_uri = False
0.10.6 (2012-02-25)
-------------------
- Allow '=' in cookies.
0.10.5 (2012-02-25)
-------------------
- Response body with 0 content-length fix.
- New async.imap.
- Don't fail on netrc.
0.10.4 (2012-02-20)
-------------------
- Honor netrc.
0.10.3 (2012-02-20)
-------------------
- HEAD requests don't follow redirects anymore.
- raise\_for\_status() doesn't raise for 3xx anymore.
- Make Session objects picklable.
- ValueError for invalid schema URLs.
0.10.2 (2012-01-15)
-------------------
- Vastly improved URL quoting.
- Additional allowed cookie key values.
- Attempted fix for "Too many open files" Error
- Replace unicode errors on first pass, no need for second pass.
- Append '/' to bare-domain urls before query insertion.
- Exceptions now inherit from RuntimeError.
- Binary uploads + auth fix.
- Bugfixes.
0.10.1 (2012-01-23)
-------------------
- PYTHON 3 SUPPORT!
- Dropped 2.5 Support. (*Backwards Incompatible*)
0.10.0 (2012-01-21)
-------------------
- `Response.content` is now bytes-only. (*Backwards Incompatible*)
- New `Response.text` is unicode-only.
- If no `Response.encoding` is specified and `chardet` is available,
`Response.text` will guess an encoding.
- Default to ISO-8859-1 (Western) encoding for "text" subtypes.
- Removal of decode\_unicode. (*Backwards Incompatible*)
- New multiple-hooks system.
- New `Response.register_hook` for registering hooks within the
pipeline.
- `Response.url` is now Unicode.
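The bytes/text split can be sketched on a hand-built `Response`
(illustrative only):

```python
import requests

# content is always bytes; text decodes it using Response.encoding.
resp = requests.models.Response()
resp.status_code = 200
resp._content = "caf\u00e9".encode("utf-8")
resp.encoding = "utf-8"
assert isinstance(resp.content, bytes)
assert resp.text == "caf\u00e9"
```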
0.9.3 (2012-01-18)
------------------
- SSL verify=False bugfix (apparent on windows machines).
0.9.2 (2012-01-18)
------------------
- Asynchronous async.send method.
- Support for proper chunk streams with boundaries.
- session argument for Session classes.
- Print entire hook tracebacks, not just exception instance.
- Fix response.iter\_lines from pending next line.
- Fix bug in HTTP digest auth w/ URIs having query strings.
- Fix in Event Hooks section.
- Urllib3 update.
0.9.1 (2012-01-06)
------------------
- danger\_mode for automatic Response.raise\_for\_status()
- Response.iter\_lines refactor
0.9.0 (2011-12-28)
------------------
- verify ssl is default.
0.8.9 (2011-12-28)
------------------
- Packaging fix.
0.8.8 (2011-12-28)
------------------
- SSL CERT VERIFICATION!
- Release of Certifi: Mozilla's cert list.
- New 'verify' argument for SSL requests.
- Urllib3 update.
0.8.7 (2011-12-24)
------------------
- iter\_lines last-line truncation fix
- Force safe\_mode for async requests
- Handle safe\_mode exceptions more consistently
- Fix iteration on null responses in safe\_mode
0.8.6 (2011-12-18)
------------------
- Socket timeout fixes.
- Proxy Authorization support.
0.8.5 (2011-12-14)
------------------
- Response.iter\_lines!
0.8.4 (2011-12-11)
------------------
- Prefetch bugfix.
- Added license to installed version.
0.8.3 (2011-11-27)
------------------
- Converted auth system to use simpler callable objects.
- New session parameter to API methods.
- Display full URL while logging.
0.8.2 (2011-11-19)
------------------
- New Unicode decoding system, based on overridable
  Response.encoding.
- Proper URL slash-quote handling.
- Cookies with `[`, `]`, and `_` allowed.
0.8.1 (2011-11-15)
------------------
- URL Request path fix
- Proxy fix.
- Timeouts fix.
0.8.0 (2011-11-13)
------------------
- Keep-alive support!
- Complete removal of Urllib2
- Complete removal of Poster
- Complete removal of CookieJars
- New ConnectionError raising
- Safe\_mode for error catching
- prefetch parameter for request methods
- OPTION method
- Async pool size throttling
- File uploads send real names
- Vendored in urllib3
0.7.6 (2011-11-07)
------------------
- Digest authentication bugfix (attach query data to path)
0.7.5 (2011-11-04)
------------------
- Response.content = None if there was an invalid response.
- Redirection auth handling.
0.7.4 (2011-10-26)
------------------
- Session Hooks fix.
0.7.3 (2011-10-23)
------------------
- Digest Auth fix.
0.7.2 (2011-10-23)
------------------
- PATCH Fix.
0.7.1 (2011-10-23)
------------------
- Move away from urllib2 authentication handling.
- Fully Remove AuthManager, AuthObject, &c.
- New tuple-based auth system with handler callbacks.
0.7.0 (2011-10-22)
------------------
- Sessions are now the primary interface.
- Deprecated InvalidMethodException.
- PATCH fix.
- New config system (no more global settings).
0.6.6 (2011-10-19)
------------------
- Session parameter bugfix (params merging).
0.6.5 (2011-10-18)
------------------
- Offline (fast) test suite.
- Session dictionary argument merging.
0.6.4 (2011-10-13)
------------------
- Automatic decoding of unicode, based on HTTP Headers.
- New `decode_unicode` setting.
- Removal of `r.read/close` methods.
- New `r.raw` interface for advanced response usage.\*
- Automatic expansion of parameterized headers.
0.6.3 (2011-10-13)
------------------
- Beautiful `requests.async` module, for making async requests w/
gevent.
0.6.2 (2011-10-09)
------------------
- GET/HEAD obeys allow\_redirects=False.
0.6.1 (2011-08-20)
------------------
- Enhanced status codes experience `\o/`
- Set a maximum number of redirects (`settings.max_redirects`)
- Full Unicode URL support
- Support for protocol-less redirects.
- Allow for arbitrary request types.
- Bugfixes
0.6.0 (2011-08-17)
------------------
- New callback hook system
- New persistent sessions object and context manager
- Transparent Dict-cookie handling
- Status code reference object
- Removed Response.cached
- Added Response.request
- All args are kwargs
- Relative redirect support
- HTTPError handling improvements
- Improved https testing
- Bugfixes
0.5.1 (2011-07-23)
------------------
- International Domain Name Support!
- Access headers without fetching entire body (`read()`)
- Use lists as dicts for parameters
- Add Forced Basic Authentication
- Forced Basic is default authentication type
- `python-requests.org` default User-Agent header
- CaseInsensitiveDict lower-case caching
- Response.history bugfix
0.5.0 (2011-06-21)
------------------
- PATCH Support
- Support for Proxies
- HTTPBin Test Suite
- Redirect Fixes
- settings.verbose stream writing
- Querystrings for all methods
- URLErrors (Connection Refused, Timeout, Invalid URLs) are now raised
  explicitly, e.g.
  `r = requests.get('hwe://blah'); r.raise_for_status()`
0.4.1 (2011-05-22)
------------------
- Improved Redirection Handling
- New 'allow\_redirects' param for following non-GET/HEAD Redirects
- Settings module refactoring
0.4.0 (2011-05-15)
------------------
- Response.history: list of redirected responses
- Case-Insensitive Header Dictionaries!
- Unicode URLs
0.3.4 (2011-05-14)
------------------
- Urllib2 HTTPAuthentication Recursion fix (Basic/Digest)
- Internal Refactor
- Bytes data upload Bugfix
0.3.3 (2011-05-12)
------------------
- Request timeouts
- Unicode url-encoded data
- Settings context manager and module
0.3.2 (2011-04-15)
------------------
- Automatic Decompression of GZip Encoded Content
- AutoAuth Support for Tupled HTTP Auth
0.3.1 (2011-04-01)
------------------
- Cookie Changes
- Response.read()
- Poster fix
0.3.0 (2011-02-25)
------------------
- Automatic Authentication API Change
- Smarter Query URL Parameterization
- Allow file uploads and POST data together
- New Authentication Manager System:
  - Simpler Basic HTTP System
  - Supports all built-in urllib2 Auths
  - Allows for custom Auth Handlers
0.2.4 (2011-02-19)
------------------
- Python 2.5 Support
- PyPy-c v1.4 Support
- Auto-Authentication tests
- Improved Request object constructor
0.2.3 (2011-02-15)
------------------
- New HTTP-handling methods:
  - `Response.__nonzero__` (false if bad HTTP Status)
  - `Response.ok` (True if expected HTTP Status)
  - `Response.error` (logged HTTPError if bad HTTP Status)
  - `Response.raise_for_status()` (raises stored HTTPError)
0.2.2 (2011-02-14)
------------------
- Still handles request in the event of an HTTPError. (Issue \#2)
- Eventlet and Gevent Monkeypatch support.
- Cookie Support (Issue \#1)
0.2.1 (2011-02-14)
------------------
- Added file attribute to POST and PUT requests for multipart-encode
file uploads.
- Added Request.url attribute for context and redirects
0.2.0 (2011-02-14)
------------------
- Birth!
0.0.1 (2011-02-13)
------------------
- Frustration
- Conception
- Requests relaxes header component requirements to support bytes/str subclasses. (#6356)
2.28.2 (2023-01-12)
-------------------
**Dependencies**
- Requests now supports charset\_normalizer 3.x. (#6261)
**Bugfixes**
- Updated MissingSchema exception to suggest https scheme rather than http. (#6188)
2.28.1 (2022-06-29)
-------------------
**Improvements**
- Speed optimization in `iter_content` with transition to `yield from`. (#6170)
**Dependencies**
- Added support for chardet 5.0.0 (#6179)
- Added support for charset-normalizer 2.1.0 (#6169)
2.28.0 (2022-06-09)
-------------------
**Deprecations**
- ⚠️ Requests has officially dropped support for Python 2.7. ⚠️ (#6091)
- Requests has officially dropped support for Python 3.6 (including pypy3.6). (#6091)
**Improvements**
- Wrap JSON parsing issues in Requests' `JSONDecodeError` for payloads without
  an encoding to make the `json()` API consistent. (#6097)
- Parse header components consistently, raising an InvalidHeader error in
all invalid cases. (#6154)
- Added provisional 3.11 support with current beta build. (#6155)
- Requests got a makeover and we decided to paint it black. (#6095)
**Bugfixes**
- Fixed bug where setting `CURL_CA_BUNDLE` to an empty string would disable
cert verification. All Requests 2.x versions before 2.28.0 are affected. (#6074)
- Fixed urllib3 exception leak, wrapping `urllib3.exceptions.SSLError` with
`requests.exceptions.SSLError` for `content` and `iter_content`. (#6057)
- Fixed issue where invalid Windows registry entries caused proxy resolution
to raise an exception rather than ignoring the entry. (#6149)
- Fixed issue where entire payload could be included in the error message for
JSONDecodeError. (#6036)
2.27.1 (2022-01-05)
-------------------
**Bugfixes**
- Fixed parsing issue that resulted in the `auth` component being
dropped from proxy URLs. (#6028)
2.27.0 (2022-01-03)
-------------------
**Improvements**
- Officially added support for Python 3.10. (#5928)
- Added a `requests.exceptions.JSONDecodeError` to unify JSON exceptions between
Python 2 and 3. This gets raised in the `response.json()` method, and is
backwards compatible as it inherits from previously thrown exceptions.
Can be caught from `requests.exceptions.RequestException` as well. (#5856)
- Improved error text for misnamed `InvalidSchema` and `MissingSchema`
exceptions. This is a temporary fix until exceptions can be renamed
(Schema->Scheme). (#6017)
- Improved proxy parsing for proxy URLs missing a scheme. This will address
recent changes to `urlparse` in Python 3.9+. (#5917)
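A sketch of catching the unified exception on a locally built response
(illustrative; behaviour shown is for recent Requests versions):

```python
import requests

# The new exception inherits from both json.JSONDecodeError and
# requests.exceptions.RequestException, so either catch works.
resp = requests.models.Response()
resp.status_code = 200
resp._content = b"definitely not json"

caught = False
try:
    resp.json()
except requests.exceptions.JSONDecodeError:
    caught = True
assert caught
```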
**Bugfixes**
- Fixed defect in `extract_zipped_paths` which could result in an infinite loop
for some paths. (#5851)
- Fixed handling for `AttributeError` when calculating length of files obtained
by `Tarfile.extractfile()`. (#5239)
- Fixed urllib3 exception leak, wrapping `urllib3.exceptions.InvalidHeader` with
`requests.exceptions.InvalidHeader`. (#5914)
- Fixed bug where two Host headers were sent for chunked requests. (#5391)
- Fixed regression in Requests 2.26.0 where `Proxy-Authorization` was
incorrectly stripped from all requests sent with `Session.send`. (#5924)
- Fixed performance regression in 2.26.0 for hosts with a large number of
proxies available in the environment. (#5924)
- Fixed idna exception leak, wrapping `UnicodeError` with
`requests.exceptions.InvalidURL` for URLs with a leading dot (.) in the
domain. (#5414)
**Deprecations**
- Requests support for Python 2.7 and 3.6 will be ending in 2022. While we
don't have exact dates, Requests 2.27.x is likely to be the last release
series providing support.
2.26.0 (2021-07-13)
-------------------
**Improvements**
- Requests now supports Brotli compression, if either the `brotli` or
`brotlicffi` package is installed. (#5783)
- `Session.send` now correctly resolves proxy configurations from both
the Session and Request. Behavior now matches `Session.request`. (#5681)
**Bugfixes**
- Fixed a race condition in zip extraction when using Requests in parallel
from zip archive. (#5707)
**Dependencies**
- Instead of `chardet`, use the MIT-licensed `charset_normalizer` for Python3
to remove license ambiguity for projects bundling requests. If `chardet`
is already installed on your machine it will be used instead of `charset_normalizer`
to keep backwards compatibility. (#5797)
You can also install `chardet` while installing requests by
specifying `[use_chardet_on_py3]` extra as follows:
```shell
pip install "requests[use_chardet_on_py3]"
```
Python2 still depends upon the `chardet` module.
- Requests now supports `idna` 3.x on Python 3. `idna` 2.x will continue to
be used on Python 2 installations. (#5711)
**Deprecations**
- The `requests[security]` extra has been converted to a no-op install.
PyOpenSSL is no longer the recommended secure option for Requests. (#5867)
- Requests has officially dropped support for Python 3.5. (#5867)
2.25.1 (2020-12-16)
-------------------
**Bugfixes**
- Requests now treats `application/json` as `utf8` by default, resolving
  inconsistencies between `r.text` and `r.json` output. (#5673)
**Dependencies**
- Requests now supports chardet v4.x.
2.25.0 (2020-11-11)
-------------------
**Improvements**
- Added support for NETRC environment variable. (#5643)
**Dependencies**
- Requests now supports urllib3 v1.26.
**Deprecations**
- Requests v2.25.x will be the last release series with support for Python 3.5.
- The `requests[security]` extra is officially deprecated and will be removed
in Requests v2.26.0.
2.24.0 (2020-06-17)
-------------------
**Improvements**
- pyOpenSSL TLS implementation is now only used if Python
either doesn't have an `ssl` module or doesn't support
SNI. Previously pyOpenSSL was unconditionally used if available.
This applies even if pyOpenSSL is installed via the
`requests[security]` extra (#5443)
- Redirect resolution should now only occur when
`allow_redirects` is True. (#5492)
- No longer perform unnecessary Content-Length calculation for
requests that won't use it. (#5496)
2.23.0 (2020-02-19)
-------------------
**Improvements**
- Remove defunct reference to `prefetch` in Session `__attrs__` (#5110)
**Bugfixes**
- Requests no longer outputs password in basic auth usage warning. (#5099)
**Dependencies**
- Pinning for `chardet` and `idna` now uses major version instead of minor.
This hopefully reduces the need for releases every time a dependency is updated.
2.22.0 (2019-05-15)
-------------------
**Dependencies**
- Requests now supports urllib3 v1.25.2.
(note: 1.25.0 and 1.25.1 are incompatible)
**Deprecations**
- Requests has officially stopped support for Python 3.4.
2.21.0 (2018-12-10)
-------------------
**Dependencies**
- Requests now supports idna v2.8.
2.20.1 (2018-11-08)
-------------------
**Bugfixes**
- Fixed bug with unintended Authorization header stripping for
redirects using default ports (http/80, https/443).
2.20.0 (2018-10-18)
-------------------
**Bugfixes**
- Content-Type header parsing is now case-insensitive (e.g.
charset=utf8 v Charset=utf8).
- Fixed exception leak where certain redirect urls would raise
uncaught urllib3 exceptions.
- Requests removes Authorization header from requests redirected
from https to http on the same hostname. (CVE-2018-18074)
- `should_bypass_proxies` now handles URIs without hostnames (e.g.
files).
**Dependencies**
- Requests now supports urllib3 v1.24.
**Deprecations**
- Requests has officially stopped support for Python 2.6.
2.19.1 (2018-06-14)
-------------------
**Bugfixes**
- Fixed issue where status\_codes.py's `init` function failed trying
to append to a `__doc__` value of `None`.
2.19.0 (2018-06-12)
-------------------
**Improvements**
- Warn user about possible slowdown when using cryptography version
< 1.3.4
- Check for invalid host in proxy URL, before forwarding request to
adapter.
- Fragments are now properly maintained across redirects. (RFC7231
7.1.2)
- Removed use of cgi module to expedite library load time.
- Added support for SHA-256 and SHA-512 digest auth algorithms.
- Minor performance improvement to `Request.content`.
- Migrate to using collections.abc for 3.7 compatibility.
**Bugfixes**
- Parsing empty `Link` headers with `parse_header_links()` no longer
returns a bogus entry.
- Fixed issue where loading the default certificate bundle from a zip
archive would raise an `IOError`.
- Fixed issue with unexpected `ImportError` on Windows systems which do
not support the `winreg` module.
- DNS resolution in proxy bypass no longer includes the username and
password in the request. This also fixes the issue of DNS queries
failing on macOS.
- Properly normalize adapter prefixes for url comparison.
- Passing `None` as a file pointer to the `files` param no longer
raises an exception.
- Calling `copy` on a `RequestsCookieJar` will now preserve the cookie
policy correctly.
**Dependencies**
- We now support idna v2.7.
- We now support urllib3 v1.23.
2.18.4 (2017-08-15)
-------------------
**Improvements**
- Error messages for invalid headers now include the header name for
easier debugging
**Dependencies**
- We now support idna v2.6.
2.18.3 (2017-08-02)
-------------------
**Improvements**
- Running `$ python -m requests.help` now includes the installed
version of idna.
**Bugfixes**
- Fixed issue where Requests would raise `ConnectionError` instead of
`SSLError` when encountering SSL problems when using urllib3 v1.22.
2.18.2 (2017-07-25)
-------------------
**Bugfixes**
- `requests.help` no longer fails on Python 2.6 due to the absence of
`ssl.OPENSSL_VERSION_NUMBER`.
**Dependencies**
- We now support urllib3 v1.22.
2.18.1 (2017-06-14)
-------------------
**Bugfixes**
- Fix an error in the packaging whereby the `*.whl` contained
incorrect data that regressed the fix in v2.17.3.
2.18.0 (2017-06-14)
-------------------
**Improvements**
- `Response` is now a context manager, so can be used directly in a
`with` statement without first having to be wrapped by
`contextlib.closing()`.
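A minimal sketch of the new context-manager behaviour. The `Response` here is constructed by hand purely for illustration (`raw` and `status_code` are normally populated by `Session.send`):

```python
import io

from requests.models import Response

# Build a minimal Response by hand; in real use, requests.get() returns one.
resp = Response()
resp.status_code = 200
resp.raw = io.BytesIO(b"payload")

with resp as r:  # __enter__ returns the response itself
    print(r.status_code)

# __exit__ has called close(), releasing the underlying stream.
print(resp.raw.closed)
```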
**Bugfixes**
- Resolve installation failure if multiprocessing is not available
- Resolve tests crash if multiprocessing is not able to determine the
number of CPU cores
- Resolve error swallowing in utils set\_environ generator
2.17.3 (2017-05-29)
-------------------
**Improvements**
- Improved `packages` namespace identity support, for monkeypatching
libraries.
2.17.2 (2017-05-29)
-------------------
**Improvements**
- Improved `packages` namespace identity support, for monkeypatching
libraries.
2.17.1 (2017-05-29)
-------------------
**Improvements**
- Improved `packages` namespace identity support, for monkeypatching
libraries.
2.17.0 (2017-05-29)
-------------------
**Improvements**
- Removal of the 301 redirect cache. This improves thread-safety.
2.16.5 (2017-05-28)
-------------------
- Improvements to `$ python -m requests.help`.
2.16.4 (2017-05-27)
-------------------
- Introduction of the `$ python -m requests.help` command, for
debugging with maintainers!
2.16.3 (2017-05-27)
-------------------
- Further restored the `requests.packages` namespace for compatibility
reasons.
2.16.2 (2017-05-27)
-------------------
- Further restored the `requests.packages` namespace for compatibility
reasons.
No code modification (noted below) should be necessary any longer.
2.16.1 (2017-05-27)
-------------------
- Restored the `requests.packages` namespace for compatibility
reasons.
- Bugfix for `urllib3` version parsing.
**Note**: code that was previously written to import against the
`requests.packages` namespace will now have to import from the
module level. For example:

    from requests.packages.urllib3.poolmanager import PoolManager

will need to be rewritten as:

    from requests.packages import urllib3
    urllib3.poolmanager.PoolManager

or, even better:

    from urllib3.poolmanager import PoolManager
2.16.0 (2017-05-26)
-------------------
- Unvendor ALL the things!
2.15.1 (2017-05-26)
-------------------
- Everyone makes mistakes.
2.15.0 (2017-05-26)
-------------------
**Improvements**
- Introduction of the `Response.next` property, for getting the next
`PreparedRequest` from a redirect chain (when
`allow_redirects=False`).
- Internal refactoring of `__version__` module.
**Bugfixes**
- Restored once-optional parameter for
`requests.utils.get_environ_proxies()`.
2.14.2 (2017-05-10)
-------------------
**Bugfixes**
- Changed a less-than to an equal-to and an or in the dependency
markers to widen compatibility with older setuptools releases.
2.14.1 (2017-05-09)
-------------------
**Bugfixes**
- Changed the dependency markers to widen compatibility with older pip
releases.
2.14.0 (2017-05-09)
-------------------
**Improvements**
- It is now possible to pass `no_proxy` as a key to the `proxies`
dictionary to provide handling similar to the `NO_PROXY` environment
variable.
- When users provide invalid paths to certificate bundle files or
directories Requests now raises `IOError`, rather than failing at
the time of the HTTPS request with a fairly inscrutable certificate
validation error.
- The behavior of `SessionRedirectMixin` was slightly altered.
`resolve_redirects` will now detect a redirect by calling
`get_redirect_target(response)` instead of directly querying
`Response.is_redirect` and `Response.headers['location']`. Advanced
users will be able to process malformed redirects more easily.
- Changed the internal calculation of elapsed request time to have
higher resolution on Windows.
- Added `win_inet_pton` as conditional dependency for the `[socks]`
extra on Windows with Python 2.7.
- Changed the proxy bypass implementation on Windows: the proxy bypass
check doesn't use forward and reverse DNS requests anymore
- URLs with schemes that begin with `http` but are not `http` or
`https` no longer have their host parts forced to lowercase.
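For illustration, a sketch of the new `no_proxy` key (the proxy URL and host suffixes below are placeholders):

```python
import requests

# Hosts matching the 'no_proxy' entries are connected to directly,
# mirroring the NO_PROXY environment variable.
session = requests.Session()
session.proxies = {
    "http": "http://proxy.internal:3128",
    "https": "http://proxy.internal:3128",
    "no_proxy": "localhost,127.0.0.1,.corp.example",
}
print(sorted(session.proxies))
```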
**Bugfixes**
- Much improved handling of non-ASCII `Location` header values in
redirects. Fewer `UnicodeDecodeErrors` are encountered on Python 2,
and Python 3 now correctly understands that Latin-1 is unlikely to
be the correct encoding.
- If an attempt to `seek` file to find out its length fails, we now
appropriately handle that by aborting our content-length
calculations.
- Restricted `HTTPDigestAuth` to only respond to auth challenges made
on 4XX responses, rather than to all auth challenges.
- Fixed some code that was firing `DeprecationWarning` on Python 3.6.
- The dismayed person emoticon (`/o\\`) no longer has a big head. I'm
sure this is what you were all worrying about most.
**Miscellaneous**
- Updated bundled urllib3 to v1.21.1.
- Updated bundled chardet to v3.0.2.
- Updated bundled idna to v2.5.
- Updated bundled certifi to 2017.4.17.
2.13.0 (2017-01-24)
-------------------
**Features**
- Only load the `idna` library when we've determined we need it. This
will save some memory for users.
**Miscellaneous**
- Updated bundled urllib3 to 1.20.
- Updated bundled idna to 2.2.
2.12.5 (2017-01-18)
-------------------
**Bugfixes**
- Fixed an issue with JSON encoding detection, specifically detecting
big-endian UTF-32 with BOM.
2.12.4 (2016-12-14)
-------------------
**Bugfixes**
- Fixed regression from 2.12.2 where non-string types were rejected in
the basic auth parameters. While support for this behaviour has been
re-added, the behaviour is deprecated and will be removed in the
future.
2.12.3 (2016-12-01)
-------------------
**Bugfixes**
- Fixed regression from v2.12.1 for URLs with schemes that begin with
"http". These URLs have historically been processed as though they
were HTTP-schemed URLs, and so have had parameters added. This was
removed in v2.12.2 in an overzealous attempt to resolve problems
with IDNA-encoding those URLs. This change was reverted: the other
fixes for IDNA-encoding have been judged to be sufficient to return
to the behaviour Requests had before v2.12.0.
2.12.2 (2016-11-30)
-------------------
**Bugfixes**
- Fixed several issues with IDNA-encoding URLs that are technically
invalid but which are widely accepted. Requests will now attempt to
IDNA-encode a URL if it can but, if it fails, and the host contains
only ASCII characters, it will be passed through optimistically.
This will allow users to opt-in to using IDNA2003 themselves if they
want to, and will also allow technically invalid but still common
hostnames.
- Fixed an issue where URLs with leading whitespace would raise
`InvalidSchema` errors.
- Fixed an issue where some URLs without the HTTP or HTTPS schemes
would still have HTTP URL preparation applied to them.
- Fixed an issue where Unicode strings could not be used in basic
auth.
- Fixed an issue encountered by some Requests plugins where
constructing a Response object would cause `Response.content` to
raise an `AttributeError`.
2.12.1 (2016-11-16)
-------------------
**Bugfixes**
- Updated setuptools 'security' extra for the new PyOpenSSL backend in
urllib3.
**Miscellaneous**
- Updated bundled urllib3 to 1.19.1.
2.12.0 (2016-11-15)
-------------------
**Improvements**
- Updated support for internationalized domain names from IDNA2003 to
IDNA2008. This updated support is required for several forms of IDNs
and is mandatory for .de domains.
- Much improved heuristics for guessing content lengths: Requests will
no longer read an entire `StringIO` into memory.
- Much improved logic for recalculating `Content-Length` headers for
`PreparedRequest` objects.
- Improved tolerance for file-like objects that have no `tell` method
but do have a `seek` method.
- Anything that is a subclass of `Mapping` is now treated like a
dictionary by the `data=` keyword argument.
- Requests now tolerates empty passwords in proxy credentials, rather
than stripping the credentials.
- If a request is made with a file-like object as the body and that
request is redirected with a 307 or 308 status code, Requests will
now attempt to rewind the body object so it can be replayed.
**Bugfixes**
- When calling `response.close`, the call to `close` will be
propagated through to non-urllib3 backends.
- Fixed issue where the `ALL_PROXY` environment variable would be
preferred over scheme-specific variables like `HTTP_PROXY`.
- Fixed issue where non-UTF8 reason phrases got severely mangled by
falling back to decoding using ISO 8859-1 instead.
- Fixed a bug where Requests would not correctly correlate cookies set
when using custom Host headers if those Host headers did not use the
native string type for the platform.
**Miscellaneous**
- Updated bundled urllib3 to 1.19.
- Updated bundled certifi certs to 2016.09.26.
2.11.1 (2016-08-17)
-------------------
**Bugfixes**
- Fixed a bug where using `iter_content` with `decode_unicode=True` on
streamed bodies would raise `AttributeError`. This bug was
introduced in 2.11.
- Strip Content-Type and Transfer-Encoding headers from the header
block when following a redirect that transforms the verb from
POST/PUT to GET.
2.11.0 (2016-08-08)
-------------------
**Improvements**
- Added support for the `ALL_PROXY` environment variable.
- Reject header values that contain leading whitespace or newline
characters to reduce risk of header smuggling.
**Bugfixes**
- Fixed occasional `TypeError` when attempting to decode a JSON
response that occurred in an error case. Now correctly returns a
`ValueError`.
- Requests would incorrectly ignore a non-CIDR IP address in the
`NO_PROXY` environment variables: Requests now treats it as a
specific IP.
- Fixed a bug when sending JSON data that could cause us to encounter
obscure OpenSSL errors in certain network conditions (yes, really).
- Added type checks to ensure that `iter_content` only accepts
integers and `None` for chunk sizes.
- Fixed issue where responses whose body had not been fully consumed
would have the underlying connection closed but not returned to the
connection pool, which could cause Requests to hang in situations
where the `HTTPAdapter` had been configured to use a blocking
connection pool.
**Miscellaneous**
- Updated bundled urllib3 to 1.16.
- Some previous releases accidentally accepted non-strings as
acceptable header values. This release does not.
2.10.0 (2016-04-29)
-------------------
**New Features**
- SOCKS Proxy Support! (requires PySocks;
`$ pip install requests[socks]`)
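A sketch of how SOCKS proxies are specified; the proxy address is a placeholder, and actually sending through it requires the PySocks extra:

```python
# SOCKS support needs the PySocks extra: pip install "requests[socks]".
# The proxy address below is a placeholder.
proxies = {
    "http": "socks5://127.0.0.1:1080",
    "https": "socks5://127.0.0.1:1080",
}
# requests.get("https://example.org", proxies=proxies)  # tunneled via SOCKS
print(proxies["https"])
```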
**Miscellaneous**
- Updated bundled urllib3 to 1.15.1.
2.9.2 (2016-04-29)
------------------
**Improvements**
- Change built-in CaseInsensitiveDict (used for headers) to use
OrderedDict as its underlying datastore.
**Bugfixes**
- Don't use redirect\_cache if allow\_redirects=False
- When passed objects that throw exceptions from `tell()`, send them
via chunked transfer encoding instead of failing.
- Raise a ProxyError for proxy related connection issues.
2.9.1 (2015-12-21)
------------------
**Bugfixes**
- Resolve regression introduced in 2.9.0 that made it impossible to
send binary strings as bodies in Python 3.
- Fixed errors when calculating cookie expiration dates in certain
locales.
**Miscellaneous**
- Updated bundled urllib3 to 1.13.1.
2.9.0 (2015-12-15)
------------------
**Minor Improvements** (Backwards compatible)
- The `verify` keyword argument now supports being passed a path to a
directory of CA certificates, not just a single-file bundle.
- Warnings are now emitted when sending files opened in text mode.
- Added the 511 Network Authentication Required status code to the
status code registry.
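The new registry entry can be looked up by name on `requests.codes`:

```python
import requests

# 511 Network Authentication Required is now in the status-code registry.
print(requests.codes.network_authentication_required)
```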
**Bugfixes**
- For file-like objects that are not positioned at the very beginning,
we now send the content length for the number of bytes we will
actually read, rather than the total size of the file, allowing
partial file uploads.
- When uploading file-like objects, if they are empty or have no
obvious content length we set `Transfer-Encoding: chunked` rather
than `Content-Length: 0`.
- We correctly receive the response in buffered mode when uploading
chunked bodies.
- We now handle being passed a query string as a bytestring on Python
3, by decoding it as UTF-8.
- Sessions are now closed in all cases (exceptional and not) when
using the functional API rather than leaking and waiting for the
garbage collector to clean them up.
- Correctly handle digest auth headers with a malformed `qop`
directive that contains no token, by treating it the same as if no
`qop` directive was provided at all.
- Minor performance improvements when removing specific cookies by
name.
**Miscellaneous**
- Updated urllib3 to 1.13.
2.8.1 (2015-10-13)
------------------
**Bugfixes**
- Update certificate bundle to match `certifi` 2015.9.6.2's weak
certificate bundle.
- Fix a bug in 2.8.0 where requests would raise `ConnectTimeout`
instead of `ConnectionError`
- When using the PreparedRequest flow, requests will now correctly
respect the `json` parameter. Broken in 2.8.0.
- When using the PreparedRequest flow, requests will now correctly
handle a Unicode-string method name on Python 2. Broken in 2.8.0.
2.8.0 (2015-10-05)
------------------
**Minor Improvements** (Backwards Compatible)
- Requests now supports per-host proxies. This allows the `proxies`
dictionary to have entries of the form
`{'<scheme>://<hostname>': '<proxy>'}`. Host-specific proxies will
be used in preference to the previously-supported scheme-specific
ones, but the previous syntax will continue to work.
- `Response.raise_for_status` now prints the URL that failed as part
of the exception message.
- `requests.utils.get_netrc_auth` now takes a `raise_errors` kwarg,
defaulting to `False`. When `True`, errors parsing `.netrc` files
cause exceptions to be thrown.
- Change to bundled projects import logic to make it easier to
unbundle requests downstream.
- Changed the default User-Agent string to avoid leaking data on
Linux: now contains only the requests version.
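A sketch of the per-host lookup order using the `requests.utils.select_proxy` helper (the proxy URLs below are placeholders):

```python
from requests.utils import select_proxy

proxies = {
    "https": "http://scheme-wide.proxy:3128",         # scheme-specific
    "https://example.org": "http://host.proxy:3128",  # host-specific, wins
}

# The host-specific entry is preferred for example.org; other hosts
# fall back to the scheme-wide entry.
print(select_proxy("https://example.org/path", proxies))
print(select_proxy("https://other.test/", proxies))
```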
**Bugfixes**
- The `json` parameter to `post()` and friends will now only be used
if neither `data` nor `files` are present, consistent with the
documentation.
- We now ignore empty fields in the `NO_PROXY` environment variable.
- Fixed problem where `httplib.BadStatusLine` would get raised if
combining `stream=True` with `contextlib.closing`.
- Prevented bugs where we would attempt to return the same connection
back to the connection pool twice when sending a Chunked body.
- Miscellaneous minor internal changes.
- Digest Auth support is now thread safe.
**Updates**
- Updated urllib3 to 1.12.
2.7.0 (2015-05-03)
------------------
This is the first release that follows our new release process. For
more, see [our
documentation](https://requests.readthedocs.io/en/latest/community/release-process/).
**Bugfixes**
- Updated urllib3 to 1.10.4, resolving several bugs involving chunked
transfer encoding and response framing.
2.6.2 (2015-04-23)
------------------
**Bugfixes**
- Fix regression where compressed data that was sent as chunked data
was not properly decompressed. (\#2561)
2.6.1 (2015-04-22)
------------------
**Bugfixes**
- Remove VendorAlias import machinery introduced in v2.5.2.
- Simplify the PreparedRequest.prepare API: We no longer require the
user to pass an empty list to the hooks keyword argument. (c.f.
\#2552)
- Resolve redirects now receives and forwards all of the original
arguments to the adapter. (\#2503)
- Handle UnicodeDecodeErrors when trying to deal with a unicode URL
that cannot be encoded in ASCII. (\#2540)
- Populate the parsed path of the URI field when performing Digest
Authentication. (\#2426)
- Copy a PreparedRequest's CookieJar more reliably when it is not an
instance of RequestsCookieJar. (\#2527)
2.6.0 (2015-03-14)
------------------
**Bugfixes**
- CVE-2015-2296: Fix handling of cookies on redirect. Previously a
cookie without a host value set would use the hostname for the
redirected URL exposing requests users to session fixation attacks
and potentially cookie stealing. This was disclosed privately by
Matthew Daley of [BugFuzz](https://bugfuzz.com). This affects all
versions of requests from v2.1.0 to v2.5.3 (inclusive on both ends).
- Fix error when requests is an `install_requires` dependency and
`python setup.py test` is run. (\#2462)
- Fix error when urllib3 is unbundled and requests continues to use
the vendored import location.
- Include fixes to `urllib3`'s header handling.
- Requests' handling of unvendored dependencies is now more
restrictive.
**Features and Improvements**
- Support bytearrays when passed as parameters in the `files`
argument. (\#2468)
- Avoid data duplication when creating a request with `str`, `bytes`,
or `bytearray` input to the `files` argument.
2.5.3 (2015-02-24)
------------------
**Bugfixes**
- Revert changes to our vendored certificate bundle. For more context
see (\#2455, \#2456, and <https://bugs.python.org/issue23476>)
2.5.2 (2015-02-23)
------------------
**Features and Improvements**
- Add sha256 fingerprint support.
([shazow/urllib3\#540](https://github.com/shazow/urllib3/pull/540))
- Improve the performance of headers.
([shazow/urllib3\#544](https://github.com/shazow/urllib3/pull/544))
**Bugfixes**
- Copy pip's import machinery. When downstream redistributors remove
requests.packages.urllib3 the import machinery will continue to let
those same symbols work. Example usage in requests' documentation
and 3rd-party libraries relying on the vendored copies of urllib3
will work without having to fallback to the system urllib3.
- Attempt to quote parts of the URL on redirect if unquoting and then
quoting fails. (\#2356)
- Fix filename type check for multipart form-data uploads. (\#2411)
- Properly handle the case where a server issuing digest
authentication challenges provides both auth and auth-int
qop-values. (\#2408)
- Fix a socket leak.
([shazow/urllib3\#549](https://github.com/shazow/urllib3/pull/549))
- Fix multiple `Set-Cookie` headers properly.
([shazow/urllib3\#534](https://github.com/shazow/urllib3/pull/534))
- Disable the built-in hostname verification.
([shazow/urllib3\#526](https://github.com/shazow/urllib3/pull/526))
- Fix the behaviour of decoding an exhausted stream.
([shazow/urllib3\#535](https://github.com/shazow/urllib3/pull/535))
**Security**
- Pulled in an updated `cacert.pem`.
- Drop RC4 from the default cipher list.
([shazow/urllib3\#551](https://github.com/shazow/urllib3/pull/551))
2.5.1 (2014-12-23)
------------------
**Behavioural Changes**
- Only catch HTTPErrors in raise\_for\_status (\#2382)
**Bugfixes**
- Handle LocationParseError from urllib3 (\#2344)
- Handle file-like object filenames that are not strings (\#2379)
- Unbreak HTTPDigestAuth handler. Allow new nonces to be negotiated
(\#2389)
2.5.0 (2014-12-01)
------------------
**Improvements**
- Allow usage of urllib3's Retry object with HTTPAdapters (\#2216)
- The `iter_lines` method on a response now accepts a delimiter with
which to split the content (\#2295)
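A sketch of mounting an `HTTPAdapter` configured with urllib3's `Retry` object (the retry values are illustrative):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util import Retry

# Retry up to 3 times, with backoff, on 502/503 responses.
retries = Retry(total=3, backoff_factor=0.5, status_forcelist=[502, 503])
session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retries))

print(session.get_adapter("https://example.org/").max_retries.total)
```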
**Behavioural Changes**
- Add deprecation warnings to functions in requests.utils that will be
removed in 3.0 (\#2309)
- Sessions used by the functional API are always closed (\#2326)
- Restrict requests to HTTP/1.1 and HTTP/1.0 (stop accepting HTTP/0.9)
(\#2323)
**Bugfixes**
- Only parse the URL once (\#2353)
- Allow Content-Length header to always be overridden (\#2332)
- Properly handle files in HTTPDigestAuth (\#2333)
- Cap redirect\_cache size to prevent memory abuse (\#2299)
- Fix HTTPDigestAuth handling of redirects after authenticating
successfully (\#2253)
- Fix crash with custom method parameter to Session.request (\#2317)
- Fix how Link headers are parsed using the regular expression library
(\#2271)
**Documentation**
- Add more references for interlinking (\#2348)
- Update CSS for theme (\#2290)
- Update width of buttons and sidebar (\#2289)
- Replace references of Gittip with Gratipay (\#2282)
- Add link to changelog in sidebar (\#2273)
2.4.3 (2014-10-06)
------------------
**Bugfixes**
- Unicode URL improvements for Python 2.
- Re-order JSON param for backwards compat.
- Automatically defrag authentication schemes from host/pass URIs.
([\#2249](https://github.com/psf/requests/issues/2249))
2.4.2 (2014-10-05)
------------------
**Improvements**
- FINALLY! Add json parameter for uploads!
([\#2258](https://github.com/psf/requests/pull/2258))
- Support for bytestring URLs on Python 3.x
([\#2238](https://github.com/psf/requests/pull/2238))
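A sketch of the new `json` parameter; the request is only prepared here, not sent, and the URL is a placeholder:

```python
import requests

# The json= parameter serializes the dict and sets the Content-Type header.
req = requests.Request("POST", "https://example.org/api", json={"key": "value"})
prepared = req.prepare()

print(prepared.headers["Content-Type"])
print(prepared.body)
```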
**Bugfixes**
- Avoid getting stuck in a loop
([\#2244](https://github.com/psf/requests/pull/2244))
- Multiple calls to `iter*` methods no longer fail with an unhelpful
error.
([\#2240](https://github.com/psf/requests/issues/2240),
[\#2241](https://github.com/psf/requests/issues/2241))
**Documentation**
- Correct redirection introduction
([\#2245](https://github.com/psf/requests/pull/2245/))
- Added example of how to send multiple files in one request.
([\#2227](https://github.com/psf/requests/pull/2227/))
- Clarify how to pass a custom set of CAs
([\#2248](https://github.com/psf/requests/pull/2248/))
2.4.1 (2014-09-09)
------------------
- Now has a "security" package extras set,
`$ pip install requests[security]`
- Requests will now use Certifi if it is available.
- Capture and re-raise urllib3 ProtocolError
- Bugfix for responses that attempt to redirect to themselves forever
(wtf?).
2.4.0 (2014-08-29)
------------------
**Behavioral Changes**
- `Connection: keep-alive` header is now sent automatically.
**Improvements**
- Support for connect timeouts! Timeout now accepts a tuple (connect,
read) which is used to set individual connect and read timeouts.
- Allow copying of PreparedRequests without headers/cookies.
- Updated bundled urllib3 version.
- Refactored settings loading from environment -- new
Session.merge\_environment\_settings.
- Handle socket errors in iter\_content.
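A sketch of the new two-part timeout, exercised against a throwaway local server so nothing leaves the machine:

```python
import http.server
import threading

import requests

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to an ephemeral port and serve in a background thread.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# timeout=(connect, read): 3.05 s to open the socket, 10 s per read.
r = requests.get(f"http://127.0.0.1:{port}/", timeout=(3.05, 10))
print(r.status_code)
server.shutdown()
```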
2.3.0 (2014-05-16)
------------------
**API Changes**
- New `Response` property `is_redirect`, which is true when the
library could have processed this response as a redirection (whether
or not it actually did).
- The `timeout` parameter now affects requests with both `stream=True`
and `stream=False` equally.
- The change in v2.0.0 to mandate explicit proxy schemes has been
reverted. Proxy schemes now default to `http://`.
- The `CaseInsensitiveDict` used for HTTP headers now behaves like a
normal dictionary when referenced as a string or viewed in the
interpreter.
**Bugfixes**
- No longer expose Authorization or Proxy-Authorization headers on
redirect. Fix CVE-2014-1829 and CVE-2014-1830 respectively.
- Authorization is re-evaluated each redirect.
- On redirect, pass url as native strings.
- Fall-back to autodetected encoding for JSON when Unicode detection
fails.
- Headers set to `None` on the `Session` are now correctly not sent.
- Correctly honor `decode_unicode` even if it wasn't used earlier in
the same response.
- Stop advertising `compress` as a supported Content-Encoding.
- The `Response.history` parameter is now always a list.
- Many, many `urllib3` bugfixes.
2.2.1 (2014-01-23)
------------------
**Bugfixes**
- Fixes incorrect parsing of proxy credentials that contain a literal
or encoded '\#' character.
- Assorted urllib3 fixes.
2.2.0 (2014-01-09)
------------------
**API Changes**
- New exception: `ContentDecodingError`. Raised instead of `urllib3`
`DecodeError` exceptions.
**Bugfixes**
- Avoid many many exceptions from the buggy implementation of
`proxy_bypass` on OS X in Python 2.6.
- Avoid crashing when attempting to get authentication credentials
from \~/.netrc when running as a user without a home directory.
- Use the correct pool size for pools of connections to proxies.
- Fix iteration of `CookieJar` objects.
- Ensure that cookies are persisted over redirect.
- Switch back to using chardet, since it has merged with charade.
2.1.0 (2013-12-05)
------------------
- Updated CA Bundle, of course.
- Cookies set on individual Requests through a `Session` (e.g. via
`Session.get()`) are no longer persisted to the `Session`.
- Clean up connections when we hit problems during chunked upload,
rather than leaking them.
- Return connections to the pool when a chunked upload is successful,
rather than leaking it.
- Match the HTTPbis recommendation for HTTP 301 redirects.
- Prevent hanging when using streaming uploads and Digest Auth when a
401 is received.
- Values of headers set by Requests are now always the native string
type.
- Fix previously broken SNI support.
- Fix accessing HTTP proxies using proxy authentication.
- Unencode HTTP Basic usernames and passwords extracted from URLs.
- Support for IP address ranges for no\_proxy environment variable
- Parse headers correctly when users override the default `Host:`
header.
- Avoid munging the URL in case of case-sensitive servers.
- Looser URL handling for non-HTTP/HTTPS urls.
- Accept unicode methods in Python 2.6 and 2.7.
- More resilient cookie handling.
- Make `Response` objects pickleable.
- Actually added MD5-sess to Digest Auth instead of pretending to like
last time.
- Updated internal urllib3.
- Fixed @Lukasa's lack of taste.
2.0.1 (2013-10-24)
------------------
- Updated included CA Bundle with new mistrusts and automated process
for the future
- Added MD5-sess to Digest Auth
- Accept per-file headers in multipart file POST messages.
- Fixed: Don't send the full URL on CONNECT messages.
- Fixed: Correctly lowercase a redirect scheme.
- Fixed: Cookies not persisted when set via functional API.
- Fixed: Translate urllib3 ProxyError into a requests ProxyError
derived from ConnectionError.
- Updated internal urllib3 and chardet.
2.0.0 (2013-09-24)
------------------
**API Changes:**
- Keys in the Headers dictionary are now native strings on all Python
versions, i.e. bytestrings on Python 2, unicode on Python 3.
- Proxy URLs now *must* have an explicit scheme. A `MissingSchema`
exception will be raised if they don't.
- Timeouts now apply to read time if `stream=False`.
- `RequestException` is now a subclass of `IOError`, not
`RuntimeError`.
- Added new method to `PreparedRequest` objects:
`PreparedRequest.copy()`.
- Added new method to `Session` objects: `Session.update_request()`.
This method updates a `Request` object with the data (e.g. cookies)
stored on the `Session`.
- Added new method to `Session` objects: `Session.prepare_request()`.
This method updates and prepares a `Request` object, and returns the
corresponding `PreparedRequest` object.
- Added new method to `HTTPAdapter` objects:
`HTTPAdapter.proxy_headers()`. This should not be called directly,
but improves the subclass interface.
- `httplib.IncompleteRead` exceptions caused by incorrect chunked
encoding will now raise a Requests `ChunkedEncodingError` instead.
- Invalid percent-escape sequences now cause a Requests `InvalidURL`
exception to be raised.
- HTTP 208 no longer uses reason phrase `"im_used"`. Correctly uses
`"already_reported"`.
- HTTP 226 reason added (`"im_used"`).
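A sketch of the new `Session.prepare_request()` flow, showing session state being merged into an outgoing request (the URL and header are placeholders):

```python
import requests

session = requests.Session()
session.headers["X-Token"] = "abc"  # session-level state to merge

req = requests.Request("GET", "https://example.org/", params={"q": "1"})
prepared = session.prepare_request(req)  # merges session headers/cookies

print(prepared.url)
print(prepared.headers["X-Token"])
```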
**Bugfixes:**
- Vastly improved proxy support, including the CONNECT verb. Special
thanks to the many contributors who worked towards this improvement.
- Cookies are now properly managed when 401 authentication responses
are received.
- Chunked encoding fixes.
- Support for mixed case schemes.
- Better handling of streaming downloads.
- Retrieve environment proxies from more locations.
- Minor cookies fixes.
- Improved redirect behaviour.
- Improved streaming behaviour, particularly for compressed data.
- Miscellaneous small Python 3 text encoding bugs.
- `.netrc` no longer overrides explicit auth.
- Cookies set by hooks are now correctly persisted on Sessions.
- Fix problem with cookies that specify port numbers in their host
field.
- `BytesIO` can be used to perform streaming uploads.
- More generous parsing of the `no_proxy` environment variable.
- Non-string objects can be passed in data values alongside files.
1.2.3 (2013-05-25)
------------------
- Simple packaging fix
1.2.2 (2013-05-23)
------------------
- Simple packaging fix
1.2.1 (2013-05-20)
------------------
- 301 and 302 redirects now change the verb to GET for all verbs, not
just POST, improving browser compatibility.
- Python 3.3.2 compatibility
- Always percent-encode location headers
- Fix connection adapter matching to be most-specific first
- New argument to the default connection adapter for passing a block
argument
- Prevent a `KeyError` when there are no Link headers
1.2.0 (2013-03-31)
------------------
- Fixed cookies on sessions and on requests
- Significantly change how hooks are dispatched - hooks now receive
all the arguments specified by the user when making a request so
hooks can make a secondary request with the same parameters. This is
especially necessary for authentication handler authors
- certifi support was removed
- Fixed bug where using OAuth 1 with body `signature_type` sent no
data
- Major proxy work thanks to @Lukasa including parsing of proxy
authentication from the proxy url
- Fix DigestAuth handling too many 401s
- Update vendored urllib3 to include SSL bug fixes
- Allow keyword arguments to be passed to `json.loads()` via the
`Response.json()` method
- Don't send `Content-Length` header by default on `GET` or `HEAD`
requests
- Add `elapsed` attribute to `Response` objects to time how long a
request took.
- Fix `RequestsCookieJar`
- Sessions and Adapters are now picklable, i.e., can be used with the
multiprocessing library
- Update charade to version 1.0.3
The change in how hooks are dispatched will likely cause a great deal of
issues.
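A sketch of forwarding keyword arguments through `Response.json()`; the `Response` is built by hand for illustration (setting the private `_content` attribute, which a real response populates itself):

```python
from requests.models import Response

# Build a Response by hand for illustration only.
resp = Response()
resp.status_code = 200
resp._content = b'{"value": 1.5}'

# Keyword arguments pass straight through to json.loads();
# parse_float=str keeps the float as a string.
print(resp.json(parse_float=str))
```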
1.1.0 (2013-01-10)
------------------
- CHUNKED REQUESTS
- Support for iterable response bodies
- Assume servers persist redirect params
- Allow explicit content types to be specified for file data
- Make merge\_kwargs case-insensitive when looking up keys
1.0.3 (2012-12-18)
------------------
- Fix file upload encoding bug
- Fix cookie behavior
1.0.2 (2012-12-17)
------------------
- Proxy fix for HTTPAdapter.
1.0.1 (2012-12-17)
------------------
- Cert verification exception bug.
- Proxy fix for HTTPAdapter.
1.0.0 (2012-12-17)
------------------
- Massive Refactor and Simplification
- Switch to Apache 2.0 license
- Swappable Connection Adapters
- Mountable Connection Adapters
- Mutable ProcessedRequest chain
- /s/prefetch/stream
- Removal of all configuration
- Standard library logging
- Make Response.json() callable, not property.
- Usage of new charade project, which provides python 2 and 3
simultaneous chardet.
- Removal of all hooks except 'response'
- Removal of all authentication helpers (OAuth, Kerberos)
This is not a backwards compatible change.
0.14.2 (2012-10-27)
-------------------
- Improved mime-compatible JSON handling
- Proxy fixes
- Path hack fixes
- Case-Insensitive Content-Encoding headers
- Support for CJK parameters in form posts
0.14.1 (2012-10-01)
-------------------
- Python 3.3 Compatibility
- Simplify default Accept-Encoding
- Bugfixes
0.14.0 (2012-09-02)
-------------------
- No more iter\_content errors if already downloaded.
0.13.9 (2012-08-25)
-------------------
- Fix for OAuth + POSTs
- Remove exception eating from dispatch\_hook
- General bugfixes
0.13.8 (2012-08-21)
-------------------
- Incredible Link header support :)
0.13.7 (2012-08-19)
-------------------
- Support for (key, value) lists everywhere.
- Digest Authentication improvements.
- Ensure proxy exclusions work properly.
- Clearer UnicodeError exceptions.
- Automatic casting of URLs to strings (fURL and such)
- Bugfixes.
0.13.6 (2012-08-06)
-------------------
- Long awaited fix for hanging connections!
0.13.5 (2012-07-27)
-------------------
- Packaging fix
0.13.4 (2012-07-27)
-------------------
- GSSAPI/Kerberos authentication!
- App Engine 2.7 Fixes!
- Fix leaking connections (from urllib3 update)
- OAuthlib path hack fix
- OAuthlib URL parameters fix.
0.13.3 (2012-07-12)
-------------------
- Use simplejson if available.
- Do not hide SSLErrors behind Timeouts.
- Fixed param handling with urls containing fragments.
- Significantly improved information in User Agent.
- client certificates are ignored when verify=False
0.13.2 (2012-06-28)
-------------------
- Zero dependencies (once again)!
- New: Response.reason
- Sign querystring parameters in OAuth 1.0
- Client certificates no longer ignored when verify=False
- Add openSUSE certificate support
0.13.1 (2012-06-07)
-------------------
- Allow passing a file or file-like object as data.
- Allow hooks to return responses that indicate errors.
- Fix Response.text and Response.json for body-less responses.
0.13.0 (2012-05-29)
-------------------
- Removal of Requests.async in favor of
[grequests](https://github.com/kennethreitz/grequests)
- Allow disabling of cookie persistence.
- New implementation of safe\_mode
- cookies.get now supports default argument
- Session cookies not saved when Session.request is called with
return\_response=False
- Env: no\_proxy support.
- RequestsCookieJar improvements.
- Various bug fixes.
0.12.1 (2012-05-08)
-------------------
- New `Response.json` property.
- Ability to add string file uploads.
- Fix out-of-range issue with iter\_lines.
- Fix iter\_content default size.
- Fix POST redirects containing files.
0.12.0 (2012-05-02)
-------------------
- EXPERIMENTAL OAUTH SUPPORT!
- Proper CookieJar-backed cookies interface with awesome dict-like
interface.
- Speed fix for non-iterated content chunks.
- Move `pre_request` to a more usable place.
- New `pre_send` hook.
- Lazily encode data, params, files.
- Load system Certificate Bundle if `certifi` isn't available.
- Cleanups, fixes.
0.11.2 (2012-04-22)
-------------------
- Attempt to use the OS's certificate bundle if `certifi` isn't
available.
- Infinite digest auth redirect fix.
- Multi-part file upload improvements.
- Fix decoding of invalid %encodings in URLs.
- If there is no content in a response don't throw an error the second
time that content is attempted to be read.
- Upload data on redirects.
0.11.1 (2012-03-30)
-------------------
- POST redirects now break RFC to do what browsers do: Follow up with
a GET.
- New `strict_mode` configuration to disable new redirect behavior.
0.11.0 (2012-03-14)
-------------------
- Private SSL Certificate support
- Remove select.poll from Gevent monkeypatching
- Remove redundant generator for chunked transfer encoding
- Fix: Response.ok raises Timeout Exception in safe\_mode
0.10.8 (2012-03-09)
-------------------
- Generate chunked ValueError fix
- Proxy configuration by environment variables
- Simplification of iter\_lines.
- New trust\_env configuration for disabling system/environment hints.
- Suppress cookie errors.
0.10.7 (2012-03-07)
-------------------
- encode\_uri = False
0.10.6 (2012-02-25)
-------------------
- Allow '=' in cookies.
0.10.5 (2012-02-25)
-------------------
- Response body with 0 content-length fix.
- New async.imap.
- Don't fail on netrc.
0.10.4 (2012-02-20)
-------------------
- Honor netrc.
0.10.3 (2012-02-20)
-------------------
- HEAD requests don't follow redirects anymore.
- raise\_for\_status() doesn't raise for 3xx anymore.
- Make Session objects picklable.
- ValueError for invalid schema URLs.
0.10.2 (2012-01-15)
-------------------
- Vastly improved URL quoting.
- Additional allowed cookie key values.
- Attempted fix for "Too many open files" Error
- Replace unicode errors on first pass, no need for second pass.
- Append '/' to bare-domain urls before query insertion.
- Exceptions now inherit from RuntimeError.
- Binary uploads + auth fix.
- Bugfixes.
0.10.1 (2012-01-23)
-------------------
- PYTHON 3 SUPPORT!
- Dropped 2.5 Support. (*Backwards Incompatible*)
0.10.0 (2012-01-21)
-------------------
- `Response.content` is now bytes-only. (*Backwards Incompatible*)
- New `Response.text` is unicode-only.
- If no `Response.encoding` is specified and `chardet` is available,
`Response.text` will guess an encoding.
- Default to ISO-8859-1 (Western) encoding for "text" subtypes.
- Removal of decode\_unicode. (*Backwards Incompatible*)
- New multiple-hooks system.
- New `Response.register_hook` for registering hooks within the
pipeline.
- `Response.url` is now Unicode.
0.9.3 (2012-01-18)
------------------
- SSL verify=False bugfix (apparent on windows machines).
0.9.2 (2012-01-18)
------------------
- Asynchronous async.send method.
- Support for proper chunk streams with boundaries.
- session argument for Session classes.
- Print entire hook tracebacks, not just exception instance.
- Fix response.iter\_lines from pending next line.
- Fix bug in HTTP-digest auth w/ URI having query strings.
- Fix in Event Hooks section.
- Urllib3 update.
0.9.1 (2012-01-06)
------------------
- danger\_mode for automatic Response.raise\_for\_status()
- Response.iter\_lines refactor
0.9.0 (2011-12-28)
------------------
- verify ssl is default.
0.8.9 (2011-12-28)
------------------
- Packaging fix.
0.8.8 (2011-12-28)
------------------
- SSL CERT VERIFICATION!
- Release of Certifi: Mozilla's cert list.
- New 'verify' argument for SSL requests.
- Urllib3 update.
0.8.7 (2011-12-24)
------------------
- iter\_lines last-line truncation fix
- Force safe\_mode for async requests
- Handle safe\_mode exceptions more consistently
- Fix iteration on null responses in safe\_mode
0.8.6 (2011-12-18)
------------------
- Socket timeout fixes.
- Proxy Authorization support.
0.8.5 (2011-12-14)
------------------
- Response.iter\_lines!
0.8.4 (2011-12-11)
------------------
- Prefetch bugfix.
- Added license to installed version.
0.8.3 (2011-11-27)
------------------
- Converted auth system to use simpler callable objects.
- New session parameter to API methods.
- Display full URL while logging.
0.8.2 (2011-11-19)
------------------
- New Unicode decoding system, based on overridable
Response.encoding.
- Proper URL slash-quote handling.
- Cookies with `[`, `]`, and `_` allowed.
0.8.1 (2011-11-15)
------------------
- URL Request path fix
- Proxy fix.
- Timeouts fix.
0.8.0 (2011-11-13)
------------------
- Keep-alive support!
- Complete removal of Urllib2
- Complete removal of Poster
- Complete removal of CookieJars
- New ConnectionError raising
- Safe\_mode for error catching
- prefetch parameter for request methods
- OPTION method
- Async pool size throttling
- File uploads send real names
- Vendored in urllib3
0.7.6 (2011-11-07)
------------------
- Digest authentication bugfix (attach query data to path)
0.7.5 (2011-11-04)
------------------
- Response.content = None if there was an invalid response.
- Redirection auth handling.
0.7.4 (2011-10-26)
------------------
- Session Hooks fix.
0.7.3 (2011-10-23)
------------------
- Digest Auth fix.
0.7.2 (2011-10-23)
------------------
- PATCH Fix.
0.7.1 (2011-10-23)
------------------
- Move away from urllib2 authentication handling.
- Fully Remove AuthManager, AuthObject, &c.
- New tuple-based auth system with handler callbacks.
0.7.0 (2011-10-22)
------------------
- Sessions are now the primary interface.
- Deprecated InvalidMethodException.
- PATCH fix.
- New config system (no more global settings).
0.6.6 (2011-10-19)
------------------
- Session parameter bugfix (params merging).
0.6.5 (2011-10-18)
------------------
- Offline (fast) test suite.
- Session dictionary argument merging.
0.6.4 (2011-10-13)
------------------
- Automatic decoding of unicode, based on HTTP Headers.
- New `decode_unicode` setting.
- Removal of `r.read/close` methods.
- New `r.raw` interface for advanced response usage.\*
- Automatic expansion of parameterized headers.
0.6.3 (2011-10-13)
------------------
- Beautiful `requests.async` module, for making async requests w/
gevent.
0.6.2 (2011-10-09)
------------------
- GET/HEAD obeys allow\_redirects=False.
0.6.1 (2011-08-20)
------------------
- Enhanced status codes experience `\o/`
- Set a maximum number of redirects (`settings.max_redirects`)
- Full Unicode URL support
- Support for protocol-less redirects.
- Allow for arbitrary request types.
- Bugfixes
0.6.0 (2011-08-17)
------------------
- New callback hook system
- New persistent sessions object and context manager
- Transparent Dict-cookie handling
- Status code reference object
- Removed Response.cached
- Added Response.request
- All args are kwargs
- Relative redirect support
- HTTPError handling improvements
- Improved https testing
- Bugfixes
0.5.1 (2011-07-23)
------------------
- International Domain Name Support!
- Access headers without fetching entire body (`read()`)
- Use lists as dicts for parameters
- Add Forced Basic Authentication
- Forced Basic is default authentication type
- `python-requests.org` default User-Agent header
- CaseInsensitiveDict lower-case caching
- Response.history bugfix
0.5.0 (2011-06-21)
------------------
- PATCH Support
- Support for Proxies
- HTTPBin Test Suite
- Redirect Fixes
- settings.verbose stream writing
- Querystrings for all methods
- URLErrors (Connection Refused, Timeout, Invalid URLs) are treated as
explicitly raised
`r = requests.get('hwe://blah'); r.raise_for_status()`
0.4.1 (2011-05-22)
------------------
- Improved Redirection Handling
- New 'allow\_redirects' param for following non-GET/HEAD Redirects
- Settings module refactoring
0.4.0 (2011-05-15)
------------------
- Response.history: list of redirected responses
- Case-Insensitive Header Dictionaries!
- Unicode URLs
0.3.4 (2011-05-14)
------------------
- Urllib2 HTTPAuthentication Recursion fix (Basic/Digest)
- Internal Refactor
- Bytes data upload Bugfix
0.3.3 (2011-05-12)
------------------
- Request timeouts
- Unicode url-encoded data
- Settings context manager and module
0.3.2 (2011-04-15)
------------------
- Automatic Decompression of GZip Encoded Content
- AutoAuth Support for Tupled HTTP Auth
0.3.1 (2011-04-01)
------------------
- Cookie Changes
- Response.read()
- Poster fix
0.3.0 (2011-02-25)
------------------
- Automatic Authentication API Change
- Smarter Query URL Parameterization
- Allow file uploads and POST data together
- New Authentication Manager System
  - Simpler Basic HTTP System
  - Supports all built-in urllib2 Auths
  - Allows for custom Auth Handlers
0.2.4 (2011-02-19)
------------------
- Python 2.5 Support
- PyPy-c v1.4 Support
- Auto-Authentication tests
- Improved Request object constructor
0.2.3 (2011-02-15)
------------------
- New HTTPHandling Methods
  - Response.\_\_nonzero\_\_ (false if bad HTTP Status)
  - Response.ok (True if expected HTTP Status)
  - Response.error (Logged HTTPError if bad HTTP Status)
  - Response.raise\_for\_status() (Raises stored HTTPError)
0.2.2 (2011-02-14)
------------------
- Still handles request in the event of an HTTPError. (Issue \#2)
- Eventlet and Gevent Monkeypatch support.
- Cookie Support (Issue \#1)
0.2.1 (2011-02-14)
------------------
- Added file attribute to POST and PUT requests for multipart-encode
file uploads.
- Added Request.url attribute for context and redirects
0.2.0 (2011-02-14)
------------------
- Birth!
0.0.1 (2011-02-13)
------------------
- Frustration
- Conception
r"""
The ``codes`` object defines a mapping from common names for HTTP statuses
to their numerical codes, accessible either as attributes or as dictionary
items.
Example::
>>> import requests
>>> requests.codes['temporary_redirect']
307
>>> requests.codes.teapot
418
>>> requests.codes['\o/']
200
Some codes have multiple names, and both upper- and lower-case versions of
the names are allowed. For example, ``codes.ok``, ``codes.OK``, and
``codes.okay`` all correspond to the HTTP status code 200.
"""
from .structures import LookupDict
_codes = {
# Informational.
100: ("continue",),
101: ("switching_protocols",),
102: ("processing",),
103: ("checkpoint",),
122: ("uri_too_long", "request_uri_too_long"),
200: ("ok", "okay", "all_ok", "all_okay", "all_good", "\\o/", "✓"),
201: ("created",),
202: ("accepted",),
203: ("non_authoritative_info", "non_authoritative_information"),
204: ("no_content",),
205: ("reset_content", "reset"),
206: ("partial_content", "partial"),
207: ("multi_status", "multiple_status", "multi_stati", "multiple_stati"),
208: ("already_reported",),
226: ("im_used",),
# Redirection.
300: ("multiple_choices",),
301: ("moved_permanently", "moved", "\\o-"),
302: ("found",),
303: ("see_other", "other"),
304: ("not_modified",),
305: ("use_proxy",),
306: ("switch_proxy",),
307: ("temporary_redirect", "temporary_moved", "temporary"),
308: (
"permanent_redirect",
"resume_incomplete",
"resume",
), # "resume" and "resume_incomplete" to be removed in 3.0
# Client Error.
400: ("bad_request", "bad"),
401: ("unauthorized",),
402: ("payment_required", "payment"),
403: ("forbidden",),
404: ("not_found", "-o-"),
405: ("method_not_allowed", "not_allowed"),
406: ("not_acceptable",),
407: ("proxy_authentication_required", "proxy_auth", "proxy_authentication"),
408: ("request_timeout", "timeout"),
409: ("conflict",),
410: ("gone",),
411: ("length_required",),
412: ("precondition_failed", "precondition"),
413: ("request_entity_too_large",),
414: ("request_uri_too_large",),
415: ("unsupported_media_type", "unsupported_media", "media_type"),
416: (
"requested_range_not_satisfiable",
"requested_range",
"range_not_satisfiable",
),
417: ("expectation_failed",),
418: ("im_a_teapot", "teapot", "i_am_a_teapot"),
421: ("misdirected_request",),
422: ("unprocessable_entity", "unprocessable"),
423: ("locked",),
424: ("failed_dependency", "dependency"),
425: ("unordered_collection", "unordered"),
426: ("upgrade_required", "upgrade"),
428: ("precondition_required", "precondition"),
429: ("too_many_requests", "too_many"),
431: ("header_fields_too_large", "fields_too_large"),
444: ("no_response", "none"),
449: ("retry_with", "retry"),
450: ("blocked_by_windows_parental_controls", "parental_controls"),
451: ("unavailable_for_legal_reasons", "legal_reasons"),
499: ("client_closed_request",),
# Server Error.
500: ("internal_server_error", "server_error", "/o\\", "✗"),
501: ("not_implemented",),
502: ("bad_gateway",),
503: ("service_unavailable", "unavailable"),
504: ("gateway_timeout",),
505: ("http_version_not_supported", "http_version"),
506: ("variant_also_negotiates",),
507: ("insufficient_storage",),
509: ("bandwidth_limit_exceeded", "bandwidth"),
510: ("not_extended",),
511: ("network_authentication_required", "network_auth", "network_authentication"),
}
codes = LookupDict(name="status_codes")
def _init():
for code, titles in _codes.items():
for title in titles:
setattr(codes, title, code)
if not title.startswith(("\\", "/")):
setattr(codes, title.upper(), code)
def doc(code):
names = ", ".join(f"``{n}``" for n in _codes[code])
return "* %d: %s" % (code, names)
global __doc__
__doc__ = (
__doc__ + "\n" + "\n".join(doc(code) for code in sorted(_codes))
if __doc__ is not None
else None
)
_init()
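The `_init()` loop above mirrors every name in `_codes` onto the `codes` object as both an attribute and, via `LookupDict.__getitem__`, a dictionary-style key. A self-contained sketch of that pattern, using a trimmed stand-in for the `LookupDict` class (defined in `structures.py`) and a two-entry table:

```python
class LookupDict(dict):
    """Trimmed stand-in for requests.structures.LookupDict."""

    def __init__(self, name=None):
        self.name = name
        super().__init__()

    def __getitem__(self, key):
        # Fall through to None for unknown names.
        return self.__dict__.get(key, None)


codes = LookupDict(name="status_codes")
_table = {200: ("ok", "\\o/"), 404: ("not_found", "-o-")}

for code, titles in _table.items():
    for title in titles:
        setattr(codes, title, code)
        # Names starting with a backslash or slash get no upper-case alias.
        if not title.startswith(("\\", "/")):
            setattr(codes, title.upper(), code)

print(codes.ok, codes["OK"], codes["no_such_name"])  # 200 200 None
```

Attribute access (`codes.ok`) and item access (`codes["OK"]`) both resolve through the instance `__dict__`, which is why unknown names quietly return `None` instead of raising.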
from . import sessions
def request(method, url, **kwargs):
"""Constructs and sends a :class:`Request <Request>`.
:param method: method for the new :class:`Request` object: ``GET``, ``OPTIONS``, ``HEAD``, ``POST``, ``PUT``, ``PATCH``, or ``DELETE``.
:param url: URL for the new :class:`Request` object.
:param params: (optional) Dictionary, list of tuples or bytes to send
in the query string for the :class:`Request`.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) A JSON serializable Python object to send in the body of the :class:`Request`.
:param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`.
:param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`.
:param files: (optional) Dictionary of ``'name': file-like-objects`` (or ``{'name': file-tuple}``) for multipart encoding upload.
``file-tuple`` can be a 2-tuple ``('filename', fileobj)``, 3-tuple ``('filename', fileobj, 'content_type')``
or a 4-tuple ``('filename', fileobj, 'content_type', custom_headers)``, where ``'content-type'`` is a string
defining the content type of the given file and ``custom_headers`` a dict-like object containing additional headers
to add for the file.
:param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
:param timeout: (optional) How many seconds to wait for the server to send data
before giving up, as a float, or a :ref:`(connect timeout, read
timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param allow_redirects: (optional) Boolean. Enable/disable GET/OPTIONS/POST/PUT/PATCH/DELETE/HEAD redirection. Defaults to ``True``.
:type allow_redirects: bool
:param proxies: (optional) Dictionary mapping protocol to the URL of the proxy.
:param verify: (optional) Either a boolean, in which case it controls whether we verify
the server's TLS certificate, or a string, in which case it must be a path
to a CA bundle to use. Defaults to ``True``.
:param stream: (optional) if ``False``, the response content will be immediately downloaded.
:param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
:return: :class:`Response <Response>` object
:rtype: requests.Response
Usage::
>>> import requests
>>> req = requests.request('GET', 'https://httpbin.org/get')
>>> req
<Response [200]>
"""
# By using the 'with' statement we are sure the session is closed, thus we
# avoid leaving sockets open which can trigger a ResourceWarning in some
# cases, and look like a memory leak in others.
with sessions.Session() as session:
return session.request(method=method, url=url, **kwargs)
def get(url, params=None, **kwargs):
r"""Sends a GET request.
:param url: URL for the new :class:`Request` object.
:param params: (optional) Dictionary, list of tuples or bytes to send
in the query string for the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request("get", url, params=params, **kwargs)
def options(url, **kwargs):
r"""Sends an OPTIONS request.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request("options", url, **kwargs)
def head(url, **kwargs):
r"""Sends a HEAD request.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes. If
`allow_redirects` is not provided, it will be set to `False` (as
opposed to the default :meth:`request` behavior).
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
kwargs.setdefault("allow_redirects", False)
return request("head", url, **kwargs)
def post(url, data=None, json=None, **kwargs):
r"""Sends a POST request.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) json data to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request("post", url, data=data, json=json, **kwargs)
def put(url, data=None, **kwargs):
r"""Sends a PUT request.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) json data to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request("put", url, data=data, **kwargs)
def patch(url, data=None, **kwargs):
r"""Sends a PATCH request.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) json data to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request("patch", url, data=data, **kwargs)
def delete(url, **kwargs):
r"""Sends a DELETE request.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request("delete", url, **kwargs)
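All of the verb helpers above are thin wrappers that funnel into `request()`, which opens a fresh `Session` per call and closes it via the `with` block. A minimal sketch of that delegation pattern, with a hypothetical stub standing in for `sessions.Session` so the shape is visible without any network access:

```python
class _StubSession:
    """Hypothetical stand-in for requests.sessions.Session."""

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False

    def request(self, method, url, **kwargs):
        # Echo the call back so we can inspect what the wrappers pass down.
        return (method.upper(), url, kwargs)


def request(method, url, **kwargs):
    # One short-lived session per call, closed by the context manager.
    with _StubSession() as session:
        return session.request(method=method, url=url, **kwargs)


def get(url, params=None, **kwargs):
    return request("get", url, params=params, **kwargs)


def head(url, **kwargs):
    # HEAD defaults allow_redirects to False, unlike the other verbs.
    kwargs.setdefault("allow_redirects", False)
    return request("head", url, **kwargs)
```

The per-call session is why module-level `requests.get(...)` never reuses connections; callers who want keep-alive across requests create a `Session` themselves.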
from collections import OrderedDict
from .compat import Mapping, MutableMapping
class CaseInsensitiveDict(MutableMapping):
"""A case-insensitive ``dict``-like object.
Implements all methods and operations of
``MutableMapping`` as well as dict's ``copy``. Also
provides ``lower_items``.
All keys are expected to be strings. The structure remembers the
case of the last key to be set, and ``iter(instance)``,
``keys()``, ``items()``, ``iterkeys()``, and ``iteritems()``
will contain case-sensitive keys. However, querying and contains
testing is case insensitive::
cid = CaseInsensitiveDict()
cid['Accept'] = 'application/json'
cid['aCCEPT'] == 'application/json' # True
list(cid) == ['Accept'] # True
For example, ``headers['content-encoding']`` will return the
value of a ``'Content-Encoding'`` response header, regardless
of how the header name was originally stored.
If the constructor, ``.update``, or equality comparison
operations are given keys that have equal ``.lower()``s, the
behavior is undefined.
"""
def __init__(self, data=None, **kwargs):
self._store = OrderedDict()
if data is None:
data = {}
self.update(data, **kwargs)
def __setitem__(self, key, value):
# Use the lowercased key for lookups, but store the actual
# key alongside the value.
self._store[key.lower()] = (key, value)
def __getitem__(self, key):
return self._store[key.lower()][1]
def __delitem__(self, key):
del self._store[key.lower()]
def __iter__(self):
return (casedkey for casedkey, mappedvalue in self._store.values())
def __len__(self):
return len(self._store)
def lower_items(self):
"""Like iteritems(), but with all lowercase keys."""
return ((lowerkey, keyval[1]) for (lowerkey, keyval) in self._store.items())
def __eq__(self, other):
if isinstance(other, Mapping):
other = CaseInsensitiveDict(other)
else:
return NotImplemented
# Compare insensitively
return dict(self.lower_items()) == dict(other.lower_items())
# Copy is required
def copy(self):
return CaseInsensitiveDict(self._store.values())
def __repr__(self):
return str(dict(self.items()))
class LookupDict(dict):
"""Dictionary lookup object."""
def __init__(self, name=None):
self.name = name
super().__init__()
def __repr__(self):
return f"<lookup '{self.name}'>"
def __getitem__(self, key):
# We allow fall-through here, so values default to None
return self.__dict__.get(key, None)
def get(self, key, default=None):
return self.__dict__.get(key, default)
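`CaseInsensitiveDict` stores each entry under its lower-cased key while keeping the original casing alongside the value, so lookups ignore case but iteration reproduces the most recently set casing. A trimmed, self-contained copy of the class, just enough to exercise that behavior:

```python
from collections import OrderedDict
from collections.abc import MutableMapping


class CaseInsensitiveDict(MutableMapping):
    """Trimmed copy of the class above, enough to exercise its behavior."""

    def __init__(self, data=None, **kwargs):
        self._store = OrderedDict()
        self.update(data or {}, **kwargs)

    def __setitem__(self, key, value):
        # Lower-cased key for lookups; the actual casing rides along as data.
        self._store[key.lower()] = (key, value)

    def __getitem__(self, key):
        return self._store[key.lower()][1]

    def __delitem__(self, key):
        del self._store[key.lower()]

    def __iter__(self):
        return (casedkey for casedkey, _ in self._store.values())

    def __len__(self):
        return len(self._store)


cid = CaseInsensitiveDict()
cid["Accept"] = "application/json"
cid["aCCEPT"] = "text/html"        # overwrites, and remembers this casing
print(cid["accept"], list(cid))    # text/html ['aCCEPT']
```

Because `MutableMapping` derives `get`, `update`, `__contains__`, and the rest from the five abstract methods, only those five need case-handling logic.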
from __future__ import absolute_import
import inspect
import os
import six
# TODO load more than 1x?
def load_module_at(absolute_path):
'''Loads Python module at specified absolute path. If path points to a
package, this will load the `__init__.py` (if it exists) for that package.
:param str absolute_path: Absolute path to the desired Python module.
:return: Imported Python module
:rtype: types.ModuleType
Usage::
>>> import require
>>> require.load_module_at('/absolute/path/to/module')
Module
'''
import importlib.util
if os.path.isdir(absolute_path):
absolute_path = os.path.join(absolute_path, '__init__.py')
if not os.path.exists(absolute_path):
raise ImportError('No module at {}'.format(absolute_path))
spec = importlib.util.spec_from_file_location(absolute_path, absolute_path)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
return module
def load_py2_module_at(absolute_path):
'''Loads Python 2 module at specified absolute path. If path points to a
package, this will load the `__init__.py` (if it exists) for that package.
:param str absolute_path: Absolute path to the desired Python module.
:return: Imported Python module
:rtype: types.ModuleType
Usage::
>>> import require
>>> require.load_py2_module_at('/absolute/path/to/module')
Module
'''
import imp
if os.path.isdir(absolute_path):
absolute_path = os.path.join(absolute_path, '__init__.py')
if not os.path.exists(absolute_path):
raise ImportError('No module at {}'.format(absolute_path))
# compute directory path and filename without extension
dirpath, filename = os.path.split(absolute_path)
filename_noext, _ = os.path.splitext(filename)
spec = imp.find_module(filename_noext, [dirpath])
return imp.load_module(os.path.splitext(absolute_path)[0], *spec)
def resolve_path(path, upstack=0):
'''Resolve a path to an absolute path by taking it to be relative to the source
code of the caller's stackframe shifted up by `upstack` frames.
:param str path: Filesystem path
:param int upstack: Number of stackframes upwards from caller's stackframe
to act as relative point.
#: TODO Usage example is not great on REPL...
Usage::
>>> import require # at /home/require
>>> require.resolve_path('file.txt')
'/home/require/file.txt'
'''
if os.path.isabs(path):
return path
# get absolute path by rooting path with calling script directory
# TODO guard rails for upstack?
caller_relative_filepath = inspect.stack()[upstack + 1][1]
caller_root = os.path.dirname(os.path.abspath(caller_relative_filepath))
return os.path.abspath(os.path.join(caller_root, path))
def require(path):
'''Imports Python module at specified path (relative to calling script).
:param str path: Relative path to the desired Python module. Should be
relative to the path of the calling script.
:return: Loaded module
:rtype: types.ModuleType
Usage::
>>> from require import require # at /home/user
>>> foo = require('./foo.py') # imports /home/user/foo.py
>>> foo
Module
>>> bar = require('../arbitrary/path/bar.py') # imports /home/arbitrary/path/bar.py
>>> bar
Module
>>> baz = require('/absolute/path/baz.py') # imports /absolute/path/baz.py
>>> baz
Module
'''
absolute_path = resolve_path(path, upstack=1)
if six.PY2:
return load_py2_module_at(absolute_path)
return load_module_at(absolute_path)
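The Python 3 path through `require()` boils down to the standard `importlib.util` recipe used in `load_module_at` above. A self-contained demonstration that writes a throwaway module to a temp directory and loads it by absolute path:

```python
import importlib.util
import os
import tempfile


def load_module_at(absolute_path):
    # The same spec_from_file_location / exec_module recipe as above.
    spec = importlib.util.spec_from_file_location(absolute_path, absolute_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module


with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "greeter.py")
    with open(path, "w") as fp:
        fp.write("def hello():\n    return 'hi'\n")
    mod = load_module_at(path)

print(mod.hello())  # hi
```

Note the loaded module stays usable after its source file is gone: `exec_module` runs the code once, and the resulting module object is an ordinary in-memory object.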
import os
import re
import subprocess
import tempfile
from abc import ABC, abstractmethod
from logging import getLogger
from os import PathLike
from pathlib import Path
from typing import BinaryIO, Union
from urllib.parse import urljoin
import bs4
import requests
try:
from requests_file import FileAdapter
except ImportError: # pragma: no cover
FileAdapter = None
LOGGER = getLogger('required-files')
LOGGER.setLevel('INFO')
class Required(ABC):
@abstractmethod
def check(self) -> Union[str, Path]:
"""
This method fires of the downloading/checking.
It should be implemented in any class that is considered 'Required'
:returns: string, depending on the implementation, but mostly a filename or a path.
"""
class RequiredCommand(Required):
"""Checks if we can execute a certain command."""
def __init__(self, *command):
self.command = command
def check(self) -> Union[str, Path]:
try:
subprocess.run(self.command)
except Exception as e:
raise ValueError(str(e))
return self.command[0]
class RequiredFile(Required):
def __init__(self, url: str, save_as: Union[str, os.PathLike]):
self.url = url
self.filename = Path(save_as)
self._create_directories()
def _create_directories(self):
os.makedirs(os.path.dirname(self.filename), exist_ok=True)
def _is_file_present(self):
return os.path.exists(self.filename)
@staticmethod
def _download_to_tmpfile(url: str):
tmp_fp = tempfile.TemporaryFile('wb+')
try:
RequiredFile._download(url, tmp_fp)
except ValueError:
tmp_fp.close()
raise
tmp_fp.seek(0)
return tmp_fp
@staticmethod
def _download(url, save_to: Union[str, os.PathLike, BinaryIO]) -> None:
with requests.Session() as s:
if FileAdapter:
s.mount('file://', FileAdapter())
r = s.get(url)
if not r:
raise ValueError(r.content.decode('utf8'))
if isinstance(save_to, str) or isinstance(save_to, PathLike):
with open(save_to, 'wb') as fp:
fp.write(r.content)
else:
save_to.write(r.content)
def _return_result(self):
return Path(self.filename).absolute()
def check(self) -> Union[str, Path]:
if not self._is_file_present():
self._download(self.url, self.filename)
return self._return_result()
class ZipfileMixin:
# Multiple inheritance can't handle different arguments to __init__, so I prefer to do it this way.
# That way it's clear which arguments are needed.
def _zip_init(self, file_to_check):
self.file_to_check = file_to_check
@staticmethod
def _has_initial_dir(all_files: list) -> bool:
"""
Checks if the first reference in the zip file is the initial dir for ALL the files?
"""
initial_dir = all_files[0]
if initial_dir[-1] != '/': # First entry is NOT a dir?
return False
initial_dir_l = len(initial_dir)
for fname in all_files:
if fname[0:initial_dir_l] != initial_dir:
return False
return True
@staticmethod
def _process_zip(zip_in, into_dir: Path, skip_initial_dir: bool = True) -> None:
"""
Extracts all contents of a zip file into a directory.
:param zip_in: Pathlike|file pointer|file name
:param into_dir: where to extract it in.
:param skip_initial_dir: skip the initial dir or not?
:return:
"""
import zipfile
with zipfile.ZipFile(zip_in) as zip_ref:
all_files = zip_ref.namelist()
if skip_initial_dir and __class__._has_initial_dir(all_files):
initial_dir_l = len(all_files[0])
for filename in all_files[1:]:
tgt = into_dir / filename[initial_dir_l:]
if filename[-1] == '/':
os.makedirs(tgt, exist_ok=True)
else:
with zip_ref.open(filename, mode='r') as zip_fp:
with open(tgt, mode='wb') as fp:
fp.write(zip_fp.read())
else:
zip_ref.extractall(into_dir)
if hasattr(zip_in, 'close'): zip_in.close()  # Only file objects need closing; a plain path does not.
def _create_directories(self):
os.makedirs(self.filename, exist_ok=True)
def _is_file_present(self):
return os.path.exists(Path(self.filename) / self.file_to_check)
class RequiredZipFile(ZipfileMixin, RequiredFile):
"""
Download a ZIP file from a certain URL.
"""
def __init__(self, url, save_as, file_to_check, skip_initial_dir=True):
"""
Download a zip and extract it.
:param url: The URL to download
:param save_as: Save into this directory
:param file_to_check: A file expected inside the archive, used to quickly check whether the zip was already downloaded and extracted.
:param skip_initial_dir: Zips often contain a single root directory; if True, strip it when extracting.
"""
super().__init__(url, save_as)
self._zip_init(file_to_check)
self.skip_initial_dir = skip_initial_dir
def check(self) -> Union[str, Path]:
if not self._is_file_present():
self._process_zip(
self._download_to_tmpfile(self.url), into_dir=self.filename, skip_initial_dir=self.skip_initial_dir
)
return self._return_result()
class RequiredLatestFromWebMixin(ABC):
@abstractmethod
def _get_real_url(self, soup: bs4.BeautifulSoup):
"""Returns the real URL based on a parsed HTML file."""
@abstractmethod
def _should_i_skip_this_filename(self, filename):
"""Checks if the filename is OK to use."""
def figure_out_url(self, url):
r = requests.get(url)
soup = bs4.BeautifulSoup(r.content, features='lxml')
return self._get_real_url(soup)
def check(self) -> Union[str, Path]:
if not self._is_file_present():
self.url = self.figure_out_url(self.url)
return super().check()
class BitBucketURLRetrieverMixin:
def _get_real_url(self, soup: bs4.BeautifulSoup):
for entry in soup.select('tr.iterable-item td.name a'):
filename = entry.get_text().strip()
if self._should_i_skip_this_filename(filename):
continue
new_url = urljoin(self.url, entry.get('href'))
LOGGER.info(f'Found new BitBucket url: {new_url}')
return new_url
raise ValueError("Couldn't find a URL for this release.")
class GithubURLRetrieverMixin:
def _get_real_url(self, soup: bs4.BeautifulSoup):
for details in soup.select('details div.Box div.d-flex'):
span = details.select('span')
if not span:
continue
filename = span[0].get_text().strip()
if self._should_i_skip_this_filename(filename):
continue
new_url = urljoin(self.url, details.select('a')[0].get('href'))
LOGGER.info(f'Found new Github url: {new_url}')
return new_url
raise ValueError("Couldn't find a GitHub URL for this release.")
class RequiredLatestBitbucketFile(BitBucketURLRetrieverMixin, RequiredLatestFromWebMixin, RequiredFile):
"""
This class fetches a file from Bitbucket according to a pattern
"""
def __init__(self, url, save_as, file_regex):
super().__init__(url, save_as)
self.file_regex = re.compile(file_regex)
def _should_i_skip_this_filename(self, filename):
return not self.file_regex.match(filename)
class RequiredLatestGithubZipFile(GithubURLRetrieverMixin, RequiredLatestFromWebMixin, RequiredZipFile):
"""
This class fetches a ZIP file from Github and extracts it.
"""
def _should_i_skip_this_filename(self, filename):
retVal = not filename.lower().endswith('.zip')
LOGGER.debug(f'RequiredLatestGithubZipFile._should_i_skip_this_filename:: {retVal}')
return retVal | /required_files-1.0.2-py3-none-any.whl/required_files/required_files.py | 0.690559 | 0.186354 | required_files.py | pypi |
# Requirement Walker
A simple Python package that makes it easy to crawl/parse/walk over the requirements within a `requirements.txt` file. It can handle nested requirement files (e.g. `-r ./nested_path/other_reqs.txt`) and paths to local pip packages (though it cannot currently parse their requirements): `./pip_package/my_pip_package # requirement-walker: local-package-name=my-package`. Comments within the requirement files can also be preserved.
## Installation
```bash
pip install requirement-walker
```
## Arguments
Arguments for `requirement-walker` are parsed from the comments within the `requirements.txt` files.
Arguments should follow the pattern of:
```text
flat-earth==1.1.1 # requirement-walker: {arg1_name}={arg1_val}
bigfoot==0.0.1 # He is real requirement-walker: {arg1_name}={arg1_val}|{arg2_name}={arg2_val1},{arg2_val2}
```
Available arguments:
| Name | Expected # of Values | Description |
| --- | --- | --- |
| local-package-name | 0 or 1 | If a requirement is a path to a local pip package, provide this argument to tell the walker that it's local. You can optionally provide the name of the pip package, which can be used when filtering requirements. (See [Example Workflow](#example-workflow)) |
| root-relative | 1 | Can be provided along with `local-package-name` or can be stand alone with any `-r` requirements. When the walker sees a relative path for a requirement, it will use this provided value instead of the value actually in that line of the `requirements.txt` file when saving to a file. |
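To make the argument grammar above concrete, here is a minimal, hypothetical sketch of how such comment arguments could be parsed. This is illustrative only — the `ARG_PATTERN` regex and `extract_arguments` helper are assumptions for the example and are not the library's actual implementation:

```python
import re

# Hypothetical pattern for illustration; the library's real regex may differ.
ARG_PATTERN = re.compile(r'#.*requirement-walker:\s*(?P<args>\S+)')

def extract_arguments(comment: str) -> dict:
    """Return {arg_name: value_or_None} parsed from a requirements.txt comment."""
    match = ARG_PATTERN.search(comment)
    if not match:
        return {}
    extracted = {}
    for argument in match.group('args').split('|'):
        name, *value = argument.split('=')  # value is [] when no '=' is given
        extracted[name] = value[0] if value else None
    return extracted

print(extract_arguments(
    'bigfoot==0.0.1 # He is real requirement-walker: local-package-name=my-pkg|root-relative=./pkgs'
))
# → {'local-package-name': 'my-pkg', 'root-relative': './pkgs'}
```

Arguments with no `=value` part (such as a bare `local-package-name`) come back mapped to `None`.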
## Example Workflow
Let's walk through a complex example. Note: I am only structuring the `requirements.txt` files like this to give a detailed example. I do NOT recommend organizing requirements this way.
### Folder Structure
```text
walk_requirements.py
example_application
│ README.md
│ project_requirements.txt
│
└───lambdas
│ │ generic_reqs.txt
│ │
│ └───s3_event_lambda
│ │ │ s3_lambda_reqs.txt
│ │ │ ...
│ │ │
│ │ └───src
│ │ │ ...
│ │
│ └───api_lambda
│ │ api_lambda_reqs.txt
│ │ ...
│ │
│ └───src
│ │ ...
│
└───pip_packages
└───orm_models
│ setup.py
│
└───orm_models
│ | ...
│
└───tests
| ...
```
**NOTE:** This package CANNOT currently parse a setup.py file to walk its requirements but we can keep track of the path to the local requirement.
### walk_requirements.py
Assuming `requirement-walker` is already installed in a virtual environment or locally such that it can be imported.
These files can also be found in `./test/examples/example_application`.
```python
""" Example Script """
# Assuming I am running this script in the directory it is within above.
# Built In
import logging
# 3rd Party
from requirement_walker import RequirementFile
# Owned
if __name__ == '__main__':
FORMAT = '[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s'
logging.basicConfig(format=FORMAT, level=logging.DEBUG)
req_file = RequirementFile('./example_application/project_requirements.txt')
# RequirementFile has a magic method __iter__ written for it so it can be iterated over.
# Outputs found down below
print("Output 1:", *req_file, sep='\n') # This will print the file basically as is
print("---------------------------------------------")
print("Output 2:", *req_file.iter_recursive(), sep='\n') # This will print all reqs in without -r
# You can also send the reqs to a single file via:
# req_file.to_single_file(path_to_output_to)
# That method accepts, no_empty_lines and no_comment_only_lines as arguments.
```
### project_requirements.txt
```text
# One-lining just to show multiple -r works on one line, -r is the only thing that works on one line.
-r ./lambdas/s3_event_lambda/s3_lambda_reqs.txt --requirement=./lambdas/api_lambda/api_lambda_reqs.txt # comment
./pip_packages/orm_models # requirement-walker: local-package-name=orm-models
orm @ git+ssh://git@github.com/ORG/orm.git@5e2b6d14f00ffbd473dfe8b8602b79e37266568c # git link
orm2 @ git+https://github.com/ORG/orm2.git@5e2b6d14f00ffbd473dfe8b8602b79e37266568c # git link
orm3 @ git+http://github.com/ORG/orm3.git@5e2b6d14f00ffbd473dfe8b8602b79e37266568c # git link
```
### generic_reqs.txt
```text
moto==1.3.16.dev67
pytest==6.1.2
pytest-cov==2.10.1
pylint==2.6.0
docker==4.4.0
coverage==4.5.4
# Some other stuff
# Add empty line
```
### s3_lambda_reqs.txt
```text
-r ./../generic_reqs.txt
./../../pip_packages/orm_models # requirement-walker: local-package-name|root-relative=./pip_packages/orm_models
```
### api_lambda_reqs.txt
```text
-r ./../generic_reqs.txt
./../../pip_packages/orm_models # requirement-walker: local-package-name|root-relative=./pip_packages/orm_models
```
### Output
```text
... Logs omitted ...
Output 1:
# One-lining just to show multiple -r works on one line, -r is the only thing that works on one line.
-r C:\Users\{UserName}\Repos\3mcloud\requirement-walker\tests\examples\example_application\lambdas\s3_event_lambda\s3_lambda_reqs.txt # comment
-r C:\Users\{UserName}\Repos\3mcloud\requirement-walker\tests\examples\example_application\lambdas\api_lambda\api_lambda_reqs.txt # comment
./pip_packages/orm_models # requirement-walker: local-package-name=orm-models
orm@ git+ssh://git@github.com/ORG/orm.git@5e2b6d14f00ffbd473dfe8b8602b79e37266568c # git link
orm2@ git+https://github.com/ORG/orm2.git@5e2b6d14f00ffbd473dfe8b8602b79e37266568c # git link
orm3@ git+http://github.com/ORG/orm3.git@5e2b6d14f00ffbd473dfe8b8602b79e37266568c # git link
---------------------------------------------
Output 2:
# One-lining just to show multiple -r works on one line, -r is the only thing that works on one line.
moto==1.3.16.dev67
pytest==6.1.2
pytest-cov==2.10.1
pylint==2.6.0
docker==4.4.0
coverage==4.5.4
# Some other stuff
# Add empty line
./pip_packages/orm_models # requirement-walker: local-package-name|root-relative=./pip_packages/orm_models
moto==1.3.16.dev67
pytest==6.1.2
pytest-cov==2.10.1
pylint==2.6.0
docker==4.4.0
coverage==4.5.4
# Some other stuff
# Add empty line
./pip_packages/orm_models # requirement-walker: local-package-name|root-relative=./pip_packages/orm_models
./pip_packages/orm_models # requirement-walker: local-package-name=orm-models
orm@ git+ssh://git@github.com/ORG/orm.git@5e2b6d14f00ffbd473dfe8b8602b79e37266568c # git link
orm2@ git+https://github.com/ORG/orm2.git@5e2b6d14f00ffbd473dfe8b8602b79e37266568c # git link
orm3@ git+http://github.com/ORG/orm3.git@5e2b6d14f00ffbd473dfe8b8602b79e37266568c # git link
```
**NOTE**: Duplicates are NOT filtered out. If you want, you can filter them yourself using `entry.requirement.name` as you iterate.
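The de-duplication suggested in the note above can be sketched as follows. This uses stand-in namedtuples rather than the library's real classes, and assumes each entry exposes a `requirement` with a `.name` attribute; entries without a requirement (empty or comment-only lines) are passed through untouched:

```python
from collections import namedtuple

# Stand-ins for the library's classes, purely for demonstration.
Req = namedtuple('Req', 'name')
Entry = namedtuple('Entry', 'requirement')

def dedupe(entries):
    """Yield entries, skipping repeats of the same requirement name."""
    seen = set()
    for entry in entries:
        name = entry.requirement.name if entry.requirement else None
        if name is not None:
            if name in seen:
                continue
            seen.add(name)
        yield entry  # comment-only/empty entries (requirement=None) pass through

entries = [Entry(Req('pytest')), Entry(Req('moto')), Entry(Req('pytest')), Entry(None)]
print([e.requirement.name if e.requirement else None for e in dedupe(entries)])
# → ['pytest', 'moto', None]
```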
## Failed Parsing
Sometimes the requirement parser fails. For example, maybe it tries parsing a `-e` flag, or maybe you reference a local pip package but don't provide `local-package-name`. If this happens, please open an issue; however, you should still be able to work around the problem and keep using the walker until a fix is implemented. The walker aims to store as much information as it can, even in cases of failure. See the following example.
### requirements.txt
```text
astroid==2.4.2
attrs==20.3.0
aws-xray-sdk==2.6.0
boto==2.49.0
./local_pips/my_package # This will cause a failed requirement step
boto3==1.16.2
botocore==1.19.28
certifi==2020.11.8
cffi==1.14.4
./pip_packages/orm_models # requirement-walker: local-package-name
```
### Code
```python
""" Example Script """
# Built In
import logging
# 3rd Party
from requirement_walker import RequirementFile
# Owned
if __name__ == '__main__':
FORMAT = '[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s'
logging.basicConfig(format=FORMAT, level=logging.DEBUG)
entries = RequirementFile('./requirements.txt')
print(*entries, sep='\n')
```
### Code Output
```text
... logs omitted ...
astroid==2.4.2
attrs==20.3.0
aws-xray-sdk==2.6.0
boto==2.49.0
./local_pips/my_package # This will cause a failed requirement step
boto3==1.16.2
botocore==1.19.28
certifi==2020.11.8
cffi==1.14.4
./pip_packages/orm_models # requirement-walker: local-package-name
```
Note that it still printed correctly, but if you look at the logs you will see what happened:
```text
WARNING requirement_walker.walker:walker.py:148 Unable to parse requirement. Doing simple FailedRequirement where name=failed_req and url=./local_pips/my_package. Will still output.
```
If you want, you can filter requirements by checking their class:
```python
""" Example Script """
# Built In
import logging
# 3rd Party
from requirement_walker import RequirementFile, LocalPackageRequirement, FailedRequirement
# Owned
if __name__ == '__main__':
FORMAT = '[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s'
logging.basicConfig(format=FORMAT, level=logging.ERROR)
for entry in RequirementFile('./requirements.txt'):
# `requirement` can be one of: `None, FailedRequirement, LocalPackageRequirement`
if isinstance(entry.requirement, FailedRequirement):
print("This requirement was a failed req.", entry)
elif isinstance(entry.requirement, LocalPackageRequirement):
print("This requirement was a local req.", entry)
# If a entry is a requirement file, `requirement` will be None
# and `requirement_file` will have a value other then None.
elif isinstance(entry.requirement_file, RequirementFile):
print("This entry is another requirement file.", entry)
# Ouput:
# This requirement was a failed req. ./local_pips/my_package # This will cause a failed requirement step
# This requirement was a local req. ./pip_packages/orm_models # requirement-walker: local-package-name
```
## What is an Entry?
We define an entry as a single line within a `requirements.txt` file. An entry can be empty, contain only a comment, contain only a requirement, be a reference to another requirement file, or mix a requirement/requirement file with a comment.
An Entry object has four main attributes, though they will never all be set at the same time:
- `comment: Union[Comment, None]`
- `requirement: Union[pkg_resources.Requirement, FailedRequirement, LocalPackageRequirement, None]`
- `proxy_requirement: Union[_ProxyRequirement, None]`
- `requirement_file: Union[RequirementFile, None]`.
When attributes have values:
- If all of these attributes are set to `None` then the line the entry represents was an empty line.
- If `requirement` has a value then `proxy_requirement` will as well but `requirement_file` will NOT.
- If `requirement_file` has a value then `requirement` and `proxy_requirement` will NOT.
- A `comment` can exist on its own (a line with only a comment) or a comment can exist with either `requirement` or `requirement_file`.
Note: you will mainly work with `requirement`, NOT `proxy_requirement`. However, there may be cases where the package does not behave properly, in which case `proxy_requirement` holds all the other information pulled by the walker, which you can use to code your way out of the mess.
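The invariants above can be made concrete with a small, hypothetical helper (a stand-in for illustration, not part of the library) that classifies an entry based on which attributes are set:

```python
def classify(requirement, requirement_file, comment):
    """Mirror the attribute invariants described above for a single entry."""
    if requirement is None and requirement_file is None and comment is None:
        return 'empty line'
    if requirement is not None and requirement_file is not None:
        # Per the invariants, these two are mutually exclusive.
        raise ValueError('requirement and requirement_file are mutually exclusive')
    if requirement is not None:
        return 'requirement' + (' + comment' if comment else '')
    if requirement_file is not None:
        return 'requirement file' + (' + comment' if comment else '')
    return 'comment only'

print(classify(None, None, None))       # → 'empty line'
print(classify('req', None, '# note'))  # → 'requirement + comment'
```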
| /requirement-walker-0.0.9.tar.gz/requirement-walker-0.0.9/README.md | 0.487551 | 0.819569 | README.md | pypi |
import logging
from pathlib import Path
from typing import Union, Generator, Tuple
from pkg_resources import Requirement
# 3rd Party
# Owned
from .requirment_types import LocalPackageRequirement, FailedRequirement
from .regex_expressions import (
LINE_COMMENT_PATTERN, # Separate a requirement from its comments.
REQ_OPTION_PATTERN, # Extract -r and --requirement from a requirement.
ARG_EXTRACT_PATTERN, # Extract package arguments from the requirement comments.
GIT_PROTOCOL, # Extract the git protocol from git requirements.
COMMENT_ONLY_PATTERN, # Extract lines that are only comments.
)
LOGGER = logging.getLogger(__name__)
class RequirementFileError(Exception):
"""
An exception raised when we try to parse a requirement which is
a -r or --requirement flag.
"""
class Comment:
"""
Class which represents a comment in the requirements file.
"""
def __init__(self, comment_str: Union[str, None]):
"""
Constructor
ARGS:
comment_str (str): A string which represent a comment in the requirements.txt
file. The comment should start with ` #`.
"""
self._comment_str = comment_str
# Stripping for good measure
self.comment = comment_str.strip() if isinstance(comment_str, str) else None
self.arguments = self._extract_arguments() if self.comment else {}
LOGGER.debug("Arguments pulled from comment: %s", self.arguments)
def __bool__(self):
"""
A Comment is true if it is not `None` (an empty string comment would still return True)
"""
return self.comment is not None
def __repr__(self) -> str:
""" TODO """
if self:
return f"Comment(comment='{self.comment}')"
return "Comment(comment=None)"
def __str__(self) -> str:
"""
Return the string representation of the comment (i.e. as it originally appeared), including the #.
If there was no comment, then this will just return an empty string
"""
if self:
return self.comment
return ''
def _extract_arguments(self) -> dict:
"""
Given a comment string, returns any requirement-walker arguments
Example comments:
# requirement-walker: local-package-name=my-local-package
Or two arguments
# requirement-walker: local-package-name=my-local-package|ignore-some=1,2,3
Return a dict where each key is the name of an argument provided and the value
is the value provided.
Example:
{
"local-package-name": "my-local-package"
}
{
"local-package-name: "my-local-package",
"ignore-some": "1,2,3"
}
"""
supported_args = {
# Arg name mapped to something (TBD)
# For pip installing a local package. Value should be the name of the package so we can
# name the requirement properly.
'local-package-name',
'root-relative',
}
extracted_args = {}
search = ARG_EXTRACT_PATTERN.search(self.comment)
if search:
arg_str = search.group('args')
for argument in arg_str.split('|'):
name, *val = argument.split('=') # Pull the argument name from any assigned values.
if name not in supported_args:
LOGGER.error("Unknown argument provided for requirement-walker: %s", name)
continue
extracted_args[name] = val[0] if val else None
return extracted_args
class _ProxyRequirement: # pylint: disable=too-few-public-methods
"""
Should resemble the pkg_resources.Requirement object. We either use that object or make one
that looks similar when the parse for that one fails.
"""
def __init__(self, requirement_str: Union[str, None], arguments: dict):
"""
Constructor
ARGS:
requirement_str (str): The string which contains the requirement specification.
Should NOT contain any comments.
arguments (dict): A dictionary of requirement-walker arguments that were optionally
added to the comments of this requirement.
"""
self._requirement_str = requirement_str
# Stripping for good measure
self.requirement_str = requirement_str.strip() if isinstance(requirement_str, str) else None
self.arguments = arguments
LOGGER.debug("Arguments for requirements. Requirements %s - Arguments %s",
self.requirement_str, self.arguments)
if self.requirement_str:
try:
self.requirement = Requirement.parse(self.requirement_str)
except Exception as err: # pylint: disable=broad-except
LOGGER.info(
"Was unable to use pkg_resources to parse requirement. "
"Attempting too parse using custom code. Exception for reference:"
" %s", err
)
if REQ_OPTION_PATTERN.search(self.requirement_str):
# Line had -r or --requirement flags
raise RequirementFileError(
"This requirement is a requirement file, parse serperately.") from None
if 'local-package-name' in self.arguments:
# Else lets see if local-package-name argument was added
self.requirement = LocalPackageRequirement(
self.arguments.get('root-relative', self.requirement_str),
self.arguments.get('local-package-name')
)
else:
# Couldn't parse it with our current logic.
LOGGER.warning(
"Unable to parse requirement. Doing simple "
"FailedRequirement where name=%s and url=%s. Will still output.",
'failed_req', self.requirement_str)
self.requirement = FailedRequirement(full_req=self.requirement_str)
def __bool__(self):
""" Returns False if None or empty string was passed as the requirement string """
return bool(self.requirement_str)
def __repr__(self):
""" Return object representation """
return (
"Requirement(requirement_str="
f"{repr(self._requirement_str)}, arguments={self.arguments})"
)
def __str__(self):
"""
Returns the string representation of a requirement.
"""
if isinstance(self.requirement, FailedRequirement):
return self.requirement.url
if isinstance(self.requirement, LocalPackageRequirement):
return self.requirement.url
return str(self.requirement) # Fall back to the string representation of a Requirement
class Entry: # pylint: disable=too-few-public-methods
"""
We define an `Entry` as a line within the requirement file.
An entry can be a:
- requirement + a comment
- requirement only
- comment only
- empty line
- requirement file (multiple can be in one line but it will be flattened)
- requirement file + a comment
Ideally, if you iterate over each entry and add each one to a file you will
end with all your requirements in a single file with the same formatting they were pulled as.
"""
def __init__(self,
*_, # Not going to allow positional arguments.
proxy_requirement: Union['_ProxyRequirement', None] = None,
comment: Union['Comment', None] = None,
requirement_file: Union['RequirementFile', None] = None):
self.proxy_requirement = proxy_requirement if proxy_requirement else None
self.requirement = proxy_requirement.requirement if proxy_requirement else None
self.comment = comment if comment else None
self.requirement_file = requirement_file
def __str__(self):
""" String magic method overload to print out an entry as it appeared before. """
root_relative = self.comment.arguments.get('root-relative', None) if self.comment else None
# pylint: disable=line-too-long
str_map = {
# proxy_requirement, comment, requirement_file
(False, False, False): '', # Was just an empty line
(True, False, False): f"{self.proxy_requirement}",
(False, True, False): f"{self.comment}",
(False, False, True): f"-r {root_relative}" if root_relative else f"-r {self.requirement_file}",
(True, True, False): f"{self.proxy_requirement} {self.comment}",
(False, True, True): f"-r {root_relative} {self.comment}" if root_relative else f"-r {self.requirement_file} {self.comment}",
}
# pylint: enable=line-too-long
key = (bool(self.proxy_requirement), bool(self.comment), bool(self.requirement_file))
try:
return str_map[key]
except KeyError:
LOGGER.exception(
"Exception occured. Unknown pattern of arguments passed to Entry object.Continueing"
)
return ''
def __bool__(self):
"""
An Entry is considered False if it was just an empty line or a line with nothing
but spaces.
"""
for attr in (self.proxy_requirement, self.comment, self.requirement_file):
if attr is not None:
return True
return False
def is_git(self, return_protocol: bool = False) -> Union[bool, Tuple[bool, str]]:
"""
Returns true if the requirement for this entry is a requirement to a git URL.
ARGS:
return_protocol (bool): If set to True instead of just return a bool, this method
will also return the protocol used for git: ['http', 'https', 'ssh', '']
"""
if self.requirement and self.requirement.url:
result = GIT_PROTOCOL.search(self.requirement.url)
if result:
return (True, result.group('protocol')) if return_protocol else True
return (False, '') if return_protocol else False
def is_comment_only(self):
""" Returns true if this entry was a comment and nothing else. """
for attr in (self.proxy_requirement, self.requirement_file):
if attr is not None:
return False
if self.comment is None:
return False
return True
class RequirementFile:
""" A class which represents a requirement file. """
def __init__(self, requirement_file_path: str):
"""
Constructor.
ARGS:
requirement_file_path (str): Path, absolute or relative, to a `requirements.txt` file.
"""
self.sub_req_files = {}
self.requirement_file_path = Path(requirement_file_path)
self._entries = None
@property
def entries(self):
""" Property, returns a list of all entries. """
if self._entries is None:
self._entries = list(self)
return self._entries
def to_single_file(self,
path: str,
no_duplicate_lines: bool = False,
no_empty_lines: bool = False,
no_comment_only_lines: bool = False) -> None:
"""
Output all requirements to the provided path. Creates/overwrites the provided file path.
Good for removing `-r` or `--requirement` flags.
ARGS:
path (str): Path to the file which will be written to.
no_duplicate_lines (bool): Skips duplicate lines
(Entire line must be a duplicate with another).
no_empty_lines (bool): Don't add lines that were empty or just had spaces.
no_comment_only_lines (bool): Don't add lines which were only comments with
no requirements.
"""
file_path = Path(path)
file_path.parent.mkdir(parents=True, exist_ok=True) # Make the directory if it doesn't exist
entries_to_write = []
for entry in self.iter_recursive():
if no_empty_lines and not entry:
continue
if no_comment_only_lines and entry.is_comment_only():
continue
entries_to_write.append(entry)
if no_duplicate_lines:
entries_to_write = {str(val): None for val in entries_to_write} # Dict to retain order.
with open(file_path.absolute(), 'w') as output_file:
print(*entries_to_write, sep='\n', file=output_file)
def __iter__(self) -> Generator[Entry, None, None]:
"""
If no entries have been parsed yet, walks the requirement file path; if the class already
has entries, yields from the existing entries. Yields Entry objects one at a time.
"""
if isinstance(self._entries, list):
LOGGER.debug("Yielding from cached entries.")
for entry in self._entries:
yield entry
return
LOGGER.info("Iterating requirements file: %s", self.requirement_file_path.absolute())
with open(self.requirement_file_path.absolute()) as input_file:
for line in input_file:
# Strip off the newlines to make things easier
line = line.strip()
if not line:
yield Entry() # Empty Line
continue
# Check for a comment only line first:
comment_match = COMMENT_ONLY_PATTERN.match(line)
if not comment_match:
# Pull out the requirement (separated from any comments)
match = LINE_COMMENT_PATTERN.match(line)
if not match:
LOGGER.error(
"Could not properly match the following line (continuing): %s",
line
)
continue
if comment_match:
req_str, comment = None, comment_match.group('comment')
else:
req_str, comment = match.group('reqs'), match.group('comment')
comment = Comment(comment)
try:
requirement = _ProxyRequirement(req_str, comment.arguments)
yield Entry(proxy_requirement=requirement, comment=comment)
except RequirementFileError:
LOGGER.debug(
"Parsed requirement appears to be -r argument, make entry a req file.")
for result in REQ_OPTION_PATTERN.finditer(req_str):
new_path = result.group('file_path')
full_relative_path = self.requirement_file_path.parent.absolute() / new_path
LOGGER.debug(
"Parent File: %s - Child requirement file "
"path: %s - New Child Path: %s",
self.requirement_file_path.parent.absolute(),
new_path,
full_relative_path
)
yield Entry(
requirement_file=RequirementFile(full_relative_path),
comment=comment
)
return
def __repr__(self):
""" Object Representation """
return f"RequirementFile(requirement_file_path='{self.requirement_file_path.absolute()}')"
def __str__(self):
""" String Overload, returns the absolute path to the req file. """
return str(self.requirement_file_path.absolute())
def iter_recursive(self,
no_empty_lines: bool = False,
no_comment_only_lines: bool = False) -> Generator[Entry, None, None]:
"""
Iterates through requirements. If another requirement file is hit, it will yield
from that generator.
ARGS:
no_empty_lines (bool): Don't return lines that were empty or just had spaces.
no_comment_only_lines (bool): Don't return lines which were only comments with
no requirements.
"""
for entry in self:
if isinstance(entry.requirement_file, RequirementFile):
yield from entry.requirement_file.iter_recursive()
else:
if no_empty_lines and not entry:
continue
if no_comment_only_lines and entry.is_comment_only():
continue
yield entry | /requirement-walker-0.0.9.tar.gz/requirement-walker-0.0.9/requirement_walker/walker.py | 0.693369 | 0.217348 | walker.py | pypi |
import re
from pathlib import Path
from typing import List, Union
import toml
from .exceptions import CouldNotParseRequirements, RequirementsNotFound
from .handle_setup import from_setup_py
from .poetry_semver import parse_constraint
from .requirement import DetectedRequirement
__all__ = [
"find_requirements",
"from_requirements_txt",
"from_requirements_dir",
"from_requirements_blob",
"from_pyproject_toml",
"from_setup_py",
"RequirementsNotFound",
"CouldNotParseRequirements",
]
_PIP_OPTIONS = (
"-i",
"--index-url",
"--extra-index-url",
"--no-index",
"-f",
"--find-links",
"-r",
)
P = Union[str, Path]
def find_requirements(path: P) -> List[DetectedRequirement]:
"""
This method tries to determine the requirements of a particular project
by inspecting the possible places that they could be defined.
It will attempt, in order:
1) to parse setup.py in the root for an install_requires value
2) to read a requirements.txt file or a requirements.pip in the root
3) to read all .txt files in a folder called 'requirements' in the root
4) to read files matching "*requirements*.txt" and "*reqs*.txt" in the root,
excluding any starting or ending with 'test'
If one of these succeeds, then a list of DetectedRequirement objects
will be returned. If none can be found, then a RequirementsNotFound
will be raised.
"""
requirements = []
if isinstance(path, str):
path = Path(path)
setup_py = path / "setup.py"
if setup_py.exists() and setup_py.is_file():
try:
requirements = from_setup_py(setup_py)
requirements.sort()
return requirements
except CouldNotParseRequirements:
pass
poetry_toml = path / "pyproject.toml"
if poetry_toml.exists() and poetry_toml.is_file():
try:
requirements = from_pyproject_toml(poetry_toml)
if len(requirements) > 0:
requirements.sort()
return requirements
except CouldNotParseRequirements:
pass
for reqfile_name in ("requirements.txt", "requirements.pip"):
reqfile = path / reqfile_name
if reqfile.exists() and reqfile.is_file():
try:
requirements += from_requirements_txt(reqfile)
except CouldNotParseRequirements:
pass
requirements_dir = path / "requirements"
if requirements_dir.exists() and requirements_dir.is_dir():
from_dir = from_requirements_dir(requirements_dir)
if from_dir is not None:
requirements += from_dir
from_blob = from_requirements_blob(path)
if from_blob is not None:
requirements += from_blob
requirements = list(set(requirements))
if len(requirements) > 0:
requirements.sort()
return requirements
raise RequirementsNotFound
def from_pyproject_toml(toml_file: P) -> List[DetectedRequirement]:
requirements = []
if isinstance(toml_file, str):
toml_file = Path(toml_file)
parsed = toml.load(toml_file)
poetry_section = parsed.get("tool", {}).get("poetry", {})
dependencies = poetry_section.get("dependencies", {})
dependencies.update(poetry_section.get("dev-dependencies", {}))
for name, spec in dependencies.items():
if name.lower() == "python":
continue
if isinstance(spec, dict):
if "version" in spec:
spec = spec["version"]
else:
req = DetectedRequirement.parse(f"{name}", toml_file)
if req is not None:
requirements.append(req)
continue
parsed_spec = str(parse_constraint(spec))
if "," not in parsed_spec and "<" not in parsed_spec and ">" not in parsed_spec and "=" not in parsed_spec:
parsed_spec = f"=={parsed_spec}"
req = DetectedRequirement.parse(f"{name}{parsed_spec}", toml_file)
if req is not None:
requirements.append(req)
return requirements
def from_requirements_txt(requirements_file: P) -> List[DetectedRequirement]:
# see http://www.pip-installer.org/en/latest/logic.html
requirements = []
if isinstance(requirements_file, str):
requirements_file = Path(requirements_file)
with requirements_file.open() as f:
for req in f.readlines():
if req.strip() == "":
# empty line
continue
if req.strip().startswith("#"):
# this is a comment
continue
if req.strip().split()[0] in _PIP_OPTIONS:
# this is a pip option
continue
detected = DetectedRequirement.parse(req, requirements_file)
if detected is None:
continue
requirements.append(detected)
return requirements
def from_requirements_dir(path: P) -> List[DetectedRequirement]:
requirements = []
if isinstance(path, str):
path = Path(path)
for entry in path.iterdir():
if not entry.is_file():
continue
if entry.name.endswith(".txt") or entry.name.endswith(".pip"):
requirements += from_requirements_txt(entry)
return list(set(requirements))
def from_requirements_blob(path: P) -> List[DetectedRequirement]:
requirements = []
if isinstance(path, str):
path = Path(path)
for entry in path.iterdir():
if not entry.is_file():
continue
m = re.match(r"^(\w*)req(uirement)?s(\w*)\.txt$", entry.name)
if m is None:
continue
if m.group(1).startswith("test") or m.group(3).endswith("test"):
continue
requirements += from_requirements_txt(entry)
return requirements | /requirements_detector-1.2.2-py3-none-any.whl/requirements_detector/detect.py | 0.507568 | 0.202719 | detect.py | pypi |
from typing import List
import semver
from .empty_constraint import EmptyConstraint
from .version_constraint import VersionConstraint
class VersionUnion(VersionConstraint):
"""
A version constraint representing a union of multiple disjoint version
ranges.
An instance of this will only be created if the version can't be represented
as a non-compound value.
"""
def __init__(self, *ranges):
self._ranges = list(ranges)
@property
def ranges(self):
return self._ranges
@classmethod
def of(cls, *ranges):
from .version_range import VersionRange
flattened = []
for constraint in ranges:
if constraint.is_empty():
continue
if isinstance(constraint, VersionUnion):
flattened += constraint.ranges
continue
flattened.append(constraint)
if not flattened:
return EmptyConstraint()
if any([constraint.is_any() for constraint in flattened]):
return VersionRange()
# Only allow Versions and VersionRanges here so we can more easily reason
# about everything in flattened. _EmptyVersions and VersionUnions are
# filtered out above.
for constraint in flattened:
if isinstance(constraint, VersionRange):
continue
raise ValueError("Unknown VersionConstraint type {}.".format(constraint))
flattened.sort()
merged = []
for constraint in flattened:
# Merge this constraint with the previous one, but only if they touch.
if not merged or (not merged[-1].allows_any(constraint) and not merged[-1].is_adjacent_to(constraint)):
merged.append(constraint)
else:
merged[-1] = merged[-1].union(constraint)
if len(merged) == 1:
return merged[0]
return VersionUnion(*merged)
def is_empty(self):
return False
def is_any(self):
return False
def allows(self, version): # type: (semver.Version) -> bool
return any([constraint.allows(version) for constraint in self._ranges])
def allows_all(self, other): # type: (VersionConstraint) -> bool
our_ranges = iter(self._ranges)
their_ranges = iter(self._ranges_for(other))
our_current_range = next(our_ranges, None)
their_current_range = next(their_ranges, None)
while our_current_range and their_current_range:
if our_current_range.allows_all(their_current_range):
their_current_range = next(their_ranges, None)
else:
our_current_range = next(our_ranges, None)
return their_current_range is None
def allows_any(self, other): # type: (VersionConstraint) -> bool
our_ranges = iter(self._ranges)
their_ranges = iter(self._ranges_for(other))
our_current_range = next(our_ranges, None)
their_current_range = next(their_ranges, None)
while our_current_range and their_current_range:
if our_current_range.allows_any(their_current_range):
return True
if their_current_range.allows_higher(our_current_range):
our_current_range = next(our_ranges, None)
else:
their_current_range = next(their_ranges, None)
return False
def intersect(self, other): # type: (VersionConstraint) -> VersionConstraint
our_ranges = iter(self._ranges)
their_ranges = iter(self._ranges_for(other))
new_ranges = []
our_current_range = next(our_ranges, None)
their_current_range = next(their_ranges, None)
while our_current_range and their_current_range:
intersection = our_current_range.intersect(their_current_range)
if not intersection.is_empty():
new_ranges.append(intersection)
if their_current_range.allows_higher(our_current_range):
our_current_range = next(our_ranges, None)
else:
their_current_range = next(their_ranges, None)
return VersionUnion.of(*new_ranges)
def union(self, other): # type: (VersionConstraint) -> VersionConstraint
return VersionUnion.of(self, other)
def difference(self, other): # type: (VersionConstraint) -> VersionConstraint
our_ranges = iter(self._ranges)
their_ranges = iter(self._ranges_for(other))
new_ranges = []
state = {
"current": next(our_ranges, None),
"their_range": next(their_ranges, None),
}
def their_next_range():
state["their_range"] = next(their_ranges, None)
if state["their_range"]:
return True
new_ranges.append(state["current"])
our_current = next(our_ranges, None)
while our_current:
new_ranges.append(our_current)
our_current = next(our_ranges, None)
return False
def our_next_range(include_current=True):
if include_current:
new_ranges.append(state["current"])
our_current = next(our_ranges, None)
if not our_current:
return False
state["current"] = our_current
return True
while True:
if state["their_range"] is None:
break
if state["their_range"].is_strictly_lower(state["current"]):
if not their_next_range():
break
continue
if state["their_range"].is_strictly_higher(state["current"]):
if not our_next_range():
break
continue
difference = state["current"].difference(state["their_range"])
if isinstance(difference, VersionUnion):
assert len(difference.ranges) == 2
new_ranges.append(difference.ranges[0])
state["current"] = difference.ranges[-1]
if not their_next_range():
break
elif difference.is_empty():
if not our_next_range(False):
break
else:
state["current"] = difference
if state["current"].allows_higher(state["their_range"]):
if not their_next_range():
break
else:
if not our_next_range():
break
if not new_ranges:
return EmptyConstraint()
if len(new_ranges) == 1:
return new_ranges[0]
return VersionUnion.of(*new_ranges)
def _ranges_for(self, constraint): # type: (VersionConstraint) -> List[semver.VersionRange]
from .version_range import VersionRange
if constraint.is_empty():
return []
if isinstance(constraint, VersionUnion):
return constraint.ranges
if isinstance(constraint, VersionRange):
return [constraint]
raise ValueError("Unknown VersionConstraint type {}".format(constraint))
def _excludes_single_version(self): # type: () -> bool
from .version import Version
from .version_range import VersionRange
return isinstance(VersionRange().difference(self), Version)
def __eq__(self, other):
if not isinstance(other, VersionUnion):
return False
return self._ranges == other.ranges
def __str__(self):
from .version_range import VersionRange
if self._excludes_single_version():
return "!={}".format(VersionRange().difference(self))
return " || ".join([str(r) for r in self._ranges])
def __repr__(self):
return "<VersionUnion {}>".format(str(self))
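The merge loop in `VersionUnion.of` — sort the ranges, then fold each one into the previous when they overlap or touch — can be illustrated with plain numeric intervals standing in for `VersionRange` objects:

```python
# Simplified analogue of VersionUnion.of's merge step, using closed numeric
# intervals (lo, hi) in place of VersionRange objects.
def merge_intervals(intervals):
    merged = []
    for lo, hi in sorted(intervals):
        # Fold into the previous interval only if they overlap or touch.
        if merged and lo <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged
```

In the real class the "touch" test is `allows_any` / `is_adjacent_to` and the fold is `union`, but the shape of the loop is the same.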
import re
from .empty_constraint import EmptyConstraint
from .patterns import (
BASIC_CONSTRAINT,
CARET_CONSTRAINT,
TILDE_CONSTRAINT,
TILDE_PEP440_CONSTRAINT,
X_CONSTRAINT,
)
from .version import Version
from .version_constraint import VersionConstraint
from .version_range import VersionRange
from .version_union import VersionUnion
__version__ = "0.1.0"
def parse_constraint(constraints): # type: (str) -> VersionConstraint
if constraints == "*":
return VersionRange()
or_constraints = re.split(r"\s*\|\|?\s*", constraints.strip())
or_groups = []
for constraints in or_constraints:
and_constraints = re.split("(?<!^)(?<![=>< ,]) *(?<!-)[, ](?!-) *(?!,|$)", constraints)
constraint_objects = []
if len(and_constraints) > 1:
for constraint in and_constraints:
constraint_objects.append(parse_single_constraint(constraint))
else:
constraint_objects.append(parse_single_constraint(and_constraints[0]))
if len(constraint_objects) == 1:
constraint = constraint_objects[0]
else:
constraint = constraint_objects[0]
for next_constraint in constraint_objects[1:]:
constraint = constraint.intersect(next_constraint)
or_groups.append(constraint)
if len(or_groups) == 1:
return or_groups[0]
else:
return VersionUnion.of(*or_groups)
def parse_single_constraint(constraint): # type: (str) -> VersionConstraint
m = re.match(r"(?i)^v?[xX*](\.[xX*])*$", constraint)
if m:
return VersionRange()
# Tilde range
m = TILDE_CONSTRAINT.match(constraint)
if m:
version = Version.parse(m.group(1))
high = version.stable.next_minor
if len(m.group(1).split(".")) == 1:
high = version.stable.next_major
return VersionRange(version, high, include_min=True, always_include_max_prerelease=True)
# PEP 440 Tilde range (~=)
m = TILDE_PEP440_CONSTRAINT.match(constraint)
if m:
precision = 1
if m.group(3):
precision += 1
if m.group(4):
precision += 1
version = Version.parse(m.group(1))
if precision == 2:
low = version
high = version.stable.next_major
else:
low = Version(version.major, version.minor, version.patch)
high = version.stable.next_minor
return VersionRange(low, high, include_min=True, always_include_max_prerelease=True)
# Caret range
m = CARET_CONSTRAINT.match(constraint)
if m:
version = Version.parse(m.group(1))
return VersionRange(
version,
version.next_breaking,
include_min=True,
always_include_max_prerelease=True,
)
# X Range
m = X_CONSTRAINT.match(constraint)
if m:
op = m.group(1)
major = int(m.group(2))
minor = m.group(3)
if minor is not None:
version = Version(major, int(minor), 0)
result = VersionRange(
version,
version.next_minor,
include_min=True,
always_include_max_prerelease=True,
)
else:
if major == 0:
result = VersionRange(max=Version(1, 0, 0))
else:
version = Version(major, 0, 0)
result = VersionRange(
version,
version.next_major,
include_min=True,
always_include_max_prerelease=True,
)
if op == "!=":
result = VersionRange().difference(result)
return result
# Basic comparator
m = BASIC_CONSTRAINT.match(constraint)
if m:
op = m.group(1)
version = m.group(2)
if version == "dev":
version = "0.0-dev"
try:
version = Version.parse(version)
except ValueError:
raise ValueError("Could not parse version constraint: {}".format(constraint))
if op == "<":
return VersionRange(max=version)
elif op == "<=":
return VersionRange(max=version, include_max=True)
elif op == ">":
return VersionRange(min=version)
elif op == ">=":
return VersionRange(min=version, include_min=True)
elif op == "!=":
return VersionUnion(VersionRange(max=version), VersionRange(min=version))
else:
return version
raise ValueError("Could not parse version constraint: {}".format(constraint))
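The caret branch above leans on `version.next_breaking`. Its usual semver meaning — bump the leftmost non-zero component — can be sketched independently (this mirrors the common rule, stated here as an assumption about the `Version` class, which is defined elsewhere):

```python
# Sketch of the usual "next breaking" semver rule behind caret (^) ranges:
# ^1.2.3 allows <2.0.0, ^0.2.3 allows <0.3.0, ^0.0.3 allows <0.0.4.
def next_breaking(major, minor, patch):
    if major > 0:
        return (major + 1, 0, 0)
    if minor > 0:
        return (0, minor + 1, 0)
    return (0, 0, patch + 1)
```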
import click
# The exit(1) is used to indicate error in pre-commit
NOT_OK = 1
@click.command()
@click.option("--filename1", default="requirements.txt")
@click.option("--filename2", default="requirements-private.txt")
def rqf(filename1, filename2):
requirements = open_file(filename1)
private_txt = open_file(filename2)
at_sign_set = create_set(private_txt, "@")
requirements_without_at_sign = remove_common_elements(
requirements, at_sign_set
)
# the filename1 will be overwritten
write_file(filename1, requirements_without_at_sign)
def remove_common_elements(package_list, remove_set):
"""
Remove the common elements between package_list and remove_set.
Note that this is *not* an XOR operation: packages that exist in
remove_set but not in package_list are simply ignored rather than included.
Parameters
----------
package_list : list
List with string elements representing the packages from the
requirements file. Assumes that the list has "==" to denote
package versions.
remove_set : set
Set with the names of packages to be removed from requirements.
Returns
-------
list
List of packages not presented in remove_set.
"""
package_not_in_remove_set = []
for package in package_list:
package_name = package.split("==")[0].strip()
if package_name not in remove_set:
package_not_in_remove_set.append(package)
return package_not_in_remove_set
def open_file(filename):
"""
Open txt file.
Parameters
----------
filename : str
Name of the file to be opened.
Returns
-------
list
List of strings with the packages names and versions.
"""
try:
with open(filename) as file_object:
requirements = file_object.readlines()
return requirements
except FileNotFoundError:
print(f"{filename} not found.")
exit(NOT_OK)
def create_set(package_list, delimiter):
"""
Create a set of packages to be excluded.
This function receives a list of strings, takes the packages' names
and transforms them to a set.
If the list contains packages with @ but the delimiter input is '==',
then the package is ignored.
Parameters
----------
package_list : list
List of strings with each element representing a package name
and version.
delimiter : str
Delimiter separating a package name from its version (e.g. "==" or "@").
Returns
-------
set
Set with the package names.
"""
list_of_package_names = []
for package in package_list:
if delimiter in package:
package_name = package.split(delimiter)
list_of_package_names.append(package_name[0].strip())
package_set = set(list_of_package_names)
return package_set
def write_file(filename, information):
"""
Write string information into a file.
Parameters
----------
filename : str
Name of the file where the information will be saved.
It should have the filepath and the file extension (.txt).
information : list
List of lines to be written to the file.
"""
with open(filename, "w") as file_save:
for line in information:
file_save.write(line)
if __name__ == "__main__":
rqf()
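The core of the filter above is a set difference on package names; a condensed, self-contained restatement of `remove_common_elements`:

```python
# Condensed restatement of remove_common_elements above: keep only the
# requirement lines whose name (the part before "==") is not in remove_set.
def remove_common_elements(package_list, remove_set):
    return [pkg for pkg in package_list
            if pkg.split("==")[0].strip() not in remove_set]
```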
import datetime
import json
import os
from functools import cached_property
from pathlib import Path
from typing import TYPE_CHECKING, TypedDict, Optional, Union, List, Tuple
from platformdirs import user_cache_dir
from requests import __version__
from requirements_rating.sourcerank import SourceRankBreakdown
if TYPE_CHECKING:
from requirements_rating.packages import Package
RATING_CACHE_DIR = Path(user_cache_dir()) / "requirements-rating" / "rating"
MAX_CACHE_AGE = datetime.timedelta(days=7)
class PackageRatingParams(TypedDict):
sourcerank_breakdown: SourceRankBreakdown
class PackageRatingCache(TypedDict):
package_name: str
updated_at: str
schema_version: str
params: PackageRatingParams
class ScoreBase:
def __add__(self, other: "ScoreBase"):
raise NotImplementedError
def __int__(self) -> int:
raise NotImplementedError
def __repr__(self) -> str:
raise NotImplementedError
class ScoreValue(ScoreBase):
def __init__(self, value: int):
self.value = value
def __add__(self, other: "ScoreBase") -> "ScoreBase":
if isinstance(other, ScoreValue):
return ScoreValue(int(self) + int(other))
elif isinstance(other, Max):
return other + self
def __int__(self) -> int:
return self.value
def __repr__(self) -> str:
return f"{self.value}"
class Max(ScoreBase):
def __init__(self, max_score: int, current_score: int = 0):
self.max_score = max_score
self.current_score = current_score
def __add__(self, other: ScoreBase):
if isinstance(other, ScoreValue):
score = self.current_score + int(other)
self.current_score = max(self.max_score, score)
if isinstance(other, Max) and other.max_score < self.max_score:
other.current_score = self.current_score
return other
return self
def __int__(self) -> int:
return self.current_score
def __repr__(self) -> str:
return f"<Max current: {self.current_score} max: {self.max_score}>"
class PackageBreakdown:
def __init__(self, breakdown_key: str, score: Optional[Union[int, Max]] = None):
self.breakdown_key = breakdown_key
self._score = score
def get_breakdown_value(self, package_rating: "PackageRating") -> Union[int, bool]:
value = package_rating.params
for subkey in self.breakdown_key.split("."):
value = value[subkey]
return value
def get_score(self, package_rating: "PackageRating") -> ScoreBase:
value = self.get_breakdown_value(package_rating)
if value and self._score:
return ScoreValue(self._score)
if not value and self._score:
return ScoreValue(0)
if isinstance(value, bool):
raise ValueError("Cannot calculate score for boolean value")
return ScoreValue(value)
BREAKDOWN_SCORES = [
PackageBreakdown("sourcerank_breakdown.basic_info_present", 1),
PackageBreakdown("sourcerank_breakdown.source_repository_present", 1),
PackageBreakdown("sourcerank_breakdown.readme_present", 1),
PackageBreakdown("sourcerank_breakdown.license_present", 1),
PackageBreakdown("sourcerank_breakdown.has_multiple_versions", 3),
PackageBreakdown("sourcerank_breakdown.dependent_projects"),
PackageBreakdown("sourcerank_breakdown.dependent_repositories"),
PackageBreakdown("sourcerank_breakdown.stars"),
PackageBreakdown("sourcerank_breakdown.contributors"),
]
class PackageRating:
def __init__(self, package: "Package", params: Optional[PackageRatingParams] = None):
self.package = package
if not params and self.is_cache_expired:
params = self.get_params_from_package()
self.save_to_cache()
elif not params:
params = self.get_params_from_cache()
self.params = params
@property
def is_cache_expired(self) -> bool:
return not self.cache_path.exists() or \
self.cache_path.stat().st_mtime < (datetime.datetime.now() - MAX_CACHE_AGE).timestamp()
@property
def cache_path(self) -> Path:
return RATING_CACHE_DIR / f"{self.package.name}.json"
def get_from_cache(self) -> Optional[PackageRatingCache]:
with open(self.cache_path) as file:
data = json.load(file)
if data["schema_version"] != __version__:
return None
return data
def save_to_cache(self) -> PackageRatingCache:
cache = {
"package_name": self.package.name,
"updated_at": datetime.datetime.now().isoformat(),
"schema_version": __version__,
"params": self.get_params_from_package(),
}
os.makedirs(str(self.cache_path.parent), exist_ok=True)
with open(str(self.cache_path), "w") as file:
json.dump(cache, file)
return cache
def get_params_from_cache(self) -> PackageRatingParams:
cache = self.get_from_cache()
if cache is None:
cache = self.save_to_cache()
return cache["params"]
def get_params_from_package(self) -> PackageRatingParams:
return {
"sourcerank_breakdown": self.package.sourcerank.breakdown,
}
@cached_property
def breakdown_scores(self) -> List[Tuple[str, ScoreBase]]:
return [
(breakdown.breakdown_key, breakdown.get_score(self))
for breakdown in BREAKDOWN_SCORES
]
@cached_property
def descendant_rating_scores(self) -> List[Tuple["Package", int]]:
return [
(package, package.rating.rating_score)
for package in self.package.get_descendant_packages()
]
@cached_property
def rating_score(self):
scores = dict(self.breakdown_scores).values()
value = ScoreValue(0)
for score in scores:
value += score
return int(value)
@cached_property
def global_rating_score(self):
return min([self.rating_score] + list(dict(self.descendant_rating_scores).values()), default=0)
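The cache-freshness check in `is_cache_expired` compares the file's mtime against a cutoff. Isolated as a plain function (using the same seven-day window defined above):

```python
import datetime
from pathlib import Path

MAX_CACHE_AGE = datetime.timedelta(days=7)

def cache_expired(path: Path) -> bool:
    # Mirrors PackageRating.is_cache_expired: a missing file, or one whose
    # mtime predates now - MAX_CACHE_AGE, counts as expired.
    cutoff = (datetime.datetime.now() - MAX_CACHE_AGE).timestamp()
    return not path.exists() or path.stat().st_mtime < cutoff
```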
requirements-tools
========
[](https://pypi.python.org/pypi/requirements-tools)
[](https://github.com/Yelp/requirements-tools/actions?query=workflow%3Abuild)
requirements-tools contains scripts for working with Python requirements,
primarily in applications.
It consists of three scripts:
* `check-requirements`
* `upgrade-requirements`
* `visualize-requirements`
These are discussed in detail below.
## Our stance on pinning requirements
In applications, you want to ensure repeatable builds. It's important that the
version of code you tested with is the same version that goes to production,
and that upgrades of third-party packages don't break your application. Since
each commit represents a precise deployment (code and its dependencies), you
can always easily see what changed between two deployments, and count on being
able to revert changes.
By contrast, in libraries, you want to maximize compatibility and know about
incompatibilities with other libraries as soon as possible. In libraries the
best practice is to only loosely pin requirements, and only when absolutely
necessary.
### Recommended requirements setup for applications
The recommended layout for your application is:
* No `setup.py`. `setup.py` is not entirely useful for applications, we'll
specify minimal requirements in `requirements-minimal.txt` (see below).
(Some applications have special needs for a `setup.py`, and that's fine—but
we won't use them for listing requirements).
* `requirements-minimal.txt` contains a list of unpinned (or loosely-pinned)
top-level requirements needed in production. For example, you might list
`requests`, but you wouldn't list libraries `requests` depends on.
If you know of a problematic version, you should *loosely* pin here (e.g.
`requests>=4` if you know you depend on APIs introduced in version 4).
* `requirements-dev-minimal.txt` is much like `requirements-minimal.txt`, but
is intended for dev dependencies. You should list loosely-pinned top-level
dependencies only.
* `requirements.txt` contains a list of all production dependencies (and
sub-dependencies) with strict pinning. When deploying your app, you install
dependencies from this file, not `requirements-minimal.txt`.
The benefits of strict pinning are more deterministic versioning (you can
roll back more easily) and faster virtualenv generation with
[pip-faster](https://github.com/Yelp/pip-faster).
In principle, it is possible to automatically generate `requirements.txt` by
creating a fresh virtualenv, installing your app's dependencies from
`requirements-minimal.txt`, and running `pip freeze`. We provide a script
`upgrade-requirements` which effectively does this (while handling some
edge cases better).
* `requirements-dev.txt` is just like `requirements.txt` but for dev
dependencies (and dev sub-dependencies).
It could be automatically generated by creating a fresh virtualenv,
installing the requirements listed in `requirements-dev-minimal.txt`, running
`pip freeze`, and subtracting out common requirements already in
`requirements.txt`.
All of these files should be checked into your application.
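The "subtracting out common requirements" step described above can be sketched in a few lines (the function name and pins are illustrative, not part of requirements-tools):

```python
# Illustrative sketch: dev pins = everything frozen in the dev virtualenv,
# minus the pins already recorded in requirements.txt.
def dev_only_pins(dev_freeze, prod_pins):
    prod = set(prod_pins)
    return [line for line in dev_freeze if line not in prod]
```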
## check-requirements
check-requirements tests for problems with requirements. It's intended to be
run as part of your application's tests.
If your application passes check-requirements, then you have a high degree of
assurance that it correctly and fully pins its requirements.
### What it does
* Checks for requirements listed in `requirements.txt` but not
`requirements-minimal.txt` (probably indicates unused requirements or used
requirements that need to be added to `requirements-minimal.txt`).
* Checks for requirements in `requirements-minimal.txt` but not in
`requirements.txt` (generally referred to as "unpinned" requirements).
* Checks that package names are properly normalized (e.g. using dashes instead
of underscores)
* Checks for unpinned requirements or loosely-pinned requirements
### Adding `check-requirements` to your tests
You should run the executable `check-requirements` in a virtualenv with the
`requirements.txt` and `requirements-dev.txt` installed as part of your
tests.
If you're using `tox`, you can just add it to the end of `commands` and add
`requirements-tools` to your dev requirements file (probably
`requirements-dev.txt`).
## upgrade-requirements
upgrade-requirements uses the requirements structure described above in order
to upgrade both dev and prod dependencies while pruning no-longer-needed
dependencies and automatically pinning any added dependencies.
To use upgrade-requirements, install `requirements-tools` into your virtualenv
(you probably already have this, if you're using check-requirements) and run
`upgrade-requirements`.
If your project doesn't use the public PyPI, you can set the PyPI server using
the option `-i https://pypi.example.com/simple`.
## visualize-requirements
visualize-requirements prints a visual representation of your requirements,
making it easy to see why a certain package is being installed (what depends on
it).
To use it, just call `visualize-requirements requirements.txt`.
## check-all-wheels
This tool checks whether all of your dependencies are pre-wheeled on the
remote pypi server. This is useful while upgrading requirements to verify
that you won't waste time building things from source during installation.
### Checking against an internal pypi server
This script is most useful if you run an internal pypi server and pass the
`--index-url` argument.
```bash
check-all-wheels --index-url https://pypi.example.com/simple
```
### With `pip-custom-platform`
See [asottile/pip-custom-platform](https://github.com/asottile/pip-custom-platform)
for more details.
```
# Check that all prebuilt wheels exist on ubuntu xenial
check-all-wheels \
--index-url https://pypi.example.com/simple \
--install-deps pip-custom-platform \
--pip-tool 'pip-custom-platform --platform linux_ubuntu_16_04_x86_64'
```
from __future__ import division
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals
from collections import defaultdict
import contextlib
import inspect
import os
import sys
from os import path
# If this is the first import of this module then store a reference to the
# original, builtin import statement. This is used later for the optional
# patching, and restoration, of the import command.
BUILTINS_NAME = '__builtin__' if '__builtin__' in sys.modules else 'builtins'
if '__original__import' not in sys.modules:
sys.modules['__original__import'] = sys.modules[BUILTINS_NAME].__import__
class ModuleCache(object):
"""Replacement for sys.modules that respects the physical path of an import.
The standard sys.modules cache can only cache one version of a module that
has been imported. This replacement uses the file path of the requesting
module (the one performing the import) as a secondary key when drawing
from the cache.
"""
def __init__(self):
"""Initialize the module cache."""
self._cache = defaultdict(dict)
def set(self, name, path, module):
"""Store a module in the cache with the given path key.
Args:
name (str): The name of the import.
path (str): The absolute path of the requesting module directory.
module (object): The Python module object to store.
"""
self._cache[name][path] = module
def cached(self, name, path):
"""Determine if an import is already cached.
Args:
name (str): The name of the import.
path (str): The absolute path of the requesting module directory.
Returns:
Bool: True if cached else False.
"""
return name in self._cache and path in self._cache[name]
def get(self, name, path, default=None):
"""Fetch a module from the cache with a given path key.
Args:
name (str): The name of the import.
path (str): The absolute path of the requesting module directory.
default: The value to return if not found. Defaults to None.
"""
return self._cache[name].get(path, default)
def get_nearest(self, name, path, default=None):
"""Fetch the module from the cache nearest the given path key.
Args:
name (str): The name of the import.
path (str): The absolute path of the requesting module directory.
default: The value to return if not found. Defaults to None.
If the specific path key is not present in the cache, this method will
search the cache for the nearest parent path with a cached value. If
a parent cache is found it is returned. Otherwise the given default
value is returned.
"""
if self.cached(name, path):
return self.get(name, path, default)
for parent in sorted(self._cache[name], key=len, reverse=True):
if path.startswith(parent):
# Set the cache for quicker lookups later.
self.set(name, path, self.get(name, parent))
return self.get(name, path, default)
return default
@contextlib.contextmanager
def local_modules(path, pymodules='.pymodules'):
"""Insert the nearest pymodules directory into sys.path (at index 1).
Args:
path (str): The path to start the search in.
pymodules (str): The name of the pymodules directory to search for.
The default value is .pymodules.
If no valid pymodules directory is found in the path no sys.path
manipulation will take place.
"""
path = os.path.abspath(path)
previous_path = None
target_path = None
while previous_path != path:
if os.path.isdir(os.path.join(path, pymodules)):
target_path = path
break
previous_path, path = path, os.path.dirname(path)
if target_path:
sys.path.insert(1, os.path.join(target_path, pymodules))
try:
yield target_path
finally:
if target_path:
sys.path.pop(1)
class Importer(object):
"""An import statement replacement.
This import statement alternative uses a custom module cache and path
manipulation to override the default Python import behaviour.
"""
def __init__(self, cache=None, pymodules='.pymodules'):
"""Initialize the importer with a custom cache.
Args:
cache (ModuleCache): An instance of ModuleCache.
pymodules (str): The name to use when searching for pymodules.
"""
self._cache = cache or ModuleCache()
self._pymodules = pymodules or '.pymodules'
@staticmethod
def _calling_dir():
"""Get the directory containing the code that called require.
This function will look 2 or 3 frames up from the stack in order to
resolve the directory depending on whether require was called
directly or proxied through __call__.
"""
stack = inspect.stack()
current_file = __file__
if not current_file.endswith('.py'):
current_file = current_file[:-1]
calling_file = inspect.getfile(stack[2][0])
if calling_file == current_file:
calling_file = inspect.getfile(stack[3][0])
return path.dirname(path.abspath(calling_file))
def require(
self,
name,
locals=None,
globals=None,
fromlist=None,
level=None,
):
"""Import modules using the custom cache and path manipulations."""
# Default and allowed values change after 3.3.
level = -1 if sys.version_info[:2] < (3, 3) else 0
calling_dir = self._calling_dir()
module = self._cache.get_nearest(name, calling_dir)
if module:
return module
with local_modules(calling_dir, self._pymodules) as pymodules:
module = sys.modules['__original__import'](
name,
locals,
globals,
fromlist,
level,
)
if self._pymodules in repr(module):
del sys.modules[name]
# Create the module cache key if it doesn't already exist.
self._cache.set(name, pymodules, module)
# Enjoy your fresh new module object.
return module
def __call__(self, *args, **kwargs):
"""Proxy functions for require."""
return self.require(*args, **kwargs)
require = Importer()
def patch_import(importer=require):
"""Replace the builtin import statement with the wrapped version.
This function may be called multiple times without having negative side
effects.
"""
sys.modules[BUILTINS_NAME].__import__ = importer
def unpatch_import():
"""Restore the builtin import statement to the original version.
This function may be called multiple times without having negative side
effects.
"""
sys.modules[BUILTINS_NAME].__import__ = sys.modules['__original__import']
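The parent-path fallback in `ModuleCache.get_nearest` — exact hit first, then the longest cached parent prefix, then the default — can be sketched over a plain nested dict:

```python
# Stand-alone sketch of ModuleCache.get_nearest's lookup order: exact path
# first, then the longest cached parent directory prefix, then the default.
def get_nearest(cache, name, path, default=None):
    entries = cache.get(name, {})
    if path in entries:
        return entries[path]
    for parent in sorted(entries, key=len, reverse=True):
        if path.startswith(parent):
            return entries[parent]
    return default
```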
r"""
The ``codes`` object defines a mapping from common names for HTTP statuses
to their numerical codes, accessible either as attributes or as dictionary
items.
Example::
>>> import requests
>>> requests.codes['temporary_redirect']
307
>>> requests.codes.teapot
418
>>> requests.codes['\o/']
200
Some codes have multiple names, and both upper- and lower-case versions of
the names are allowed. For example, ``codes.ok``, ``codes.OK``, and
``codes.okay`` all correspond to the HTTP status code 200.
"""
from .structures import LookupDict
_codes = {
# Informational.
100: ("continue",),
101: ("switching_protocols",),
102: ("processing",),
103: ("checkpoint",),
122: ("uri_too_long", "request_uri_too_long"),
200: ("ok", "okay", "all_ok", "all_okay", "all_good", "\\o/", "✓"),
201: ("created",),
202: ("accepted",),
203: ("non_authoritative_info", "non_authoritative_information"),
204: ("no_content",),
205: ("reset_content", "reset"),
206: ("partial_content", "partial"),
207: ("multi_status", "multiple_status", "multi_stati", "multiple_stati"),
208: ("already_reported",),
226: ("im_used",),
# Redirection.
300: ("multiple_choices",),
301: ("moved_permanently", "moved", "\\o-"),
302: ("found",),
303: ("see_other", "other"),
304: ("not_modified",),
305: ("use_proxy",),
306: ("switch_proxy",),
307: ("temporary_redirect", "temporary_moved", "temporary"),
308: (
"permanent_redirect",
"resume_incomplete",
"resume",
), # "resume" and "resume_incomplete" to be removed in 3.0
# Client Error.
400: ("bad_request", "bad"),
401: ("unauthorized",),
402: ("payment_required", "payment"),
403: ("forbidden",),
404: ("not_found", "-o-"),
405: ("method_not_allowed", "not_allowed"),
406: ("not_acceptable",),
407: ("proxy_authentication_required", "proxy_auth", "proxy_authentication"),
408: ("request_timeout", "timeout"),
409: ("conflict",),
410: ("gone",),
411: ("length_required",),
412: ("precondition_failed", "precondition"),
413: ("request_entity_too_large",),
414: ("request_uri_too_large",),
415: ("unsupported_media_type", "unsupported_media", "media_type"),
416: (
"requested_range_not_satisfiable",
"requested_range",
"range_not_satisfiable",
),
417: ("expectation_failed",),
418: ("im_a_teapot", "teapot", "i_am_a_teapot"),
421: ("misdirected_request",),
422: ("unprocessable_entity", "unprocessable"),
423: ("locked",),
424: ("failed_dependency", "dependency"),
425: ("unordered_collection", "unordered"),
426: ("upgrade_required", "upgrade"),
428: ("precondition_required", "precondition"),
429: ("too_many_requests", "too_many"),
431: ("header_fields_too_large", "fields_too_large"),
444: ("no_response", "none"),
449: ("retry_with", "retry"),
450: ("blocked_by_windows_parental_controls", "parental_controls"),
451: ("unavailable_for_legal_reasons", "legal_reasons"),
499: ("client_closed_request",),
# Server Error.
500: ("internal_server_error", "server_error", "/o\\", "✗"),
501: ("not_implemented",),
502: ("bad_gateway",),
503: ("service_unavailable", "unavailable"),
504: ("gateway_timeout",),
505: ("http_version_not_supported", "http_version"),
506: ("variant_also_negotiates",),
507: ("insufficient_storage",),
509: ("bandwidth_limit_exceeded", "bandwidth"),
510: ("not_extended",),
511: ("network_authentication_required", "network_auth", "network_authentication"),
}
codes = LookupDict(name="status_codes")
def _init():
for code, titles in _codes.items():
for title in titles:
setattr(codes, title, code)
if not title.startswith(("\\", "/")):
setattr(codes, title.upper(), code)
def doc(code):
names = ", ".join(f"``{n}``" for n in _codes[code])
return "* %d: %s" % (code, names)
global __doc__
__doc__ = (
__doc__ + "\n" + "\n".join(doc(code) for code in sorted(_codes))
if __doc__ is not None
else None
)
_init() | /requists-2.28.2.tar.gz/requists-2.28.2/requests/status_codes.py | 0.846308 | 0.566258 | status_codes.py | pypi |
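The attribute-and-item lookup behavior documented above can be reproduced in a minimal, self-contained sketch (the `_codes` subset here is abbreviated for illustration; the real table is the full dictionary above):

```python
class LookupDict(dict):
    """Dictionary lookup object (as defined in requests.structures)."""

    def __init__(self, name=None):
        self.name = name
        super().__init__()

    def __getitem__(self, key):
        # Fall through to None for unknown names instead of raising KeyError.
        return self.__dict__.get(key, None)


codes = LookupDict(name="status_codes")

# Abbreviated subset of the _codes table above.
for code, titles in {200: ("ok", "okay"), 418: ("teapot",)}.items():
    for title in titles:
        # Names are set as *attributes*, so both attribute and item
        # access resolve to the same numeric status code.
        setattr(codes, title, code)
        setattr(codes, title.upper(), code)

print(codes.ok)          # 200
print(codes["teapot"])   # 418
print(codes["missing"])  # None
```

Because `__getitem__` delegates to `__dict__.get`, unknown names return `None` rather than raising, matching the "fall-through" comment in `LookupDict`.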
from . import sessions
def request(method, url, **kwargs):
"""Constructs and sends a :class:`Request <Request>`.
:param method: method for the new :class:`Request` object: ``GET``, ``OPTIONS``, ``HEAD``, ``POST``, ``PUT``, ``PATCH``, or ``DELETE``.
:param url: URL for the new :class:`Request` object.
:param params: (optional) Dictionary, list of tuples or bytes to send
in the query string for the :class:`Request`.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) A JSON serializable Python object to send in the body of the :class:`Request`.
:param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`.
:param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`.
:param files: (optional) Dictionary of ``'name': file-like-objects`` (or ``{'name': file-tuple}``) for multipart encoding upload.
``file-tuple`` can be a 2-tuple ``('filename', fileobj)``, 3-tuple ``('filename', fileobj, 'content_type')``
or a 4-tuple ``('filename', fileobj, 'content_type', custom_headers)``, where ``'content-type'`` is a string
defining the content type of the given file and ``custom_headers`` a dict-like object containing additional headers
to add for the file.
:param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
:param timeout: (optional) How many seconds to wait for the server to send data
before giving up, as a float, or a :ref:`(connect timeout, read
timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param allow_redirects: (optional) Boolean. Enable/disable GET/OPTIONS/POST/PUT/PATCH/DELETE/HEAD redirection. Defaults to ``True``.
:type allow_redirects: bool
:param proxies: (optional) Dictionary mapping protocol to the URL of the proxy.
:param verify: (optional) Either a boolean, in which case it controls whether we verify
the server's TLS certificate, or a string, in which case it must be a path
to a CA bundle to use. Defaults to ``True``.
:param stream: (optional) if ``False``, the response content will be immediately downloaded.
:param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
:return: :class:`Response <Response>` object
:rtype: requests.Response
Usage::
>>> import requests
>>> req = requests.request('GET', 'https://httpbin.org/get')
>>> req
<Response [200]>
"""
# By using the 'with' statement we are sure the session is closed, thus we
# avoid leaving sockets open which can trigger a ResourceWarning in some
# cases, and look like a memory leak in others.
with sessions.Session() as session:
return session.request(method=method, url=url, **kwargs)
def get(url, params=None, **kwargs):
r"""Sends a GET request.
:param url: URL for the new :class:`Request` object.
:param params: (optional) Dictionary, list of tuples or bytes to send
in the query string for the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request("get", url, params=params, **kwargs)
def options(url, **kwargs):
r"""Sends an OPTIONS request.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request("options", url, **kwargs)
def head(url, **kwargs):
r"""Sends a HEAD request.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes. If
`allow_redirects` is not provided, it will be set to `False` (as
opposed to the default :meth:`request` behavior).
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
kwargs.setdefault("allow_redirects", False)
return request("head", url, **kwargs)
def post(url, data=None, json=None, **kwargs):
r"""Sends a POST request.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) json data to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request("post", url, data=data, json=json, **kwargs)
def put(url, data=None, **kwargs):
r"""Sends a PUT request.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) json data to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request("put", url, data=data, **kwargs)
def patch(url, data=None, **kwargs):
r"""Sends a PATCH request.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) json data to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request("patch", url, data=data, **kwargs)
def delete(url, **kwargs):
r"""Sends a DELETE request.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request("delete", url, **kwargs) | /requists-2.28.2.tar.gz/requists-2.28.2/requests/api.py | 0.853486 | 0.411466 | api.py | pypi |
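All of the module-level helpers above funnel through `request`, which opens a fresh session per call inside a `with` block so the underlying sockets are always closed. The delegation pattern can be sketched standalone; the `Session` class below is a stand-in for illustration only (the real one lives in `requests.sessions` and returns `Response` objects):

```python
class Session:
    """Stand-in session (hypothetical) -- just enough to show the pattern."""

    def __init__(self):
        self.closed = False

    def request(self, method, url, **kwargs):
        # Stand-in for sending a real HTTP request.
        return f"{method.upper()} {url}"

    def close(self):
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        # Guarantees cleanup even if the request raised.
        self.close()


def request(method, url, **kwargs):
    # Mirrors requests.api.request: one short-lived session per call,
    # closed on exit, so no sockets leak and no ResourceWarning fires.
    with Session() as session:
        return session.request(method=method, url=url, **kwargs)


def get(url, **kwargs):
    return request("get", url, **kwargs)


print(get("https://example.com"))  # GET https://example.com
```

The trade-off of this design is that each module-level call pays full session setup cost; reusing a long-lived `Session` directly is preferable when making many requests.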
from collections import OrderedDict
from .compat import Mapping, MutableMapping
class CaseInsensitiveDict(MutableMapping):
"""A case-insensitive ``dict``-like object.
Implements all methods and operations of
``MutableMapping`` as well as dict's ``copy``. Also
provides ``lower_items``.
All keys are expected to be strings. The structure remembers the
case of the last key to be set, and ``iter(instance)``,
``keys()``, ``items()``, ``iterkeys()``, and ``iteritems()``
will contain case-sensitive keys. However, querying and contains
testing is case insensitive::
cid = CaseInsensitiveDict()
cid['Accept'] = 'application/json'
cid['aCCEPT'] == 'application/json' # True
list(cid) == ['Accept'] # True
For example, ``headers['content-encoding']`` will return the
value of a ``'Content-Encoding'`` response header, regardless
of how the header name was originally stored.
If the constructor, ``.update``, or equality comparison
operations are given keys that have equal ``.lower()``s, the
behavior is undefined.
"""
def __init__(self, data=None, **kwargs):
self._store = OrderedDict()
if data is None:
data = {}
self.update(data, **kwargs)
def __setitem__(self, key, value):
# Use the lowercased key for lookups, but store the actual
# key alongside the value.
self._store[key.lower()] = (key, value)
def __getitem__(self, key):
return self._store[key.lower()][1]
def __delitem__(self, key):
del self._store[key.lower()]
def __iter__(self):
return (casedkey for casedkey, mappedvalue in self._store.values())
def __len__(self):
return len(self._store)
def lower_items(self):
"""Like iteritems(), but with all lowercase keys."""
return ((lowerkey, keyval[1]) for (lowerkey, keyval) in self._store.items())
def __eq__(self, other):
if isinstance(other, Mapping):
other = CaseInsensitiveDict(other)
else:
return NotImplemented
# Compare insensitively
return dict(self.lower_items()) == dict(other.lower_items())
# Copy is required
def copy(self):
return CaseInsensitiveDict(self._store.values())
def __repr__(self):
return str(dict(self.items()))
class LookupDict(dict):
"""Dictionary lookup object."""
def __init__(self, name=None):
self.name = name
super().__init__()
def __repr__(self):
return f"<lookup '{self.name}'>"
def __getitem__(self, key):
# We allow fall-through here, so values default to None
return self.__dict__.get(key, None)
def get(self, key, default=None):
return self.__dict__.get(key, default) | /requists-2.28.2.tar.gz/requists-2.28.2/requests/structures.py | 0.926893 | 0.4231 | structures.py | pypi |
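The core trick in `CaseInsensitiveDict` is storing `(original_key, value)` tuples under lowercased keys. A condensed, standalone sketch of just that pattern (omitting the full `MutableMapping` interface implemented above):

```python
from collections import OrderedDict


class CaseInsensitiveDict:
    """Condensed sketch: lowercased keys for lookup, original-cased
    key kept alongside the value for iteration."""

    def __init__(self):
        self._store = OrderedDict()

    def __setitem__(self, key, value):
        # Lowercased key is the lookup key; the cased key travels
        # with the value.
        self._store[key.lower()] = (key, value)

    def __getitem__(self, key):
        return self._store[key.lower()][1]

    def __iter__(self):
        # Iteration yields the last-set cased form of each key.
        return (cased_key for cased_key, _ in self._store.values())


cid = CaseInsensitiveDict()
cid["Accept"] = "application/json"
print(cid["aCCEPT"])  # application/json
print(list(cid))      # ['Accept']
```

This is why `headers['content-encoding']` finds a `'Content-Encoding'` header: lookups normalize to lowercase, while iteration preserves the stored case.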
# ReQuSim
[](https://pypi.python.org/pypi/requsim)
[](https://requsim.readthedocs.io)
[](https://github.com/jwallnoefer/requsim/actions/workflows/ci.yaml)
ReQuSim is a simulation platform for quantum repeaters. It allows you to
evaluate quantum repeater strategies for long-distance quantum key
distribution and entanglement distribution protocols, while taking
arbitrary error models into account.
## Installation
You can install ReQuSim into your Python environment from the Python Package
Index:
```
pip install requsim
```
As with any Python package, installing it can overwrite already installed
versions of its dependencies in your environment, so installing it into a
dedicated virtual environment may be preferable.
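A typical way to do this (the environment name `requsim-env` is arbitrary):

```shell
# Create and activate a dedicated virtual environment,
# then install requsim into it.
python3 -m venv requsim-env
source requsim-env/bin/activate   # on Windows: requsim-env\Scripts\activate
pip install requsim
```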
## Documentation
The Documentation is hosted on [readthedocs](https://readthedocs.org/) and
includes some example setups of how to use ReQuSim to simulate basic
key distribution protocols.
Documentation: [https://requsim.readthedocs.io](https://requsim.readthedocs.io)
## Scope
The aim of ReQuSim is to model quantum repeater protocols accurately and gain
insight where analytical results are hard to obtain.
The level of abstraction
is chosen such that one can consider very general error models (basically
anything that can be described as a quantum channel), without modeling down
to the actual physical level.
The abstractions used in ReQuSim lend themselves to describing protocols as
high-level strategies (e.g. if two pairs are present, perform entanglement
swapping), but in principle any strategy can be used to schedule arbitrary
events in the event system.
Classical communication plays an important role in quantum repeater protocols
and cannot be ignored, especially because the timing of when quantum
operations need to be performed is central to what the simulation aims
to capture. ReQuSim takes into account the timing information from
classical communication steps, but does not model them down to the level of
individual messages being passed.
In summary, ReQuSim can be used for:
* Modelling a variety of quantum repeater setups, such as fiber-based and
free-space-based repeaters, through flexible loss and noise models.
* Obtaining numerical key rates for repeater protocols that are challenging to
evaluate analytically.
* Testing the effect of strategies for repeater protocols at a high level,
e.g.
- Should one discard qubits that sit in storage for too long?
- Does adding an additional repeater station help for a particular setup?
* Evaluating the effect of parameters on the overall performance of a
repeater setup. (e.g. if the error model is based on experimental data,
this could assist in determining whether improving some experimental
parameter is worthwhile.)
but it is not intended to:
* Develop code that directly interacts with future quantum hardware.
* Model effects at the physical layer and some aspects of link layer
protocols in detail. (However, they can be incorporated indirectly via
quantum channels and probability distributions.)
* Simulate huge networks with 1000s of parties.
Support for elementary building blocks other than Bell pairs is considered for
future versions (e.g. distribution of GHZ states via a multipartite
repeater architecture).
### Other quantum network simulators
ReQuSim has a different scope and aim from some other simulation packages for
quantum networks (list obviously not exhaustive):
* [NetSquid](https://netsquid.org/): Models the physical and
link layers in greater detail. Supports multiple ways to represent quantum
states (e.g. pure states, mixed states, stabilizers).
* [QuISP](https://github.com/sfc-aqua/quisp): Tracks errors instead of full
states. While lower level operations are supported, the focus is on
networking aspects.
* [QuNetSim](https://github.com/tqsd/QuNetSim): Supports multiple backends
for simulating quantum objects, which can support lower level operations.
QuNetSim itself focuses on the networking aspects.
ReQuSim's level of abstraction works very well for exploring and comparing
strategies for quantum repeaters. While it aims to be flexible and
extendable, another set of abstractions might work better for other questions.
## Publications and related projects
An earlier (unreleased) version of requsim was used for this publication:
> Simulating quantum repeater strategies for multiple satellites <br>
> J. Wallnöfer, F. Hahn, M. Gündoğan, J. S. Sidhu, F. Krüger, N. Walk, J. Eisert, J. Wolters <br>
> Preprint: [arXiv:2110.15806 [quant-ph]](https://arxiv.org/abs/2110.15806) <br>
> Code archive: [jwallnoefer/multisat_qrepeater_sim_archive](https://github.com/jwallnoefer/multisat_qrepeater_sim_archive)
| /requsim-0.3.tar.gz/requsim-0.3/README.md | 0.682679 | 0.991969 | README.md | pypi |
import numpy as np
import pandas as pd
from requsim.tools.protocol import TwoLinkProtocol
from requsim.tools.noise_channels import z_noise_channel
from requsim.tools.evaluation import standard_bipartite_evaluation
from requsim.libs.aux_functions import distance
import requsim.libs.matrix as mat
from requsim.events import EntanglementSwappingEvent
from requsim.world import World
from requsim.noise import NoiseChannel, NoiseModel
from requsim.quantum_objects import Station, SchedulingSource
def construct_dephasing_noise_channel(dephasing_time):
def lambda_dp(t):
return (1 - np.exp(-t / dephasing_time)) / 2
def dephasing_noise_func(rho, t):
return z_noise_channel(rho=rho, epsilon=lambda_dp(t))
return NoiseChannel(n_qubits=1, channel_function=dephasing_noise_func)
class CallbackProtocol(TwoLinkProtocol):
def _check_for_swapping_A(self, event_dict):
right_pairs = self._get_right_pairs()
if not right_pairs:
return
left_pair = event_dict["output_pair"]
right_pair = right_pairs[0]
ent_swap_event = EntanglementSwappingEvent(
time=self.world.event_queue.current_time,
pairs=[left_pair, right_pair],
station=self.station_central,
)
ent_swap_event.add_callback(self._after_swapping)
self.world.event_queue.add_event(ent_swap_event)
def _check_for_swapping_B(self, event_dict):
left_pairs = self._get_left_pairs()
if not left_pairs:
return
left_pair = left_pairs[0]
right_pair = event_dict["output_pair"]
ent_swap_event = EntanglementSwappingEvent(
time=self.world.event_queue.current_time,
pairs=[left_pair, right_pair],
station=self.station_central,
)
self.world.event_queue.add_event(ent_swap_event)
ent_swap_event.add_callback(self._after_swapping)
def _after_swapping(self, event_dict):
long_distance_pairs = self._get_long_range_pairs()
if long_distance_pairs:
for pair in long_distance_pairs:
self._eval_pair(pair)
for qubit in pair.qubits:
qubit.destroy()
pair.destroy()
self.start()
def start(self):
"""Start the event chain.
This needs to be called once to schedule the initial events.
"""
initial_event_A = self.source_A.schedule_event()
initial_event_A.add_callback(self._check_for_swapping_A)
initial_event_B = self.source_B.schedule_event()
initial_event_B.add_callback(self._check_for_swapping_B)
def check(self, message=None):
pass
def run(length, max_iter, params):
C = params["COMMUNICATION_SPEED"]
P_LINK = params["P_LINK"]
T_DP = params["T_DP"]
LAMBDA_BSM = params["LAMBDA_BSM"]
L_ATT = 22e3 # attenuation length
# define functions for link generation behavior
def state_generation(source):
# this returns the density matrix of a successful trial
# this particular function already assumes some things that are only
# appropriate for this particular setup, e.g. the source is at one
# of the stations and the end stations do not decohere
state = mat.phiplus @ mat.H(mat.phiplus)
comm_distance = max(
distance(source, source.target_stations[0]),
distance(source, source.target_stations[1]),
)
storage_time = 2 * comm_distance / C
for idx, station in enumerate(source.target_stations):
if station.memory_noise is not None:
state = station.memory_noise.apply_to(
rho=state, qubit_indices=[idx], t=storage_time
)
return state
def time_distribution(source):
comm_distance = max(
distance(source, source.target_stations[0]),
distance(source, source.target_stations[1]),
)
trial_time = 2 * comm_distance / C
eta = P_LINK * np.exp(-comm_distance / L_ATT)
num_trials = np.random.geometric(eta)
time_taken = num_trials * trial_time
return time_taken
def BSM_error_func(rho):
return LAMBDA_BSM * rho + (1 - LAMBDA_BSM) * mat.I(4) / 4
BSM_noise_channel = NoiseChannel(n_qubits=2, channel_function=BSM_error_func)
BSM_noise_model = NoiseModel(channel_before=BSM_noise_channel)
# perform the world setup
world = World()
station_A = Station(world=world, position=0)
station_central = Station(
world=world,
position=length / 2,
memory_noise=construct_dephasing_noise_channel(T_DP),
BSM_noise_model=BSM_noise_model,
)
station_B = Station(world=world, position=length)
source_A = SchedulingSource(
world=world,
position=station_central.position,
target_stations=[station_A, station_central],
time_distribution=time_distribution,
state_generation=state_generation,
)
source_B = SchedulingSource(
world=world,
position=station_central.position,
target_stations=[station_central, station_B],
time_distribution=time_distribution,
state_generation=state_generation,
)
protocol = CallbackProtocol(world=world, communication_speed=C)
protocol.setup()
protocol.start()
while len(protocol.time_list) < max_iter:
world.event_queue.resolve_next_event()
return protocol
if __name__ == "__main__":
params = {
"P_LINK": 0.80, # link generation probability
"T_DP": 100e-3, # dephasing time
"LAMBDA_BSM": 0.99, # Bell-State-Measurement ideality parameter
"COMMUNICATION_SPEED": 2e8, # speed of light in optical fibre
}
length_list = np.linspace(20e3, 200e3, num=8)
max_iter = 1000
raw_data = [
run(length=length, max_iter=max_iter, params=params).data
for length in length_list
]
result_list = [standard_bipartite_evaluation(data_frame=df) for df in raw_data]
results = pd.DataFrame(
data=result_list,
index=length_list,
columns=["fidelity", "fidelity_std", "key_per_time", "key_per_time_std"],
)
print(results)
# plotting key_per_time vs. length is usually what you want to do with these | /requsim-0.3.tar.gz/requsim-0.3/docs/examples/event_protocol_with_callbacks.py | 0.684475 | 0.256716 | event_protocol_with_callbacks.py | pypi |
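The `time_distribution` helper above draws the number of link-generation attempts from a geometric distribution with success probability `eta`, then multiplies by the round-trip trial time. A stdlib-only sketch of the same sampling (the name `sample_waiting_time` is mine, not part of requsim, and inverse-transform sampling stands in for `np.random.geometric`):

```python
import math
import random

def sample_waiting_time(eta, trial_time, rng=None):
    # stdlib stand-in for np.random.geometric(eta): the number of
    # Bernoulli(eta) trials until the first success, times the
    # duration of a single trial
    rng = rng or random.Random(0)
    u = rng.random()
    num_trials = max(1, math.ceil(math.log(1.0 - u) / math.log(1.0 - eta)))
    return num_trials * trial_time

t = sample_waiting_time(eta=0.5, trial_time=1e-3)
print(t)  # an integer multiple of trial_time, at least 1e-3
```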
import numpy as np
from requsim.quantum_objects import Station, Source, Pair
from requsim.world import World
import requsim.libs.matrix as mat
total_length = 2000 # meters
initial_state = mat.phiplus @ mat.H(mat.phiplus) # perfect Bell state
# Step 1: Perform world setup
world = World()
station_a = Station(world=world, position=0, label="Alice")
station_b = Station(world=world, position=total_length, label="Bob")
station_central = Station(world=world, position=total_length / 2, label="Repeater")
# Sources generate an entangled pair and place them into storage at two stations
# here they are positioned at the central station and send qubits out to the
# outer stations
source_a = Source(
world=world,
position=station_central.position,
target_stations=[station_a, station_central],
)
source_b = Source(
world=world,
position=station_central.position,
target_stations=[station_central, station_b],
)
if __name__ == "__main__":
print("World status after Step 1: Perform world setup")
world.print_status()
# Step 2: Distribute pairs
pair_a = source_a.generate_pair(initial_state=initial_state)
pair_b = source_b.generate_pair(initial_state=initial_state)
if __name__ == "__main__":
print("\n\nWorld status after Step 2: Distribute pairs")
world.print_status()
# Step 3: perform entanglement swapping
four_qubit_state = mat.tensor(pair_a.state, pair_b.state)
operator = mat.tensor(mat.I(2), mat.H(mat.phiplus), mat.I(2))
state_after_swapping = operator @ four_qubit_state @ mat.H(operator)
state_after_swapping = state_after_swapping / np.trace(state_after_swapping)
# remove outdated objects
pair_a.destroy() # destroying a Pair object does not remove the associated qubits
pair_b.destroy()
for qubit in station_central.qubits:
qubit.destroy()
new_pair = Pair(
world=world,
qubits=[station_a.qubits[0], station_b.qubits[0]],
initial_state=state_after_swapping,
)
if __name__ == "__main__":
print("\n\nWorld status after Step 3: Perform entanglement swapping")
world.print_status()
# at this point you would probably collect information about the long distance
# state before removing it from the world | /requsim-0.3.tar.gz/requsim-0.3/docs/examples/manual_simple_repeater.py | 0.53048 | 0.469155 | manual_simple_repeater.py | pypi |
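Step 3 above renormalizes the post-measurement state by dividing by its trace. A dependency-free sketch of that normalization on plain nested lists (helper names are mine; requsim uses numpy for this):

```python
def trace(m):
    # sum of the diagonal entries of a square matrix given as nested lists
    return sum(m[i][i] for i in range(len(m)))

def normalize(rho):
    # mirrors: state_after_swapping / np.trace(state_after_swapping)
    t = trace(rho)
    return [[x / t for x in row] for row in rho]

rho = [[0.5, 0.0], [0.0, 1.5]]
print(trace(normalize(rho)))  # 1.0
```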
r"""
The ``codes`` object defines a mapping from common names for HTTP statuses
to their numerical codes, accessible either as attributes or as dictionary
items.
Example::
>>> import requests
>>> requests.codes['temporary_redirect']
307
>>> requests.codes.teapot
418
>>> requests.codes['\o/']
200
Some codes have multiple names, and both upper- and lower-case versions of
the names are allowed. For example, ``codes.ok``, ``codes.OK``, and
``codes.okay`` all correspond to the HTTP status code 200.
"""
from .structures import LookupDict
_codes = {
# Informational.
100: ("continue",),
101: ("switching_protocols",),
102: ("processing",),
103: ("checkpoint",),
122: ("uri_too_long", "request_uri_too_long"),
200: ("ok", "okay", "all_ok", "all_okay", "all_good", "\\o/", "✓"),
201: ("created",),
202: ("accepted",),
203: ("non_authoritative_info", "non_authoritative_information"),
204: ("no_content",),
205: ("reset_content", "reset"),
206: ("partial_content", "partial"),
207: ("multi_status", "multiple_status", "multi_stati", "multiple_stati"),
208: ("already_reported",),
226: ("im_used",),
# Redirection.
300: ("multiple_choices",),
301: ("moved_permanently", "moved", "\\o-"),
302: ("found",),
303: ("see_other", "other"),
304: ("not_modified",),
305: ("use_proxy",),
306: ("switch_proxy",),
307: ("temporary_redirect", "temporary_moved", "temporary"),
308: (
"permanent_redirect",
"resume_incomplete",
"resume",
), # "resume" and "resume_incomplete" to be removed in 3.0
# Client Error.
400: ("bad_request", "bad"),
401: ("unauthorized",),
402: ("payment_required", "payment"),
403: ("forbidden",),
404: ("not_found", "-o-"),
405: ("method_not_allowed", "not_allowed"),
406: ("not_acceptable",),
407: ("proxy_authentication_required", "proxy_auth", "proxy_authentication"),
408: ("request_timeout", "timeout"),
409: ("conflict",),
410: ("gone",),
411: ("length_required",),
412: ("precondition_failed", "precondition"),
413: ("request_entity_too_large",),
414: ("request_uri_too_large",),
415: ("unsupported_media_type", "unsupported_media", "media_type"),
416: (
"requested_range_not_satisfiable",
"requested_range",
"range_not_satisfiable",
),
417: ("expectation_failed",),
418: ("im_a_teapot", "teapot", "i_am_a_teapot"),
421: ("misdirected_request",),
422: ("unprocessable_entity", "unprocessable"),
423: ("locked",),
424: ("failed_dependency", "dependency"),
425: ("unordered_collection", "unordered"),
426: ("upgrade_required", "upgrade"),
428: ("precondition_required", "precondition"),
429: ("too_many_requests", "too_many"),
431: ("header_fields_too_large", "fields_too_large"),
444: ("no_response", "none"),
449: ("retry_with", "retry"),
450: ("blocked_by_windows_parental_controls", "parental_controls"),
451: ("unavailable_for_legal_reasons", "legal_reasons"),
499: ("client_closed_request",),
# Server Error.
500: ("internal_server_error", "server_error", "/o\\", "✗"),
501: ("not_implemented",),
502: ("bad_gateway",),
503: ("service_unavailable", "unavailable"),
504: ("gateway_timeout",),
505: ("http_version_not_supported", "http_version"),
506: ("variant_also_negotiates",),
507: ("insufficient_storage",),
509: ("bandwidth_limit_exceeded", "bandwidth"),
510: ("not_extended",),
511: ("network_authentication_required", "network_auth", "network_authentication"),
}
codes = LookupDict(name="status_codes")
def _init():
for code, titles in _codes.items():
for title in titles:
setattr(codes, title, code)
if not title.startswith(("\\", "/")):
setattr(codes, title.upper(), code)
def doc(code):
names = ", ".join(f"``{n}``" for n in _codes[code])
return "* %d: %s" % (code, names)
global __doc__
__doc__ = (
__doc__ + "\n" + "\n".join(doc(code) for code in sorted(_codes))
if __doc__ is not None
else None
)
_init() | /requstsss-2.28.2.tar.gz/requstsss-2.28.2/requests/status_codes.py | 0.846308 | 0.566258 | status_codes.py | pypi |
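The `_init` loop above turns the `_codes` table into attribute-style lookups on the `codes` object. A self-contained sketch of the same mechanism with a trimmed code table:

```python
class LookupDict(dict):
    # minimal copy of requests.structures.LookupDict: missing
    # lookups fall through to None instead of raising KeyError
    def __init__(self, name=None):
        self.name = name
        super().__init__()

    def __getitem__(self, key):
        return self.__dict__.get(key, None)

codes = LookupDict(name="status_codes")
for code, titles in {200: ("ok", "\\o/"), 404: ("not_found",)}.items():
    for title in titles:
        setattr(codes, title, code)
        if not title.startswith(("\\", "/")):
            setattr(codes, title.upper(), code)

print(codes.ok, codes["not_found"], codes["nope"])  # 200 404 None
```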
from . import sessions
def request(method, url, **kwargs):
"""Constructs and sends a :class:`Request <Request>`.
:param method: method for the new :class:`Request` object: ``GET``, ``OPTIONS``, ``HEAD``, ``POST``, ``PUT``, ``PATCH``, or ``DELETE``.
:param url: URL for the new :class:`Request` object.
:param params: (optional) Dictionary, list of tuples or bytes to send
in the query string for the :class:`Request`.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) A JSON serializable Python object to send in the body of the :class:`Request`.
:param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`.
:param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`.
:param files: (optional) Dictionary of ``'name': file-like-objects`` (or ``{'name': file-tuple}``) for multipart encoding upload.
``file-tuple`` can be a 2-tuple ``('filename', fileobj)``, 3-tuple ``('filename', fileobj, 'content_type')``
or a 4-tuple ``('filename', fileobj, 'content_type', custom_headers)``, where ``'content-type'`` is a string
defining the content type of the given file and ``custom_headers`` a dict-like object containing additional headers
to add for the file.
:param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
:param timeout: (optional) How many seconds to wait for the server to send data
before giving up, as a float, or a :ref:`(connect timeout, read
timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param allow_redirects: (optional) Boolean. Enable/disable GET/OPTIONS/POST/PUT/PATCH/DELETE/HEAD redirection. Defaults to ``True``.
:type allow_redirects: bool
:param proxies: (optional) Dictionary mapping protocol to the URL of the proxy.
:param verify: (optional) Either a boolean, in which case it controls whether we verify
the server's TLS certificate, or a string, in which case it must be a path
to a CA bundle to use. Defaults to ``True``.
:param stream: (optional) if ``False``, the response content will be immediately downloaded.
:param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
:return: :class:`Response <Response>` object
:rtype: requests.Response
Usage::
>>> import requests
>>> req = requests.request('GET', 'https://httpbin.org/get')
>>> req
<Response [200]>
"""
# By using the 'with' statement we are sure the session is closed, thus we
# avoid leaving sockets open which can trigger a ResourceWarning in some
# cases, and look like a memory leak in others.
with sessions.Session() as session:
return session.request(method=method, url=url, **kwargs)
def get(url, params=None, **kwargs):
r"""Sends a GET request.
:param url: URL for the new :class:`Request` object.
:param params: (optional) Dictionary, list of tuples or bytes to send
in the query string for the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request("get", url, params=params, **kwargs)
def options(url, **kwargs):
r"""Sends an OPTIONS request.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request("options", url, **kwargs)
def head(url, **kwargs):
r"""Sends a HEAD request.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes. If
`allow_redirects` is not provided, it will be set to `False` (as
opposed to the default :meth:`request` behavior).
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
kwargs.setdefault("allow_redirects", False)
return request("head", url, **kwargs)
def post(url, data=None, json=None, **kwargs):
r"""Sends a POST request.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) json data to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request("post", url, data=data, json=json, **kwargs)
def put(url, data=None, **kwargs):
r"""Sends a PUT request.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) json data to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request("put", url, data=data, **kwargs)
def patch(url, data=None, **kwargs):
r"""Sends a PATCH request.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) json data to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request("patch", url, data=data, **kwargs)
def delete(url, **kwargs):
r"""Sends a DELETE request.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request("delete", url, **kwargs) | /requstsss-2.28.2.tar.gz/requstsss-2.28.2/requests/api.py | 0.853486 | 0.411466 | api.py | pypi |
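Note how `head` flips the redirect default with `kwargs.setdefault` while still letting the caller override it. A minimal, network-free stand-in showing that behavior (`fake_request` is mine, not part of requests):

```python
def fake_request(method, url, **kwargs):
    # stands in for requests.api.request; just echoes what it was given
    return (method, url, kwargs)

def head(url, **kwargs):
    # same default-flipping as requests.api.head
    kwargs.setdefault("allow_redirects", False)
    return fake_request("head", url, **kwargs)

print(head("https://example.org"))
print(head("https://example.org", allow_redirects=True))
```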
from collections import OrderedDict
from .compat import Mapping, MutableMapping
class CaseInsensitiveDict(MutableMapping):
"""A case-insensitive ``dict``-like object.
Implements all methods and operations of
``MutableMapping`` as well as dict's ``copy``. Also
provides ``lower_items``.
All keys are expected to be strings. The structure remembers the
case of the last key to be set, and ``iter(instance)``,
``keys()``, ``items()``, ``iterkeys()``, and ``iteritems()``
will contain case-sensitive keys. However, querying and contains
testing is case insensitive::
cid = CaseInsensitiveDict()
cid['Accept'] = 'application/json'
cid['aCCEPT'] == 'application/json' # True
list(cid) == ['Accept'] # True
For example, ``headers['content-encoding']`` will return the
value of a ``'Content-Encoding'`` response header, regardless
of how the header name was originally stored.
If the constructor, ``.update``, or equality comparison
operations are given keys that have equal ``.lower()``s, the
behavior is undefined.
"""
def __init__(self, data=None, **kwargs):
self._store = OrderedDict()
if data is None:
data = {}
self.update(data, **kwargs)
def __setitem__(self, key, value):
# Use the lowercased key for lookups, but store the actual
# key alongside the value.
self._store[key.lower()] = (key, value)
def __getitem__(self, key):
return self._store[key.lower()][1]
def __delitem__(self, key):
del self._store[key.lower()]
def __iter__(self):
return (casedkey for casedkey, mappedvalue in self._store.values())
def __len__(self):
return len(self._store)
def lower_items(self):
"""Like iteritems(), but with all lowercase keys."""
return ((lowerkey, keyval[1]) for (lowerkey, keyval) in self._store.items())
def __eq__(self, other):
if isinstance(other, Mapping):
other = CaseInsensitiveDict(other)
else:
return NotImplemented
# Compare insensitively
return dict(self.lower_items()) == dict(other.lower_items())
# Copy is required
def copy(self):
return CaseInsensitiveDict(self._store.values())
def __repr__(self):
return str(dict(self.items()))
class LookupDict(dict):
"""Dictionary lookup object."""
def __init__(self, name=None):
self.name = name
super().__init__()
def __repr__(self):
return f"<lookup '{self.name}'>"
def __getitem__(self, key):
# We allow fall-through here, so values default to None
return self.__dict__.get(key, None)
def get(self, key, default=None):
return self.__dict__.get(key, default) | /requstsss-2.28.2.tar.gz/requstsss-2.28.2/requests/structures.py | 0.926893 | 0.4231 | structures.py | pypi |
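A condensed, runnable copy of `CaseInsensitiveDict` demonstrating the behavior its docstring promises: lowercase keys for lookup, original casing preserved for iteration.

```python
from collections import OrderedDict
from collections.abc import MutableMapping

class CaseInsensitiveDict(MutableMapping):
    # trimmed copy of the class above, enough to show the semantics
    def __init__(self, data=None, **kwargs):
        self._store = OrderedDict()
        self.update(data or {}, **kwargs)

    def __setitem__(self, key, value):
        self._store[key.lower()] = (key, value)

    def __getitem__(self, key):
        return self._store[key.lower()][1]

    def __delitem__(self, key):
        del self._store[key.lower()]

    def __iter__(self):
        return (cased for cased, _ in self._store.values())

    def __len__(self):
        return len(self._store)

cid = CaseInsensitiveDict()
cid["Accept"] = "application/json"
print(cid["aCCEPT"])  # application/json
print(list(cid))      # ['Accept']
```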
from . import logger
from Products.CMFCore.interfaces import IPropertiesTool
from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
from paste.auth import auth_tkt
from plone.app.form.widgets.wysiwygwidget import WYSIWYGWidget
from plone.app.portlets.portlets import base
from rer.al.mcorevideoportlet import al_mcorevideoportletMessageFactory as _
from rer.al.mcorevideoportlet.config import DEFAULT_TIMEOUT
from zope import schema
from zope.site.hooks import getSite
from zope.component._api import getUtility
from zope.formlib import form
from zope.interface import implements, Interface
import base64
import cjson
import urllib2
import urlparse
class IMediacoreVideoPortlet(Interface):
"""
Marker interface for Mediacore video portlet
with field definition
"""
header = schema.TextLine(
title=_(u"Portlet header"),
description=_(u"Title of the rendered portlet"),
required=True)
video_url = schema.TextLine(
title=_(u"Video url"),
description=_(u"The video url"),
required=True)
video_security = schema.Bool(
title=_(u"Video with security"),
description=_(
u"Tick this box if you want to render a video with security check"),
required=True,
default=False)
text = schema.Text(
title=_(u"Text"),
description=_(u"The text to render"),
required=False)
video_width = schema.Int(
title=_(u"Video width"),
description=_(u"The video width"),
required=True,
default=200,)
video_height = schema.Int(
title=_(u"Video height"),
description=_(u"The video height"),
required=True,
default=152,)
class Assignment(base.Assignment):
implements(IMediacoreVideoPortlet)
header = ''
video_url = ''
text = ''
video_security = False
video_height = 0
video_width = 0
portlet_id = ''
def __init__(self, header='', video_url='', text='', video_security=False,
video_height=0, video_width=0):
new_portlet_id = getSite().generateUniqueId('MediacoreVideoPortlet')
self.header = header
self.video_url = video_url
self.text = text
self.video_security = video_security
self.video_height = video_height
self.video_width = video_width
self.portlet_id = new_portlet_id.replace('-', '').replace('.', '')
@property
def title(self):
"""This property is used to give the title of the portlet in the
"manage portlets" screen. Here, we use the title that the user gave.
"""
return self.header
class Renderer(base.Renderer):
"""Portlet renderer.
This is registered in configure.zcml. The referenced page template is
rendered, and the implicit variable 'view' will refer to an instance
of this class. Other methods can be added and referenced in the template.
"""
render = ViewPageTemplateFile('video_portlet.pt')
def get_portlet_id(self):
"""
get the portlet id to correctly identify all the videos in the page
"""
return self.data.portlet_id
def get_variables(self):
"""
get the variables that configure the video
"""
meta = self.get_media_metadata()
if meta:
meta['portlet_id'] = self.get_portlet_id()
meta['width'] = self.data.video_width or '200'
meta['height'] = self.data.video_height or '200'
VARIABLES = """
var %(portlet_id)s_jw_file = '%(file_remoteurl)s';
var %(portlet_id)s_jw_image = '%(image_url)s';
var %(portlet_id)s_jw_h = '%(height)spx';
var %(portlet_id)s_jw_w = '%(width)spx';
"""
return VARIABLES % meta
def getVideoLink(self, video_remoteurl, file_id, SECRET):
"""
calculate the video link based on SECRET
"""
if SECRET:
ticket = auth_tkt.AuthTicket(SECRET,
userid='anonymous',
ip='0.0.0.0',
tokens=[str(file_id), ],
user_data='',
secure=False)
return "%s?token=%s:%s" % (video_remoteurl, file_id,
base64.urlsafe_b64encode(ticket.cookie_value()))
else:
return "%s" % (video_remoteurl)
def get_media_metadata(self):
"""
get the media metadata from remote
"""
pprop = getUtility(IPropertiesTool)
mediacore_prop = getattr(pprop, 'mediacore_properties', None)
SERVE_VIDEO = (
mediacore_prop
and mediacore_prop.base_uri
or '/file_url/media_unique_id?slug=%s'
)
if self.data.video_security:
SECRET = mediacore_prop and mediacore_prop.secret or ''
else:
SECRET = ''
remoteurl = self.data.video_url
url = list(urlparse.urlparse(remoteurl)[:2])
url.extend(4 * ['', ])
url = urlparse.urlunparse(url)
media_slug = remoteurl.split('/')[-1]
try:
data = urllib2.urlopen(url + SERVE_VIDEO % media_slug,
timeout=DEFAULT_TIMEOUT).read()
except Exception:
logger.exception('Error getting data')
data = None
if data:
data = cjson.decode(data)
video_remoteurl = '%s/files/%s' % (url, data['unique_id'])
data['file_remoteurl'] = self.getVideoLink(video_remoteurl,
data['file_id'],
SECRET)
return data
def get_player_setup(self):
return """
if (typeof %(portlet_id)s_jw_file === 'undefined') {
jq('div p#%(portlet_id)s_jw_message').text('Impossibile caricare il video');
}
else {
jwplayer("%(portlet_id)s_jw_container").setup({
flashplayer: "++resource++collective.rtvideo.mediacore.jwplayer/player.swf",
height: %(portlet_id)s_jw_h,
width: %(portlet_id)s_jw_w,
provider: 'http',
controlbar: 'bottom',
file: %(portlet_id)s_jw_file,
image: %(portlet_id)s_jw_image,
});
}
""" % {'portlet_id': self.get_portlet_id()}
def get_player_navigation(self):
return """
<li class='jwplayerplay' style="display:inline">
<a href="#" onclick="jwplayer('%(portlet_id)s_jw_container').play();return false;">Play</a>
</li>
<li class='jwplayerpausa' style="display:inline">
<a href="#" onclick="jwplayer('%(portlet_id)s_jw_container').pause();return false;">Pausa</a>
</li>
<li class='jwplayerstop' style="display:inline">
<a href="#" onclick="jwplayer('%(portlet_id)s_jw_container').stop();return false;">Stop</a>
</li>
<li class='jwplayeraudio' style="display:inline">
<a href="#" onclick="jwplayer('%(portlet_id)s_jw_container').setMute();return false;">Audio</a>
</li>
""" % {'portlet_id': self.get_portlet_id()}
class AddForm(base.AddForm):
"""Portlet add form.
This is registered in configure.zcml. The form_fields variable tells
zope.formlib which fields to display. The create() method actually
constructs the assignment that is being added.
"""
form_fields = form.Fields(IMediacoreVideoPortlet)
form_fields['text'].custom_widget = WYSIWYGWidget
label = _(u"title_add_mediacore_video_portlet",
default=u"Add Mediacore video portlet here")
description = _(u"description_add_mediacore_video_portlet",
default=u"A portlet which show a mediacore video")
def create(self, data):
return Assignment(**data)
class EditForm(base.EditForm):
"""Portlet edit form.
This is registered with configure.zcml. The form_fields variable tells
zope.formlib which fields to display.
"""
form_fields = form.Fields(IMediacoreVideoPortlet)
form_fields['text'].custom_widget = WYSIWYGWidget
label = _(u"title_edit_mediacore_video_portlet",
default=u"Edit Mediacore video portlet")
description = _(u"description_edit_mediacore_video_portlet",
default=u"A portlet which show a mediacore video") | /rer.al.mcorevideoportlet-2.1.1.tar.gz/rer.al.mcorevideoportlet-2.1.1/rer/al/mcorevideoportlet/video_portlet.py | 0.564939 | 0.187839 | video_portlet.py | pypi |
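`getVideoLink` above appends a `file_id:base64(ticket)` token when a shared secret is configured. A simplified stand-in using only stdlib `base64` (the ticket value is passed in directly, since `paste.auth` is not assumed here; the function name is mine):

```python
import base64

def video_link(video_remoteurl, file_id, ticket_cookie_value=None):
    # simplified stand-in for Renderer.getVideoLink: with a ticket,
    # append a "file_id:base64(ticket)" token; otherwise the bare URL
    if ticket_cookie_value:
        token = base64.urlsafe_b64encode(ticket_cookie_value.encode()).decode()
        return "%s?token=%s:%s" % (video_remoteurl, file_id, token)
    return video_remoteurl

print(video_link("http://host/files/abc", 42))             # bare URL, no secret
print(video_link("http://host/files/abc", 42, "ticket!"))  # token appended
```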
from zope.interface import implementer
from zope.schema.interfaces import IVocabularyFactory
from zope.schema.vocabulary import SimpleVocabulary, SimpleTerm
TIPOLOGIE_BANDO = [
u'Agevolazioni, finanziamenti, contributi',
u'Accreditamenti, albi, elenchi',
u'Autorizzazioni di attività',
u'Manifestazioni di interesse',
]
DESTINATARI_BANDO = [
u'Cittadini',
u'Confidi',
u'Cooperative',
u'Enti del Terzo settore',
u'Enti e laboratori di ricerca',
u'Enti pubblici',
u'Grandi imprese',
u'Liberi professionisti',
u'Micro imprese',
u'Partenariato pubblico/privato',
u'PMI',
u'Scuole, università, enti di formazione',
u'Soggetti accreditati',
]
FINANZIATORI_BANDO = [u'FESR', u'FSE', u'FEASR', u'FEAMP']
MATERIE_BANDO = [
u'Agricoltura e sviluppo delle aree rurali',
u'Ambiente',
u'Beni immobili e mobili',
u'Cultura',
u'Diritti e sociale',
u'Edilizia e rigenerazione urbana',
u'Energia',
u'Estero',
u'Fauna, caccia, pesca',
u'Imprese e commercio',
u'Innovazione e ICT',
u'Istruzione e formazione',
u'Lavoro',
u'Mobilità e trasporti',
u'Ricerca',
u'Riordino istituzionale',
u'Sport',
]
class BandiBaseVocabularyFactory(object):
@property
def terms(self):
return [
SimpleTerm(value=x, token=x.encode('utf-8'), title=x)
for x in self.vocab_name
]
def __call__(self, context):
return SimpleVocabulary(self.terms)
@implementer(IVocabularyFactory)
class TipologieBandoVocabularyFactory(BandiBaseVocabularyFactory):
vocab_name = TIPOLOGIE_BANDO
@implementer(IVocabularyFactory)
class DestinatariBandoVocabularyFactory(BandiBaseVocabularyFactory):
vocab_name = DESTINATARI_BANDO
@implementer(IVocabularyFactory)
class FinanziatoriBandoVocabularyFactory(BandiBaseVocabularyFactory):
vocab_name = FINANZIATORI_BANDO
@implementer(IVocabularyFactory)
class MaterieBandoVocabularyFactory(BandiBaseVocabularyFactory):
vocab_name = MATERIE_BANDO
TipologieBandoVocabulary = TipologieBandoVocabularyFactory()
DestinatariBandoVocabulary = DestinatariBandoVocabularyFactory()
FinanziatoriBandoVocabulary = FinanziatoriBandoVocabularyFactory()
MaterieBandoVocabulary = MaterieBandoVocabularyFactory() | /rer.bandi-4.2.0.tar.gz/rer.bandi-4.2.0/rer/bandi/vocabularies/vocabularies.py | 0.464902 | 0.244217 | vocabularies.py | pypi |
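`BandiBaseVocabularyFactory` builds one `SimpleTerm` per value, with the token being the UTF-8 encoding of the value. The same construction without zope, using a namedtuple stand-in (names are mine):

```python
from collections import namedtuple

SimpleTerm = namedtuple("SimpleTerm", ["value", "token", "title"])

def build_terms(values):
    # mirrors BandiBaseVocabularyFactory.terms
    return [SimpleTerm(value=v, token=v.encode("utf-8"), title=v) for v in values]

terms = build_terms(["FESR", "FSE"])
print(terms[0].token)  # b'FESR'
```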
from DateTime import DateTime
from plone import api
from Products.CMFCore.utils import getToolByName
from Products.Five.browser import BrowserView
from rer.bandi import _
from six.moves.urllib.parse import quote
from zope.component import getUtility
from zope.i18n import translate
from zope.schema.interfaces import IVocabularyFactory
class SearchBandiForm(BrowserView):
def getUniqueValuesForIndex(self, index):
"""
get the unique values for a given catalog index
"""
pc = api.portal.get_tool(name='portal_catalog')
return pc.uniqueValuesFor(index)
def getVocabularyTermsForForm(self, vocab_name):
"""
Return the values of destinatari vocabulary
"""
dest_utility = getUtility(IVocabularyFactory, vocab_name)
dest_values = []
dest_vocab = dest_utility(self.context)
for dest in dest_vocab:
if dest.title != u'select_label':
dest_values.append(dest.value)
return dest_values
class SearchBandi(BrowserView):
"""
A view for search bandi results
"""
def searchBandi(self):
"""
return a list of bandi
"""
pc = getToolByName(self.context, "portal_catalog")
stato = self.request.form.get("stato_bandi", "")
SearchableText = self.request.form.get("SearchableText", "")
query = self.request.form.copy()
if stato:
now = DateTime()
if stato == "open":
query["scadenza_bando"] = {"query": now, "range": "min"}
query["chiusura_procedimento_bando"] = {
"query": now,
"range": "min",
}
if stato == "inProgress":
query["scadenza_bando"] = {"query": now, "range": "max"}
query["chiusura_procedimento_bando"] = {
"query": now,
"range": "min",
}
if stato == "closed":
query["chiusura_procedimento_bando"] = {
"query": now,
"range": "max",
}
if "SearchableText" in self.request.form and not SearchableText:
del query["SearchableText"]
return pc(**query)
@property
def rss_query(self):
"""
set rss query with the right date
"""
query = self.request.QUERY_STRING
stato = self.request.form.get("stato_bandi", "")
if stato:
now = DateTime().ISO()
if stato == "open":
query = (
query
+ "&scadenza_bando.query:record=%s&scadenza_bando.range:record=min"
% quote(now)
)
query = (
query
+ "&chiusura_procedimento_bando.query:record=%s&chiusura_procedimento_bando.range:record=min"
% quote(now)
)
if stato == "inProgress":
query = (
query
+ "&scadenza_bando.query:record=%s&scadenza_bando.range:record=max"
% quote(now)
)
query = (
query
+ "&chiusura_procedimento_bando.query:record=%s&chiusura_procedimento_bando.range:record=min"
% quote(now)
)
if stato == "closed":
query = (
query
+ "&chiusura_procedimento_bando.query:record=%s&chiusura_procedimento_bando.range:record=max"
% quote(now)
)
return query
def getBandoState(self, bando):
"""
Compute the bando's workflow state from its deadline dates.
"""
scadenza_bando = bando.scadenza_bando
chiusura_procedimento_bando = bando.chiusura_procedimento_bando
state = ("open", translate(_(u"Open"), context=self.request))
if scadenza_bando and scadenza_bando.isPast():
if (
chiusura_procedimento_bando
and chiusura_procedimento_bando.isPast()
):
state = (
"closed",
translate(_(u"Closed"), context=self.request),
)
else:
state = (
"inProgress",
translate(_(u"In progress"), context=self.request),
)
else:
if (
chiusura_procedimento_bando
and chiusura_procedimento_bando.isPast()
):
state = (
"closed",
translate(_(u"Closed"), context=self.request),
)
return state
def isValidDeadline(self, date):
"""
Check that the deadline is set and is not the placeholder date.
"""
if not date:
return False
if date.Date() == "2100/12/31":
# a default date for bandi that don't have a defined deadline
return False
return True
def getSearchResultsDescriptionLength(self):
length = api.portal.get_registry_record(
"plone.search_results_description_length"
)
return length
def getAllowAnonymousViewAbout(self):
return api.portal.get_registry_record("plone.allow_anon_views_about")
def getTypesUseViewActionInListings(self):
return api.portal.get_registry_record(
"plone.types_use_view_action_in_listings"
) | /rer.bandi-4.2.0.tar.gz/rer.bandi-4.2.0/rer/bandi/browser/search.py | 0.71721 | 0.185799 | search.py | pypi |
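`getBandoState` derives one of three states from the two deadline dates. The same decision table with stdlib datetimes (the function name is mine; the original returns a translated label tuple as well):

```python
from datetime import datetime

def bando_state(scadenza, chiusura, now):
    # mirrors SearchBandi.getBandoState: "closed" wins whenever the
    # closing date is past; otherwise a past deadline means "inProgress"
    if chiusura and chiusura < now:
        return "closed"
    if scadenza and scadenza < now:
        return "inProgress"
    return "open"

now = datetime(2023, 6, 1)
print(bando_state(datetime(2023, 5, 1), datetime(2023, 7, 1), now))   # inProgress
print(bando_state(datetime(2023, 5, 1), datetime(2023, 5, 15), now))  # closed
print(bando_state(datetime(2023, 7, 1), datetime(2023, 8, 1), now))   # open
```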
from DateTime import DateTime
from plone.restapi.search.handler import SearchHandler
from plone.restapi.search.utils import unflatten_dotted_dict
from plone.restapi.services.search.get import SearchGet
from zope.interface import implementer
from zope.publisher.interfaces import IPublishTraverse
@implementer(IPublishTraverse)
class SearchBandiGet(SearchGet):
def __init__(self, context, request):
super(SearchBandiGet, self).__init__(context, request)
@property
def query(self):
query = self.request.form.copy()
query = unflatten_dotted_dict(query)
# These parameters are added to every query by default
base_query_parameters = {"portal_type": "Bando"}
query.update(base_query_parameters)
stato = query.get("stato_bandi")
if stato:
stato_query = self.query_stato(stato)
del query['stato_bandi']
query.update(stato_query)
return query
def query_stato(self, stato):
""" Build the correct catalog query based on the requested state.
Possible values for stato: [open, inProgress, closed]
"""
query = {}
if stato:
now = DateTime()
if stato == "open":
query["scadenza_bando"] = {"query": now, "range": "min"}
query["chiusura_procedimento_bando"] = {
"query": now,
"range": "min",
}
if stato == "inProgress":
query["scadenza_bando"] = {"query": now, "range": "max"}
query["chiusura_procedimento_bando"] = {
"query": now,
"range": "min",
}
if stato == "closed":
query["chiusura_procedimento_bando"] = {
"query": now,
"range": "max",
}
return query
else:
return {}
def reply(self):
return SearchHandler(self.context, self.request).search(self.query) | /rer.bandi-4.2.0.tar.gz/rer.bandi-4.2.0/rer/bandi/services/search_bandi_rest/get.py | 0.637369 | 0.209935 | get.py | pypi |
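The service copies the request form and runs it through plone.restapi's `unflatten_dotted_dict` before merging in the state query. A simplified version of that helper, not the real implementation, to show the shape it produces:

```python
def unflatten_dotted_dict(form):
    # simplified take on plone.restapi's helper:
    # {"a.b": 1, "a.c": 2} -> {"a": {"b": 1, "c": 2}}
    out = {}
    for key, value in form.items():
        node = out
        parts = key.split(".")
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return out

print(unflatten_dotted_dict({"scadenza_bando.range": "min", "portal_type": "Bando"}))
```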
from plone import api
from plone.restapi.services import Service
from rer.bandi import _
from zope.component import getUtility
from zope.globalrequest import getRequest
from zope.i18n import translate
from zope.interface import implementer
from zope.publisher.interfaces import IPublishTraverse
from zope.schema.interfaces import IVocabularyFactory
def getVocabularyTermsForForm(vocab_name, context):
"""
Return the values of vocabulary
"""
utility = getUtility(IVocabularyFactory, vocab_name)
values = []
vocab = utility(context)
for entry in vocab:
if entry.title != u"select_label":
values.append({"value": entry.value, "label": entry.title})
return values
def getSearchFields():
request = getRequest()
portal = api.portal.get()
return [
{
"id": "SearchableText",
"label": translate(
_("bandi_search_text_label", default=u"Search text"),
context=request,
),
"help": "",
"type": "text",
},
{
"id": "stato_bandi",
"label": translate(
_("bandi_search_state_label", default=u"State"),
context=request,
),
"help": "",
"type": "select",
"multivalued": False,
"options": [
{
"label": translate(
_("bando_state_all_select_label", default="All"),
context=request,
),
"value": "",
},
{
"label": translate(
_("bando_state_open_select_label", default="Open"),
context=request,
),
"value": "open",
},
{
"label": translate(
_(
"bando_state_inProgress_select_label",
default="In progress",
),
context=request,
),
"value": "inProgress",
},
{
"label": translate(
_(
"bando_state_closed_select_label",
default="Closed",
),
context=request,
),
"value": "closed",
},
],
},
{
"id": "tipologia_bando",
"label": translate(
_("bandi_search_type_label", default="Type"), context=request,
),
"help": "",
"type": "checkbox",
"options": getVocabularyTermsForForm(
context=portal, vocab_name="rer.bandi.tipologie.vocabulary"
),
},
{
"id": "destinatari",
"label": translate(
_("destinatari_label", default="Who can apply"),
context=request,
),
"help": "",
"type": "select",
"multivalued": True,
"options": getVocabularyTermsForForm(
context=portal, vocab_name="rer.bandi.destinatari.vocabulary"
),
},
{
"id": "finanziatori",
"label": translate(
_("finanziatori_label", default="Financed by EU programmes",),
context=request,
),
"help": "",
"type": "select",
"multivalued": True,
"options": getVocabularyTermsForForm(
context=portal, vocab_name="rer.bandi.finanziatori.vocabulary"
),
},
{
"id": "materie",
"label": translate(
_("materie_label", default="Topic"), context=request
),
"help": "",
"type": "select",
"multivalued": True,
"options": getVocabularyTermsForForm(
context=portal, vocab_name="rer.bandi.materie.vocabulary"
),
},
{
"id": "Subject",
"label": translate(
_("subject_label", default="Subjects"), context=request,
),
"help": "",
"type": "select",
"multivalued": True,
"options": getVocabularyTermsForForm(
context=portal, vocab_name="plone.app.vocabularies.Keywords"
),
},
]
@implementer(IPublishTraverse)
class SearchParametersGet(Service):
def __init__(self, context, request):
super(SearchParametersGet, self).__init__(context, request)
def reply(self):
        return getSearchFields()

| /rer.bandi-4.2.0.tar.gz/rer.bandi-4.2.0/rer/bandi/services/search_parameters/get.py | 0.621196 | 0.239994 | get.py | pypi |
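The two helpers above depend on Zope component lookups (`getUtility`, `getRequest`) that only resolve inside a running Plone site. The mapping that `getVocabularyTermsForForm` performs can be sketched with plain objects; the `Term` tuple below is a hypothetical stand-in for `zope.schema`'s `SimpleTerm`, not the real class:

```python
from collections import namedtuple

# Hypothetical stand-in for a vocabulary term (value/title pair).
Term = namedtuple("Term", ["value", "title"])

def terms_for_form(terms):
    """Map terms to {value, label} dicts, skipping the placeholder
    entry titled 'select_label', as the helper above does."""
    return [
        {"value": t.value, "label": t.title}
        for t in terms
        if t.title != u"select_label"
    ]

vocab = [
    Term("open", "Open"),
    Term("", u"select_label"),
    Term("closed", "Closed"),
]
print(terms_for_form(vocab))
```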
import re
from collective.regjsonify.interfaces import IJSONFieldDumper
from collective.regjsonify.fields import Object
from zope.component.hooks import getSite
from zope.interface import implementer
from rer.cookieconsent.utils import get_url_to_dashboard
from plone import api
URL_MODEL = '<a href="{0}">{1}</a>{2}'
pattern_privacy_link = re.compile(r'(\$privacy_link)(\W|$)')
pattern_privacy_link_url = re.compile(r'(\$privacy_link_url)(\W|$)')
pattern_privacy_link_text = re.compile(r'(\$privacy_link_text)(\W|$)')
pattern_dashboard_link = re.compile(r'(\$dashboard_link)(\W|$)')
pattern_dashboard_link_url = re.compile(r'(\$dashboard_link_url)(\W|$)')
pattern_dashboard_link_text = re.compile(r'(\$dashboard_link_text)(\W|$)')
@implementer(IJSONFieldDumper)
class CookieBannerSettingsAdapter(Object):
"""
collective.regjsonify implementation for ICookieBannerEntry
    Like the basic Object adapter, but we need to perform some string interpolation.
Also, some server-side only resources are removed.
"""
def __init__(self, field):
self.field = field
def data(self, record):
result = super(CookieBannerSettingsAdapter, self).data(record)
new_text = result['text']
# privacy_link_url can be a document path, not an URL
privacy_link_url = result['privacy_link_url']
if privacy_link_url and privacy_link_url.startswith('/'):
site = getSite()
privacy_link_url = site.absolute_url() + privacy_link_url
privacy_link_text = result['privacy_link_text'] or privacy_link_url
dashboard_link_url = get_url_to_dashboard()
dashboard_link_text = result['dashboard_link_text'] or dashboard_link_url
# privacy link
new_text = pattern_privacy_link.sub(URL_MODEL.format(privacy_link_url,
privacy_link_text,
r'\2'),
new_text)
new_text = pattern_privacy_link_url.sub(privacy_link_url + r'\2', new_text)
new_text = pattern_privacy_link_text.sub(privacy_link_text + r'\2', new_text)
# opt-out dashboard link
new_text = pattern_dashboard_link.sub(URL_MODEL.format(dashboard_link_url,
dashboard_link_text,
r'\2'),
new_text)
new_text = pattern_dashboard_link_url.sub(dashboard_link_url + r'\2', new_text)
new_text = pattern_dashboard_link_text.sub(dashboard_link_text + r'\2', new_text)
new_text = new_text.strip().replace("\n", "<br />\n")
result['text'] = self.cleanHTML(new_text)
result['privacy_link_url'] = privacy_link_url
del result['privacy_link_text']
del result['dashboard_link_text']
return result
def cleanHTML(self, text):
"""
clean text in the given data, so the user can't insert dangerous
html (for example cross-site scripting)
"""
pt = api.portal.get_tool('portal_transforms')
safe_text = pt.convert('safe_html', text)
return safe_text.getData()
@implementer(IJSONFieldDumper)
class OptOutSettingsAdapter(Object):
"""
    collective.regjsonify implementation for the opt-out settings.
    Strips the opt-out configuration from the serialized result.
"""
def __init__(self, field):
self.field = field
def data(self, record):
        result = super(OptOutSettingsAdapter, self).data(record)
del result['optout_configuration']
        return result

| /rer.cookieconsent-0.4.6.tar.gz/rer.cookieconsent-0.4.6/rer/cookieconsent/jsconfiguration/fields_adapter.py | 0.517815 | 0.154312 | fields_adapter.py | pypi |
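The adapter above leans on the `(\$name)(\W|$)` pattern: group 2 captures the character that terminated the placeholder, and the `\2` backreference in the replacement puts it back. A minimal standalone sketch of that trick (the URL and link text are invented):

```python
import re

# Same shape as the adapter above: group 2 captures the character that
# ends the placeholder, and the \2 backreference reinserts it.
URL_MODEL = '<a href="{0}">{1}</a>{2}'
pattern_privacy_link = re.compile(r'(\$privacy_link)(\W|$)')

text = "Read $privacy_link, please."
out = pattern_privacy_link.sub(
    URL_MODEL.format("https://example.org/privacy", "Privacy", r"\2"),
    text,
)
print(out)
```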
from AccessControl import Unauthorized
from datetime import datetime
from plone import api
from plone.restapi.batching import HypermediaBatch
from plone.restapi.search.utils import unflatten_dotted_dict
from plone.restapi.serializer.converters import json_compatible
from rer.customersatisfaction.interfaces import ICustomerSatisfactionStore
from rer.customersatisfaction.restapi.services.common import DataGet
from six import StringIO
from zope.component import getUtility
import csv
import logging
import six
logger = logging.getLogger(__name__)
class CustomerSatisfactionGet(DataGet):
"""
Called on context
"""
def reply(self):
if api.user.is_anonymous():
raise Unauthorized
results = self.get_data()
batch = HypermediaBatch(self.request, results)
data = {
"@id": batch.canonical_url,
"items": [self.fix_fields(x) for x in batch],
"items_total": batch.items_total,
}
links = batch.links
if links:
data["batching"] = links
return data
def fix_fields(self, data):
data["last_vote"] = json_compatible(data["last_vote"])
return data
def get_data(self):
tool = getUtility(ICustomerSatisfactionStore)
reviews = {}
query = unflatten_dotted_dict(self.request.form)
text = query.get("text", "")
if text:
query_res = tool.search(query={"text": text})
else:
query_res = tool.search()
for review in query_res:
uid = review._attrs.get("uid", "")
date = review._attrs.get("date", "")
vote = review._attrs.get("vote", "")
if uid not in reviews:
obj = self.get_commented_obj(record=review)
if not obj and not api.user.has_permission(
"rer.customersatisfaction: Show Deleted Feedbacks"
):
# only manager can list deleted object's reviews
continue
new_data = {
"ok": 0,
"nok": 0,
"comments": [],
"title": review._attrs.get("title", ""),
"uid": uid,
"review_ids": [],
}
if obj:
# can be changed
new_data["title"] = obj.Title()
new_data["url"] = obj.absolute_url()
reviews[uid] = new_data
data = reviews[uid]
if vote in ["ok", "nok"]:
data[vote] += 1
comment = review._attrs.get("comment", "")
if comment:
data["comments"].append(
{"comment": comment, "date": json_compatible(date), "vote": vote}
)
if not data.get("last_vote", None):
data["last_vote"] = date
else:
if data["last_vote"] < date:
data["last_vote"] = date
result = list(reviews.values())
sort_on = self.request.form.get("sort_on", "last_date")
sort_order = self.request.form.get("sort_order", "desc")
reverse = sort_order.lower() in ["desc", "descending", "reverse"]
if sort_on in ["ok", "nok", "title", "last_vote", "comments"]:
result = sorted(result, key=lambda k: k[sort_on], reverse=reverse)
return result
class CustomerSatisfactionCSVGet(DataGet):
""" """
type = "customer_satisfaction"
def render(self):
data = self.get_data()
if isinstance(data, dict):
if data.get("error", False):
self.request.response.setStatus(500)
return dict(
error=dict(
type="InternalServerError",
                    message="Unable to export. Contact site manager.",
)
)
self.request.response.setHeader("Content-Type", "text/comma-separated-values")
now = datetime.now()
self.request.response.setHeader(
"Content-Disposition",
'attachment; filename="{type}_{date}.csv"'.format(
type=self.type, date=now.strftime("%d%m%Y-%H%M%S")
),
)
self.request.response.write(data)
def get_data(self):
if api.user.is_anonymous():
raise Unauthorized
tool = getUtility(ICustomerSatisfactionStore)
sbuf = StringIO()
rows = []
columns = [
"title",
"url",
"vote",
"comment",
"date",
]
for item in tool.search():
obj = self.get_commented_obj(record=item)
if not obj and not api.user.has_permission(
"rer.customersatisfaction: Show Deleted Feedbacks"
):
# only manager can list deleted object's reviews
continue
data = {}
for k, v in item.attrs.items():
if k not in columns:
continue
if isinstance(v, list):
v = ", ".join(v)
if isinstance(v, int):
v = str(v)
val = json_compatible(v)
if six.PY2:
val = val.encode("utf-8")
data[k] = val
if obj:
data["url"] = obj.absolute_url()
else:
data["url"] = ""
rows.append(data)
writer = csv.DictWriter(sbuf, fieldnames=columns, delimiter=",")
writer.writeheader()
for row in rows:
try:
writer.writerow(row)
except Exception as e:
logger.exception(e)
return {"error": True}
res = sbuf.getvalue()
sbuf.close()
if six.PY2:
return res
        return res.encode()

| /rer.customersatisfaction-2.2.4.tar.gz/rer.customersatisfaction-2.2.4/src/rer/customersatisfaction/restapi/services/customer_satisfaction/get.py | 0.453504 | 0.152316 | get.py | pypi |
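The CSV endpoint above buffers rows in a `StringIO` and serializes them with `csv.DictWriter`. Stripped of the Plone-specific parts, the export pattern looks roughly like this (column names taken from the service, the row data is invented):

```python
import csv
from io import StringIO

columns = ["title", "url", "vote", "comment", "date"]
rows = [
    {"title": "Page A", "url": "https://example.org/a", "vote": "ok",
     "comment": "useful", "date": "2023-01-01"},
]

# Write a header row, then one line per record, into an in-memory buffer.
buf = StringIO()
writer = csv.DictWriter(buf, fieldnames=columns, delimiter=",")
writer.writeheader()
for row in rows:
    writer.writerow(row)
data = buf.getvalue()
buf.close()
print(data)
```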
from plone import api
from plone.protect.interfaces import IDisableCSRFProtection
from rer.customersatisfaction.interfaces import ICustomerSatisfactionStore
from rer.customersatisfaction.restapi.services.common import DataAdd
from rer.customersatisfaction.restapi.services.common import DataClear
from rer.customersatisfaction.restapi.services.common import DataDelete
from zExceptions import BadRequest
from zope.component import getUtility
from zope.interface import alsoProvides
import logging
logger = logging.getLogger(__name__)
class CustomerSatisfactionAdd(DataAdd):
"""
Called on context
"""
store = ICustomerSatisfactionStore
def validate_form(self, form_data):
"""
check all required fields and parameters
"""
for field in ["vote"]:
value = form_data.get(field, "")
if not value:
raise BadRequest("Campo obbligatorio mancante: {}".format(field))
if value not in ["ok", "nok"]:
raise BadRequest("Voto non valido: {}".format(value))
def extract_data(self, form_data):
data = super(CustomerSatisfactionAdd, self).extract_data(form_data)
context_state = api.content.get_view(
context=self.context,
request=self.request,
name="plone_context_state",
)
context = context_state.canonical_object()
data["uid"] = context.UID()
data["title"] = context.Title()
return data
class CustomerSatisfactionDelete(DataDelete):
""""""
store = ICustomerSatisfactionStore
def publishTraverse(self, request, id):
        # Consume the path segment after the service URL as the review uid
self.id = id
return self
def reply(self):
alsoProvides(self.request, IDisableCSRFProtection)
if not self.id:
raise BadRequest("Missing uid")
tool = getUtility(self.store)
reviews = tool.search(query={"uid": self.id})
for review in reviews:
res = tool.delete(id=review.intid)
if not res:
continue
if res.get("error", "") == "NotFound":
raise BadRequest('Unable to find item with id "{}"'.format(self.id))
self.request.response.setStatus(500)
return dict(
error=dict(
type="InternalServerError",
message="Unable to delete item. Contact site manager.",
)
)
return self.reply_no_content()
class CustomerSatisfactionClear(DataClear):
""""""
    store = ICustomerSatisfactionStore

| /rer.customersatisfaction-2.2.4.tar.gz/rer.customersatisfaction-2.2.4/src/rer/customersatisfaction/restapi/services/customer_satisfaction/crud.py | 0.414543 | 0.154376 | crud.py | pypi |
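The `validate_form` check above is small enough to restate as a framework-free function; `ValueError` stands in for `zExceptions.BadRequest` here:

```python
def validate_vote(form_data):
    """Reject missing or unknown votes, mirroring validate_form above."""
    vote = form_data.get("vote", "")
    if not vote:
        raise ValueError("Campo obbligatorio mancante: vote")
    if vote not in ("ok", "nok"):
        raise ValueError("Voto non valido: {}".format(vote))
    return vote

print(validate_vote({"vote": "ok"}))
```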
"""Definition of the Assessore content type
"""
from Products.ATContentTypes.content import schemata, folder
from Products.ATContentTypes.content.base import registerATCT
from Products.Archetypes import atapi
from rer.giunta import giuntaMessageFactory as _
from rer.giunta.config import PROJECTNAME
from rer.giunta.interfaces import IRERAssessore
from zope.interface import implements
from Products.validation.config import validation
from Products.validation.interfaces import ivalidator
from Products.validation.validators.SupplValidators import MaxSizeValidator
from Products.validation import V_REQUIRED
RERAssessoreSchema = folder.ATFolderSchema.copy() + atapi.Schema((
atapi.StringField('position',
required=True,
widget=atapi.StringWidget(label=_(u'rer_giunta_position', default=u'Position'),
description=_(u'rer_giunta_position_help', default=u"Insert the position of this alderman"),
)
),
atapi.TextField('referenceInfos',
searchable=True,
storage = atapi.AnnotationStorage(migrate=True),
validators = ('isTidyHtmlWithCleanup',),
default_output_type = 'text/x-html-safe',
widget = atapi.RichWidget(
label = _('rer_giunta_referenceinfos',default='References'),
description=_('rer_giunta_referenceinfos_help',default=u''),
rows = 25),
),
atapi.TextField('biography',
searchable=True,
storage = atapi.AnnotationStorage(migrate=True),
validators = ('isTidyHtmlWithCleanup',),
default_output_type = 'text/x-html-safe',
widget = atapi.RichWidget(
label = _('rer_giunta_biography',default='Biography'),
description=_('rer_giunta_biography_help',default=u''),
rows = 25),
),
atapi.StringField('delegations',
widget=atapi.TextAreaWidget(label=_(u'rer_giunta_delegations', default=u'Delegations'),
description=_(u'rer_giunta_delegation_help', default=u"Insert the delegations of this alderman"),
)
),
atapi.TextField('delegationsDescription',
searchable=True,
storage = atapi.AnnotationStorage(migrate=True),
validators = ('isTidyHtmlWithCleanup',),
default_output_type = 'text/x-html-safe',
widget = atapi.RichWidget(
label = _('rer_giunta_delegationsdescription',default='Delegations description'),
description=_('rer_giunta_delegationsdescriptions_help',default=u''),
rows = 25),
),
atapi.ImageField('imageDetail',
widget=atapi.ImageWidget(
label=_(u'rer_giunta_imagedetail', default=u'Alderman image'),
description=_(u'rer_giunta_imagedetail_help', default=u"Insert an image for the detail of this alderman"),
),
storage=atapi.AttributeStorage(),
max_size=(768,768),
sizes= {'large' : (768, 768),
'preview' : (400, 400),
'mini' : (200, 200),
'thumb' : (128, 128),
'tile' : (64, 64),
'icon' : (32, 32),
'listing' : (16, 16),
},
        validators = (('isNonEmptyFile', V_REQUIRED),),
),
atapi.ImageField('imageCollection',
widget=atapi.ImageWidget(
label=_(u'rer_giunta_imagecollection', default=u'Image for collection of aldermans'),
description=_(u'rer_giunta_imagecollection_help', default=u"Insert an image for the collection view of aldermans"),
),
storage=atapi.AttributeStorage(),
max_size=(768,768),
sizes= {'large' : (768, 768),
'preview' : (400, 400),
'mini' : (200, 200),
'thumb' : (128, 128),
'tile' : (64, 64),
'icon' : (32, 32),
'listing' : (16, 16),
},
        validators = (('isNonEmptyFile', V_REQUIRED),),
),
))
RERAssessoreSchema['title'].storage = atapi.AnnotationStorage()
RERAssessoreSchema['title'].widget.label = _('rer_giunta_nomecognome',
default='Fullname')
RERAssessoreSchema['title'].widget.description = _('rer_giunta_nomecognome_help',
default='Insert the fullname of this alderman')
RERAssessoreSchema['description'].storage = atapi.AnnotationStorage()
RERAssessoreSchema['description'].widget.visible = {'edit': 'hidden', 'view': 'hidden'}
schemata.finalizeATCTSchema(RERAssessoreSchema, moveDiscussion=False)
class RERAssessore(folder.ATFolder):
"""Folder for RERAssessore"""
implements(IRERAssessore)
meta_type = "RERAssessore"
schema = RERAssessoreSchema
registerATCT(RERAssessore, PROJECTNAME)

| /rer.giunta-1.1.3.tar.gz/rer.giunta-1.1.3/rer/giunta/content/assessore.py | 0.482185 | 0.188026 | assessore.py | pypi |
from collective.volto.blocksfield.field import BlocksField
from plone import schema
from plone.app.contenttypes.interfaces import ICollection
from plone.supermodel import model
from rer.newsletter import _
from zope.interface import Interface
from zope.publisher.interfaces.browser import IDefaultBrowserLayer
import six
import uuid
def default_id_channel():
return six.text_type(uuid.uuid4())
class IShippableCollection(ICollection):
pass
class IRerNewsletterLayer(IDefaultBrowserLayer):
"""Marker interface that defines a browser layer."""
class IChannel(Interface):
"""Marker interface that define a channel of newsletter"""
class IChannelSchema(model.Schema):
"""a dexterity schema for channel of newsletter"""
sender_name = schema.TextLine(
title=_("sender_name", default="Sender Fullname"),
description=_("description_sender_name", default="Fullname of sender"),
required=False,
)
sender_email = schema.Email(
title=_("sender_email", default="Sender email"),
description=_("description_sender_email", default="Email of sender"),
required=True,
)
subject_email = schema.TextLine(
title=_("subject_email", default="Subject email"),
description=_(
"description_subject_mail", default="Subject for channel message"
),
required=False,
)
response_email = schema.Email(
title=_("response_email", default="Response email"),
description=_(
"description_response_email", default="Response email of channel"
),
required=False,
)
privacy = BlocksField(
title=_("privacy_channel", default="Informativa sulla privacy"),
description=_(
"description_privacy_channel",
default="Informativa sulla privacy per questo canale",
),
required=True,
)
header = schema.Text(
title=_("header_channel", default="Header of message"),
description=_(
"description_header_channel",
default="Header for message of this channel",
),
required=False,
default="",
)
footer = schema.Text(
title=_("footer_channel", default="Footer of message"),
description=_(
"description_footer_channel",
default="Footer for message of this channel",
),
required=False,
default="",
)
css_style = schema.Text(
title=_("css_style", default="CSS Style"),
description=_("description_css_style", default="style for mail"),
required=False,
default="",
)
# probabilemente un campo che va nascosto
id_channel = schema.TextLine(
title=_("idChannel", default="Channel ID"),
description=_("description_IDChannel", default="Channel ID"),
required=True,
defaultFactory=default_id_channel,
)
is_subscribable = schema.Bool(
title=_("is_subscribable", default="Is Subscribable"),
default=False,
required=False,
)
standard_unsubscribe = schema.Bool(
title=_("standard_unsubscribe", default="Standard unsubscribe link"),
description=_(
"descriptin_standard_unsubscribe",
default="Usa il link standard per l'unsubscribe",
),
default=True,
required=False,
)
class IMessage(Interface):
"""Marker interface that define a message"""
class IMessageSchema(model.Schema):
    """a dexterity schema for message"""

| /rer.newsletter-3.0.0.tar.gz/rer.newsletter-3.0.0/src/rer/newsletter/interfaces.py | 0.59561 | 0.224799 | interfaces.py | pypi |
from plone import api
from plone.protect import interfaces
from plone.protect.authenticator import createToken
from plone.restapi.deserializer import json_body
from plone.restapi.services import Service
from rer.newsletter import logger
from rer.newsletter.adapter.subscriptions import IChannelSubscriptions
from rer.newsletter.utils import compose_sender
from rer.newsletter.utils import get_site_title
from rer.newsletter.utils import OK
from rer.newsletter.utils import UNHANDLED
from six import PY2
from zope.component import getMultiAdapter
from zope.interface import alsoProvides
class NewsletterUnsubscribe(Service):
def reply(self):
data = json_body(self.request)
if "IDisableCSRFProtection" in dir(interfaces):
alsoProvides(self.request, interfaces.IDisableCSRFProtection)
response, errors = self.handleUnsubscribe(data)
if errors:
response["errors"] = errors
return response
def getData(self, data):
errors = []
if not data.get("email", None):
errors.append("Indirizzo email non inserito o non valido")
return {
"email": data.get("email", None),
}, errors
def handleUnsubscribe(self, postData):
status = UNHANDLED
data, errors = self.getData(postData)
if errors:
return data, errors
email = data.get("email", None)
channel = getMultiAdapter(
(self.context, self.request), IChannelSubscriptions
)
status, secret = channel.unsubscribe(email)
if status != OK:
logger.exception("Error: {}".format(status))
if status == 4:
msg = "unsubscribe_inexistent_mail"
else:
msg = "unsubscribe_generic"
errors = msg
return {"@id": self.request.get("URL")}, errors
# creo il token CSRF
token = createToken()
# mando mail di conferma
url = self.context.absolute_url()
url += "/confirm-subscription?secret=" + secret
url += "&_authenticator=" + token
url += "&action=unsubscribe"
mail_template = self.context.restrictedTraverse(
"@@deleteuser_template"
)
parameters = {
"header": self.context.header,
"footer": self.context.footer,
"style": self.context.css_style,
"activationUrl": url,
}
mail_text = mail_template(**parameters)
portal = api.portal.get()
mail_text = portal.portal_transforms.convertTo("text/mail", mail_text)
response_email = compose_sender(channel=self.context)
channel_title = self.context.title
if PY2:
channel_title = self.context.title.encode("utf-8")
mailHost = api.portal.get_tool(name="MailHost")
mailHost.send(
mail_text.getData(),
mto=email,
mfrom=response_email,
subject="Conferma la cancellazione dalla newsletter"
" {channel} del portale {site}".format(
channel=channel_title, site=get_site_title()
),
charset="utf-8",
msg_type="text/html",
immediate=True,
)
return {
"@id": self.request.get("URL"),
"status": "user_unsubscribe_success" if not errors else "error",
"errors": errors if errors else None,
        }, errors

| /rer.newsletter-3.0.0.tar.gz/rer.newsletter-3.0.0/src/rer/newsletter/restapi/services/unsubscribe.py | 0.405213 | 0.178347 | unsubscribe.py | pypi |
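Both the subscribe and unsubscribe services assemble a confirmation URL from a secret and a CSRF token by string concatenation. A stdlib-only sketch of the same idea — `secrets.token_hex` is a stand-in for `plone.protect`'s `createToken`, and `urlencode` handles the escaping the original skips:

```python
import secrets
from urllib.parse import urlencode

def confirmation_url(base_url, secret, action):
    # secrets.token_hex is a stand-in for plone.protect's createToken.
    token = secrets.token_hex(20)
    query = urlencode(
        {"secret": secret, "_authenticator": token, "action": action}
    )
    return "{0}/confirm-subscription?{1}".format(base_url, query)

url = confirmation_url("https://example.org/channel", "abc123", "unsubscribe")
print(url)
```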
from plone import api
from plone.protect import interfaces
from plone.protect.authenticator import createToken
from plone.restapi.deserializer import json_body
from plone.restapi.services import Service
from rer.newsletter import logger
from rer.newsletter.adapter.subscriptions import IChannelSubscriptions
from rer.newsletter.utils import compose_sender
from rer.newsletter.utils import get_site_title
from rer.newsletter.utils import SUBSCRIBED
from rer.newsletter.utils import UNHANDLED
from six import PY2
from zope.component import getMultiAdapter
from zope.interface import alsoProvides
class NewsletterSubscribe(Service):
def getData(self, data):
errors = []
if not data.get("email", None):
errors.append("invalid_email")
return {
"email": data.get("email", None),
}, errors
def handleSubscribe(self, postData):
status = UNHANDLED
data, errors = self.getData(postData)
if errors:
return data, errors
email = data.get("email", "").lower()
if self.context.is_subscribable:
channel = getMultiAdapter(
(self.context, self.request), IChannelSubscriptions
)
status, secret = channel.subscribe(email)
if status == SUBSCRIBED:
# creo il token CSRF
token = createToken()
# mando mail di conferma
url = self.context.absolute_url()
url += "/confirm-subscription?secret=" + secret
url += "&_authenticator=" + token
url += "&action=subscribe"
mail_template = self.context.restrictedTraverse(
"@@activeuser_template"
)
parameters = {
"title": self.context.title,
"header": self.context.header,
"footer": self.context.footer,
"style": self.context.css_style,
"activationUrl": url,
"portal_name": get_site_title(),
}
mail_text = mail_template(**parameters)
portal = api.portal.get()
mail_text = portal.portal_transforms.convertTo(
"text/mail", mail_text
)
sender = compose_sender(channel=self.context)
channel_title = self.context.title
if PY2:
channel_title = self.context.title.encode("utf-8")
mailHost = api.portal.get_tool(name="MailHost")
mailHost.send(
mail_text.getData(),
mto=email,
mfrom=sender,
subject="Conferma la tua iscrizione alla Newsletter {channel}"
" del portale {site}".format(
channel=channel_title, site=get_site_title()
),
charset="utf-8",
msg_type="text/html",
immediate=True,
)
return data, errors
else:
if status == 2:
logger.exception("user already subscribed")
errors.append("user_already_subscribed")
return data, errors
else:
logger.exception("unhandled error subscribe user")
errors.append("Problems...{0}".format(status))
return data, errors
def reply(self):
data = json_body(self.request)
if "IDisableCSRFProtection" in dir(interfaces):
alsoProvides(self.request, interfaces.IDisableCSRFProtection)
_data, errors = self.handleSubscribe(data)
return {
"@id": self.request.get("URL"),
"errors": errors if errors else None,
"status": "user_subscribe_success" if not errors else "error",
        }

| /rer.newsletter-3.0.0.tar.gz/rer.newsletter-3.0.0/src/rer/newsletter/restapi/services/subscribe.py | 0.426083 | 0.179046 | subscribe.py | pypi |
from plone import api
from plone import schema
from plone.autoform.form import AutoExtensibleForm
from plone.protect.authenticator import createToken
from plone.z3cform.layout import wrap_form
from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
from rer.newsletter import _
from rer.newsletter import logger
from rer.newsletter.adapter.subscriptions import IChannelSubscriptions
from rer.newsletter.utils import compose_sender
from rer.newsletter.utils import get_site_title
from rer.newsletter.utils import SUBSCRIBED
from rer.newsletter.utils import UNHANDLED
from six import PY2
from z3c.form import button
from z3c.form import form
from zope.component import getMultiAdapter
from zope.interface import Interface
class ISubscribeForm(Interface):
"""define field for channel subscription"""
email = schema.Email(
title=_("subscribe_user", default="Subscription Mail"),
description=_(
"subscribe_user_description",
default="Mail for subscribe to a channel",
),
required=True,
)
class SubscribeForm(AutoExtensibleForm, form.Form):
ignoreContext = True
schema = ISubscribeForm
def __init__(self, context, request):
self.context = context
self.request = request
def isVisible(self):
if self.context.is_subscribable:
return True
else:
return False
def getChannelPrivacyPolicy(self):
if self.context.privacy:
return self.context.privacy.output
def update(self):
super(SubscribeForm, self).update()
@button.buttonAndHandler(_("subscribe_submit_label", default="Subscribe"))
def handleSave(self, action):
status = UNHANDLED
data, errors = self.extractData()
if errors:
self.status = self.formErrorsMessage
if self.status:
self.status = (
"Indirizzo email non inserito o non "
+ "valido, oppure controllo di sicurezza non " # noqa
+ "inserito." # noqa
)
return
email = data.get("email", "").lower()
if self.context.is_subscribable:
channel = getMultiAdapter(
(self.context, self.request), IChannelSubscriptions
)
status, secret = channel.subscribe(email)
if status == SUBSCRIBED:
# creo il token CSRF
token = createToken()
# mando mail di conferma
url = self.context.absolute_url()
url += "/confirm-subscription?secret=" + secret
url += "&_authenticator=" + token
url += "&action=subscribe"
mail_template = self.context.restrictedTraverse(
"@@activeuser_template"
)
parameters = {
"title": self.context.title,
"header": self.context.header,
"footer": self.context.footer,
"style": self.context.css_style,
"activationUrl": url,
"portal_name": get_site_title(),
}
mail_text = mail_template(**parameters)
portal = api.portal.get()
mail_text = portal.portal_transforms.convertTo(
"text/mail", mail_text
)
sender = compose_sender(channel=self.context)
channel_title = self.context.title
if PY2:
channel_title = self.context.title.encode("utf-8")
mailHost = api.portal.get_tool(name="MailHost")
mailHost.send(
mail_text.getData(),
mto=email,
mfrom=sender,
subject="Conferma la tua iscrizione alla Newsletter {channel}"
" del portale {site}".format(
channel=channel_title, site=get_site_title()
),
charset="utf-8",
msg_type="text/html",
immediate=True,
)
api.portal.show_message(
message=_(
"status_user_subscribed",
default="Riceverai una e-mail per confermare "
"l'iscrizione alla newsletter.",
),
request=self.request,
type="info",
)
else:
if status == 2:
logger.exception("user already subscribed")
api.portal.show_message(
message=_(
"user_already_subscribed",
default="Sei già iscritto a questa newsletter, "
"oppure non hai ancora"
" confermato l'iscrizione.",
),
request=self.request,
type="error",
)
else:
logger.exception("unhandled error subscribe user")
api.portal.show_message(
message="Problems...{0}".format(status),
request=self.request,
type="error",
)
subscribe_view = wrap_form(
SubscribeForm, index=ViewPageTemplateFile("templates/subscribechannel.pt")
)

| /rer.newsletter-3.0.0.tar.gz/rer.newsletter-3.0.0/src/rer/newsletter/browser/channel/subscribe.py | 0.474144 | 0.15876 | subscribe.py | pypi |
from plone.app.vocabularies.catalog import CatalogSource
from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
from plone.portlet.static import static
from rer.portlet.advanced_static import (
RERPortletAdvancedStaticMessageFactory as _,
)
from zope import schema
from zope.interface import implementer
import sys
from plone.memoize import view
from plone import api
class IRERPortletAdvancedStatic(static.IStaticPortlet):
"""
A custom static text portlet
"""
target_attr = schema.Bool(
title=_(u"Open links in a new window"),
description=_(
u"Tick this box if you want to open the header "
"and footer links in a new window"
),
required=False,
default=False,
)
image_ref = schema.Choice(
title=_(u"Background image"),
description=_(
u"Insert an image that will be shown as background of the header"
),
required=False,
source=CatalogSource(portal_type="Image"),
)
image_ref_height = schema.Int(
title=_(u"Background image height"),
description=_(
u"Specify image background's height (in pixels). If empty will"
" be used image's height."
),
required=False,
)
internal_url = schema.Choice(
title=_(u"Internal link"),
description=_(
u"Insert an internal link. This field override external link field"
),
required=False,
source=CatalogSource(),
)
portlet_class = schema.TextLine(
title=_(u"Portlet class"),
required=False,
description=_(u"CSS class to add at the portlet"),
)
css_style = schema.Choice(
title=_(u"Portlet style"),
description=_(u"Choose a CSS style for the portlet"),
required=False,
vocabulary="rer.portlet.advanced_static.CSSVocabulary",
)
@implementer(IRERPortletAdvancedStatic)
class Assignment(static.Assignment):
"""Portlet assignment.
This is what is actually managed through the portlets UI and associated
with columns.
"""
target_attr = False
image_ref = ""
image_ref_height = None
assignment_context_path = None
internal_url = ""
portlet_class = ""
css_style = ""
def __init__(
self,
header=u"",
text=u"",
omit_border=False,
footer=u"",
more_url="",
target_attr=False,
hide=False,
assignment_context_path=None,
image_ref="",
image_ref_height=None,
internal_url="",
portlet_class="",
css_style="",
):
self.header = header
self.text = text
self.omit_border = omit_border
self.footer = footer
self.more_url = more_url
self.target_attr = target_attr
self.image_ref = image_ref
self.image_ref_height = image_ref_height
self.assignment_context_path = assignment_context_path
self.internal_url = internal_url
self.portlet_class = portlet_class
self.css_style = css_style
if sys.version_info < (2, 6):
self.hide = hide
@property
def title(self):
"""This property is used to give the title of the portlet in the
"manage portlets" screen.
"""
if self.header:
return self.header
else:
return "RER portlet advanced static"
class Renderer(static.Renderer):
"""Portlet renderer.
"""
render = ViewPageTemplateFile("rerportletadvancedstatic.pt")
def getPortletClass(self):
classes = "portlet rerPortletAdvancedStatic"
if self.data.portlet_class:
classes += " %s" % self.data.portlet_class
if self.data.css_style:
classes += " %s" % self.data.css_style
return classes
def getImgUrl(self):
"""
return the image url
"""
if not self.image_object:
return ""
return self.image_object.absolute_url()
@property
@view.memoize
def image_object(self):
"""
get the image object
"""
imageUID = self.data.image_ref
if not imageUID:
return ""
return api.content.get(UID=imageUID)
def getImgHeight(self):
"""
return the image height
"""
image = self.image_object
if not image:
return ""
# compatibility with dexterity images
blobimage = getattr(image, "image", None)
if blobimage:
return blobimage.getImageSize()[1]
return str(image.getImage().height)
def getImageStyle(self):
"""
set background image, if present
"""
img_url = self.getImgUrl()
if not img_url:
return None
if self.data.image_ref_height:
height = self.data.image_ref_height
else:
height = self.getImgHeight()
style = "background-image:url(%s)" % img_url
if height:
style += ";height:%spx" % height
return style
def getPortletLink(self):
default_link = self.data.more_url or ""
if not self.data.internal_url:
return default_link
item = api.content.get(UID=self.data.internal_url)
if not item:
return default_link
return item.absolute_url()
def getLinkTitle(self):
if self.data.target_attr:
return _(u"Opens in a new window")
else:
return ""
class AddForm(static.AddForm):
"""Portlet add form.
This is registered in configure.zcml. The form_fields variable tells
zope.formlib which fields to display. The create() method actually
constructs the assignment that is being added.
"""
schema = IRERPortletAdvancedStatic
def create(self, data):
assignment_context_path = "/".join(
self.context.__parent__.getPhysicalPath()
)
return Assignment(
assignment_context_path=assignment_context_path, **data
)
class EditForm(static.EditForm):
"""Portlet edit form.
This is registered with configure.zcml. The form_fields variable tells
zope.formlib which fields to display.
"""
    schema = IRERPortletAdvancedStatic

# file: rer/portlet/advanced_static/rerportletadvancedstatic.py (rer.portlet.advanced_static-3.1.0, PyPI)
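`Renderer.getPortletClass` above composes the wrapper CSS classes from a fixed base plus two optional user-defined classes. A minimal, dependency-free sketch of the same logic (the function name is illustrative, not part of the package):

```python
def portlet_classes(portlet_class="", css_style=""):
    """Compose the CSS class string for the portlet wrapper."""
    classes = "portlet rerPortletAdvancedStatic"
    for extra in (portlet_class, css_style):
        if extra:
            classes += " %s" % extra
    return classes
```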
from zope.interface import implements
from zope import schema
from plone.app.portlets.portlets import navigation
from plone.app.portlets.portlets import base
from zope.formlib import form
from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
from rer.portlet.er_navigation import ERPortletNavigationMessageFactory as _
from plone.app.form.widgets.uberselectionwidget import UberSelectionWidget
class IERPortletNavigation(navigation.INavigationPortlet):
"""A portlet that inherit from base navigation
"""
topLevel = schema.Int(
title=_(u"label_navigation_startlevel", default=u"Start level"),
description=_(u"help_navigation_start_level",
default=u"An integer value that specifies the number of folder "
"levels below the site root that must be exceeded "
"before the navigation tree will display. 0 means "
"that the navigation tree should be displayed "
"everywhere including pages in the root of the site. "
"1 means the tree only shows up inside folders "
"located in the root and downwards, never showing "
"at the top level."),
default=0,
required=False)
    portlet_class = schema.TextLine(
        title=_(u"Portlet class"),
        required=False,
        description=_(u"Css class to add to the portlet"))
class Assignment(navigation.Assignment):
"""Portlet assignment.
"""
implements(IERPortletNavigation)
    portlet_class = ''

    def __init__(self, name=u"", root=None, currentFolderOnly=False,
                 includeTop=False, topLevel=0, bottomLevel=0, portlet_class=''):
        super(Assignment, self).__init__(name=name,
                                         root=root,
                                         currentFolderOnly=currentFolderOnly,
                                         includeTop=includeTop,
                                         topLevel=topLevel,
                                         bottomLevel=bottomLevel)
        self.portlet_class = portlet_class
@property
def title(self):
"""This property is used to give the title of the portlet in the
"manage portlets" screen.
"""
if self.name:
            return "ER navigation: %s" % self.name
return _(u"ER navigation: no title")
class Renderer(navigation.Renderer):
"""Portlet renderer.
"""
_template = ViewPageTemplateFile('erportletnavigation.pt')
recurse = ViewPageTemplateFile('er_navigation_recurse.pt')
class AddForm(navigation.AddForm):
"""Portlet add form.
"""
form_fields = form.Fields(IERPortletNavigation)
form_fields['root'].custom_widget = UberSelectionWidget
def create(self, data):
return Assignment(name=data.get('name', u""),
root=data.get('root', u""),
currentFolderOnly=data.get('currentFolderOnly', False),
includeTop=data.get('includeTop', False),
topLevel=data.get('topLevel', 0),
bottomLevel=data.get('bottomLevel', 0),
portlet_class=data.get('portlet_class', u""))
class EditForm(base.EditForm):
"""Portlet edit form.
"""
form_fields = form.Fields(IERPortletNavigation)
    form_fields['root'].custom_widget = UberSelectionWidget

# file: rer/portlet/er_navigation/erportletnavigation.py (rer.portlet.er_navigation-1.0.7, PyPI)
from plone import api
from plone.api.exc import InvalidParameterError
from Products.CMFCore.utils import getToolByName
from rer.sitesearch.interfaces import ISiteSearchCustomFilters
from rer.sitesearch.utils import GROUP_ICONS
from zope.component import getGlobalSiteManager
from zope.component import getUtility
from zope.globalrequest import getRequest
from zope.i18n import translate
from zope.interface import implementer
from zope.schema.interfaces import IVocabularyFactory
from zope.schema.vocabulary import SimpleTerm
from zope.schema.vocabulary import SimpleVocabulary
from zope.site.hooks import getSite
try:
from rer.solrpush.interfaces.settings import IRerSolrpushSettings
HAS_SOLR = True
except ImportError:
HAS_SOLR = False
@implementer(IVocabularyFactory)
class IndexesVocabulary(object):
"""
Vocabulary factory for allowable indexes in catalog.
"""
def __call__(self, context):
site = getSite()
pc = getToolByName(site, "portal_catalog")
indexes = list(pc.indexes())
indexes.sort()
indexes = [SimpleTerm(i, i, i) for i in indexes]
return SimpleVocabulary(indexes)
@implementer(IVocabularyFactory)
class AdvancedFiltersVocabulary(object):
"""
Vocabulary factory for list of advanced filters
"""
def __call__(self, context):
sm = getGlobalSiteManager()
request = getRequest()
adapters = [
{
"name": x.name,
"label": translate(x.factory.label, context=request),
}
for x in sm.registeredAdapters()
if x.provided == ISiteSearchCustomFilters
]
terms = [
SimpleTerm(
value=i["name"],
token=i["name"],
title=i["label"],
)
for i in sorted(adapters, key=lambda i: i["label"])
]
return SimpleVocabulary(terms)
@implementer(IVocabularyFactory)
class GroupIconsVocabulary(object):
"""
Vocabulary factory for list of available icons
"""
def __call__(self, context):
request = getRequest()
terms = [
SimpleTerm(
value=i["id"],
token=i["id"],
title=translate(i["label"], context=request),
)
for i in GROUP_ICONS
]
return SimpleVocabulary(terms)
@implementer(IVocabularyFactory)
class GroupingTypesVocabulary(object):
    """Vocabulary of groupable portal types: use the rer.solrpush types
    vocabulary when solr is active, Plone's friendly types otherwise.
    """
def __call__(self, context):
voc_id = "plone.app.vocabularies.ReallyUserFriendlyTypes"
if HAS_SOLR:
try:
if api.portal.get_registry_record(
"active", interface=IRerSolrpushSettings
):
voc_id = "rer.solrpush.vocabularies.AvailablePortalTypes"
except (KeyError, InvalidParameterError):
pass
factory = getUtility(IVocabularyFactory, voc_id)
return factory(context)
AdvancedFiltersVocabularyFactory = AdvancedFiltersVocabulary()
GroupingTypesVocabularyFactory = GroupingTypesVocabulary()
GroupIconsVocabularyFactory = GroupIconsVocabulary()
IndexesVocabularyFactory = IndexesVocabulary()

# file: src/rer/sitesearch/vocabularies.py (rer.sitesearch-4.3.1, PyPI)
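`AdvancedFiltersVocabulary` above sorts the registered adapters by their translated label before building the vocabulary terms. A simplified stand-in for that ordering step (plain dicts replace the registered `ISiteSearchCustomFilters` adapters):

```python
# stand-ins for registered adapters; the labels are illustrative only
adapters = [
    {"name": "events", "label": "Eventi"},
    {"name": "docs", "label": "Documenti"},
]
# terms are built from the adapters sorted alphabetically by label
ordered = sorted(adapters, key=lambda i: i["label"])
names = [i["name"] for i in ordered]
```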
from plone.app.z3cform.widget import AjaxSelectFieldWidget
from plone.autoform import directives
from plone.supermodel import model
from rer.sitesearch import _
from zope import schema
class ITypesMappingRowSchema(model.Schema):
label = schema.List(
title=_("types_mapping_label_label", default=u"Label"),
description=_(
"types_mapping_label_help",
default=u"Insert the label for this group. One per row. "
u"If the site has only one language, type the simple name. "
u"If it has multiple languages, insert one row per language in "
u"the following format: lang|label. For example: it|Documenti",
),
required=True,
value_type=schema.TextLine(),
)
types = schema.Tuple(
title=_("types_mapping_types_label", default=u"Portal types"),
description=_(
"types_mapping_types_help",
default=u"Select which portal_types to show in this group.",
),
required=True,
value_type=schema.TextLine(),
)
icon = schema.Choice(
title=_("types_mapping_icon_label", default=u"Icon"),
description=_(
"types_mapping_icon_help",
default=u"Select an icon for this group.",
),
required=False,
vocabulary=u"rer.sitesearch.vocabularies.GroupIconsVocabulary",
)
advanced_filters = schema.Choice(
title=_("types_mapping_advanced_filters_label", default=u"Advanced filters"),
description=_(
"types_mapping_advanced_filters_help",
default=u"Select a preset of advanced filters for this group.",
),
required=False,
vocabulary=u"rer.sitesearch.vocabularies.AdvancedFiltersVocabulary",
default=u"",
)
directives.widget(
"types",
AjaxSelectFieldWidget,
vocabulary=u"rer.sitesearch.vocabularies.GroupingTypesVocabulary",
)
class IIndexesRowSchema(model.Schema):
label = schema.List(
title=_("available_indexes_label_label", default=u"Label"),
description=_(
"available_indexes_label_help",
default=u"Insert the label for this index. One per row. "
u"If the site has only one language, type the simple name. "
u"If it has multiple languages, insert one row per language in "
u"the following format: lang|label. For example: it|Keywords",
),
required=True,
value_type=schema.TextLine(),
)
index = schema.Choice(
title=_("available_indexes_index_label", default=u"Index"),
description=_(
"available_indexes_index_help",
default=u"Select which catalog index to use as filter.",
),
required=True,
vocabulary=u"rer.sitesearch.vocabularies.IndexesVocabulary",
)
class IRERSiteSearchSettings(model.Schema):
    """Registry settings for the rer.sitesearch control panel."""
max_word_len = schema.Int(
title=_(u"Maximum number of characters in a single word"),
description=_(
"help_max_word_len",
default=u"Set what is the maximum length of a single search word. "
u"Longer words will be omitted from the search.",
),
default=128,
required=False,
)
max_words = schema.Int(
title=_(u"Maximum number of words in search query"),
description=_(
"help_max_words",
default=u"Set what is the maximum number of words in the search "
u"query. The other words will be omitted from the search.",
),
default=32,
required=False,
)
types_grouping = schema.SourceText(
title=_("types_grouping_label", default=u"Types grouping"),
description=_(
"types_grouping_help",
default=u"If you fill this field, you can group search results by "
u"content-types.",
),
required=False,
)
available_indexes = schema.SourceText(
title=_("available_indexes_label", default=u"Available indexes"),
description=_(
"available_indexes_help",
default=u"Select which additional filters to show in the column.",
),
required=False,
)
i18n_additional_domains = schema.List(
title=_(
"i18n_additional_domains_label", default=u"Additional translation domains"
),
description=_(
"i18n_additional_domains_help",
default=u"Insert a list of additional translations domains (other "
u'than the default one "rer.sitesearch"). One per line.'
u"Translation domains can be provided by some Plone add-ons to "
u"help translate some Indexes values.",
),
required=False,
value_type=schema.TextLine(),
default=[],
    )

# file: src/rer/sitesearch/interfaces/settings.py (rer.sitesearch-4.3.1, PyPI)
from copy import deepcopy
from plone import api
from plone.indexer.interfaces import IIndexableObject
from plone.restapi.interfaces import ISerializeToJson
from plone.restapi.search.utils import unflatten_dotted_dict
from plone.restapi.serializer.catalog import (
LazyCatalogResultSerializer as BaseSerializer,
)
from Products.ZCatalog.Lazy import Lazy
from rer.sitesearch import _
from rer.sitesearch.interfaces import IRERSiteSearchLayer
from rer.sitesearch.restapi.utils import get_indexes_mapping
from rer.sitesearch.restapi.utils import get_types_groups
from zope.component import adapter
from zope.component import queryMultiAdapter
from zope.i18n import translate
from zope.interface import implementer
import Missing
@implementer(ISerializeToJson)
@adapter(Lazy, IRERSiteSearchLayer)
class LazyCatalogResultSerializer(BaseSerializer):
def __call__(self, fullobjects=False):
data = super(LazyCatalogResultSerializer, self).__call__(
fullobjects=fullobjects
)
# add facets informations
data.update({"facets": self.extract_facets(brains=self.lazy_resultset)})
return data
def extract_facets(self, brains):
pc = api.portal.get_tool(name="portal_catalog")
facets = {
"groups": self.get_groups_facets(brains=brains),
"indexes": get_indexes_mapping(),
}
for brain in brains:
for index_id, index_settings in facets["indexes"].get("values", {}).items():
if index_settings.get("type", "") == "DateIndex":
# skip it, we need to set some dates in the interface
continue
try:
value = getattr(brain, index_id)
except AttributeError:
# index is not a brain's metadata. Load item object
# (could be painful)
item = brain.getObject()
                    wrapper = queryMultiAdapter((item, pc), IIndexableObject)
                    value = getattr(wrapper, index_id, None)
if not value or value == Missing.Value:
if not isinstance(value, bool) and not isinstance(value, int):
# bool and numbers can be False or 0
continue
if isinstance(value, list) or isinstance(value, tuple):
for single_value in value:
if single_value not in index_settings["values"]:
index_settings["values"][single_value] = 1
else:
index_settings["values"][single_value] += 1
else:
if value not in index_settings["values"]:
index_settings["values"][value] = 1
else:
index_settings["values"][value] += 1
return facets
def get_groups_facets(self, brains):
        """
        We need the right count for the groups facets because these are not
        proper facets: the number of results should stay the same even when a
        different group is selected (groups only group results, they do not
        filter them).
        If we are filtering by type, this means we need to run another
        catalog search to get the proper counter for each group.
        """
query = deepcopy(self.request.form)
query = unflatten_dotted_dict(query)
groups = get_types_groups()
all_label = translate(
_("all_types_label", default=u"All content types"),
context=self.request,
)
for key, value in query.items():
if value in ["false", "False"]:
query[key] = False
if value in ["true", "True"]:
query[key] = True
for index in ["metadata_fields", "portal_type"]:
if index in query:
del query[index]
        # the group counters must ignore any type filter (see docstring), so
        # search across all user-friendly types instead
        query["portal_type"] = self.filter_types([])
portal_catalog = api.portal.get_tool(name="portal_catalog")
brains_to_iterate = portal_catalog(**query)
for brain in brains_to_iterate:
for group in groups.get("values", {}).values():
if brain.portal_type in group.get("types", []):
group["count"] += 1
groups["values"][all_label]["count"] = getattr(
brains_to_iterate, "actual_result_count", len(brains_to_iterate)
)
return groups
def filter_types(self, types):
"""
Search only in enabled types in control-panel
"""
plone_utils = api.portal.get_tool(name="plone_utils")
if not isinstance(types, list):
types = [types]
        return plone_utils.getUserFriendlyTypes(types)

# file: src/rer/sitesearch/restapi/serializer/catalog.py (rer.sitesearch-4.3.1, PyPI)
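The per-brain loop in `extract_facets` above tallies index values: list/tuple values contribute one count per entry, scalars one count, and falsy values are skipped unless they are booleans or numbers. A dependency-free sketch of that counting logic (plain dicts stand in for catalog brains):

```python
def count_facet_values(records, index_id):
    """Tally occurrences of an index value across result records."""
    counts = {}
    for record in records:
        value = record.get(index_id)
        if not value and not isinstance(value, (bool, int)):
            # skip empty values, but keep False / 0
            continue
        values = value if isinstance(value, (list, tuple)) else [value]
        for single_value in values:
            counts[single_value] = counts.get(single_value, 0) + 1
    return counts
```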
from copy import deepcopy
from plone import api
from plone.api.exc import InvalidParameterError
from plone.registry.interfaces import IRegistry
from plone.restapi.search.handler import SearchHandler
from plone.restapi.search.utils import unflatten_dotted_dict
from plone.restapi.services import Service
from rer.sitesearch import _
from rer.sitesearch.restapi.utils import get_indexes_mapping
from rer.sitesearch.restapi.utils import get_types_groups
from zope.component import getUtility
from zope.i18n import translate
from plone.memoize.view import memoize
try:
from rer.solrpush.interfaces.settings import IRerSolrpushSettings
from rer.solrpush.restapi.services.solr_search.solr_search_handler import (
SolrSearchHandler,
)
from rer.solrpush.utils.solr_indexer import get_site_title
HAS_SOLR = True
except ImportError:
HAS_SOLR = False
try:
# rer.agidtheme overrides site tile field
from rer.agidtheme.base.interfaces import IRERSiteSchema as ISiteSchema
from rer.agidtheme.base.utility.interfaces import ICustomFields
RER_THEME = True
except ImportError:
from Products.CMFPlone.interfaces.controlpanel import ISiteSchema
RER_THEME = False
import six
class SearchGet(Service):
@property
def solr_search_enabled(self):
if not HAS_SOLR:
return False
try:
active = api.portal.get_registry_record(
"active", interface=IRerSolrpushSettings
)
search_enabled = api.portal.get_registry_record(
"search_enabled", interface=IRerSolrpushSettings
)
return active and search_enabled
except (KeyError, InvalidParameterError):
return False
@property
@memoize
def searchable_portal_types(self):
groups = get_types_groups()
        types = set()
for group_id, group_data in groups.get("values", {}).items():
if group_data.get("types", []):
types.update(group_data["types"])
return sorted(list(types))
def reply(self):
query = deepcopy(self.request.form)
query = unflatten_dotted_dict(query)
path_infos = self.get_path_infos(query=query)
groups = get_types_groups()
if "group" in query:
for group_id, group_data in groups.get("values", {}).items():
if query["group"] == group_id and group_data["types"]:
query["portal_type"] = group_data["types"]
del query["group"]
if self.solr_search_enabled:
data = self.do_solr_search(query=query)
else:
query["use_site_search_settings"] = True
data = SearchHandler(self.context, self.request).search(query)
if path_infos:
data["path_infos"] = path_infos
return data
def do_solr_search(self, query):
query["facets"] = True
query["facet_fields"] = ["portal_type", "site_name"]
if not query.get("site_name", []):
query["site_name"] = get_site_title()
elif "all" in query.get("site_name", []):
del query["site_name"]
indexes = get_indexes_mapping()
if indexes:
query["facet_fields"].extend(indexes["order"])
if "metadata_fields" not in query:
query["metadata_fields"] = ["Description"]
else:
if "Description" not in query["metadata_fields"]:
query["metadata_fields"].append("Description")
data = SolrSearchHandler(self.context, self.request).search(query)
data["facets"] = self.remap_solr_facets(data=data, query=query)
data["current_site"] = get_site_title()
return data
def remap_solr_facets(self, data, query):
new_facets = {
"groups": get_types_groups(),
"indexes": get_indexes_mapping(),
"sites": {"order": [], "values": {}},
}
for index_id, index_values in data["facets"].items():
if index_id == "site_name":
entry = new_facets["sites"]["values"]
self.handle_sites_facet(
sites=entry,
index_values=index_values,
query=query,
)
new_facets["sites"]["order"] = sorted(entry.keys())
elif index_id == "portal_type":
# groups
self.handle_groups_facet(
groups=new_facets["groups"]["values"],
index_values=index_values,
query=query,
)
else:
entry = new_facets["indexes"]["values"][index_id]
for index_mapping in index_values:
for key, count in index_mapping.items():
if count:
entry["values"][key] = count
return new_facets
def handle_groups_facet(self, groups, index_values, query):
# we need to do a second query in solr, to get the results
# unfiltered by types
portal_types = query.get("portal_type", "")
if portal_types:
new_query = deepcopy(query)
del new_query["portal_type"]
# simplify returned result data
new_query["facet_fields"] = ["portal_type"]
new_query["metadata_fields"] = ["UID"]
new_data = SolrSearchHandler(self.context, self.request).search(new_query)
indexes = new_data["facets"]["portal_type"]
else:
indexes = index_values
all_label = translate(
_("all_types_label", default=u"All content types"),
context=self.request,
)
for type_mapping in indexes:
for ptype, count in type_mapping.items():
for group in groups.values():
if ptype in group["types"]:
group["count"] += count
groups[all_label]["count"] += count
def handle_sites_facet(self, sites, index_values, query):
site = query.get("site_name", "")
if site:
# we need to do an additional query in solr, to get the results
# unfiltered by site_name
new_query = deepcopy(query)
del new_query["site_name"]
# simplify returned result data
new_query["facet_fields"] = ["site_name"]
new_query["metadata_fields"] = ["UID"]
new_data = SolrSearchHandler(self.context, self.request).search(new_query)
indexes = new_data["facets"]["site_name"]
else:
indexes = index_values
for site_mapping in indexes:
for name, count in site_mapping.items():
if count:
sites[name] = count
def get_path_infos(self, query):
if "path" not in query:
return {}
registry = getUtility(IRegistry)
site_settings = registry.forInterface(ISiteSchema, prefix="plone", check=False)
site_title = getattr(site_settings, "site_title") or ""
if RER_THEME:
fields_value = getUtility(ICustomFields)
site_title = fields_value.titleLang(site_title)
if six.PY2:
site_title = site_title.decode("utf-8")
path = query["path"]
if isinstance(path, dict):
path = path.get("query", "")
        root_path = "/".join(api.portal.get().getPhysicalPath())
        data = {
            "site_name": site_title,
            "root": root_path,
        }
if path != root_path:
folder = api.content.get(path)
if folder:
data["path_title"] = folder.title
return data
class SearchLocalGet(SearchGet):
@property
def solr_search_enabled(self):
        return False

# file: src/rer/sitesearch/restapi/services/search/get.py (rer.sitesearch-4.3.1, PyPI)
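`handle_groups_facet` above folds Solr's per-type facet counts into the configured groups and keeps a running total in the "all content types" entry. A hedged, stand-alone sketch of that accumulation step (function and label names are illustrative):

```python
def group_type_counts(groups, type_counts, all_label="All content types"):
    """Accumulate per-type facet counters into their configured groups."""
    for type_mapping in type_counts:
        for ptype, count in type_mapping.items():
            for group in groups.values():
                if ptype in group["types"]:
                    group["count"] += count
            # the "all" entry always gets the full total
            groups[all_label]["count"] += count
    return groups
```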
from datetime import datetime
from DateTime import DateTime
from plone import api
from Products.MimetypesRegistry.MimeTypeItem import guess_icon_path
from rer.solrpush.browser.scales import SolrScalesHandler
from rer.solrpush.interfaces.adapter import ISolrBrain
from rer.solrpush.interfaces.settings import IRerSolrpushSettings
from rer.solrpush.utils.solr_indexer import get_index_fields
from rer.solrpush.utils.solr_indexer import get_site_title
from zope.globalrequest import getRequest
from zope.interface import implementer
try:
from ZTUtils.Lazy import Lazy
except ImportError:
from Products.ZCatalog.Lazy import Lazy
try:
from design.plone.theme.interfaces import IDesignPloneThemeLayer
HAS_RER_THEME = True
except ImportError:
HAS_RER_THEME = False
import os
import six
timezone = DateTime().timezone()
@implementer(ISolrBrain)
class Brain(dict):
"""a dictionary with attribute access"""
def __repr__(self):
return "<SolrBrain for {}>".format(self.getPath())
def __getattr__(self, name):
"""look up attributes in dict"""
marker = []
value = self.get(name, marker)
schema = get_index_fields()
if value is not marker:
field_schema = schema.get(name, {})
if field_schema.get("type", "") == "date":
# dates are stored in SOLR as UTC
value = DateTime(value).toZone(timezone)
return value
else:
if name not in schema:
raise AttributeError(name)
def __init__(self, context, request=None):
self.context = context
self.request = request
self.update(context) # copy data
@property
def is_current_site(self):
return self.get("site_name", "") == get_site_title()
@property
def id(self):
"""convenience alias"""
return self.get("id", self.get("getId"))
@property
def Description(self):
return self.get("description", "")
@property
    def Date(self):
        # EffectiveDate() may be None for content without a publication date
        effective = self.EffectiveDate()
        if not effective or effective.startswith("1969"):
            return self.CreationDate()
        return effective
def getId(self):
return self.id
def getPath(self):
"""convenience alias"""
return self.get("path", "")
def getObject(self, REQUEST=None, restricted=True):
if self.is_current_site:
path = self.getPath()
if six.PY2:
path = path.encode("utf-8")
return api.content.get(path)
return self
def _unrestrictedGetObject(self):
raise NotImplementedError
def getURL(self, relative=False):
        """
        If the result belongs to the current site and a frontend_url is
        configured, rewrite the stored url back to the portal url.
        Otherwise return the url attribute stored in SOLR as-is.
        """
url = self.get("url", "")
if self.is_current_site:
frontend_url = api.portal.get_registry_record(
"frontend_url", interface=IRerSolrpushSettings, default=""
)
if frontend_url:
return url.replace(
frontend_url.rstrip("/"), api.portal.get().portal_url()
)
return url
def Creator(self):
return self.get("Creator", "")
def review_state(self):
return self.get("review_state", "")
def PortalType(self):
return self.get("portal_type", "")
def CreationDate(self):
return self.get("created", None)
def EffectiveDate(self):
return self.get("effective", None)
def location(self):
return self.get("location", "")
def ModificationDate(self):
value = self.get("ModificationDate", None)
if not value:
return None
return DateTime(value).toZone(timezone)
def MimeTypeIcon(self):
mime_type = self.get("mime_type", None)
if not mime_type:
return ""
mtt = api.portal.get_tool(name="mimetypes_registry")
navroot_url = api.portal.get().absolute_url()
ctype = mtt.lookup(mime_type)
        mimeicon = None
        if not ctype:
            if HAS_RER_THEME and IDesignPloneThemeLayer.providedBy(self.request):
                mimeicon = os.path.join(
                    navroot_url,
                    "++plone++design.plone.theme/icons/default.svg",
                )
        else:
            mimeicon = os.path.join(navroot_url, guess_icon_path(ctype[0]))
        # stay consistent with the other accessors: never return None
        return mimeicon or ""
def restrictedTraverse(self, name):
if name == "@@images":
return SolrScalesHandler(self, getRequest())
return None
class SolrResults(list):
"""a list of results returned from solr, i.e. sol(a)r flares"""
def parseDate(value):
"""use `DateTime` to parse a date, but take care of solr 1.4
stripping away leading zeros for the year representation"""
if value.find("-") < 4:
year, rest = value.split("-", 1)
value = "%04d-%s" % (int(year), rest)
return DateTime(value)
def parse_date_as_datetime(value):
if value.find("-") < 4:
year, rest = value.split("-", 1)
value = "%04d-%s" % (int(year), rest)
format = "%Y-%m-%dT%H:%M:%S"
if "." in value:
format += ".%fZ"
else:
format += "Z"
return datetime.strptime(value, format)
class SolrResponse(Lazy):
"""a solr search response; TODO: this should get an interface!!"""
__allow_access_to_unprotected_subobjects__ = True
def __init__(self, data=None):
if getattr(data, "hits", None) is None and data.get("error", False):
self.actual_result_count = 0
self._data = {}
else:
self.actual_result_count = data.hits
self._data = data.docs
def results(self):
return list(map(Brain, self._data))
def __getitem__(self, index):
        return self.results()[index]

# file: src/rer/solrpush/parser.py (rer.solrpush-1.3.1, PyPI)
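`parseDate` and `parse_date_as_datetime` above both re-pad the year because Solr 1.4 strips leading zeros from it. The padding step in isolation:

```python
def pad_year(value):
    """Restore the 4-digit year stripped by Solr 1.4 ("69-..." -> "0069-...")."""
    if value.find("-") < 4:
        year, rest = value.split("-", 1)
        value = "%04d-%s" % (int(year), rest)
    return value
```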
from plone.app.vocabularies.catalog import CatalogSource as CatalogSourceBase
from plone.app.z3cform.widget import RelatedItemsFieldWidget
from plone.autoform import directives as form
from plone.supermodel import model
from rer.solrpush import _
from zope import schema
class CatalogSource(CatalogSourceBase):
"""
Collection tile specific catalog source to allow targeted widget.
Without this hack, validation doesn't pass
"""
def __contains__(self, value):
return True # Always contains to allow lazy handling of removed objs
class IElevateRowSchema(model.Schema):
text = schema.TextLine(
title=_("elevate_row_schema_text_label", default=u"Text"),
description=_(
"elevate_row_schema_text_help",
default=u"The word that should match in the search.",
),
required=True,
)
uid = schema.List(
title=_("elevate_row_schema_uid_label", u"Elements"),
description=_(
"elevate_row_schema_uid_help",
u"Select a list of elements to elevate for that search word.",
),
value_type=schema.Choice(source=CatalogSource()),
required=True,
)
form.widget("uid", RelatedItemsFieldWidget)
class IRerSolrpushConf(model.Schema):
""""""
active = schema.Bool(
title=_(u"Active"),
description=_(u"Enable SOLR indexing on this site."),
required=False,
default=False,
)
search_enabled = schema.Bool(
title=_(u"Search enabled"),
        description=_(
            u"Site search will use SOLR as engine instead of portal_catalog."
        ),
required=False,
default=True,
)
force_commit = schema.Bool(
title=_(u"Force commit"),
description=_(
u"Force commits on CRUD operations. If enabled, each indexing "
u"operation to SOLR will be immediately committed and persisted. "
            u"This means that updates are immediately available on SOLR queries. "  # noqa
            u"If you are using SolrCloud with ZooKeeper, immediate commits "
u"will slow down response performances when indexing, so it's "
u"better to turn it off. In this case updates will be available "
u"when SOLR periodically commit changes."
),
required=False,
default=True,
)
solr_url = schema.TextLine(
title=_(u"SOLR url"),
description=_(u"The SOLR core to connect to."),
required=True,
)
frontend_url = schema.TextLine(
title=_(u"Frontend url"),
        description=_(u"If the website has a different URL for frontend users."),
required=False,
)
enabled_types = schema.List(
title=_(u"enabled_types_label", default=u"Enabled portal types"),
description=_(
u"enabled_types_help",
default=u"Select a list of portal types to index in solr. "
u"Empty list means that all portal types will be indexed.",
),
required=False,
default=[],
missing_value=[],
value_type=schema.Choice(
vocabulary="plone.app.vocabularies.PortalTypes"
),
)
index_fields = schema.SourceText(
title=_(
"index_fields_label",
default=u"List of fields loaded from SOLR that we use for "
u"indexing.",
),
description=_(
u"index_fields_help",
default=u"We store this list for performance"
u" reasons. If the configuration changes, you need to click on"
u" Reload button",
),
required=False,
)
    # Hidden from the control panel (see: browser/controlpanel.py)
ready = schema.Bool(
title=_(u"Ready"),
description=_(u"SOLR push is ready to be used."),
required=False,
default=False,
)
class IRerSolrpushSearchConf(model.Schema):
query_debug = schema.Bool(
title=_(u"Query debug"),
description=_(
u"If enabled, when a search to SOLR is performed (for "
u"example in Collection), the query will be showed in the page for "
u"debug. Only visible to Managers."
),
required=False,
default=False,
)
remote_elevate_schema = schema.TextLine(
title=_(u"remote_elevate_label", default=u"Remote elevate"),
description=_(
u"remote_elevate_help",
default=u'If this field is set and no "site_name" is '
u"passed in query, elevate schema is taken from an external "
u"source. This is useful if you index several sites and handle "
u"elevate configuration in one single site. This should be an url "
u'that points to "@elevate-schema" view.'
u"For example: http://my-site/@elevate-schema.",
),
default=u"",
required=False,
)
qf = schema.TextLine(
title=_("qf_label", default=u"qf (query fields)"),
description=_(
"qf_help",
default=u"Set a list of fields, each of which is assigned a boost "
u"factor to increase or decrease that particular field’s "
u"importance in the query. "
u"For example: fieldOne^1000.0 fieldTwo fieldThree^10.0",
),
required=False,
default=u"",
)
bq = schema.TextLine(
title=_("bq_label", default=u"bq (boost query)"),
description=_(
"bq_help",
default=u"Set a list query clauses that will be added to the main "
u"query to influence the score. For example if we want to boost "
u'results that have a specific "searchwords" term: '
u"searchwords:something^1000",
),
required=False,
default=u"",
)
bf = schema.TextLine(
title=_("bf_label", default=u"bf (boost functions)"),
description=_(
"bf_help",
default=u"Set a list of functions (with optional boosts) that "
u"will be used to construct FunctionQueries which will be added "
u"to the main query as optional clauses that will influence the "
u"score. Any function supported natively by Solr can be used, "
u"along with a boost value. "
u"For example if we want to give less relevance to "
u"items deeper in the tree we can set something like this: "
u"recip(path_depth,10,100,1)",
),
required=False,
default=u"",
)
class IRerSolrpushSettings(IRerSolrpushConf, IRerSolrpushSearchConf):
"""
Marker interface for settings
    """

# file: src/rer/solrpush/interfaces/settings.py (rer.solrpush-1.3.1, PyPI)
from plone.app.vocabularies.catalog import CatalogSource as CatalogSourceBase
from plone.app.z3cform.widget import RelatedItemsFieldWidget
from plone.autoform import directives as form
from plone.supermodel import model
from rer.solrpush import _
from z3c.relationfield.schema import RelationChoice
from z3c.relationfield.schema import RelationList
from zope import schema
from zope.i18n import translate
from zope.interface import Invalid
from zope.interface import invariant
from zope.globalrequest import getRequest
import json
class CatalogSource(CatalogSourceBase):
"""
Without this hack, validation doesn't pass
"""
def __contains__(self, value):
return True # Always contains to allow lazy handling of removed objs
class IElevateRowSchema(model.Schema):
text = schema.List(
title=_("elevate_row_schema_text_label", default=u"Text"),
description=_(
"elevate_row_schema_text_help",
default=u"The word that should match in the search.",
),
required=True,
value_type=schema.TextLine(),
)
uid = RelationList(
title=_("elevate_row_schema_uid_label", u"Elements"),
description=_(
"elevate_row_schema_uid_help",
u"Select a list of elements to elevate for that search word.",
),
value_type=RelationChoice(vocabulary="plone.app.vocabularies.Catalog"),
required=True,
)
form.widget(
"uid",
RelatedItemsFieldWidget,
vocabulary="plone.app.vocabularies.Catalog",
)
class IElevateSettings(model.Schema):
""" """
elevate_schema = schema.SourceText(
title=_(u"elevate_schema_label", default=u"Elevate configuration"),
description=_(
u"elevate_schema_help",
default=u"Insert a list of values for elevate.",
),
required=False,
)
@invariant
def elevate_invariant(data):
if not data.elevate_schema:
# the field is optional: nothing to validate if it's empty
return
schema = json.loads(data.elevate_schema)
request = getRequest()
words_mapping = [x["text"] for x in schema]
for i, schema_item in enumerate(schema):
elevate_text = schema_item.get("text", [])
if not elevate_text:
raise Invalid(
translate(
_(
"text_required_label",
default="Text field must be filled for Group ${id}.",
mapping=dict(id=i + 1),
),
context=request,
)
)
for text in elevate_text:
for words_i, words in enumerate(words_mapping):
if i == words_i:
# it's the current config
continue
if text in words:
raise Invalid(
translate(
_(
"text_duplicated_label",
default='"${text}" is used in several groups.',
mapping=dict(id=i, text=text),
),
context=request,
)
) | /rer.solrpush-1.3.1.tar.gz/rer.solrpush-1.3.1/src/rer/solrpush/interfaces/elevate.py | 0.595375 | 0.208521 | elevate.py | pypi |
from plone.restapi.deserializer import json_body
from six.moves.urllib.parse import parse_qsl
from six.moves.urllib.parse import urlencode
DEFAULT_BATCH_SIZE = 25
class SolrHypermediaBatch(object):
def __init__(self, request, results):
self.request = request
self.b_start = int(
json_body(self.request).get("b_start", False)
) or int(self.request.form.get("b_start", 0))
self.b_size = int(json_body(self.request).get("b_size", False)) or int(
self.request.form.get("b_size", DEFAULT_BATCH_SIZE)
)
self.hits = getattr(results, "hits", 0)
@property
def canonical_url(self):
"""Return the canonical URL to the batched collection-like resource,
preserving query string params, but stripping all batching related
params from it.
"""
url = self.request["ACTUAL_URL"]
qs_params = parse_qsl(self.request["QUERY_STRING"])
# Remove any batching / sorting related parameters.
# Also take care to preserve list-like query string params.
for key, value in qs_params[:]:
if key in (
"b_size",
"b_start",
"sort_on",
"sort_order",
"sort_limit",
):
qs_params.remove((key, value))
qs = urlencode(qs_params)
if qs_params:
url = "?".join((url, qs))
return url
@property
def current_batch_url(self):
url = self.request["ACTUAL_URL"]
qs = self.request["QUERY_STRING"]
if qs:
url = "?".join((url, qs))
return url
@property
def links(self):
"""Get a dictionary with batching links.
"""
# Don't provide batching links if resultset isn't batched
if self.hits <= self.b_size:
return None
links = {}
last = self._last_page()
next = self._next_page()
prev = self._prev_page()
links["@id"] = self.current_batch_url
links["first"] = self._url_for_batch(0)
links["last"] = self._url_for_batch(last)
if next:
links["next"] = self._url_for_batch(next)
if prev:
links["prev"] = self._url_for_batch(prev)
return links
def _url_for_batch(self, pagenumber):
"""Return the URL of the batch at the given page number.
"""
new_start = pagenumber * self.b_size
return self._url_with_params(params={"b_start": new_start})
def _last_page(self):
# use floor division: under Python 3, "/" is true division
page = self.hits // self.b_size
if self.hits % self.b_size == 0:
return page - 1
return page
def _next_page(self):
curr_page = self.b_start // self.b_size
if curr_page == self._last_page():
return None
return curr_page + 1
def _prev_page(self):
curr_page = self.b_start // self.b_size
if curr_page == 0:
return None
return curr_page - 1
def _url_with_params(self, params):
"""Build an URL based on the actual URL of the current request URL
and add or update some query string parameters in it.
"""
url = self.request["ACTUAL_URL"]
qs_params = parse_qsl(
self.request["QUERY_STRING"], keep_blank_values=1
)
# Take care to preserve list-like query string arguments (same QS
# param repeated multiple times). In other words, don't turn the
# result of parse_qsl into a dict!
# Drop params to be updated, then prepend new params in order
qs_params = [x for x in qs_params if x[0] not in list(params)]
qs_params = sorted(params.items()) + qs_params
qs = urlencode(qs_params)
if qs_params:
url = "?".join((url, qs))
return url | /rer.solrpush-1.3.1.tar.gz/rer.solrpush-1.3.1/src/rer/solrpush/restapi/services/solr_search/batch.py | 0.874707 | 0.191422 | batch.py | pypi |
"""Definition of the Structured Document content type
"""
from AccessControl import ClassSecurityInfo
from ComputedAttribute import ComputedAttribute
from Products.Archetypes import atapi
from Products.ATContentTypes.content import document
from Products.ATContentTypes.content import folder
from Products.ATContentTypes.content import schemata
from Products.ATContentTypes.content.base import ATCTContent
from Products.CMFCore.permissions import ModifyPortalContent
from Products.CMFCore.permissions import View
from rer.structured_content.config import PROJECTNAME
from rer.structured_content.interfaces import IStructuredDocument
from zope.interface import implements
from ZPublisher.HTTPRequest import HTTPRequest
StructuredDocumentSchema = folder.ATFolderSchema.copy() + document.ATDocumentSchema.copy() + atapi.Schema((
# -*- Your Archetypes field definitions here ... -*-
))
# Set storage on fields copied from ATFolderSchema, making sure
# they work well with the python bridge properties.
StructuredDocumentSchema['title'].storage = atapi.AnnotationStorage()
StructuredDocumentSchema['description'].storage = atapi.AnnotationStorage()
schemata.finalizeATCTSchema(
StructuredDocumentSchema,
folderish=True,
moveDiscussion=False
)
StructuredDocumentSchema['relatedItems'].widget.visible = {'view': 'visible', 'edit': 'visible'}
class StructuredDocument(folder.ATFolder):
"""Description of the Example Type"""
implements(IStructuredDocument)
meta_type = "Structured Document"
schema = StructuredDocumentSchema
title = atapi.ATFieldProperty('title')
description = atapi.ATFieldProperty('description')
security = ClassSecurityInfo()
cmf_edit_kws = ('text_format',)
security.declareProtected(View, 'CookedBody')
def CookedBody(self, stx_level='ignored'):
"""CMF compatibility method
"""
return self.getText()
security.declareProtected(ModifyPortalContent, 'EditableBody')
def EditableBody(self):
"""CMF compatibility method
"""
return self.getRawText()
security.declareProtected(ModifyPortalContent, 'setText')
def setText(self, value, **kwargs):
"""Body text mutator
* hook into mxTidy and replace the value with the tidied value
"""
field = self.getField('text')
# When an object is initialized the first time we have to
# set the filename and mimetype.
if not value and not field.getRaw(self):
if 'mimetype' in kwargs and kwargs['mimetype']:
field.setContentType(self, kwargs['mimetype'])
if 'filename' in kwargs and kwargs['filename']:
field.setFilename(self, kwargs['filename'])
# hook for mxTidy / isTidyHtmlWithCleanup validator
tidyOutput = self.getTidyOutput(field)
if tidyOutput:
value = tidyOutput
field.set(self, value, **kwargs) # set is ok
text_format = ComputedAttribute(ATCTContent.getContentType, 1)
security.declarePrivate('getTidyOutput')
def getTidyOutput(self, field):
"""Get the tidied output for a specific field from the request
if available
"""
request = getattr(self, 'REQUEST', None)
if request is not None and isinstance(request, HTTPRequest):
tidyAttribute = '%s_tidier_data' % field.getName()
return request.get(tidyAttribute, None)
atapi.registerType(StructuredDocument, PROJECTNAME) | /rer.structured_content-1.9.3.tar.gz/rer.structured_content-1.9.3/rer/structured_content/content/structureddocument.py | 0.557484 | 0.230606 | structureddocument.py | pypi |
from .. import subsitesMessageFactory as _
from datetime import datetime
from plone import api
from plone.directives.form import SchemaForm
from rer.subsites.interfaces import IRERSubsiteEnabled
from z3c.form import button
from z3c.form import field
from z3c.form.interfaces import WidgetActionExecutionError
from zope.component import adapter
from zope.interface import implementer
from zope.interface import Invalid
import re
@implementer(IRERSubsiteEnabled)
@adapter(IRERSubsiteEnabled)
class SubsiteStylesFormAdapter(object):
"""
"""
def __init__(self, context):
""" Do basic stuff
"""
self.context = context
self.subsite_color = getattr(context, 'subsite_color', '')
self.image = getattr(context, 'image', '')
self.subsite_class = getattr(context, 'subsite_class', '')
class SubsiteStylesForm(SchemaForm):
""" Dynamically built form
"""
schema = IRERSubsiteEnabled
ignoreContext = False
fields = field.Fields(IRERSubsiteEnabled)
# ignoreContext = True
def show_message(self, msg, msg_type):
""" Facade for the show message api function
"""
show_message = api.portal.show_message
return show_message(msg, request=self.request, type=msg_type)
def redirect(self, target=None, msg='', msg_type='error'):
""" Redirects the user to the target, optionally with a portal message
"""
if target is None:
target = self.context.absolute_url()
if msg:
self.show_message(msg, msg_type)
return self.request.response.redirect(target)
def store_data(self, data):
""" Store the data before returning
"""
self.context.subsite_color = data.get('subsite_color')
self.context.image = data.get('image')
self.context.subsite_class = data.get('subsite_class')
# update last modified date
self.context.styles_last_modified = datetime.now().strftime('%Y%m%d%H%M%S') # noqa
def additional_validation(self, data):
if not data.get('subsite_color', ''):
return
m = re.search(r'^#?\w+;?$', data.get('subsite_color'))
if not m:
raise WidgetActionExecutionError(
'subsite_color',
Invalid(
_(
'error_invalid_css_color',
default='Not a valid color'),
)
)
@button.buttonAndHandler(u'Salva', name='save')
def handleSubmit(self, action):
data, errors = self.extractData()
self.additional_validation(data)
if not errors:
self.store_data(data)
return self.redirect()
@button.buttonAndHandler(u'Annulla', name='cancel')
def handleCancel(self, action):
"""
"""
return self.redirect() | /rer.subsites-1.4.1.tar.gz/rer.subsites-1.4.1/rer/subsites/browser/subsite_styles_form.py | 0.588416 | 0.176193 | subsite_styles_form.py | pypi |
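The `subsite_color` check above boils down to the regular expression `^#?\w+;?$` — an optional `#`, word characters, and an optional trailing semicolon. Extracted as a standalone predicate for illustration (the helper name is ours, not part of the form):

```python
import re

COLOR_RE = re.compile(r"^#?\w+;?$")


def is_valid_css_color(value):
    # Same check as additional_validation: optional '#', then word
    # characters, then an optional trailing ';'.
    return bool(COLOR_RE.search(value))
```

Values such as `#ff0000`, `red` or `red;` pass; anything containing spaces, commas or parentheses is rejected.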
from plone import api
from plone.app.layout.viewlets.common import ViewletBase
from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
from rer.subsites.interfaces import IRERSubsiteEnabled
from rer.subsites.interfaces import IRERSubsitesSettings
from zope.component import getMultiAdapter
class SubsiteViewletBase(ViewletBase):
def __init__(self, context, request, view, manager):
super(SubsiteViewletBase, self).__init__(
context,
request,
view,
manager
)
self.subsite = self.getSubsiteObj()
def render(self):
viewlet_enabled = self.is_viewlet_enabled()
if not viewlet_enabled:
return ""
if self.subsite:
return self.index()
else:
return ''
def getSubsiteObj(self):
for elem in self.context.aq_inner.aq_chain:
if IRERSubsiteEnabled.providedBy(elem):
return elem
return None
def is_viewlet_enabled(self):
""" """
return api.portal.get_registry_record(
'viewlets_enabled',
interface=IRERSubsitesSettings)
class SubsiteTitleViewlet(SubsiteViewletBase):
"""
viewlet with title
"""
index = ViewPageTemplateFile('viewlets/rer_subsite_title.pt')
def get_css_class(self):
context = self.context.aq_inner
context_state = getMultiAdapter(
(context, self.request),
name=u'plone_context_state'
)
real_obj = context_state.canonical_object()
if real_obj == self.subsite:
return ''
return 'subsite-child'
class SubsiteColorViewlet(SubsiteViewletBase):
"""
A viewlet that adds some dynamic CSS to the header
"""
def render(self):
viewlet_enabled = self.is_viewlet_enabled()
if not viewlet_enabled:
return ""
if not self.subsite:
return ''
return_string = ''
styles = self.get_default_styles()
custom_styles = self.get_custom_styles()
if custom_styles:
styles += custom_styles
return_string = '<style type="text/css">{0}</style>'.format(styles)
return return_string
def get_default_styles(self):
color = getattr(self.subsite, 'subsite_color', '')
image = getattr(self.subsite, 'image', None)
if not color and not image:
return ''
subsite_url = self.subsite.absolute_url()
styles = []
css = '#subsite-title {'
if color:
styles.append('background-color:{0}'.format(color))
if image:
version = getattr(self.subsite, 'styles_last_modified', '')
styles.append(
'background-image:url({0}/@@images/image?v={1})'.format(
subsite_url,
version
)
)
css += ';'.join(styles)
css += '}'
styles = []
css += '#contentCarousel {'
if color:
styles.append('background-color:{0}'.format(color))
css += ';'.join(styles)
css += '}'
return css
def get_custom_styles(self):
"""
read styles from control panel
"""
color = getattr(self.subsite, 'subsite_color', '')
css = api.portal.get_registry_record(
'subsite_styles',
interface=IRERSubsitesSettings)
if not css:
return ''
return css.replace('\r\n', ' ').replace('$color$', color) | /rer.subsites-1.4.1.tar.gz/rer.subsites-1.4.1/rer/subsites/browser/viewlets.py | 0.52975 | 0.244639 | viewlets.py | pypi |
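`get_custom_styles` is a small templating step: it flattens the stored CSS and substitutes the `$color$` placeholder with the subsite color. The same transformation in isolation (standalone sketch, not the viewlet method itself):

```python
def render_custom_styles(css, color):
    # Mirrors get_custom_styles: collapse stored CRLF newlines and
    # inject the subsite color wherever the placeholder appears.
    if not css:
        return ""
    return css.replace("\r\n", " ").replace("$color$", color)
```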
from plone.supermodel import model
from rer.ufficiostampa import _
from zope import schema
class IRerUfficiostampaSettings(model.Schema):
legislatures = schema.SourceText(
title=_("legislatures_label", default=u"List of legislatures",),
description=_(
"legislatures_help",
default=u"This is a list of all legislatures. The last one is the"
u" one used to fill fields in a new Comunicato.",
),
required=True,
)
subscription_channels = schema.List(
title=_(
u"subscription_channels_label", default=u"Subscription Channels"
),
description=_(
u"subscription_channels_description",
default=u"List of available subscription channels."
u"One per line."
u"These channels will be used for users subscriptions "
u"and for select to which channel send a Comunicato.",
),
required=True,
default=[],
missing_value=[],
value_type=schema.TextLine(),
)
token_secret = schema.TextLine(
title=_("token_secret_label", default=u"Token secret"),
description=_(
"token_secret_help",
default=u"Insert the secret key for token generation.",
),
required=True,
)
token_salt = schema.TextLine(
title=_("token_salt_label", default=u"Token salt"),
description=_(
"token_salt_help",
default=u"Insert the salt for token generation. This, in "
u"conjunction with the secret, will generate unique tokens for "
u"subscriptions management links.",
),
required=True,
)
frontend_url = schema.TextLine(
title=_("frontend_url_label", default=u"Frontend URL"),
description=_(
"frontend_url_help",
default=u"If the frontend site is published with a different URL "
u"than the backend, insert it here. All links in emails will be "
u"converted with that URL.",
),
required=False,
)
external_sender_url = schema.TextLine(
title=_("external_sender_url_label", default=u"External sender URL"),
description=_(
"external_sender_url_help",
default=u"If you want to send emails with an external tool "
u"(rer.newsletterdispatcher.flask), insert the url of the service "
u"here. If empty, all emails will be sent from Plone.",
),
required=False,
)
css_styles = schema.SourceText(
title=_("css_styles_label", default=u"Styles",),
description=_(
"css_styles_help",
default=u"Insert a list of CSS styles for received emails.",
),
required=True,
)
comunicato_number = schema.Int(
title=_("comunicato_number_label", default=u"Comunicato number",),
description=_(
"comunicato_number_help",
default=u"The number of last sent Comunicato. You don't have to "
"edit this. It's automatically updated when a Comunicato is published.", # noqa
),
required=True,
default=0,
)
comunicato_year = schema.Int(
title=_("comunicato_year_label", default=u"Comunicato year",),
description=_(
"comunicato_year_help",
default=u"You don't have to edit this. It's automatically updated"
u" on every new year.",
),
required=True,
default=2021,
)
class ILegislaturesRowSchema(model.Schema):
legislature = schema.SourceText(
title=_("legislature_label", default=u"Legislature",),
description=_(
"legislature_help", default=u"Insert the legislature name.",
),
required=True,
)
arguments = schema.List(
title=_("legislature_arguments_label", default=u"Arguments",),
description=_(
"legislature_arguments_help",
default=u"Insert a list of arguments related to this legislature."
u" One per line.",
),
required=True,
value_type=schema.TextLine(),
) | /rer.ufficiostampa-1.6.6.tar.gz/rer.ufficiostampa-1.6.6/src/rer/ufficiostampa/interfaces/settings.py | 0.677794 | 0.246261 | settings.py | pypi |
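The `token_secret`/`token_salt` pair is meant to be fed to a token serializer (typically `itsdangerous`) so each subscriber gets a unique, verifiable management link. The actual mechanism is not shown here; the idea can be sketched with the standard library — `make_token`/`verify_token` are illustrative names, not part of the package:

```python
import hashlib
import hmac


def make_token(email, secret, salt):
    # Derive a deterministic token from secret + salt + subscriber email.
    key = (secret + salt).encode("utf-8")
    return hmac.new(key, email.encode("utf-8"), hashlib.sha256).hexdigest()


def verify_token(email, token, secret, salt):
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(make_token(email, secret, salt), token)
```

Rotating either the secret or the salt invalidates every previously issued link.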
from plone import api
from plone.restapi.services import Service
from rer.ufficiostampa import _
from zope.component import getUtility
from zope.globalrequest import getRequest
from zope.i18n import translate
from zope.interface import implementer
from zope.publisher.interfaces import IPublishTraverse
from zope.schema.interfaces import IVocabularyFactory
def getVocabularyTermsForForm(vocab_name):
"""
Return the values of the given vocabulary
"""
portal = api.portal.get()
utility = getUtility(IVocabularyFactory, vocab_name)
values = []
vocab = utility(portal)
for entry in vocab:
if entry.title != u"select_label":
values.append({"value": entry.value, "label": entry.title})
if values:
values[0]["isFixed"] = True
return values
def getArguments():
legislatures = getVocabularyTermsForForm(
vocab_name="rer.ufficiostampa.vocabularies.legislatures",
)
res = {}
for legislature in legislatures:
key = legislature.get("value", "")
arguments = []
for brain in api.content.find(legislature=key):
for argument in brain.arguments:
argument_dict = {"value": argument, "label": argument}
if argument_dict not in arguments:
arguments.append(argument_dict)
res[key] = sorted(arguments, key=lambda x: x["label"])
return res
def getTypesValues():
res = [
{"value": "ComunicatoStampa", "label": "Comunicato Stampa"},
{"value": "InvitoStampa", "label": "Invito Stampa"},
]
return res
def getTypesDefault():
res = ["ComunicatoStampa"]
if not api.user.is_anonymous():
res.append("InvitoStampa")
return res
def getSearchFields():
request = getRequest()
legislatures = getVocabularyTermsForForm(
vocab_name="rer.ufficiostampa.vocabularies.legislatures",
)
return [
{
"id": "SearchableText",
"label": translate(
_("comunicati_search_text_label", default=u"Search text"),
context=request,
),
"help": "",
"type": "text",
},
{
"id": "portal_type",
"label": translate(
_("label_portal_type", default="Type"),
context=request,
),
"help": "",
"type": "checkbox",
"options": getTypesValues(),
"default": getTypesDefault(),
"hidden": api.user.is_anonymous(),
},
{
"id": "created",
"label": translate(
_("comunicati_search_created_label", default=u"Date"),
context=request,
),
"help": "",
"type": "date",
},
{
"id": "legislature",
"label": translate(
_("label_legislature", default="Legislature"),
context=request,
),
"help": "",
"type": "select",
"multivalued": True,
"options": legislatures,
"default": [legislatures[0]["value"]],
"slave": {
"id": "arguments",
"label": translate(
_("legislature_arguments_label", default="Arguments"),
context=request,
),
"help": "",
"type": "select",
"multivalued": True,
"slaveOptions": getArguments()
# "options": getVocabularyTermsForForm(
# context=portal,
# vocab_name="rer.ufficiostampa.vocabularies.all_arguments",
# ),
},
},
]
@implementer(IPublishTraverse)
class SearchParametersGet(Service):
def __init__(self, context, request):
super(SearchParametersGet, self).__init__(context, request)
def reply(self):
return getSearchFields() | /rer.ufficiostampa-1.6.6.tar.gz/rer.ufficiostampa-1.6.6/src/rer/ufficiostampa/restapi/services/search_parameters/get.py | 0.598782 | 0.260829 | get.py | pypi |
from repoze.catalog.catalog import Catalog
from repoze.catalog.indexes.field import CatalogFieldIndex
from souper.interfaces import ICatalogFactory
from souper.soup import NodeAttributeIndexer
from zope.interface import implementer
from repoze.catalog.indexes.text import CatalogTextIndex
from souper.soup import NodeTextIndexer
from repoze.catalog.indexes.keyword import CatalogKeywordIndex
import logging
logger = logging.getLogger(__name__)
@implementer(ICatalogFactory)
class SubscriptionsSoupCatalogFactory(object):
def __call__(self, context):
catalog = Catalog()
text_indexer = NodeTextIndexer(["name", "surname", "email"])
catalog[u"text"] = CatalogTextIndex(text_indexer)
email_indexer = NodeAttributeIndexer("email")
catalog[u"email"] = CatalogFieldIndex(email_indexer)
name_indexer = NodeAttributeIndexer("name")
catalog[u"name"] = CatalogFieldIndex(name_indexer)
surname_indexer = NodeAttributeIndexer("surname")
catalog[u"surname"] = CatalogFieldIndex(surname_indexer)
channels_indexer = NodeAttributeIndexer("channels")
catalog[u"channels"] = CatalogKeywordIndex(channels_indexer)
date_indexer = NodeAttributeIndexer("date")
catalog[u"date"] = CatalogFieldIndex(date_indexer)
newspaper_indexer = NodeAttributeIndexer("newspaper")
catalog[u"newspaper"] = CatalogFieldIndex(newspaper_indexer)
return catalog
@implementer(ICatalogFactory)
class SendHistorySoupCatalogFactory(object):
def __call__(self, context):
catalog = Catalog()
channels_indexer = NodeAttributeIndexer("channels")
catalog[u"channels"] = CatalogKeywordIndex(channels_indexer)
date_indexer = NodeAttributeIndexer("date")
catalog[u"date"] = CatalogFieldIndex(date_indexer)
title_indexer = NodeAttributeIndexer("title")
catalog[u"title"] = CatalogTextIndex(title_indexer)
type_indexer = NodeAttributeIndexer("type")
catalog[u"type"] = CatalogFieldIndex(type_indexer)
return catalog | /rer.ufficiostampa-1.6.6.tar.gz/rer.ufficiostampa-1.6.6/src/rer/ufficiostampa/subscriptions/catalog.py | 0.507812 | 0.161419 | catalog.py | pypi |
from zope.interface import implementer
from zope.schema.interfaces import IVocabularyFactory
from zope.schema.vocabulary import SimpleVocabulary, SimpleTerm
from rer.ufficiostampa.interfaces import IRerUfficiostampaSettings
from plone import api
from plone.api.exc import InvalidParameterError
from plone.app.vocabularies.terms import safe_simplevocabulary_from_values
from plone.app.vocabularies.catalog import KeywordsVocabulary
import json
@implementer(IVocabularyFactory)
class ArgumentsVocabularyFactory(object):
def __call__(self, context):
stored = getattr(context.aq_base, "legislature", "")
arguments = []
try:
legislatures = json.loads(
api.portal.get_registry_record(
"legislatures", interface=IRerUfficiostampaSettings
)
)
for data in legislatures:
if data.get("legislature", "") == stored:
arguments = data.get("arguments", [])
break
if not arguments and legislatures:
arguments = legislatures[-1].get("arguments", [])
except (KeyError, InvalidParameterError, TypeError):
arguments = []
for arg in getattr(context, "arguments", []) or []:
if arg and arg not in arguments:
arguments.append(arg)
terms = [
SimpleTerm(value=x, token=x.encode("utf-8"), title=x)
for x in sorted(arguments)
]
return SimpleVocabulary(terms)
@implementer(IVocabularyFactory)
class ChannelsVocabularyFactory(object):
def __call__(self, context):
try:
subscription_channels = api.portal.get_registry_record(
"subscription_channels",
interface=IRerUfficiostampaSettings,
)
except (KeyError, InvalidParameterError):
subscription_channels = []
return safe_simplevocabulary_from_values(subscription_channels)
@implementer(IVocabularyFactory)
class AttachmentsVocabularyFactory(object):
def __call__(self, context):
terms = []
for child in context.listFolderContents(
contentFilter={"portal_type": ["File", "Image"]}
):
terms.append(
SimpleTerm(
value=child.getId(),
token=child.getId(),
title=child.Title(),
)
)
return SimpleVocabulary(terms)
@implementer(IVocabularyFactory)
class LegislaturesVocabularyFactory(object):
def __call__(self, context):
"""
Return a list of legislature names.
The values come from the catalog index, sorted in reverse order
with respect to the registry (the most recent legislature first).
"""
try:
registry_val = json.loads(
api.portal.get_registry_record(
"legislatures", interface=IRerUfficiostampaSettings
)
)
registry_legislatures = [x.get("legislature", "") for x in registry_val]
registry_legislatures.reverse()
except (KeyError, InvalidParameterError, TypeError):
registry_legislatures = []
pc = api.portal.get_tool(name="portal_catalog")
catalog_legislatures = pc.uniqueValuesFor("legislature")
legislatures = [x for x in registry_legislatures if x in catalog_legislatures]
return safe_simplevocabulary_from_values(legislatures)
@implementer(IVocabularyFactory)
class AllArgumentsVocabularyFactory(KeywordsVocabulary):
keyword_index = "arguments"
AllArgumentsVocabulary = AllArgumentsVocabularyFactory()
ArgumentsVocabulary = ArgumentsVocabularyFactory()
ChannelsVocabulary = ChannelsVocabularyFactory()
AttachmentsVocabulary = AttachmentsVocabularyFactory()
LegislaturesVocabulary = LegislaturesVocabularyFactory() | /rer.ufficiostampa-1.6.6.tar.gz/rer.ufficiostampa-1.6.6/src/rer/ufficiostampa/vocabularies/vocabularies.py | 0.455199 | 0.214188 | vocabularies.py | pypi |
from __future__ import annotations
from collections import namedtuple
from math import cos, sin, tau
import numpy as np
from rerun.log.rects import RectFormat
from rerun_demo.turbo import turbo_colormap_data
ColorGrid = namedtuple("ColorGrid", ["positions", "colors"])
def build_color_grid(x_count=10, y_count=10, z_count=10, twist=0):
"""
Create a cube of points with colors.
The total point cloud will have x_count * y_count * z_count points.
Parameters
----------
x_count, y_count, z_count:
Number of points in each dimension.
twist:
Angle to twist from bottom to top of the cube
"""
grid = np.mgrid[
slice(-10, 10, x_count * 1j),
slice(-10, 10, y_count * 1j),
slice(-10, 10, z_count * 1j),
]
angle = np.linspace(-float(twist) / 2, float(twist) / 2, z_count)
for z in range(z_count):
xv, yv, zv = grid[:, :, :, z]
rot_xv = xv * cos(angle[z]) - yv * sin(angle[z])
rot_yv = xv * sin(angle[z]) + yv * cos(angle[z])
grid[:, :, :, z] = [rot_xv, rot_yv, zv]
positions = np.vstack([xyz.ravel() for xyz in grid])
colors = np.vstack(
[
xyz.ravel()
for xyz in np.mgrid[
slice(0, 255, x_count * 1j),
slice(0, 255, y_count * 1j),
slice(0, 255, z_count * 1j),
]
]
)
return ColorGrid(positions.T, colors.T.astype(np.uint8))
color_grid = build_color_grid()
"""Default color grid"""
RectPyramid = namedtuple("RectPyramid", ["rects", "format", "colors"])
def build_rect_pyramid(count=20, width=100, height=100):
"""
Create a stack of N colored rectangles.
Parameters
----------
count:
Number of rectangles to create.
width:
Width of the base of the pyramid.
height:
Height of the pyramid.
"""
x = np.zeros(count)
y = np.linspace(0, height, count)
widths = np.linspace(float(width) / count, width, count)
heights = 0.8 * np.ones(count) * height / count
rects = np.array(list(zip(x, y, widths, heights)))
colors = turbo_colormap_data[np.linspace(0, len(turbo_colormap_data) - 1, count, dtype=int)]
return RectPyramid(rects, RectFormat.XCYCWH, colors)
rect_pyramid = build_rect_pyramid()
"""Default rect pyramid data"""
ColorSpiral = namedtuple("ColorSpiral", ["positions", "colors"])
def build_color_spiral(num_points=100, radius=2, angular_step=0.02, angular_offset=0, z_step=0.1):
"""
Create a spiral of points with colors along the Z axis.
Parameters
----------
num_points:
Total number of points.
radius:
The radius of the spiral.
angular_step:
The factor applied between each step along the trigonometric circle.
angular_offset:
Offsets the starting position on the trigonometric circle.
z_step:
The factor applied between each step along the Z axis.
"""
positions = np.array(
[
[
sin(i * tau * angular_step + angular_offset) * radius,
cos(i * tau * angular_step + angular_offset) * radius,
i * z_step,
]
for i in range(num_points)
]
)
colors = turbo_colormap_data[np.linspace(0, len(turbo_colormap_data) - 1, num_points, dtype=int)]
return ColorSpiral(positions, colors)
color_spiral = build_color_spiral()
"""Default color spiral""" | /rerun_sdk-0.8.1-cp38-abi3-win_amd64.whl/rerun_sdk/rerun_demo/data.py | 0.948537 | 0.610918 | data.py | pypi |
from __future__ import annotations
import numpy as np
import numpy.typing as npt
def u8_array_to_rgba(arr: npt.NDArray[np.uint8]) -> npt.NDArray[np.uint32]:
"""
Convert an array with inner dimension [R,G,B,A] into packed uint32 values.
Parameters
----------
arr :
Nx3 or Nx4 `[[r,g,b,a], ... ]` of uint8 values
Returns
-------
npt.NDArray[np.uint32]
Array of uint32 value as 0xRRGGBBAA.
"""
r = arr[:, 0]
g = arr[:, 1]
b = arr[:, 2]
a = arr[:, 3] if arr.shape[1] == 4 else np.repeat(0xFF, len(arr))
# Reverse the byte order because this is how we encode into uint32
arr = np.vstack([a, b, g, r]).T
# Make contiguous and then reinterpret
arr = np.ascontiguousarray(arr, dtype=np.uint8)
arr = arr.view(np.uint32)
arr = np.squeeze(arr, axis=1)
return arr # type: ignore[return-value]
def linear_to_gamma_u8_value(linear: npt.NDArray[np.float32 | np.float64]) -> npt.NDArray[np.uint8]:
"""
Transform color values from linear [0.0, 1.0] to gamma encoded [0, 255].
Linear colors are expected to have dtype [numpy.floating][]
Intended to implement the following per color value:
```Rust
if l <= 0.0 {
0
} else if l <= 0.0031308 {
round(3294.6 * l)
} else if l <= 1.0 {
round(269.025 * l.powf(1.0 / 2.4) - 14.025)
} else {
255
}
```
Parameters
----------
linear:
The linear color values to transform.
Returns
-------
np.ndarray[np.uint8]
The gamma encoded color values.
"""
gamma = linear.clip(min=0, max=1)
below = gamma <= 0.0031308
gamma[below] *= 3294.6
above = np.logical_not(below)
gamma[above] = gamma[above] ** (1.0 / 2.4) * 269.025 - 14.025
gamma.round(decimals=0, out=gamma)
return gamma.astype(np.uint8)
def linear_to_gamma_u8_pixel(linear: npt.NDArray[np.float32 | np.float64]) -> npt.NDArray[np.uint8]:
"""
Transform color pixels from linear [0, 1] to gamma encoded [0, 255].
Linear colors are expected to have dtype np.float32 or np.float64.
The last dimension of the colors array `linear` is expected to represent a single pixel color.
- 3 colors means RGB
- 4 colors means RGBA
Parameters
----------
linear:
The linear color pixels to transform.
Returns
-------
np.ndarray[np.uint8]
The gamma encoded color pixels.
"""
num_channels = linear.shape[-1]
assert num_channels in (3, 4)
if num_channels == 3:
return linear_to_gamma_u8_value(linear)
gamma_u8 = np.empty(shape=linear.shape, dtype=np.uint8)
gamma_u8[..., :-1] = linear_to_gamma_u8_value(linear[..., :-1])
gamma_u8[..., -1] = np.around(255 * linear[..., -1])
return gamma_u8 | /rerun_sdk-0.8.1-cp38-abi3-win_amd64.whl/rerun_sdk/rerun/color_conversion.py | 0.957764 | 0.909103 | color_conversion.py | pypi |
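The Rust pseudocode in the `linear_to_gamma_u8_value` docstring translates directly into a scalar Python function, which makes a handy cross-check for the vectorized numpy path (the scalar helper is ours, for illustration only):

```python
def linear_to_gamma_u8(l):
    # Scalar transcription of the per-value rule from the docstring.
    if l <= 0.0:
        return 0
    if l <= 0.0031308:
        return round(3294.6 * l)  # linear segment near black
    if l <= 1.0:
        return round(269.025 * l ** (1.0 / 2.4) - 14.025)
    return 255  # clamp anything above 1.0
```

For instance, mid-gray `0.5` encodes to `188`, matching the sRGB transfer curve.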
from __future__ import annotations
import rerun_bindings as bindings # type: ignore[attr-defined]
from rerun.recording_stream import RecordingStream
# --- Time ---
def set_time_sequence(timeline: str, sequence: int | None, recording: RecordingStream | None = None) -> None:
"""
Set the current time for this thread as an integer sequence.
Used for all subsequent logging on the same thread,
until the next call to `set_time_sequence`.
For example: `set_time_sequence("frame_nr", frame_nr)`.
You can remove a timeline again using `set_time_sequence("frame_nr", None)`.
There is no requirement of monotonicity. You can move the time backwards if you like.
Parameters
----------
timeline : str
The name of the timeline to set the time for.
sequence : int
The current time on the timeline in integer units.
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
bindings.set_time_sequence(timeline, sequence, recording=recording)
def set_time_seconds(timeline: str, seconds: float | None, recording: RecordingStream | None = None) -> None:
"""
Set the current time for this thread in seconds.
Used for all subsequent logging on the same thread,
until the next call to [`rerun.set_time_seconds`][] or [`rerun.set_time_nanos`][].
For example: `set_time_seconds("capture_time", seconds_since_unix_epoch)`.
You can remove a timeline again using `set_time_seconds("capture_time", None)`.
Very large values will automatically be interpreted as seconds since unix epoch (1970-01-01).
Small values (less than a few years) will be interpreted as relative to
some unknown point in time, and will be shown as e.g. `+3.132s`.
The bindings have a built-in timeline called `log_time`, which is logged as seconds
since the unix epoch.
There is no requirement of monotonicity. You can move the time backwards if you like.
Parameters
----------
timeline : str
The name of the timeline to set the time for.
seconds : float
The current time on the timeline in seconds.
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
bindings.set_time_seconds(timeline, seconds, recording=recording)
def set_time_nanos(timeline: str, nanos: int | None, recording: RecordingStream | None = None) -> None:
"""
Set the current time for this thread.
Used for all subsequent logging on the same thread,
until the next call to [`rerun.set_time_nanos`][] or [`rerun.set_time_seconds`][].
For example: `set_time_nanos("capture_time", nanos_since_unix_epoch)`.
You can remove a timeline again using `set_time_nanos("capture_time", None)`.
Very large values will automatically be interpreted as nanoseconds since unix epoch (1970-01-01).
Small values (less than a few years) will be interpreted as relative to
some unknown point in time, and will be shown as e.g. `+3.132s`.
The bindings have a built-in timeline called `log_time`, which is logged as nanos since
the unix epoch.
There is no requirement of monotonicity. You can move the time backwards if you like.
Parameters
----------
timeline : str
The name of the timeline to set the time for.
nanos : int
The current time on the timeline in nanoseconds.
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
bindings.set_time_nanos(timeline, nanos, recording=recording)
def reset_time(recording: RecordingStream | None = None) -> None:
"""
Clear all timeline information on this thread.
This is the same as calling `set_time_*` with `None` for all of the active timelines.
Used for all subsequent logging on the same thread,
until the next call to [`rerun.set_time_nanos`][] or [`rerun.set_time_seconds`][].
Parameters
----------
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
bindings.reset_time(recording=recording)

# --- end of file: rerun_sdk/rerun/time.py ---
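The functions above store time state per thread inside the native bindings. A self-contained sketch of those semantics using `threading.local` (hypothetical stand-ins, not the real `rerun_bindings` implementation):

```python
from __future__ import annotations

import threading

_state = threading.local()

def set_time_sequence(timeline: str, sequence: int | None) -> None:
    """Hypothetical stand-in for bindings.set_time_sequence: per-thread timeline state."""
    times = getattr(_state, "times", None)
    if times is None:
        times = _state.times = {}
    if sequence is None:
        times.pop(timeline, None)  # passing None removes the timeline
    else:
        times[timeline] = sequence  # no monotonicity requirement

def current_times() -> dict[str, int]:
    return dict(getattr(_state, "times", {}))

set_time_sequence("frame_nr", 42)
set_time_sequence("frame_nr", 7)  # moving time backwards is allowed
```

Each thread sees only its own timelines, which is why `reset_time` only clears the calling thread.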
from __future__ import annotations
from argparse import ArgumentParser, Namespace
import rerun as rr
from rerun.recording_stream import RecordingStream
def script_add_args(parser: ArgumentParser) -> None:
"""
Add common Rerun script arguments to `parser`.
Parameters
----------
parser : ArgumentParser
The parser to add arguments to.
"""
parser.add_argument("--headless", action="store_true", help="Don't show GUI")
parser.add_argument(
"--connect",
dest="connect",
action="store_true",
help="Connect to an external viewer",
)
parser.add_argument(
"--serve",
dest="serve",
action="store_true",
help="Serve a web viewer (WARNING: experimental feature)",
)
parser.add_argument("--addr", type=str, default=None, help="Connect to this ip:port")
parser.add_argument("--save", type=str, default=None, help="Save data to a .rrd file at this path")
def script_setup(
args: Namespace,
application_id: str,
) -> RecordingStream:
"""
Run common Rerun script setup actions. Connect to the viewer if necessary.
Parameters
----------
args : Namespace
The parsed arguments from `parser.parse_args()`.
application_id : str
The application ID to use for the viewer.
"""
rr.init(
application_id=application_id,
default_enabled=True,
strict=True,
)
rec: RecordingStream = rr.get_global_data_recording() # type: ignore[assignment]
# NOTE: mypy thinks these methods don't exist because they're monkey-patched.
if args.serve:
rec.serve() # type: ignore[attr-defined]
elif args.connect:
# Send logging data to separate `rerun` process.
# You can omit the argument to connect to the default address,
# which is `127.0.0.1:9876`.
rec.connect(args.addr) # type: ignore[attr-defined]
elif args.save is not None:
rec.save(args.save) # type: ignore[attr-defined]
elif not args.headless:
rec.spawn() # type: ignore[attr-defined]
return rec
def script_teardown(args: Namespace) -> None:
"""
Run common post-actions. Sleep if serving the web viewer.
Parameters
----------
args : Namespace
The parsed arguments from `parser.parse_args()`.
"""
if args.serve:
import time
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
print("Ctrl-C received. Exiting.")

# --- end of file: rerun_sdk/rerun/script_helpers.py ---
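The flag set added by `script_add_args` can be reproduced standalone. A sketch of a consumer parsing those flags (the dispatch priority mirrors `script_setup`: serve, then connect, then save, then spawn):

```python
from argparse import ArgumentParser

parser = ArgumentParser(description="demo consumer of the common Rerun flags")
parser.add_argument("--headless", action="store_true", help="Don't show GUI")
parser.add_argument("--connect", action="store_true", help="Connect to an external viewer")
parser.add_argument("--serve", action="store_true", help="Serve a web viewer")
parser.add_argument("--addr", type=str, default=None, help="Connect to this ip:port")
parser.add_argument("--save", type=str, default=None, help="Save data to a .rrd file at this path")

# Parse a sample argv instead of sys.argv so the sketch is deterministic.
args = parser.parse_args(["--connect", "--addr", "127.0.0.1:9876"])
```

Because all the booleans use `action="store_true"`, omitted flags default to `False`, so `script_setup`'s `elif` chain falls through to `spawn()` when nothing is given.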
from __future__ import annotations
import base64
import logging
import random
import string
from typing import Any
from rerun import bindings
DEFAULT_WIDTH = 950
DEFAULT_HEIGHT = 712
DEFAULT_TIMEOUT = 2000
class MemoryRecording:
def __init__(self, storage: bindings.PyMemorySinkStorage) -> None:
self.storage = storage
def reset_data(self) -> None:
"""Reset the data in the MemoryRecording."""
self.storage.reset_data()
def reset_blueprint(self, add_to_app_default_blueprint: bool = False) -> None:
"""Reset the blueprint in the MemoryRecording."""
self.storage.reset_blueprint(add_to_app_default_blueprint) # type: ignore[no-untyped-def]
def as_html(
self,
width: int = DEFAULT_WIDTH,
height: int = DEFAULT_HEIGHT,
app_url: str | None = None,
timeout_ms: int = DEFAULT_TIMEOUT,
other: MemoryRecording | None = None,
) -> str:
"""
Generate an HTML snippet that displays the recording in an IFrame.
For use in contexts such as Jupyter notebooks.
⚠️ This will do a blocking flush of the current sink before returning!
Parameters
----------
width : int
The width of the viewer in pixels.
height : int
The height of the viewer in pixels.
app_url : str
Alternative HTTP URL for the Rerun web viewer. Defaults to https://app.rerun.io,
or to localhost if [rerun.start_web_viewer_server][] has been called.
timeout_ms : int
The number of milliseconds to wait for the Rerun web viewer to load.
other: MemoryRecording
An optional MemoryRecording to merge with this one.
"""
if app_url is None:
app_url = bindings.get_app_url()
# Use a random presentation ID to avoid collisions when multiple recordings are shown in the same notebook.
presentation_id = "".join(random.choice(string.ascii_letters) for i in range(6))
if other:
other = other.storage
base64_data = base64.b64encode(self.storage.concat_as_bytes(other)).decode("utf-8")
html_template = f"""
<div id="{presentation_id}_rrd" style="display: none;" data-rrd="{base64_data}"></div>
<div id="{presentation_id}_error" style="display: none;"><p>Timed out waiting for {app_url} to load.</p>
<p>Consider using <code>rr.start_web_viewer_server()</code></p></div>
<script>
{presentation_id}_timeout = setTimeout(() => {{
document.getElementById("{presentation_id}_error").style.display = 'block';
}}, {timeout_ms});
window.addEventListener("message", function(rrd) {{
return async function {presentation_id}_onIframeReady(event) {{
var iframe = document.getElementById("{presentation_id}_iframe");
if (event.source === iframe.contentWindow) {{
clearTimeout({presentation_id}_timeout);
document.getElementById("{presentation_id}_error").style.display = 'none';
iframe.style.display = 'inline';
window.removeEventListener("message", {presentation_id}_onIframeReady);
iframe.contentWindow.postMessage((await rrd), "*");
}}
}}
}}(async function() {{
await new Promise(r => setTimeout(r, 0));
var div = document.getElementById("{presentation_id}_rrd");
var base64Data = div.dataset.rrd;
var intermediate = atob(base64Data);
var buff = new Uint8Array(intermediate.length);
for (var i = 0; i < intermediate.length; i++) {{
buff[i] = intermediate.charCodeAt(i);
}}
return buff;
}}()));
</script>
<iframe id="{presentation_id}_iframe" width="{width}" height="{height}"
src="{app_url}?url=web_event://&persist=0"
frameborder="0" style="display: none;" allowfullscreen=""></iframe>
"""
return html_template
def show(
self,
other: MemoryRecording | None = None,
width: int = DEFAULT_WIDTH,
height: int = DEFAULT_HEIGHT,
app_url: str | None = None,
timeout_ms: int = DEFAULT_TIMEOUT,
) -> Any:
"""
Output the Rerun viewer using IPython [IPython.core.display.HTML][].
Parameters
----------
width : int
The width of the viewer in pixels.
height : int
The height of the viewer in pixels.
app_url : str
Alternative HTTP URL for the Rerun web viewer. Defaults to https://app.rerun.io,
or to localhost if [rerun.start_web_viewer_server][] has been called.
timeout_ms : int
The number of milliseconds to wait for the Rerun web viewer to load.
other: MemoryRecording
An optional MemoryRecording to merge with this one.
"""
html = self.as_html(width=width, height=height, app_url=app_url, timeout_ms=timeout_ms, other=other)
try:
from IPython.core.display import HTML
return HTML(html) # type: ignore[no-untyped-call]
except ImportError:
logging.warning("Could not import IPython.core.display. Returning raw HTML string instead.")
return html
def _repr_html_(self) -> Any:
return self.as_html()

# --- end of file: rerun_sdk/rerun/recording.py ---
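`as_html` smuggles the recording into the page by base64-encoding the RRD bytes into a `data-rrd` attribute, which the inlined script later decodes back into a `Uint8Array`. The Python half of that round trip, sketched with a placeholder payload:

```python
import base64

rrd_bytes = b"\x00RRD-demo-payload"  # placeholder for storage.concat_as_bytes(other)
b64 = base64.b64encode(rrd_bytes).decode("utf-8")
html = f'<div id="demo_rrd" style="display: none;" data-rrd="{b64}"></div>'

# The JS side reverses this: atob(div.dataset.rrd), then charCodeAt into a Uint8Array.
decoded = base64.b64decode(b64)
```

Base64 keeps arbitrary binary data safe inside an HTML attribute at the cost of ~33% size overhead.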
from __future__ import annotations
from rerun import bindings
# ---
class RecordingStream:
"""
A RecordingStream is used to send data to Rerun.
You can instantiate a RecordingStream by calling either [`rerun.init`][] (to create a global
recording) or [`rerun.new_recording`][] (for more advanced use cases).
Multithreading
--------------
A RecordingStream can safely be copied and sent to other threads.
You can also set a recording as the global active one for all threads ([`rerun.set_global_data_recording`][])
or just for the current thread ([`rerun.set_thread_local_data_recording`][]).
Similarly, the `with` keyword can be used to temporarily set the active recording for the
current thread, e.g.:
```
with rec:
rr.log_points(...)
```
See also: [`rerun.get_data_recording`][], [`rerun.get_global_data_recording`][],
[`rerun.get_thread_local_data_recording`][].
Available methods
-----------------
Every function in the Rerun SDK that takes an optional RecordingStream as a parameter can also
be called as a method on RecordingStream itself.
This includes, but isn't limited to:
- Metadata-related functions:
[`rerun.is_enabled`][], [`rerun.get_recording_id`][], ...
- Sink-related functions:
[`rerun.connect`][], [`rerun.spawn`][], ...
- Time-related functions:
[`rerun.set_time_seconds`][], [`rerun.set_time_sequence`][], ...
- Log-related functions:
[`rerun.log_points`][], [`rerun.log_mesh_file`][], ...
For an exhaustive list, see `help(rerun.RecordingStream)`.
Micro-batching
--------------
Micro-batching using both space and time triggers (whichever comes first) is done automatically
in a dedicated background thread.
You can configure the frequency of the batches using the following environment variables:
- `RERUN_FLUSH_TICK_SECS`:
Flush frequency in seconds (default: `0.05` (50ms)).
- `RERUN_FLUSH_NUM_BYTES`:
Flush threshold in bytes (default: `1048576` (1MiB)).
- `RERUN_FLUSH_NUM_ROWS`:
Flush threshold in number of rows (default: `18446744073709551615` (u64::MAX)).
"""
def __init__(self, inner: bindings.PyRecordingStream) -> None:
self.inner = inner
self._prev: RecordingStream | None = None
def __enter__(self): # type: ignore[no-untyped-def]
self._prev = set_thread_local_data_recording(self)
return self
def __exit__(self, type, value, traceback): # type: ignore[no-untyped-def]
self._prev = set_thread_local_data_recording(self._prev) # type: ignore[arg-type]
# NOTE: The type is a string because we cannot reference `RecordingStream` yet at this point.
def to_native(self: RecordingStream | None) -> bindings.PyRecordingStream | None:
return self.inner if self is not None else None
def __del__(self): # type: ignore[no-untyped-def]
recording = RecordingStream.to_native(self)
bindings.flush(blocking=False, recording=recording)
def _patch(funcs): # type: ignore[no-untyped-def]
"""Adds the given functions as methods to the `RecordingStream` class; injects `recording=self` in passing."""
import functools
import os
from typing import Any
# If this is a special RERUN_APP_ONLY context (launched via .spawn), we
# can bypass everything else, which keeps us from monkey patching methods
# that never get used.
if os.environ.get("RERUN_APP_ONLY"):
return
# NOTE: Python's closures capture by reference... make sure to copy `fn` early.
def eager_wrap(fn): # type: ignore[no-untyped-def]
@functools.wraps(fn)
def wrapper(self, *args: Any, **kwargs: Any) -> Any: # type: ignore[no-untyped-def]
kwargs["recording"] = self
return fn(*args, **kwargs)
return wrapper
for fn in funcs:
wrapper = eager_wrap(fn) # type: ignore[no-untyped-call]
setattr(RecordingStream, fn.__name__, wrapper)
# ---
def is_enabled(
recording: RecordingStream | None = None,
) -> bool:
"""
Is this Rerun recording enabled.
If false, all calls to the recording are ignored.
The default can be set in [`rerun.init`][], but is otherwise `True`.
This can be controlled with the environment variable `RERUN` (e.g. `RERUN=on` or `RERUN=off`).
"""
return bindings.is_enabled(recording=RecordingStream.to_native(recording)) # type: ignore[no-any-return]
def get_application_id(
recording: RecordingStream | None = None,
) -> str | None:
"""
Get the application ID that this recording is associated with, if any.
Parameters
----------
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
Returns
-------
str
The application ID that this recording is associated with.
"""
app_id = bindings.get_application_id(recording=RecordingStream.to_native(recording))
return str(app_id) if app_id is not None else None
def get_recording_id(
recording: RecordingStream | None = None,
) -> str | None:
"""
Get the recording ID that this recording is logging to, as a UUIDv4, if any.
The default recording_id is based on `multiprocessing.current_process().authkey`
which means that all processes spawned with `multiprocessing`
will have the same default recording_id.
If you are not using `multiprocessing` and still want several different Python
processes to log to the same Rerun instance (and be part of the same recording),
you will need to manually assign them all the same recording_id.
Any random UUIDv4 will work, or copy the recording id for the parent process.
Parameters
----------
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
Returns
-------
str
The recording ID that this recording is logging to.
"""
rec_id = bindings.get_recording_id(recording=RecordingStream.to_native(recording))
return str(rec_id) if rec_id is not None else None
_patch([is_enabled, get_application_id, get_recording_id]) # type: ignore[no-untyped-call]
# ---
def get_data_recording(
recording: RecordingStream | None = None,
) -> RecordingStream | None:
"""
Returns the most appropriate recording to log data to, in the current context, if any.
* If `recording` is specified, returns that one;
* Otherwise, falls back to the currently active thread-local recording, if there is one;
* Otherwise, falls back to the currently active global recording, if there is one;
* Otherwise, returns None.
Parameters
----------
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
Returns
-------
Optional[RecordingStream]
The most appropriate recording to log data to, in the current context, if any.
"""
result = bindings.get_data_recording(recording=recording)
return RecordingStream(result) if result is not None else None
def get_global_data_recording() -> RecordingStream | None:
"""
Returns the currently active global recording, if any.
Returns
-------
Optional[RecordingStream]
The currently active global recording, if any.
"""
result = bindings.get_global_data_recording()
return RecordingStream(result) if result is not None else None
def set_global_data_recording(recording: RecordingStream) -> RecordingStream | None:
"""
Replaces the currently active global recording with the specified one.
Parameters
----------
recording:
The newly active global recording.
"""
result = bindings.set_global_data_recording(RecordingStream.to_native(recording))
return RecordingStream(result) if result is not None else None
def get_thread_local_data_recording() -> RecordingStream | None:
"""
Returns the currently active thread-local recording, if any.
Returns
-------
Optional[RecordingStream]
The currently active thread-local recording, if any.
"""
result = bindings.get_thread_local_data_recording()
return RecordingStream(result) if result is not None else None
def set_thread_local_data_recording(recording: RecordingStream) -> RecordingStream | None:
"""
Replaces the currently active thread-local recording with the specified one.
Parameters
----------
recording:
The newly active thread-local recording.
"""
result = bindings.set_thread_local_data_recording(recording=RecordingStream.to_native(recording))
return RecordingStream(result) if result is not None else None

# --- end of file: rerun_sdk/rerun/recording_stream.py ---
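`RecordingStream._patch` (defined above) turns module-level functions into methods by injecting `recording=self` via a closure. A self-contained sketch of the same trick, with a hypothetical `Stream` class and `is_enabled` function standing in for the real SDK:

```python
import functools

class Stream:
    def __init__(self, name: str) -> None:
        self.name = name

def is_enabled(*, recording: "Stream | None" = None) -> bool:
    # Hypothetical module-level function taking an optional stream.
    return recording is not None and recording.name != "off"

def _patch(funcs) -> None:
    # Wrap each fn eagerly: Python closures capture by reference, so binding
    # `fn` as eager_wrap's argument avoids every wrapper seeing the last fn.
    def eager_wrap(fn):
        @functools.wraps(fn)
        def wrapper(self, *args, **kwargs):
            kwargs["recording"] = self  # inject the stream the method was called on
            return fn(*args, **kwargs)
        return wrapper
    for fn in funcs:
        setattr(Stream, fn.__name__, eager_wrap(fn))

_patch([is_enabled])
```

After patching, `stream.is_enabled()` and `is_enabled(recording=stream)` are equivalent, which is exactly how the SDK exposes its free functions as `RecordingStream` methods.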
from __future__ import annotations
import logging
import rerun_bindings as bindings # type: ignore[attr-defined]
from rerun.recording import MemoryRecording
from rerun.recording_stream import RecordingStream
# --- Sinks ---
def connect(
addr: str | None = None, flush_timeout_sec: float | None = 2.0, recording: RecordingStream | None = None
) -> None:
"""
Connect to a remote Rerun Viewer on the given ip:port.
Requires that you first start a Rerun Viewer by typing 'rerun' in a terminal.
This function returns immediately.
Parameters
----------
addr
The ip:port to connect to
flush_timeout_sec: float
The minimum time the SDK will wait during a flush before potentially
dropping data if progress is not being made. Passing `None` indicates no timeout,
and can cause a call to `flush` to block indefinitely.
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
bindings.connect(addr=addr, flush_timeout_sec=flush_timeout_sec, recording=recording)
_connect = connect # we need this because Python scoping is horrible
def save(path: str, recording: RecordingStream | None = None) -> None:
"""
Stream all log-data to a file.
Parameters
----------
path : str
The path to save the data to.
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
if not bindings.is_enabled():
logging.warning("Rerun is disabled - save() call ignored. You must call rerun.init before saving a recording.")
return
recording = RecordingStream.to_native(recording)
bindings.save(path=path, recording=recording)
def disconnect(recording: RecordingStream | None = None) -> None:
"""
Closes all TCP connections, servers, and files.
Closes all TCP connections, servers, and files that have been opened with
[`rerun.connect`][], [`rerun.serve`][], [`rerun.save`][] or [`rerun.spawn`][].
Parameters
----------
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
bindings.disconnect(recording=recording)
def memory_recording(recording: RecordingStream | None = None) -> MemoryRecording:
"""
Streams all log-data to a memory buffer.
This can be used to output the RRD in alternative formats such as HTML.
See: [rerun.MemoryRecording.as_html][].
Parameters
----------
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
Returns
-------
MemoryRecording
A memory recording object that can be used to read the data.
"""
recording = RecordingStream.to_native(recording)
return MemoryRecording(bindings.memory_recording(recording=recording))
def serve(
open_browser: bool = True,
web_port: int | None = None,
ws_port: int | None = None,
recording: RecordingStream | None = None,
) -> None:
"""
Serve log-data over WebSockets and serve a Rerun web viewer over HTTP.
You can also connect to this server with the native viewer using `rerun localhost:9090`.
WARNING: This is an experimental feature.
This function returns immediately.
Parameters
----------
open_browser
Open the default browser to the viewer.
web_port:
The port to serve the web viewer on (defaults to 9090).
ws_port:
The port to serve the WebSocket server on (defaults to 9877)
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
bindings.serve(open_browser, web_port, ws_port, recording=recording)
def spawn(port: int = 9876, connect: bool = True, recording: RecordingStream | None = None) -> None:
"""
Spawn a Rerun Viewer, listening on the given port.
This is often the easiest and best way to use Rerun.
Just call this once at the start of your program.
You can also call [rerun.init][] with a `spawn=True` argument.
Parameters
----------
port : int
The port to listen on.
connect
Also connect to the viewer and stream logging data to it.
recording:
Specifies the [`rerun.RecordingStream`][] to use if `connect = True`.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
import os
import subprocess
import sys
from time import sleep
# Let the spawned rerun process know it's just an app
new_env = os.environ.copy()
new_env["RERUN_APP_ONLY"] = "true"
# sys.executable: the absolute path of the executable binary for the Python interpreter
python_executable = sys.executable
if python_executable is None:
python_executable = "python3"
# start_new_session=True ensures the spawned process does NOT die when
# we hit ctrl-c in the terminal running the parent Python process.
subprocess.Popen([python_executable, "-m", "rerun", "--port", str(port)], env=new_env, start_new_session=True)
# TODO(emilk): figure out a way to postpone connecting until the rerun viewer is listening.
# For example, wait until it prints "Hosting a SDK server over TCP at …"
sleep(0.5) # almost as good as waiting the correct amount of time
if connect:
_connect(f"127.0.0.1:{port}", recording=recording)

# --- end of file: rerun_sdk/rerun/sinks.py ---
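`spawn` launches the viewer as a detached child process with `RERUN_APP_ONLY` set, so the child can skip the monkey-patching work. The same launch pattern, sketched with a trivial stand-in child instead of `python -m rerun`:

```python
import os
import subprocess
import sys

new_env = os.environ.copy()
new_env["RERUN_APP_ONLY"] = "true"  # read back by _patch() in the child

# Hypothetical stand-in for [sys.executable, "-m", "rerun", "--port", str(port)]:
# a child that just checks it inherited the flag.
child = [
    sys.executable,
    "-c",
    "import os, sys; sys.exit(0 if os.environ.get('RERUN_APP_ONLY') == 'true' else 1)",
]

# start_new_session=True detaches the child from the parent's process group,
# so Ctrl-C in the parent terminal does not kill the spawned viewer.
proc = subprocess.Popen(child, env=new_env, start_new_session=True)
returncode = proc.wait()
```

Copying `os.environ` rather than mutating it keeps the flag scoped to the child only.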
from __future__ import annotations
from typing import Any, Sequence
import numpy as np
import numpy.typing as npt
from rerun import bindings
from rerun.components.annotation import ClassIdArray
from rerun.components.box import Box3DArray
from rerun.components.color import ColorRGBAArray
from rerun.components.instance import InstanceArray
from rerun.components.label import LabelArray
from rerun.components.quaternion import QuaternionArray
from rerun.components.radius import RadiusArray
from rerun.components.vec import Vec3DArray
from rerun.log import (
Color,
Colors,
OptionalClassIds,
_normalize_colors,
_normalize_ids,
_normalize_labels,
_normalize_radii,
)
from rerun.log.extension_components import _add_extension_components
from rerun.log.log_decorator import log_decorator
from rerun.recording_stream import RecordingStream
__all__ = [
"log_obb",
"log_obbs",
]
@log_decorator
def log_obb(
entity_path: str,
*,
half_size: npt.ArrayLike | None,
position: npt.ArrayLike | None = None,
rotation_q: npt.ArrayLike | None = None,
color: Color | None = None,
stroke_width: float | None = None,
label: str | None = None,
class_id: int | None = None,
ext: dict[str, Any] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log a 3D Oriented Bounding Box, or OBB.
Example:
--------
```
rr.log_obb("my_obb", half_size=[1.0, 2.0, 3.0], position=[0, 0, 0], rotation_q=[0, 0, 0, 1])
```
Parameters
----------
entity_path:
The path to the oriented bounding box in the space hierarchy.
half_size:
Array with [x, y, z] half dimensions of the OBB.
position:
Optional array with [x, y, z] position of the OBB in world space.
rotation_q:
Optional array with quaternion coordinates [x, y, z, w] for the rotation from model to world space.
color:
Optional RGB or RGBA in sRGB gamma-space as either 0-1 floats or 0-255 integers, with separate alpha.
stroke_width:
Optional width of the line edges.
label:
Optional text label placed at `position`.
class_id:
Optional class id for the OBB. The class id provides colors and labels if not specified explicitly.
ext:
Optional dictionary of extension components. See [rerun.log_extension_components][]
timeless:
If true, the bounding box will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
instanced: dict[str, Any] = {}
splats: dict[str, Any] = {}
if half_size is not None:
half_size = np.require(half_size, dtype="float32")
if half_size.shape[0] == 3:
instanced["rerun.box3d"] = Box3DArray.from_numpy(half_size.reshape(1, 3))
else:
raise TypeError("half_size should be 1x3")
if position is not None:
position = np.require(position, dtype="float32")
if position.shape[0] == 3:
instanced["rerun.vec3d"] = Vec3DArray.from_numpy(position.reshape(1, 3))
else:
raise TypeError("position should be 1x3")
if rotation_q is not None:
rotation = np.require(rotation_q, dtype="float32")
if rotation.shape[0] == 4:
instanced["rerun.quaternion"] = QuaternionArray.from_numpy(rotation.reshape(1, 4))
else:
raise TypeError("rotation should be 1x4")
if color is not None:
colors = _normalize_colors(color)
instanced["rerun.colorrgba"] = ColorRGBAArray.from_numpy(colors)
# We store the stroke_width in radius
if stroke_width:
radii = _normalize_radii([stroke_width / 2])
instanced["rerun.radius"] = RadiusArray.from_numpy(radii)
if label:
instanced["rerun.label"] = LabelArray.new([label])
if class_id:
class_ids = _normalize_ids([class_id])
instanced["rerun.class_id"] = ClassIdArray.from_numpy(class_ids)
if ext:
_add_extension_components(instanced, splats, ext, None)
if splats:
splats["rerun.instance_key"] = InstanceArray.splat()
bindings.log_arrow_msg(
entity_path,
components=splats,
timeless=timeless,
recording=recording,
)
# Always log the primary component last so range-based queries will include the other data. See (#1215)
if instanced:
bindings.log_arrow_msg(entity_path, components=instanced, timeless=timeless, recording=recording)
@log_decorator
def log_obbs(
entity_path: str,
*,
half_sizes: npt.ArrayLike | None,
positions: npt.ArrayLike | None = None,
rotations_q: npt.ArrayLike | None = None,
colors: Color | Colors | None = None,
stroke_widths: npt.ArrayLike | None = None,
labels: Sequence[str] | None = None,
class_ids: OptionalClassIds | None = None,
ext: dict[str, Any] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log 3D oriented bounding boxes, or OBBs.
Example:
--------
```
rr.log_obbs("my_obbs", half_sizes=[[1.0, 2.0, 3.0]], positions=[[0, 0, 0]], rotations_q=[[0, 0, 0, 1]])
```
Parameters
----------
entity_path:
The path to the oriented bounding box in the space hierarchy.
half_sizes:
Nx3 Array. Each row is the [x, y, z] half dimensions of an OBB.
positions:
Optional Nx3 array. Each row is the [x, y, z] position of an OBB in world space.
rotations_q:
Optional Nx4 array. Each row is quaternion coordinates [x, y, z, w] for the rotation from model to world space.
colors:
Optional Nx3 or Nx4 array. Each row is RGB or RGBA in sRGB gamma-space as either 0-1 floats or 0-255 integers,
with separate alpha.
stroke_widths:
Optional array of the width of the line edges.
labels:
Optional array of text labels placed at `position`.
class_ids:
Optional array of class id for the OBBs. The class id provides colors and labels if not specified explicitly.
ext:
Optional dictionary of extension components. See [rerun.log_extension_components][]
timeless:
If true, the bounding box will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
colors = _normalize_colors(colors)
stroke_widths = _normalize_radii(stroke_widths)
radii = stroke_widths / 2
labels = _normalize_labels(labels)
class_ids = _normalize_ids(class_ids)
# 0 = instanced, 1 = splat
comps = [{}, {}] # type: ignore[var-annotated]
if half_sizes is not None:
half_sizes = np.require(half_sizes, dtype="float32")
if len(half_sizes) == 0 or half_sizes.shape[1] == 3:
comps[0]["rerun.box3d"] = Box3DArray.from_numpy(half_sizes)
else:
raise TypeError("half_sizes should be Nx3")
if positions is not None:
positions = np.require(positions, dtype="float32")
if len(positions) == 0 or positions.shape[1] == 3:
comps[0]["rerun.vec3d"] = Vec3DArray.from_numpy(positions)
else:
raise TypeError("positions should be Nx3")
if rotations_q is not None:
rotations_q = np.require(rotations_q, dtype="float32")
if len(rotations_q) == 0 or rotations_q.shape[1] == 4:
comps[0]["rerun.quaternion"] = QuaternionArray.from_numpy(rotations_q)
else:
raise TypeError("rotations_q should be Nx4")
if len(colors):
is_splat = len(colors.shape) == 1
if is_splat:
colors = colors.reshape(1, len(colors))
comps[is_splat]["rerun.colorrgba"] = ColorRGBAArray.from_numpy(colors)
if len(radii):
is_splat = len(radii) == 1
comps[is_splat]["rerun.radius"] = RadiusArray.from_numpy(radii)
if len(labels):
is_splat = len(labels) == 1
comps[is_splat]["rerun.label"] = LabelArray.new(labels)
if len(class_ids):
is_splat = len(class_ids) == 1
comps[is_splat]["rerun.class_id"] = ClassIdArray.from_numpy(class_ids)
if ext:
_add_extension_components(comps[0], comps[1], ext, None)
if comps[1]:
comps[1]["rerun.instance_key"] = InstanceArray.splat()
bindings.log_arrow_msg(entity_path, components=comps[1], timeless=timeless, recording=recording)
# Always log the primary component last so range-based queries will include the other data. See (#1215)
bindings.log_arrow_msg(entity_path, components=comps[0], timeless=timeless, recording=recording)

# --- end of file: rerun_sdk/rerun/log/bounding_box.py ---
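The batch variant validates each input array the same way: coerce to `float32`, then accept either an empty batch or one whose rows have the expected width. A standalone sketch of that check for `half_sizes` (the `ndim` guard is an addition of this sketch; the original indexes `shape[1]` directly):

```python
import numpy as np

def validate_half_sizes(half_sizes) -> np.ndarray:
    arr = np.require(half_sizes, dtype="float32")
    # Accept an empty batch; otherwise require Nx3.
    if len(arr) != 0 and (arr.ndim != 2 or arr.shape[1] != 3):
        raise TypeError("half_sizes should be Nx3")
    return arr

boxes = validate_half_sizes([[1.0, 2.0, 3.0], [0.5, 0.5, 0.5]])
```

`np.require` both converts Python lists and passes through existing arrays of the right dtype without copying.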
from __future__ import annotations
from typing import Any, Sequence
import numpy as np
import numpy.typing as npt
from rerun import bindings
from rerun.log import Colors, _normalize_colors
from rerun.log.log_decorator import log_decorator
from rerun.recording_stream import RecordingStream
__all__ = [
"log_mesh",
"log_meshes",
]
@log_decorator
def log_mesh(
entity_path: str,
positions: Any,
*,
indices: Any | None = None,
normals: Any | None = None,
albedo_factor: Any | None = None,
vertex_colors: Colors | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log a raw 3D mesh by specifying its vertex positions, and optionally indices, normals and albedo factor.
You can also use [`rerun.log_mesh_file`] to log .gltf, .glb, .obj, etc.
Example:
-------
```
# A simple red triangle:
rerun.log_mesh(
"world/mesh",
positions = [
[0.0, 0.0, 0.0],
[1.0, 0.0, 0.0],
[0.0, 1.0, 0.0]
],
indices = [0, 1, 2],
normals = [
[0.0, 0.0, 1.0],
[0.0, 0.0, 1.0],
[0.0, 0.0, 1.0]
],
albedo_factor = [1.0, 0.0, 0.0],
)
```
Parameters
----------
entity_path:
Path to the mesh in the space hierarchy
positions:
An array of 3D points.
If no `indices` are specified, then each triplet of positions is interpreted as a triangle.
indices:
If specified, is a flattened array of indices that describe the mesh's triangles,
i.e. its length must be divisible by 3.
normals:
If specified, is a (potentially flattened) array of 3D vectors that describe the normal for each
vertex, i.e. the total number of elements must be divisible by 3 and more importantly, `len(normals)` should be
equal to `len(positions)`.
albedo_factor:
Optional color multiplier of the mesh using RGB or unmultiplied RGBA in linear 0-1 space.
vertex_colors:
Optional array of RGB(A) vertex colors, in sRGB gamma space, either as 0-1 floats or 0-255 integers.
If specified, the alpha is considered separate (unmultiplied).
timeless:
If true, the mesh will be timeless (default: False)
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
positions = np.asarray(positions, dtype=np.float32).flatten()
if indices is not None:
indices = np.asarray(indices, dtype=np.uint32).flatten()
if normals is not None:
normals = np.asarray(normals, dtype=np.float32).flatten()
if albedo_factor is not None:
albedo_factor = np.asarray(albedo_factor, dtype=np.float32).flatten()
if vertex_colors is not None:
vertex_colors = _normalize_colors(vertex_colors)
# Mesh arrow handling happens inside the python bridge
bindings.log_meshes(
entity_path,
position_buffers=[positions.flatten()],
vertex_color_buffers=[vertex_colors],
index_buffers=[indices],
normal_buffers=[normals],
albedo_factors=[albedo_factor],
timeless=timeless,
recording=recording,
)
@log_decorator
def log_meshes(
entity_path: str,
position_buffers: Sequence[npt.ArrayLike],
*,
vertex_color_buffers: Sequence[Colors | None],
index_buffers: Sequence[npt.ArrayLike | None],
normal_buffers: Sequence[npt.ArrayLike | None],
albedo_factors: Sequence[npt.ArrayLike | None],
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log multiple raw 3D meshes by specifying their different buffers and albedo factors.
To learn more about how the data within these buffers is interpreted and laid out, refer
to the documentation for [`rerun.log_mesh`].
Parameters
----------
entity_path:
Path to the mesh in the space hierarchy
position_buffers:
A sequence of position buffers, one for each mesh.
vertex_color_buffers:
An optional sequence of vertex color buffers, one for each mesh.
index_buffers:
An optional sequence of index buffers, one for each mesh.
normal_buffers:
An optional sequence of normal buffers, one for each mesh.
albedo_factors:
An optional sequence of albedo factors, one for each mesh.
timeless:
If true, the mesh will be timeless (default: False)
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
position_buffers = [np.asarray(p, dtype=np.float32).flatten() for p in position_buffers]
if vertex_color_buffers is not None:
vertex_color_buffers = [_normalize_colors(c) for c in vertex_color_buffers]
if index_buffers is not None:
index_buffers = [np.asarray(i, dtype=np.uint32).flatten() if i is not None else None for i in index_buffers]
if normal_buffers is not None:
normal_buffers = [np.asarray(n, dtype=np.float32).flatten() if n is not None else None for n in normal_buffers]
if albedo_factors is not None:
albedo_factors = [np.asarray(af, dtype=np.float32).flatten() if af is not None else None for af in albedo_factors]
# Mesh arrow handling happens inside the python bridge
bindings.log_meshes(
entity_path,
position_buffers=position_buffers,
vertex_color_buffers=vertex_color_buffers,
index_buffers=index_buffers,
normal_buffers=normal_buffers,
albedo_factors=albedo_factors,
timeless=timeless,
recording=recording,
)
# file: rerun_sdk/rerun/log/mesh.py
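Before reaching the bindings, every mesh buffer is coerced to a flat 1-D array of the expected dtype, so callers may pass either Nx3 arrays or already-flat lists. A standalone sketch of that normalization (`flatten_mesh_buffers` is an illustrative name; the divisibility checks are an added sanity assumption — the SDK itself defers validation to the bridge):

```python
import numpy as np

def flatten_mesh_buffers(positions, indices=None):
    """Coerce mesh buffers to the flat dtypes passed on to the bridge."""
    positions = np.asarray(positions, dtype=np.float32).flatten()
    if positions.size % 3 != 0:
        raise ValueError("positions must contain a multiple of 3 floats")
    if indices is not None:
        indices = np.asarray(indices, dtype=np.uint32).flatten()
        if indices.size % 3 != 0:
            raise ValueError("index buffer length must be divisible by 3")
    return positions, indices

# A single red-triangle mesh, as in the `log_mesh` docstring example:
pos, idx = flatten_mesh_buffers([[0, 0, 0], [1, 0, 0], [0, 1, 0]], [0, 1, 2])
```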
from __future__ import annotations
from typing import Any
import numpy as np
from rerun import bindings
from rerun.components.color import ColorRGBAArray
from rerun.components.instance import InstanceArray
from rerun.components.label import LabelArray
from rerun.components.radius import RadiusArray
from rerun.components.scalar import ScalarArray, ScalarPlotPropsArray
from rerun.log import Color, _normalize_colors
from rerun.log.extension_components import _add_extension_components
from rerun.log.log_decorator import log_decorator
from rerun.recording_stream import RecordingStream
__all__ = [
"log_scalar",
]
@log_decorator
def log_scalar(
entity_path: str,
scalar: float,
*,
label: str | None = None,
color: Color | None = None,
radius: float | None = None,
scattered: bool | None = None,
ext: dict[str, Any] | None = None,
recording: RecordingStream | None = None,
) -> None:
"""
Log a double-precision scalar that will be visualized as a timeseries plot.
The current simulation time will be used for the time/X-axis, hence scalars
cannot be timeless!
See [here](https://github.com/rerun-io/rerun/blob/main/examples/python/plots/main.py) for a larger example.
Understanding the plot and attributes hierarchy
-----------------------------------------------
Timeseries come in three parts: points, lines and finally the plots
themselves. As a user of the Rerun SDK, your one and only entrypoint into
that hierarchy is through the lowest-level layer: the points.
When logging scalars and their attributes (label, color, radius, scattered)
through this function, Rerun will turn them into points with similar
attributes. From these points, lines with appropriate attributes can then be
inferred; and from these inferred lines, plots with appropriate attributes
will be inferred in turn!
In terms of actual hierarchy:
- Each space represents a single plot.
- Each entity path within a space that contains scalar data is a line within that plot.
- Each logged scalar is a point.
E.g. the following:
```
t=1.0
rerun.log_scalar("trig/sin", math.sin(t), label="sin(t)", color=[255, 0, 0])
rerun.log_scalar("trig/cos", math.cos(t), label="cos(t)", color=[0, 0, 255])
```
will yield a single plot (space = `trig`), comprised of two lines
(entity paths `trig/sin` and `trig/cos`).
Parameters
----------
entity_path:
The path to the scalar in the space hierarchy.
scalar:
The scalar value to log.
label:
An optional label for the point.
This won't show up on points at the moment, as our plots don't yet
support displaying labels for individual points
TODO(https://github.com/rerun-io/rerun/issues/1289). If all points
within a single entity path (i.e. a line) share the same label, then
this label will be used as the label for the line itself. Otherwise, the
line will be named after the entity path. The plot itself is named after
the space it's in.
color:
Optional RGB or RGBA in sRGB gamma-space as either 0-1 floats or 0-255 integers, with separate alpha.
If left unspecified, a pseudo-random color will be used instead. That
same color will apply to all points residing in the same entity path
that don't have a color specified.
Points within a single line do not have to share the same color; the line
will have differently colored segments as appropriate.
If all points within a single entity path (i.e. a line) share the same
color, then this color will be used as the line color in the plot legend.
Otherwise, the line will appear gray in the legend.
radius:
An optional radius for the point.
Points within a single line do not have to share the same radius; the line
will have differently sized segments as appropriate.
If all points within a single entity path (i.e. a line) share the same
radius, then this radius will be used as the line width too. Otherwise, the
line will use the default width of `1.0`.
scattered:
Specifies whether the point should form a continuous line with its
neighbors, or whether it should stand on its own, akin to a scatter plot.
Points within a single line do not have to all share the same scatteredness:
the line will switch between a scattered and a continuous representation as
required.
ext:
Optional dictionary of extension components. See [rerun.log_extension_components][]
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
instanced: dict[str, Any] = {}
splats: dict[str, Any] = {}
instanced["rerun.scalar"] = ScalarArray.from_numpy(np.array([scalar]))
if label:
instanced["rerun.label"] = LabelArray.new([label])
if color is not None:
colors = _normalize_colors(color)
instanced["rerun.colorrgba"] = ColorRGBAArray.from_numpy(colors)
if radius:
instanced["rerun.radius"] = RadiusArray.from_numpy(np.array([radius]))
if scattered:
props = [{"scattered": scattered}]
instanced["rerun.scalar_plot_props"] = ScalarPlotPropsArray.from_props(props)
if ext:
_add_extension_components(instanced, splats, ext, None)
if splats:
splats["rerun.instance_key"] = InstanceArray.splat()
bindings.log_arrow_msg(entity_path, components=splats, timeless=False, recording=recording)
# Always log the primary component last so range-based queries will include the other data. See #1215.
if instanced:
bindings.log_arrow_msg(entity_path, components=instanced, timeless=False, recording=recording)
# file: rerun_sdk/rerun/log/scalar.py
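Several of these functions funnel `color` through `_normalize_colors`, which accepts 0-1 floats or 0-255 integers and produces 0-255 `uint8` values. A rough standalone approximation of that conversion (not the SDK's exact implementation):

```python
import numpy as np

def normalize_colors(colors):
    """Coerce RGB(A) values to uint8; floats are assumed to be 0-1 values."""
    colors = np.asarray(colors)
    if np.issubdtype(colors.dtype, np.floating):
        # Scale 0-1 floats up to the 0-255 integer range.
        return np.asarray(np.round(colors * 255.0), dtype=np.uint8)
    # Integer input is assumed to already be 0-255.
    return np.require(colors, dtype=np.uint8)

red_float = normalize_colors([1.0, 0.0, 0.0])  # float input, scaled to 0-255
red_int = normalize_colors([255, 0, 0])        # integer input, passed through
```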
from __future__ import annotations
from dataclasses import dataclass
from enum import Enum
from pathlib import Path
import numpy as np
import numpy.typing as npt
from rerun import bindings
from rerun.log.log_decorator import log_decorator
from rerun.recording_stream import RecordingStream
__all__ = [
"MeshFormat",
"ImageFormat",
"log_mesh_file",
"log_image_file",
]
class MeshFormat(Enum):
"""Mesh file format."""
# Needs some way of logging materials too, or adding some default material to the
# viewer.
# GLTF = "GLTF"
GLB = "GLB"
"""glTF binary format."""
# Needs some way of logging materials too, or adding some default material to the
# viewer.
OBJ = "OBJ"
"""Wavefront .obj format."""
class ImageFormat(Enum):
"""Image file format."""
JPEG = "jpeg"
"""JPEG format."""
PNG = "png"
"""PNG format."""
@log_decorator
def log_mesh_file(
entity_path: str,
mesh_format: MeshFormat,
*,
mesh_bytes: bytes | None = None,
mesh_path: Path | None = None,
transform: npt.ArrayLike | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log the contents of a mesh file (.gltf, .glb, .obj, …).
You must pass either `mesh_bytes` or `mesh_path`.
You can also use [`rerun.log_mesh`] to log raw mesh data.
Example:
-------
```
# Move mesh 10 units along the X axis.
transform=np.array([
[1, 0, 0, 10],
[0, 1, 0, 0],
[0, 0, 1, 0]])
```
Parameters
----------
entity_path:
Path to the mesh in the space hierarchy
mesh_format:
Format of the mesh file
mesh_bytes:
Content of a mesh file, e.g. a `.glb`.
mesh_path:
Path to a mesh file, e.g. a `.glb`.
transform:
Optional 3x4 affine transform matrix applied to the mesh
timeless:
If true, the mesh will be timeless (default: False)
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
if transform is None:
transform = np.empty(shape=(0, 0), dtype=np.float32)
else:
transform = np.require(transform, dtype="float32")
# Mesh arrow handling happens inside the python bridge
bindings.log_mesh_file(
entity_path,
mesh_format=mesh_format.value,
mesh_bytes=mesh_bytes,
mesh_path=mesh_path,
transform=transform,
timeless=timeless,
recording=recording,
)
@log_decorator
def log_image_file(
entity_path: str,
*,
img_bytes: bytes | None = None,
img_path: Path | None = None,
img_format: ImageFormat | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log an image file given its contents or path on disk.
You must pass either `img_bytes` or `img_path`.
Only JPEGs and PNGs are supported right now.
JPEGs will be stored compressed, saving memory,
whilst PNGs will currently be decoded before they are logged.
This may change in the future.
If no `img_format` is specified, rerun will try to guess it.
Parameters
----------
entity_path:
Path to the image in the space hierarchy.
img_bytes:
Content of an image file, e.g. a `.jpg`.
img_path:
Path to an image file, e.g. a `.jpg`.
img_format:
Format of the image file.
timeless:
If true, the image will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
img_format = getattr(img_format, "value", None)
# Image file arrow handling happens inside the python bridge
bindings.log_image_file(
entity_path,
img_bytes=img_bytes,
img_path=img_path,
img_format=img_format,
timeless=timeless,
recording=recording,
)
# file: rerun_sdk/rerun/log/file.py
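When `img_format` is omitted, the bridge tries to guess the format from the file contents. A standalone sketch of how such sniffing can be done from well-known magic bytes (illustrative only — the actual detection happens inside the native bridge, not in Python):

```python
from __future__ import annotations

def guess_image_format(img_bytes: bytes) -> str | None:
    """Guess 'png' or 'jpeg' from well-known magic bytes, else None."""
    if img_bytes.startswith(b"\x89PNG\r\n\x1a\n"):
        return "png"   # PNG files always start with this 8-byte signature
    if img_bytes.startswith(b"\xff\xd8\xff"):
        return "jpeg"  # JPEG files start with the SOI marker 0xFFD8
    return None
```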
from __future__ import annotations
from typing import Any, Sequence
import numpy as np
import numpy.typing as npt
from rerun import bindings
from rerun.components.annotation import ClassIdArray
from rerun.components.color import ColorRGBAArray
from rerun.components.draw_order import DrawOrderArray
from rerun.components.instance import InstanceArray
from rerun.components.label import LabelArray
from rerun.components.point import Point2DArray, Point3DArray
from rerun.components.radius import RadiusArray
from rerun.log import (
Color,
Colors,
OptionalClassIds,
OptionalKeyPointIds,
_normalize_colors,
_normalize_ids,
_normalize_labels,
_normalize_radii,
)
from rerun.log.error_utils import _send_warning
from rerun.log.extension_components import _add_extension_components
from rerun.log.log_decorator import log_decorator
from rerun.recording_stream import RecordingStream
__all__ = [
"log_point",
"log_points",
]
@log_decorator
def log_point(
entity_path: str,
position: npt.ArrayLike | None = None,
*,
radius: float | None = None,
color: Color | None = None,
label: str | None = None,
class_id: int | None = None,
keypoint_id: int | None = None,
draw_order: float | None = None,
ext: dict[str, Any] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log a 2D or 3D point, with a position and optional color, radii, label, etc.
Logging again to the same `entity_path` will replace the previous point.
Colors should either be in 0-255 gamma space or in 0-1 linear space.
Colors can be RGB or RGBA represented as a 3-element or 4-element sequence.
Supported dtypes for `color`:
-----------------------------
- uint8: color components should be in 0-255 sRGB gamma space, except for alpha which should be in 0-255 linear
space.
- float32/float64: all color components should be in 0-1 linear space.
Parameters
----------
entity_path:
Path to the point in the space hierarchy.
position:
Any 2-element or 3-element array-like.
radius:
Optional radius (make it a sphere).
color:
Optional RGB or RGBA in sRGB gamma-space as either 0-1 floats or 0-255 integers, with separate alpha.
label:
Optional text to show with the point.
class_id:
Optional class id for the point.
The class id provides color and label if not specified explicitly.
See [rerun.log_annotation_context][]
keypoint_id:
Optional key point id for the point, identifying it within a class.
If keypoint_id is passed but no class_id was specified, class_id will be set to 0.
This is useful to identify points within a single classification (which is identified with class_id).
E.g. the classification might be 'Person' and the keypoints refer to joints on a detected skeleton.
See [rerun.log_annotation_context][]
draw_order:
An optional floating point value that specifies the 2D drawing order.
Objects with higher values are drawn on top of those with lower values.
The default for 2D points is 30.0.
ext:
Optional dictionary of extension components. See [rerun.log_extension_components][]
timeless:
If true, the point will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
if keypoint_id is not None and class_id is None:
class_id = 0
if position is not None:
position = np.require(position, dtype="float32")
instanced: dict[str, Any] = {}
splats: dict[str, Any] = {}
if position is not None:
if position.size == 2:
instanced["rerun.point2d"] = Point2DArray.from_numpy(position.reshape(1, 2))
elif position.size == 3:
instanced["rerun.point3d"] = Point3DArray.from_numpy(position.reshape(1, 3))
else:
raise TypeError("Position must have a total size of 2 or 3")
if color is not None:
colors = _normalize_colors(color)
instanced["rerun.colorrgba"] = ColorRGBAArray.from_numpy(colors)
if radius:
radii = _normalize_radii([radius])
instanced["rerun.radius"] = RadiusArray.from_numpy(radii)
if label:
instanced["rerun.label"] = LabelArray.new([label])
if class_id:
class_ids = _normalize_ids([class_id])
instanced["rerun.class_id"] = ClassIdArray.from_numpy(class_ids)
if draw_order is not None:
instanced["rerun.draw_order"] = DrawOrderArray.splat(draw_order)
if ext:
_add_extension_components(instanced, splats, ext, None)
if splats:
splats["rerun.instance_key"] = InstanceArray.splat()
bindings.log_arrow_msg(entity_path, components=splats, timeless=timeless, recording=recording)
# Always log the primary component last so range-based queries will include the other data. See #1215.
if instanced:
bindings.log_arrow_msg(entity_path, components=instanced, timeless=timeless, recording=recording)
@log_decorator
def log_points(
entity_path: str,
positions: npt.ArrayLike | None = None,
*,
identifiers: npt.ArrayLike | None = None,
colors: Color | Colors | None = None,
radii: npt.ArrayLike | None = None,
labels: Sequence[str] | None = None,
class_ids: OptionalClassIds = None,
keypoint_ids: OptionalKeyPointIds = None,
draw_order: float | None = None,
ext: dict[str, Any] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log 2D or 3D points, with positions and optional colors, radii, labels, etc.
Logging again to the same `entity_path` will replace all the previous points.
Colors should either be in 0-255 gamma space or in 0-1 linear space.
Colors can be RGB or RGBA. You can supply no colors, one color,
or one color per point in a Nx3 or Nx4 numpy array.
Supported dtypes for `colors`:
------------------------------
- uint8: color components should be in 0-255 sRGB gamma space, except for alpha which should be in 0-255 linear
space.
- float32/float64: all color components should be in 0-1 linear space.
Parameters
----------
entity_path:
Path to the points in the space hierarchy.
positions:
Nx2 or Nx3 array
identifiers:
Unique numeric id that shows up when you hover or select the point.
colors:
Optional colors of the points.
The colors are interpreted as RGB or RGBA in sRGB gamma-space,
as either 0-1 floats or 0-255 integers, with separate alpha.
radii:
Optional radii (make it a sphere).
labels:
Optional per-point text to show with the points
class_ids:
Optional class ids for the points.
The class id provides colors and labels if not specified explicitly.
See [rerun.log_annotation_context][]
keypoint_ids:
Optional key point ids for the points, identifying them within a class.
If keypoint_ids are passed in but no class_ids were specified, class_id will be set to 0.
This is useful to identify points within a single classification (which is identified with class_id).
E.g. the classification might be 'Person' and the keypoints refer to joints on a detected skeleton.
See [rerun.log_annotation_context][]
draw_order:
An optional floating point value that specifies the 2D drawing order.
Objects with higher values are drawn on top of those with lower values.
The default for 2D points is 30.0.
ext:
Optional dictionary of extension components. See [rerun.log_extension_components][]
timeless:
If true, the points will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
if keypoint_ids is not None and class_ids is None:
class_ids = 0
if positions is None:
positions = np.require([], dtype="float32")
else:
positions = np.require(positions, dtype="float32")
colors = _normalize_colors(colors)
radii = _normalize_radii(radii)
labels = _normalize_labels(labels)
class_ids = _normalize_ids(class_ids)
keypoint_ids = _normalize_ids(keypoint_ids)
identifiers_np = np.array((), dtype="uint64")
if identifiers is not None:
try:
identifiers_np = np.require(identifiers, dtype="uint64")
except ValueError:
_send_warning("Only integer identifiers supported", 1)
# 0 = instanced, 1 = splat
comps = [{}, {}] # type: ignore[var-annotated]
if len(positions):
if positions.shape[1] == 2:
comps[0]["rerun.point2d"] = Point2DArray.from_numpy(positions)
elif positions.shape[1] == 3:
comps[0]["rerun.point3d"] = Point3DArray.from_numpy(positions)
else:
raise TypeError("Positions should be either Nx2 or Nx3")
if len(identifiers_np):
comps[0]["rerun.instance_key"] = InstanceArray.from_numpy(identifiers_np)
if len(colors):
is_splat = len(colors.shape) == 1
if is_splat:
colors = colors.reshape(1, len(colors))
comps[is_splat]["rerun.colorrgba"] = ColorRGBAArray.from_numpy(colors)
if len(radii):
is_splat = len(radii) == 1
comps[is_splat]["rerun.radius"] = RadiusArray.from_numpy(radii)
if len(labels):
is_splat = len(labels) == 1
comps[is_splat]["rerun.label"] = LabelArray.new(labels)
if len(class_ids):
is_splat = len(class_ids) == 1
comps[is_splat]["rerun.class_id"] = ClassIdArray.from_numpy(class_ids)
if len(keypoint_ids):
is_splat = len(keypoint_ids) == 1
comps[is_splat]["rerun.keypoint_id"] = ClassIdArray.from_numpy(keypoint_ids)
if draw_order is not None:
comps[True]["rerun.draw_order"] = DrawOrderArray.splat(draw_order)
if ext:
_add_extension_components(comps[0], comps[1], ext, identifiers_np)
if comps[1]:
comps[1]["rerun.instance_key"] = InstanceArray.splat()
bindings.log_arrow_msg(entity_path, components=comps[1], timeless=timeless, recording=recording)
# Always log the primary component last so range-based queries will include the other data. See #1215.
bindings.log_arrow_msg(entity_path, components=comps[0], timeless=timeless, recording=recording)
# file: rerun_sdk/rerun/log/points.py
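The `comps` two-element list above encodes the splat-vs-instanced decision: a single value for an attribute is logged once as a splat (index 1) and shared across all instances, while per-point arrays go into the instanced batch (index 0). A standalone sketch of that routing (`route_component` is an illustrative helper, not part of the SDK):

```python
import numpy as np

def route_component(comps, name, values):
    """Store `values` in the splat batch if it is a single value, else instanced."""
    is_splat = len(values) == 1  # one shared value -> splat (index 1)
    comps[int(is_splat)][name] = values
    return comps

comps = [{}, {}]  # 0 = instanced, 1 = splat, as in the functions above
route_component(comps, "rerun.radius", np.array([0.5]))      # shared radius
route_component(comps, "rerun.colorrgba", np.zeros((3, 4)))  # one color per point
```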
from __future__ import annotations
from dataclasses import dataclass
from typing import Iterable, Sequence, Tuple, Union
from rerun import bindings
from rerun.log import Color, _normalize_colors
from rerun.log.log_decorator import log_decorator
from rerun.recording_stream import RecordingStream
__all__ = [
"AnnotationInfo",
"ClassDescription",
"log_annotation_context",
]
@dataclass
class AnnotationInfo:
"""
Annotation info annotating a class id or key-point id.
Color and label will be used to annotate entities/keypoints which reference the id.
The id refers either to a class or key-point id
"""
id: int = 0
"""The id of the class or key-point to annotate"""
label: str | None = None
"""The label that will be shown in the UI"""
color: Color | None = None
"""The color that will be applied to the annotated entity"""
AnnotationInfoLike = Union[Tuple[int, str], Tuple[int, str, Color], AnnotationInfo]
"""Type helper representing the different ways to specify an [AnnotationInfo][rerun.log.annotation.AnnotationInfo]"""
def coerce_annotation_info(arg: AnnotationInfoLike) -> AnnotationInfo:
if type(arg) is AnnotationInfo:
return arg
else:
return AnnotationInfo(*arg) # type: ignore[misc]
@dataclass
class ClassDescription:
"""
Metadata about a class type identified by an id.
Typically a class description contains only an annotation info.
However, within a class there might be several keypoints, each with its own annotation info.
Keypoints in turn may be connected to each other by connections (typically used for skeleton edges).
"""
info: AnnotationInfoLike | None = None
"""The annotation info for the class"""
keypoint_annotations: Iterable[AnnotationInfoLike] | None = None
"""The annotation infos for all the key-points"""
keypoint_connections: Iterable[int | tuple[int, int]] | None = None
"""The connections between key-points"""
ClassDescriptionLike = Union[AnnotationInfoLike, ClassDescription]
"""Type helper representing the different ways to specify a [ClassDescription][rerun.log.annotation.ClassDescription]"""
def coerce_class_descriptor_like(arg: ClassDescriptionLike) -> ClassDescription:
if type(arg) is ClassDescription:
return arg
else:
return ClassDescription(info=arg) # type: ignore[arg-type]
@log_decorator
def log_annotation_context(
entity_path: str,
class_descriptions: ClassDescriptionLike | Iterable[ClassDescriptionLike],
*,
timeless: bool = True,
recording: RecordingStream | None = None,
) -> None:
"""
Log an annotation context made up of a collection of [ClassDescription][rerun.log.annotation.ClassDescription]s.
Any entity needing to access the annotation context will find it by searching the
path upward. If all entities share the same annotation context, you can simply log it to the
root ("/"); if you want per-entity class descriptions, log them to the same path as
your entity.
Each ClassDescription must include an annotation info with an id, which will
be used for matching the class and may optionally include a label and color.
Colors should either be in 0-255 gamma space or in 0-1 linear space. Colors
can be RGB or RGBA.
These can either be specified verbosely as:
```
[AnnotationInfo(id=23, label='foo', color=(255, 0, 0)), ...]
```
Or using short-hand tuples.
```
[(23, 'bar'), ...]
```
Unspecified colors will be filled in by the visualizer randomly.
Parameters
----------
entity_path:
The path to the annotation context in the space hierarchy.
class_descriptions:
A single ClassDescription or a collection of ClassDescriptions.
timeless:
If true, the annotation context will be timeless (default: True).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
if not isinstance(class_descriptions, Iterable):
class_descriptions = [class_descriptions]
# Coerce tuples into ClassDescription dataclass for convenience
typed_class_descriptions = (coerce_class_descriptor_like(d) for d in class_descriptions)
# Convert back to fixed tuple for easy pyo3 conversion
# This is pretty messy but will likely go away / be refactored with pending data-model changes.
def info_to_tuple(info: AnnotationInfoLike | None) -> tuple[int, str | None, Sequence[int] | None]:
if info is None:
return (0, None, None)
info = coerce_annotation_info(info)
color = None if info.color is None else _normalize_colors(info.color).tolist()
return (info.id, info.label, color)
def keypoint_connections_to_flat_list(
keypoint_connections: Iterable[int | tuple[int, int]] | None
) -> Sequence[int]:
if keypoint_connections is None:
return []
# flatten keypoint connections
connections = list(keypoint_connections)
if connections and type(connections[0]) is tuple:
connections = [item for pair in connections for item in pair]  # type: ignore[union-attr]
return connections  # type: ignore[return-value]
tuple_class_descriptions = [
(
info_to_tuple(d.info),
tuple(info_to_tuple(a) for a in d.keypoint_annotations or []),
keypoint_connections_to_flat_list(d.keypoint_connections),
)
for d in typed_class_descriptions
]
# AnnotationContext arrow handling happens inside the python bridge
bindings.log_annotation_context(entity_path, tuple_class_descriptions, timeless, recording=recording)
# file: rerun_sdk/rerun/log/annotation.py
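`keypoint_connections_to_flat_list` accepts either an already-flat list of ids or a list of `(from, to)` pairs. A simplified standalone copy of the same logic, with an added guard for the empty list, can be exercised directly:

```python
def flatten_connections(keypoint_connections):
    """Flatten [(0, 1), (1, 2)] into [0, 1, 1, 2]; pass flat lists through."""
    if keypoint_connections is None:
        return []
    connections = list(keypoint_connections)
    # A list of (from, to) tuples is flattened into alternating endpoint ids.
    if connections and type(connections[0]) is tuple:
        connections = [item for pair in connections for item in pair]
    return connections
```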
from __future__ import annotations
from typing import Any, Sequence
import numpy as np
import numpy.typing as npt
from rerun import bindings
from rerun.components.annotation import ClassIdArray
from rerun.components.color import ColorRGBAArray
from rerun.components.draw_order import DrawOrderArray
from rerun.components.instance import InstanceArray
from rerun.components.label import LabelArray
from rerun.components.rect2d import Rect2DArray, RectFormat
from rerun.log import Color, Colors, OptionalClassIds, _normalize_colors, _normalize_ids, _normalize_labels
from rerun.log.error_utils import _send_warning
from rerun.log.extension_components import _add_extension_components
from rerun.log.log_decorator import log_decorator
from rerun.recording_stream import RecordingStream
__all__ = [
"RectFormat",
"log_rect",
"log_rects",
]
@log_decorator
def log_rect(
entity_path: str,
rect: npt.ArrayLike | None,
*,
rect_format: RectFormat = RectFormat.XYWH,
color: Color | None = None,
label: str | None = None,
class_id: int | None = None,
draw_order: float | None = None,
ext: dict[str, Any] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log a 2D rectangle.
Parameters
----------
entity_path:
Path to the rectangle in the space hierarchy.
rect:
the rectangle in [x, y, w, h], or some format you pick with the `rect_format` argument.
rect_format:
how to interpret the `rect` argument
color:
Optional RGB or RGBA in sRGB gamma-space as either 0-1 floats or 0-255 integers, with separate alpha.
label:
Optional text to show inside the rectangle.
class_id:
Optional class id for the rectangle.
The class id provides color and label if not specified explicitly.
See [rerun.log_annotation_context][]
draw_order:
An optional floating point value that specifies the 2D drawing order.
Objects with higher values are drawn on top of those with lower values.
The default for rects is 10.0.
ext:
Optional dictionary of extension components. See [rerun.log_extension_components][]
timeless:
If true, the rect will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
    # Treat None and empty input as "no rectangle" (np.any would also drop an all-zero rect)
    if rect is not None and np.asarray(rect).size > 0:
rects = np.asarray([rect], dtype="float32")
else:
rects = np.zeros((0, 4), dtype="float32")
assert type(rects) is np.ndarray
instanced: dict[str, Any] = {}
splats: dict[str, Any] = {}
instanced["rerun.rect2d"] = Rect2DArray.from_numpy_and_format(rects, rect_format)
if color is not None:
colors = _normalize_colors(color)
instanced["rerun.colorrgba"] = ColorRGBAArray.from_numpy(colors)
if label:
instanced["rerun.label"] = LabelArray.new([label])
    if class_id is not None:
class_ids = _normalize_ids([class_id])
instanced["rerun.class_id"] = ClassIdArray.from_numpy(class_ids)
if draw_order is not None:
instanced["rerun.draw_order"] = DrawOrderArray.splat(draw_order)
if ext:
_add_extension_components(instanced, splats, ext, None)
if splats:
splats["rerun.instance_key"] = InstanceArray.splat()
bindings.log_arrow_msg(
entity_path,
components=splats,
timeless=timeless,
recording=recording,
)
    # Always log the primary component last so that range-based queries will include the other data. See #1215.
if instanced:
bindings.log_arrow_msg(
entity_path,
components=instanced,
timeless=timeless,
recording=recording,
)
@log_decorator
def log_rects(
entity_path: str,
rects: npt.ArrayLike | None,
*,
rect_format: RectFormat = RectFormat.XYWH,
identifiers: Sequence[int] | None = None,
colors: Color | Colors | None = None,
labels: Sequence[str] | None = None,
class_ids: OptionalClassIds = None,
draw_order: float | None = None,
ext: dict[str, Any] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log multiple 2D rectangles.
Logging again to the same `entity_path` will replace all the previous rectangles.
Colors should either be in 0-255 gamma space or in 0-1 linear space.
Colors can be RGB or RGBA. You can supply no colors, one color,
or one color per point in a Nx3 or Nx4 numpy array.
Supported `dtype`s for `colors`:
--------------------------------
- uint8: color components should be in 0-255 sRGB gamma space, except for alpha which should be in 0-255 linear
space.
- float32/float64: all color components should be in 0-1 linear space.
Parameters
----------
entity_path:
Path to the rectangles in the space hierarchy.
    rects:
        Nx4 numpy array, where each row is [x, y, w, h], or some format you pick with the `rect_format` argument.
    rect_format:
        How to interpret the `rects` argument.
identifiers:
        Unique numeric id that shows up when you hover or select the rectangle.
colors:
Optional per-rectangle gamma-space RGB or RGBA as 0-1 floats or 0-255 integers.
labels:
Optional per-rectangle text to show inside the rectangle.
class_ids:
Optional class ids for the rectangles.
The class id provides colors and labels if not specified explicitly.
See [rerun.log_annotation_context][]
draw_order:
An optional floating point value that specifies the 2D drawing order.
Objects with higher values are drawn on top of those with lower values.
The default for rects is 10.0.
ext:
Optional dictionary of extension components. See [rerun.log_extension_components][]
timeless:
If true, the rects will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
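    Example
    -------
    A minimal sketch of typical usage; the entity path and values are illustrative only:

    ```python
    import numpy as np

    # Two rectangles, one per row, in XYWH format (an Nx4 array).
    rects = np.asarray([[10, 20, 40, 30], [60, 20, 40, 30]], dtype="float32")
    # One RGB color per rectangle, 0-255 integers in sRGB gamma space.
    colors = np.asarray([[255, 0, 0], [0, 255, 0]], dtype=np.uint8)
    # rerun.log_rects("image/detections", rects, colors=colors, labels=["car", "bus"])
    ```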
"""
recording = RecordingStream.to_native(recording)
    # Treat None and empty input the same as [] (np.any would also drop all-zero rects)
    if rects is not None and np.asarray(rects).size > 0:
rects = np.asarray(rects, dtype="float32")
else:
rects = np.zeros((0, 4), dtype="float32")
assert type(rects) is np.ndarray
colors = _normalize_colors(colors)
class_ids = _normalize_ids(class_ids)
labels = _normalize_labels(labels)
identifiers_np = np.array((), dtype="int64")
if identifiers:
try:
identifiers = [int(id) for id in identifiers]
identifiers_np = np.array(identifiers, dtype="int64")
except ValueError:
_send_warning("Only integer identifiers supported", 1)
# 0 = instanced, 1 = splat
comps = [{}, {}] # type: ignore[var-annotated]
comps[0]["rerun.rect2d"] = Rect2DArray.from_numpy_and_format(rects, rect_format)
if len(identifiers_np):
comps[0]["rerun.instance_key"] = InstanceArray.from_numpy(identifiers_np)
if len(colors):
is_splat = len(colors.shape) == 1
if is_splat:
colors = colors.reshape(1, len(colors))
comps[is_splat]["rerun.colorrgba"] = ColorRGBAArray.from_numpy(colors)
if len(labels):
is_splat = len(labels) == 1
comps[is_splat]["rerun.label"] = LabelArray.new(labels)
if len(class_ids):
is_splat = len(class_ids) == 1
comps[is_splat]["rerun.class_id"] = ClassIdArray.from_numpy(class_ids)
if draw_order is not None:
comps[True]["rerun.draw_order"] = DrawOrderArray.splat(draw_order)
if ext:
_add_extension_components(comps[0], comps[1], ext, identifiers_np)
if comps[1]:
comps[1]["rerun.instance_key"] = InstanceArray.splat()
bindings.log_arrow_msg(
entity_path,
components=comps[1],
timeless=timeless,
recording=recording,
)
    # Always log the primary component last so that range-based queries will include the other data. See #1215.
bindings.log_arrow_msg(
entity_path,
components=comps[0],
timeless=timeless,
recording=recording,
    )

# --- rerun_sdk/rerun/log/rects.py (rerun_sdk 0.8.1) ---
from __future__ import annotations
from typing import Any
import numpy as np
import numpy.typing as npt
from rerun import bindings
from rerun.components.arrow import Arrow3DArray
from rerun.components.color import ColorRGBAArray
from rerun.components.instance import InstanceArray
from rerun.components.label import LabelArray
from rerun.components.radius import RadiusArray
from rerun.log import Color, _normalize_colors, _normalize_radii
from rerun.log.extension_components import _add_extension_components
from rerun.log.log_decorator import log_decorator
from rerun.recording_stream import RecordingStream
__all__ = [
"log_arrow",
]
@log_decorator
def log_arrow(
entity_path: str,
origin: npt.ArrayLike | None,
vector: npt.ArrayLike | None = None,
*,
color: Color | None = None,
label: str | None = None,
width_scale: float | None = None,
ext: dict[str, Any] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log a 3D arrow.
    An arrow is defined with an `origin` and a `vector`: the arrow starts at `origin` and ends at
    `origin + vector`.
The shaft is rendered as a cylinder with `radius = 0.5 * width_scale`.
The tip is rendered as a cone with `height = 2.0 * width_scale` and `radius = 1.0 * width_scale`.
Parameters
----------
    entity_path:
        The path to store the entity at.
    origin:
        The base position of the arrow.
    vector:
        The vector along which the arrow will be drawn.
    color:
        Optional RGB or RGBA in sRGB gamma-space as either 0-1 floats or 0-255 integers, with separate alpha.
    label:
        An optional text to show beside the arrow.
    width_scale:
        An optional scaling factor, default=1.0.
    ext:
        Optional dictionary of extension components. See [rerun.log_extension_components][]
    timeless:
        The entity is not time-dependent, and will be visible at any time point.
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
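    Example
    -------
    A minimal sketch of typical usage; the entity path and values are illustrative only:

    ```python
    import numpy as np

    # An arrow starting at the origin and pointing one unit along +Z.
    origin = np.zeros(3, dtype="float32")
    vector = np.asarray([0.0, 0.0, 1.0], dtype="float32")
    # rerun.log_arrow("world/up", origin, vector, color=(0, 0, 255), width_scale=0.05)
    ```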
"""
instanced: dict[str, Any] = {}
splats: dict[str, Any] = {}
if origin is not None:
if vector is None:
raise TypeError("Must provide both origin and vector")
origin = np.require(origin, dtype="float32")
vector = np.require(vector, dtype="float32")
instanced["rerun.arrow3d"] = Arrow3DArray.from_numpy(origin.reshape(1, 3), vector.reshape(1, 3))
if color is not None:
colors = _normalize_colors(color)
instanced["rerun.colorrgba"] = ColorRGBAArray.from_numpy(colors)
if label:
instanced["rerun.label"] = LabelArray.new([label])
if width_scale:
radii = _normalize_radii([width_scale / 2])
instanced["rerun.radius"] = RadiusArray.from_numpy(radii)
if ext:
_add_extension_components(instanced, splats, ext, None)
if splats:
splats["rerun.instance_key"] = InstanceArray.splat()
bindings.log_arrow_msg(
entity_path,
components=splats,
timeless=timeless,
recording=recording,
)
    # Always log the primary component last so that range-based queries will include the other data. See #1215.
if instanced:
bindings.log_arrow_msg(
entity_path,
components=instanced,
timeless=timeless,
recording=recording,
        )

# --- rerun_sdk/rerun/log/arrow.py (rerun_sdk 0.8.1) ---
from __future__ import annotations
from typing import Any
import numpy.typing as npt
from deprecated import deprecated
from rerun import bindings
from rerun.components.disconnected_space import DisconnectedSpaceArray
from rerun.components.quaternion import Quaternion
from rerun.components.transform3d import (
Rigid3D,
RotationAxisAngle,
Scale3D,
Transform3D,
Transform3DArray,
Translation3D,
TranslationAndMat3,
TranslationRotationScale3D,
)
from rerun.log.error_utils import _send_warning
from rerun.log.log_decorator import log_decorator
from rerun.recording_stream import RecordingStream
__all__ = [
"log_view_coordinates",
"log_unknown_transform",
"log_disconnected_space",
"log_rigid3",
"log_transform3d",
]
@log_decorator
def log_view_coordinates(
entity_path: str,
*,
xyz: str = "",
up: str = "",
right_handed: bool | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log the view coordinates for an entity.
Each entity defines its own coordinate system, called a space.
By logging view coordinates you can give semantic meaning to the XYZ axes of the space.
This is particularly useful for 3D spaces, to set the up-axis.
For pinhole entities this will control the direction of the camera frustum.
You should use [`rerun.log_pinhole(…, camera_xyz=…)`][rerun.log_pinhole] for this instead,
and read the documentation there.
For full control, set the `xyz` parameter to a three-letter acronym (`xyz="RDF"`). Each letter represents:
* R: Right
* L: Left
* U: Up
* D: Down
* F: Forward
* B: Back
Some of the most common are:
* "RDF": X=Right Y=Down Z=Forward (right-handed)
    * "RUB": X=Right Y=Up Z=Back (right-handed)
* "RDB": X=Right Y=Down Z=Back (left-handed)
* "RUF": X=Right Y=Up Z=Forward (left-handed)
Currently Rerun only supports right-handed coordinate systems.
Example
-------
```
rerun.log_view_coordinates("world/camera/image", xyz="RUB")
```
For world-coordinates it's often convenient to just specify an up-axis.
You can do so by using the `up`-parameter (where `up` is one of "+X", "-X", "+Y", "-Y", "+Z", "-Z"):
```
rerun.log_view_coordinates("world", up="+Z", right_handed=True, timeless=True)
rerun.log_view_coordinates("world", up="-Y", right_handed=False, timeless=True)
```
Parameters
----------
entity_path:
Path in the space hierarchy where the view coordinate will be set.
xyz:
Three-letter acronym for the view coordinate axes.
up:
Which axis is up? One of "+X", "-X", "+Y", "-Y", "+Z", "-Z".
right_handed:
If True, the coordinate system is right-handed. If False, it is left-handed.
timeless:
If true, the view coordinates will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
if xyz == "" and up == "":
_send_warning("You must set either 'xyz' or 'up'. Ignoring log.", 1)
return
if xyz != "" and up != "":
_send_warning("You must set either 'xyz' or 'up', but not both. Dropping up.", 1)
up = ""
if xyz != "":
bindings.log_view_coordinates_xyz(
entity_path,
xyz,
right_handed,
timeless,
recording=recording,
)
else:
if right_handed is None:
right_handed = True
bindings.log_view_coordinates_up_handedness(
entity_path,
up,
right_handed,
timeless,
recording=recording,
)
@deprecated(version="0.7.0", reason="Use log_disconnected_space instead.")
@log_decorator
def log_unknown_transform(
entity_path: str,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log that this entity is NOT in the same space as the parent, but you do not (yet) know how they relate.
Parameters
----------
entity_path:
The path of the affected entity.
timeless:
Log the data as timeless.
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
instanced: dict[str, Any] = {}
instanced["rerun.disconnected_space"] = DisconnectedSpaceArray.single()
bindings.log_arrow_msg(entity_path, components=instanced, timeless=timeless, recording=recording)
@log_decorator
def log_disconnected_space(
entity_path: str,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log that this entity is NOT in the same space as the parent.
This is useful for specifying that a subgraph is independent of the rest of the scene.
If a transform or pinhole is logged on the same path, this component will be ignored.
Parameters
----------
entity_path:
The path of the affected entity.
timeless:
Log the data as timeless.
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
instanced: dict[str, Any] = {}
instanced["rerun.disconnected_space"] = DisconnectedSpaceArray.single()
bindings.log_arrow_msg(entity_path, components=instanced, timeless=timeless, recording=recording)
@log_decorator
def log_transform3d(
entity_path: str,
transform: (
TranslationAndMat3
| TranslationRotationScale3D
| RotationAxisAngle
| Translation3D
| Scale3D
| Quaternion
| Rigid3D
),
*,
from_parent: bool = False,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log an (affine) 3D transform between this entity and the parent.
If `from_parent` is set to `True`, the transformation is from the parent to the space of the entity_path,
otherwise it is from the child to the parent.
Note that new transforms replace previous, i.e. if you call this function several times on the same path,
each new transform will replace the previous one and does not combine with it.
Examples
--------
```
# Log translation only.
rr.log_transform3d("transform_test/translation", rr.Translation3D((2, 1, 3)))
# Log scale along the x axis only.
rr.log_transform3d("transform_test/x_scaled", rr.Scale3D((3, 1, 1)))
# Log a rotation around the z axis.
rr.log_transform3d("transform_test/z_rotated_object", rr.RotationAxisAngle((0, 0, 1), degrees=20))
# Log scale followed by translation along the Y-axis.
rr.log_transform3d(
"transform_test/scaled_and_translated_object", rr.TranslationRotationScale3D([0.0, 1.0, 0.0], scale=2)
)
# Log translation + rotation, also called a rigid transform.
rr.log_transform3d("transform_test/rigid3", rr.Rigid3D([1, 2, 3], rr.RotationAxisAngle((0, 1, 0), radians=1.57)))
# Log translation, rotation & scale all at once.
rr.log_transform3d(
"transform_test/transformed",
rr.TranslationRotationScale3D(
translation=[0, 1, 5],
rotation=rr.RotationAxisAngle((0, 0, 1), degrees=20),
scale=2,
),
)
```
Parameters
----------
entity_path:
Path of the *child* space in the space hierarchy.
transform:
Instance of a rerun data class that describes a three dimensional transform.
One of:
* `TranslationAndMat3`
* `TranslationRotationScale3D`
* `Rigid3D`
* `RotationAxisAngle`
* `Translation3D`
* `Quaternion`
* `Scale3D`
from_parent:
If True, the transform is from the parent to the child, otherwise it is from the child to the parent.
timeless:
If true, the transform will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
# Convert additionally supported types to TranslationRotationScale3D
if isinstance(transform, RotationAxisAngle) or isinstance(transform, Quaternion):
transform = TranslationRotationScale3D(rotation=transform)
elif isinstance(transform, Translation3D):
transform = TranslationRotationScale3D(translation=transform)
elif isinstance(transform, Scale3D):
transform = TranslationRotationScale3D(scale=transform)
elif isinstance(transform, Rigid3D):
transform = TranslationRotationScale3D(rotation=transform.rotation, translation=transform.translation)
instanced = {"rerun.transform3d": Transform3DArray.from_transform(Transform3D(transform, from_parent))}
bindings.log_arrow_msg(entity_path, components=instanced, timeless=timeless, recording=recording)
@deprecated(version="0.7.0", reason="Use log_transform3d instead and, if xyz was set, use log_view_coordinates.")
@log_decorator
def log_rigid3(
entity_path: str,
*,
parent_from_child: tuple[npt.ArrayLike, npt.ArrayLike] | None = None,
child_from_parent: tuple[npt.ArrayLike, npt.ArrayLike] | None = None,
xyz: str = "",
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log a proper rigid 3D transform between this entity and the parent (_deprecated_).
Set either `parent_from_child` or `child_from_parent` to a tuple of `(translation_xyz, quat_xyzw)`.
Note: This function is deprecated. Use [`rerun.log_transform3d`][] instead.
Parent-from-child
-----------------
Also known as pose (e.g. camera extrinsics).
The translation is the position of the entity in the parent space.
The resulting transform from child to parent corresponds to taking a point in the child space,
rotating it by the given rotations, and then translating it by the given translation:
`point_parent = translation + quat * point_child * quat*`
Example
-------
```
t = 0.0
translation = [math.sin(t), math.cos(t), 0.0] # circle around origin
    rotation = [0.5, 0.0, 0.0, np.sin(np.pi/3)]  # 60 degrees around x-axis: xyzw = [sin(30°), 0, 0, cos(30°)]
rerun.log_rigid3("sun/planet", parent_from_child=(translation, rotation))
```
Parameters
----------
entity_path:
Path of the *child* space in the space hierarchy.
parent_from_child:
A tuple of `(translation_xyz, quat_xyzw)` mapping points in the child space to the parent space.
child_from_parent:
the inverse of `parent_from_child`
xyz:
Optionally set the view coordinates of this entity, e.g. to `RDF` for `X=Right, Y=Down, Z=Forward`.
This is a convenience for also calling [log_view_coordinates][rerun.log_view_coordinates].
timeless:
If true, the transform will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
if parent_from_child and child_from_parent:
raise TypeError("Set either parent_from_child or child_from_parent, but not both.")
if parent_from_child:
rotation = None
if parent_from_child[1] is not None:
rotation = Quaternion(xyzw=parent_from_child[1])
log_transform3d(
entity_path,
Rigid3D(translation=parent_from_child[0], rotation=rotation),
timeless=timeless,
recording=recording,
)
elif child_from_parent:
rotation = None
if child_from_parent[1] is not None:
rotation = Quaternion(xyzw=child_from_parent[1])
log_transform3d(
entity_path,
Rigid3D(translation=child_from_parent[0], rotation=rotation),
from_parent=True,
timeless=timeless,
recording=recording,
)
else:
raise TypeError("Set either parent_from_child or child_from_parent.")
if xyz != "":
log_view_coordinates(
entity_path,
xyz=xyz,
timeless=timeless,
recording=recording,
        )

# --- rerun_sdk/rerun/log/transform.py (rerun_sdk 0.8.1) ---
from __future__ import annotations
from typing import Any, Iterable
import numpy as np
import numpy.typing as npt
from deprecated import deprecated
from rerun import bindings
from rerun.components.color import ColorRGBAArray
from rerun.components.draw_order import DrawOrderArray
from rerun.components.instance import InstanceArray
from rerun.components.linestrip import LineStrip2DArray, LineStrip3DArray
from rerun.components.radius import RadiusArray
from rerun.log import Color, Colors, _normalize_colors, _normalize_radii
from rerun.log.error_utils import _send_warning
from rerun.log.extension_components import _add_extension_components
from rerun.log.log_decorator import log_decorator
from rerun.recording_stream import RecordingStream
__all__ = [
"log_path",
"log_line_strip",
"log_line_strips_2d",
"log_line_strips_3d",
"log_line_segments",
]
@deprecated(version="0.2.0", reason="Use log_line_strip instead")
def log_path(
entity_path: str,
positions: npt.ArrayLike | None,
*,
stroke_width: float | None = None,
color: Color | None = None,
ext: dict[str, Any] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
log_line_strip(
entity_path, positions, stroke_width=stroke_width, color=color, ext=ext, timeless=timeless, recording=recording
)
@log_decorator
def log_line_strip(
entity_path: str,
positions: npt.ArrayLike | None,
*,
stroke_width: float | None = None,
color: Color | None = None,
draw_order: float | None = None,
ext: dict[str, Any] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
r"""
Log a line strip through 2D or 3D space.
A line strip is a list of points connected by line segments. It can be used to draw approximations of smooth curves.
The points will be connected in order, like so:
```
2------3 5
/ \ /
0----1 \ /
4
```
Parameters
----------
entity_path:
Path to the path in the space hierarchy
positions:
An Nx2 or Nx3 array of points along the path.
stroke_width:
Optional width of the line.
color:
Optional RGB or RGBA in sRGB gamma-space as either 0-1 floats or 0-255 integers, with separate alpha.
draw_order:
An optional floating point value that specifies the 2D drawing order.
Objects with higher values are drawn on top of those with lower values.
The default for lines is 20.0.
ext:
Optional dictionary of extension components. See [rerun.log_extension_components][]
timeless:
If true, the path will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
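    Example
    -------
    A minimal sketch of typical usage; the entity path and values are illustrative only:

    ```python
    import numpy as np

    # Five points along a 2D zig-zag; consecutive points are joined by segments.
    positions = np.asarray([[0, 0], [1, 0], [1.5, 1], [2.5, 1], [3, 0]], dtype="float32")
    # rerun.log_line_strip("image/path", positions, stroke_width=2.0)
    ```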
"""
recording = RecordingStream.to_native(recording)
if positions is not None:
positions = np.require(positions, dtype="float32")
instanced: dict[str, Any] = {}
splats: dict[str, Any] = {}
if positions is not None:
if positions.shape[1] == 2:
instanced["rerun.linestrip2d"] = LineStrip2DArray.from_numpy_arrays([positions])
elif positions.shape[1] == 3:
instanced["rerun.linestrip3d"] = LineStrip3DArray.from_numpy_arrays([positions])
else:
raise TypeError("Positions should be either Nx2 or Nx3")
if color is not None:
colors = _normalize_colors(color)
instanced["rerun.colorrgba"] = ColorRGBAArray.from_numpy(colors)
# We store the stroke_width in radius
if stroke_width:
radii = _normalize_radii([stroke_width / 2])
instanced["rerun.radius"] = RadiusArray.from_numpy(radii)
if draw_order is not None:
instanced["rerun.draw_order"] = DrawOrderArray.splat(draw_order)
if ext:
_add_extension_components(instanced, splats, ext, None)
if splats:
splats["rerun.instance_key"] = InstanceArray.splat()
bindings.log_arrow_msg(entity_path, components=splats, timeless=timeless, recording=recording)
    # Always log the primary component last so that range-based queries will include the other data. See #1215.
if instanced:
bindings.log_arrow_msg(entity_path, components=instanced, timeless=timeless, recording=recording)
@log_decorator
def log_line_strips_2d(
entity_path: str,
line_strips: Iterable[npt.ArrayLike] | None,
*,
identifiers: npt.ArrayLike | None = None,
stroke_widths: npt.ArrayLike | None = None,
colors: Color | Colors | None = None,
draw_order: float | None = None,
ext: dict[str, Any] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
r"""
Log a batch of line strips through 2D space.
Each line strip is a list of points connected by line segments. It can be used to draw
approximations of smooth curves.
The points will be connected in order, like so:
```
2------3 5
/ \ /
0----1 \ /
4
```
Parameters
----------
entity_path:
Path to the path in the space hierarchy
line_strips:
An iterable of Nx2 arrays of points along the path.
To log an empty line_strip use `np.zeros((0,0,3))` or `np.zeros((0,0,2))`
identifiers:
Unique numeric id that shows up when you hover or select the line.
stroke_widths:
        Optional widths of the lines.
colors:
Optional colors of the lines.
RGB or RGBA in sRGB gamma-space as either 0-1 floats or 0-255 integers, with separate alpha.
draw_order:
An optional floating point value that specifies the 2D drawing order.
Objects with higher values are drawn on top of those with lower values.
The default for lines is 20.0.
ext:
Optional dictionary of extension components. See [rerun.log_extension_components][]
timeless:
If true, the path will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
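    Example
    -------
    A minimal sketch of typical usage; the entity path and values are illustrative only:

    ```python
    import numpy as np

    # Two separate strips with different point counts; each strip is an Nx2 array.
    strips = [
        np.asarray([[0, 0], [1, 1], [2, 0]], dtype="float32"),
        np.asarray([[0, 2], [2, 2]], dtype="float32"),
    ]
    # rerun.log_line_strips_2d("image/strips", strips, colors=[[255, 0, 0], [0, 255, 0]])
    ```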
"""
recording = RecordingStream.to_native(recording)
colors = _normalize_colors(colors)
stroke_widths = _normalize_radii(stroke_widths)
radii = stroke_widths / 2.0
identifiers_np = np.array((), dtype="uint64")
if identifiers is not None:
try:
identifiers_np = np.require(identifiers, dtype="uint64")
except ValueError:
_send_warning("Only integer identifiers supported", 1)
# 0 = instanced, 1 = splat
comps = [{}, {}] # type: ignore[var-annotated]
if line_strips is not None:
line_strip_arrs = [np.require(line, dtype="float32") for line in line_strips]
dims = [line.shape[1] for line in line_strip_arrs]
if any(d != 2 for d in dims):
raise ValueError("All line strips must be Nx2")
comps[0]["rerun.linestrip2d"] = LineStrip2DArray.from_numpy_arrays(line_strip_arrs)
if len(identifiers_np):
comps[0]["rerun.instance_key"] = InstanceArray.from_numpy(identifiers_np)
if len(colors):
is_splat = len(colors.shape) == 1
if is_splat:
colors = colors.reshape(1, len(colors))
comps[is_splat]["rerun.colorrgba"] = ColorRGBAArray.from_numpy(colors)
# We store the stroke_width in radius
if len(radii):
is_splat = len(radii) == 1
comps[is_splat]["rerun.radius"] = RadiusArray.from_numpy(radii)
if draw_order is not None:
comps[1]["rerun.draw_order"] = DrawOrderArray.splat(draw_order)
if ext:
_add_extension_components(comps[0], comps[1], ext, identifiers_np)
if comps[1]:
comps[1]["rerun.instance_key"] = InstanceArray.splat()
bindings.log_arrow_msg(entity_path, components=comps[1], timeless=timeless, recording=recording)
    # Always log the primary component last so that range-based queries will include the other data. See #1215.
bindings.log_arrow_msg(entity_path, components=comps[0], timeless=timeless, recording=recording)
@log_decorator
def log_line_strips_3d(
entity_path: str,
line_strips: Iterable[npt.ArrayLike] | None,
*,
identifiers: npt.ArrayLike | None = None,
stroke_widths: npt.ArrayLike | None = None,
colors: Color | Colors | None = None,
draw_order: float | None = None,
ext: dict[str, Any] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
r"""
Log a batch of line strips through 3D space.
Each line strip is a list of points connected by line segments. It can be used to draw approximations
of smooth curves.
The points will be connected in order, like so:
```
2------3 5
/ \ /
0----1 \ /
4
```
Parameters
----------
entity_path:
Path to the path in the space hierarchy
line_strips:
An iterable of Nx3 arrays of points along the path.
To log an empty line_strip use `np.zeros((0,0,3))` or `np.zeros((0,0,2))`
identifiers:
Unique numeric id that shows up when you hover or select the line.
stroke_widths:
        Optional widths of the lines.
colors:
Optional colors of the lines.
RGB or RGBA in sRGB gamma-space as either 0-1 floats or 0-255 integers, with separate alpha.
draw_order:
An optional floating point value that specifies the 2D drawing order.
Objects with higher values are drawn on top of those with lower values.
The default for lines is 20.0.
ext:
Optional dictionary of extension components. See [rerun.log_extension_components][]
timeless:
If true, the path will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
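    Example
    -------
    A minimal sketch of typical usage; the entity path and values are illustrative only:

    ```python
    import numpy as np

    # Two separate 3D strips; each strip is an Nx3 array.
    strips = [
        np.asarray([[0, 0, 0], [1, 0, 1], [2, 0, 0]], dtype="float32"),
        np.asarray([[0, 1, 0], [2, 1, 0]], dtype="float32"),
    ]
    # rerun.log_line_strips_3d("world/trajectories", strips, stroke_widths=[0.05, 0.1])
    ```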
"""
recording = RecordingStream.to_native(recording)
colors = _normalize_colors(colors)
stroke_widths = _normalize_radii(stroke_widths)
radii = stroke_widths / 2.0
identifiers_np = np.array((), dtype="uint64")
if identifiers is not None:
try:
identifiers_np = np.require(identifiers, dtype="uint64")
except ValueError:
_send_warning("Only integer identifiers supported", 1)
# 0 = instanced, 1 = splat
comps = [{}, {}] # type: ignore[var-annotated]
if line_strips is not None:
line_strip_arrs = [np.require(line, dtype="float32") for line in line_strips]
dims = [line.shape[1] for line in line_strip_arrs]
if any(d != 3 for d in dims):
raise ValueError("All line strips must be Nx3")
comps[0]["rerun.linestrip3d"] = LineStrip3DArray.from_numpy_arrays(line_strip_arrs)
if len(identifiers_np):
comps[0]["rerun.instance_key"] = InstanceArray.from_numpy(identifiers_np)
if len(colors):
is_splat = len(colors.shape) == 1
if is_splat:
colors = colors.reshape(1, len(colors))
comps[is_splat]["rerun.colorrgba"] = ColorRGBAArray.from_numpy(colors)
# We store the stroke_width in radius
if len(radii):
is_splat = len(radii) == 1
comps[is_splat]["rerun.radius"] = RadiusArray.from_numpy(radii)
if draw_order is not None:
comps[1]["rerun.draw_order"] = DrawOrderArray.splat(draw_order)
if ext:
_add_extension_components(comps[0], comps[1], ext, identifiers_np)
if comps[1]:
comps[1]["rerun.instance_key"] = InstanceArray.splat()
bindings.log_arrow_msg(entity_path, components=comps[1], timeless=timeless, recording=recording)
    # Always log the primary component last so that range-based queries will include the other data. See #1215.
bindings.log_arrow_msg(entity_path, components=comps[0], timeless=timeless, recording=recording)
@log_decorator
def log_line_segments(
entity_path: str,
positions: npt.ArrayLike,
*,
stroke_width: float | None = None,
color: Color | None = None,
draw_order: float | None = None,
ext: dict[str, Any] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
r"""
Log many 2D or 3D line segments.
The points will be connected in even-odd pairs, like so:
```
2------3 5
/
0----1 /
4
```
Parameters
----------
entity_path:
Path to the line segments in the space hierarchy
positions:
An Nx2 or Nx3 array of points. Even-odd pairs will be connected as segments.
stroke_width:
Optional width of the line.
color:
Optional RGB or RGBA in sRGB gamma-space as either 0-1 floats or 0-255 integers, with separate alpha.
draw_order:
An optional floating point value that specifies the 2D drawing order.
Objects with higher values are drawn on top of those with lower values.
The default for lines is 20.0.
ext:
Optional dictionary of extension components. See [rerun.log_extension_components][]
timeless:
If true, the line segments will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
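    Example
    -------
    A minimal sketch of typical usage, including the even-odd pairing described above; the entity
    path and values are illustrative only:

    ```python
    import numpy as np

    # Four points form two independent segments: (p0, p1) and (p2, p3).
    positions = np.asarray([[0, 0], [1, 0], [0, 1], [1, 1]], dtype="float32")
    # rerun.log_line_segments("image/segments", positions, color=(255, 255, 0))
    # Internally, even-odd pairs are reshaped into strips of length 2:
    pairs = positions.reshape(len(positions) // 2, 2, 2)
    ```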
"""
recording = RecordingStream.to_native(recording)
if positions is None:
positions = np.require([], dtype="float32")
positions = np.require(positions, dtype="float32")
instanced: dict[str, Any] = {}
splats: dict[str, Any] = {}
if positions is not None:
# If not a multiple of 2, drop the last row
if len(positions) % 2:
positions = positions[:-1]
if positions.ndim > 1 and positions.shape[1] == 2:
# Reshape even-odd pairs into a collection of line-strips of length 2
# [[a00, a01], [a10, a11], [b00, b01], [b10, b11]]
# -> [[[a00, a01], [a10, a11]], [[b00, b01], [b10, b11]]]
positions = positions.reshape([len(positions) // 2, 2, 2])
instanced["rerun.linestrip2d"] = LineStrip2DArray.from_numpy_arrays(positions)
elif positions.ndim > 1 and positions.shape[1] == 3:
# Same as above but for 3d points
positions = positions.reshape([len(positions) // 2, 2, 3])
instanced["rerun.linestrip3d"] = LineStrip3DArray.from_numpy_arrays(positions)
else:
raise TypeError("Positions should be either Nx2 or Nx3")
# The current API splats both color and stroke-width, though the data-model doesn't
# require that we do so.
if color is not None:
colors = _normalize_colors(color)
splats["rerun.colorrgba"] = ColorRGBAArray.from_numpy(colors)
# We store the stroke_width in radius
if stroke_width:
radii = _normalize_radii([stroke_width / 2])
splats["rerun.radius"] = RadiusArray.from_numpy(radii)
if draw_order is not None:
instanced["rerun.draw_order"] = DrawOrderArray.splat(draw_order)
if ext:
_add_extension_components(instanced, splats, ext, None)
if splats:
splats["rerun.instance_key"] = InstanceArray.splat()
bindings.log_arrow_msg(entity_path, components=splats, timeless=timeless, recording=recording)
# Always log the primary component last so range-based queries will include the other data. See #1215.
if instanced:
bindings.log_arrow_msg(entity_path, components=instanced, timeless=timeless, recording=recording)
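The even-odd pairing performed by `log_line_segments` can be sketched in plain NumPy (a standalone illustration of the reshape above, not an SDK call):

```python
import numpy as np

# Four 2D points form two segments: (0,0)->(1,1) and (2,2)->(3,3).
positions = np.array([[0, 0], [1, 1], [2, 2], [3, 3]], dtype=np.float32)

# Drop a trailing unpaired point, then group even-odd pairs into
# line strips of length 2: shape (num_segments, 2, 2).
if len(positions) % 2:
    positions = positions[:-1]
segments = positions.reshape([len(positions) // 2, 2, 2])

print(segments.shape)  # (2, 2, 2)
```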
from __future__ import annotations
import logging
from dataclasses import dataclass
from typing import Any, Final
from rerun import bindings
from rerun.components.color import ColorRGBAArray
from rerun.components.instance import InstanceArray
from rerun.components.text_entry import TextEntryArray
from rerun.log import Color, _normalize_colors
from rerun.recording_stream import RecordingStream
# Fully qualified to avoid circular import
__all__ = [
"LogLevel",
"log_text_entry_internal",
]
@dataclass
class LogLevel:
"""
Represents the standard log levels.
This is a collection of constants rather than an enum because we do support
arbitrary strings as levels (e.g. for user-defined levels).
"""
CRITICAL: Final = "CRITICAL"
""" Designates catastrophic failures. """
ERROR: Final = "ERROR"
""" Designates very serious errors. """
WARN: Final = "WARN"
""" Designates hazardous situations. """
INFO: Final = "INFO"
""" Designates useful information. """
DEBUG: Final = "DEBUG"
""" Designates lower priority information. """
TRACE: Final = "TRACE"
""" Designates very low priority, often extremely verbose, information. """
def log_text_entry_internal(
entity_path: str,
text: str,
*,
level: str | None = LogLevel.INFO,
color: Color | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Internal API to log a text entry, with optional level.
This implementation doesn't support extension components, or the exception-capturing decorator
and is intended to be used from inside the other rerun log functions.
Parameters
----------
entity_path:
The object path to log the text entry under.
text:
The text to log.
level:
The level of the text entry (default: `LogLevel.INFO`). Note this can technically
be an arbitrary string, but it's recommended to use one of the constants
from [LogLevel][rerun.log.text.LogLevel]
color:
Optional RGB or RGBA in sRGB gamma-space as either 0-1 floats or 0-255 integers, with separate alpha.
timeless:
Whether the text entry should be timeless.
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
instanced: dict[str, Any] = {}
splats: dict[str, Any] = {}
if text:
instanced["rerun.text_entry"] = TextEntryArray.from_bodies_and_levels([(text, level)])
else:
logging.warning(f"Null text entry in log_text_entry('{entity_path}') will be dropped.")
if color is not None:
colors = _normalize_colors(color)
instanced["rerun.colorrgba"] = ColorRGBAArray.from_numpy(colors)
if splats:
splats["rerun.instance_key"] = InstanceArray.splat()
bindings.log_arrow_msg(entity_path, components=splats, timeless=timeless, recording=recording)
# Always log the primary component last so range-based queries will include the other data. See #1215.
if instanced:
bindings.log_arrow_msg(entity_path, components=instanced, timeless=timeless, recording=recording)
from __future__ import annotations
import logging
from typing import Any, Final
import rerun.log.extension_components
from rerun import bindings
from rerun.components.color import ColorRGBAArray
from rerun.components.instance import InstanceArray
from rerun.components.text_entry import TextEntryArray
from rerun.log import Color, _normalize_colors
from rerun.log.log_decorator import log_decorator
from rerun.log.text_internal import LogLevel
from rerun.recording_stream import RecordingStream
# Fully qualified to avoid circular import
__all__ = [
"LogLevel",
"LoggingHandler",
"log_text_entry",
]
class LoggingHandler(logging.Handler):
"""
Provides a logging handler that forwards all events to the Rerun SDK.
Because Rerun's data model doesn't match 1-to-1 with the different concepts from
Python's logging ecosystem, we need a way to map the latter to the former:
Mapping
-------
* Root Entity: Optional root entity to gather all the logs under.
* Entity path: the name of the logger responsible for the creation of the LogRecord
is used as the final entity path, appended after the Root Entity path.
* Level: the log level is mapped as-is.
* Body: the body of the text entry corresponds to the formatted output of
the LogRecord using the standard formatter of the logging package,
unless it has been overridden by the user.
[Read more about logging handlers](https://docs.python.org/3/howto/logging.html#handlers)
"""
LVL2NAME: Final = {
logging.CRITICAL: LogLevel.CRITICAL,
logging.ERROR: LogLevel.ERROR,
logging.WARNING: LogLevel.WARN,
logging.INFO: LogLevel.INFO,
logging.DEBUG: LogLevel.DEBUG,
}
def __init__(self, root_entity_path: str | None = None):
logging.Handler.__init__(self)
self.root_entity_path = root_entity_path
def emit(self, record: logging.LogRecord) -> None:
"""Emits a record to the Rerun SDK."""
objpath = record.name.replace(".", "/")
if self.root_entity_path is not None:
objpath = f"{self.root_entity_path}/{objpath}"
level = self.LVL2NAME.get(record.levelno)
if level is None: # user-defined level
level = record.levelname
# NOTE: will go to the most appropriate recording!
log_text_entry(objpath, record.getMessage(), level=level)
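The logger-name-to-entity-path mapping used by `emit` can be exercised on its own (the logger name and root path below are made up for illustration):

```python
import logging

record = logging.LogRecord(
    name="my_app.vision.tracker",
    level=logging.INFO,
    pathname="example.py",
    lineno=0,
    msg="hello",
    args=(),
    exc_info=None,
)

# Dots in the logger name become path separators, nested under the root.
objpath = record.name.replace(".", "/")
root_entity_path = "logs"  # hypothetical root
objpath = f"{root_entity_path}/{objpath}"
print(objpath)  # logs/my_app/vision/tracker
```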
@log_decorator
def log_text_entry(
entity_path: str,
text: str,
*,
level: str | None = LogLevel.INFO,
color: Color | None = None,
ext: dict[str, Any] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log a text entry, with optional level.
Parameters
----------
entity_path:
The object path to log the text entry under.
text:
The text to log.
level:
The level of the text entry (default: `LogLevel.INFO`). Note this can technically
be an arbitrary string, but it's recommended to use one of the constants
from [LogLevel][rerun.log.text.LogLevel]
color:
Optional RGB or RGBA in sRGB gamma-space as either 0-1 floats or 0-255 integers, with separate alpha.
ext:
Optional dictionary of extension components. See [rerun.log_extension_components][]
timeless:
Whether the text entry should be timeless.
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
instanced: dict[str, Any] = {}
splats: dict[str, Any] = {}
if text:
instanced["rerun.text_entry"] = TextEntryArray.from_bodies_and_levels([(text, level)])
else:
logging.warning(f"Null text entry in log_text_entry('{entity_path}') will be dropped.")
if color is not None:
colors = _normalize_colors(color)
instanced["rerun.colorrgba"] = ColorRGBAArray.from_numpy(colors)
if ext:
rerun.log.extension_components._add_extension_components(instanced, splats, ext, None)
if splats:
splats["rerun.instance_key"] = InstanceArray.splat()
bindings.log_arrow_msg(entity_path, components=splats, timeless=timeless, recording=recording)
# Always log the primary component last so range-based queries will include the other data. See #1215.
if instanced:
bindings.log_arrow_msg(entity_path, components=instanced, timeless=timeless, recording=recording)
from __future__ import annotations
from io import BytesIO
from typing import Any
import numpy as np
import numpy.typing as npt
from PIL import Image
from rerun import bindings
from rerun.log.error_utils import _send_warning
from rerun.log.file import ImageFormat, log_image_file
from rerun.log.log_decorator import log_decorator
from rerun.log.tensor import Tensor, _log_tensor, _to_numpy
from rerun.recording_stream import RecordingStream
__all__ = [
"log_image",
"log_depth_image",
"log_segmentation_image",
]
@log_decorator
def log_image(
entity_path: str,
image: Tensor,
*,
draw_order: float | None = None,
ext: dict[str, Any] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
jpeg_quality: int | None = None,
) -> None:
"""
Log a gray or color image.
The image should either have 1, 3 or 4 channels (gray, RGB or RGBA).
Supported dtypes
----------------
- uint8, uint16, uint32, uint64: color components should be in 0-`max_uint` sRGB gamma space, except for alpha
which should be in 0-`max_uint` linear space.
- float16, float32, float64: all color components should be in 0-1 linear space.
- int8, int16, int32, int64: if all pixels are positive, they are interpreted as their unsigned counterparts.
Otherwise, the image is normalized before display (the pixel with the lowest value is black and the pixel with
the highest value is white).
Parameters
----------
entity_path:
Path to the image in the space hierarchy.
image:
A [Tensor][rerun.log.tensor.Tensor] representing the image to log.
draw_order:
An optional floating point value that specifies the 2D drawing order.
Objects with higher values are drawn on top of those with lower values.
The default for images is -10.0.
ext:
Optional dictionary of extension components. See [rerun.log_extension_components][]
timeless:
If true, the image will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
jpeg_quality:
If set, encode the image as a JPEG to save storage space.
Higher quality = larger file size.
A quality of 95 still saves a lot of space, but is visually very similar.
JPEG compression works best for photographs.
Only RGB images are supported.
Note that compressing to JPEG costs a bit of CPU time, both when logging
and later when viewing them.
"""
recording = RecordingStream.to_native(recording)
image = _to_numpy(image)
shape = image.shape
non_empty_dims = [d for d in shape if d != 1]
num_non_empty_dims = len(non_empty_dims)
interpretable_as_image = True
# Catch some errors early:
if num_non_empty_dims < 2 or 3 < num_non_empty_dims:
_send_warning(f"Expected image, got array of shape {shape}", 1, recording=recording)
interpretable_as_image = False
if num_non_empty_dims == 3:
depth = shape[-1]
if depth not in (1, 3, 4):
_send_warning(
f"Expected image depth of 1 (gray), 3 (RGB) or 4 (RGBA). Instead got array of shape {shape}",
1,
recording=recording,
)
interpretable_as_image = False
# TODO(#672): Don't squeeze once the image view can handle extra empty dimensions
if interpretable_as_image and num_non_empty_dims != len(shape):
image = np.squeeze(image)
if jpeg_quality is not None:
# TODO(emilk): encode JPEG in background thread instead
if image.dtype not in ["uint8", "int32", "float32"]:
# Convert to a format supported by Image.fromarray
image = image.astype("float32")
pil_image = Image.fromarray(image)
output = BytesIO()
pil_image.save(output, format="JPEG", quality=jpeg_quality)
jpeg_bytes = output.getvalue()
output.close()
# TODO(emilk): pass draw_order too
log_image_file(entity_path=entity_path, img_bytes=jpeg_bytes, img_format=ImageFormat.JPEG, timeless=timeless)
return
_log_tensor(entity_path, image, draw_order=draw_order, ext=ext, timeless=timeless, recording=recording)
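The size-1 dimension handling above amounts to a conditional `np.squeeze`; a standalone sketch:

```python
import numpy as np

# A (1, H, W, 1) array is still interpretable as a 2D grayscale image.
image = np.zeros((1, 480, 640, 1), dtype=np.uint8)
non_empty_dims = [d for d in image.shape if d != 1]
if len(non_empty_dims) != len(image.shape):
    image = np.squeeze(image)  # drop the size-1 dimensions
print(image.shape)  # (480, 640)
```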
@log_decorator
def log_depth_image(
entity_path: str,
image: Tensor,
*,
draw_order: float | None = None,
meter: float | None = None,
ext: dict[str, Any] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log a depth image.
The image must be a 2D array.
Supported dtypes
----------------
float16, float32, float64, uint8, uint16, uint32, uint64, int8, int16, int32, int64
Parameters
----------
entity_path:
Path to the image in the space hierarchy.
image:
A [Tensor][rerun.log.tensor.Tensor] representing the depth image to log.
draw_order:
An optional floating point value that specifies the 2D drawing order.
Objects with higher values are drawn on top of those with lower values.
The default for images is -10.0.
meter:
How long is a meter in the given dtype?
For instance: with uint16, perhaps meter=1000 which would mean
you have millimeter precision and a range of up to ~65 meters (2^16 / 1000).
ext:
Optional dictionary of extension components. See [rerun.log_extension_components][]
timeless:
If true, the image will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
image = _to_numpy(image)
# TODO(#635): Remove when issue with displaying f64 depth images is fixed.
if image.dtype == np.float64:
image = image.astype(np.float32)
shape = image.shape
non_empty_dims = [d for d in shape if d != 1]
num_non_empty_dims = len(non_empty_dims)
# Catch some errors early:
if num_non_empty_dims != 2:
_send_warning(f"Expected 2D depth image, got array of shape {shape}", 1, recording=recording)
_log_tensor(
entity_path, image, timeless=timeless, meaning=bindings.TensorDataMeaning.Depth, recording=recording
)
else:
# TODO(#672): Don't squeeze once the image view can handle extra empty dimensions.
if num_non_empty_dims != len(shape):
image = np.squeeze(image)
_log_tensor(
entity_path,
image,
draw_order=draw_order,
meter=meter,
ext=ext,
timeless=timeless,
meaning=bindings.TensorDataMeaning.Depth,
recording=recording,
)
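The `meter` parameter is a pure unit conversion; for example, with `uint16` millimeter data (the values below are illustrative):

```python
import numpy as np

meter = 1000  # raw uint16 values are millimeters
raw_depth = np.array([500, 1500, 65535], dtype=np.uint16)
depth_in_meters = raw_depth / meter
# 0.5 m, 1.5 m, and ~65.5 m (the maximum representable range)
```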
@log_decorator
def log_segmentation_image(
entity_path: str,
image: npt.ArrayLike,
*,
draw_order: float | None = None,
ext: dict[str, Any] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log an image made up of integer class-ids.
The image should have 1 channel, i.e. be either `H x W` or `H x W x 1`.
See: [rerun.log_annotation_context][] for information on how to map the class-ids to
colors and labels.
Supported dtypes
----------------
uint8, uint16
Parameters
----------
entity_path:
Path to the image in the space hierarchy.
image:
A [Tensor][rerun.log.tensor.Tensor] representing the segmentation image to log.
draw_order:
An optional floating point value that specifies the 2D drawing order.
Objects with higher values are drawn on top of those with lower values.
The default for images is -10.0.
ext:
Optional dictionary of extension components. See [rerun.log_extension_components][]
timeless:
If true, the image will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
image = np.array(image, copy=False)
if image.dtype not in (np.dtype("uint8"), np.dtype("uint16")):
image = np.require(image, np.uint16)
non_empty_dims = [d for d in image.shape if d != 1]
num_non_empty_dims = len(non_empty_dims)
# Catch some errors early:
if num_non_empty_dims != 2:
_send_warning(
f"Expected single channel image, got array of shape {image.shape}. Can't interpret as segmentation image.",
1,
recording=recording,
)
_log_tensor(
entity_path,
tensor=image,
draw_order=draw_order,
ext=ext,
timeless=timeless,
recording=recording,
)
else:
# TODO(#672): Don't squeeze once the image view can handle extra empty dimensions.
if num_non_empty_dims != len(image.shape):
image = np.squeeze(image)
_log_tensor(
entity_path,
tensor=image,
draw_order=draw_order,
meaning=bindings.TensorDataMeaning.ClassId,
ext=ext,
timeless=timeless,
recording=recording,
) | /rerun_sdk-0.8.1-cp38-abi3-win_amd64.whl/rerun_sdk/rerun/log/image.py | 0.816736 | 0.519765 | image.py | pypi |
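The dtype coercion at the top of `log_segmentation_image` can be reproduced with plain NumPy (the class-ids below are arbitrary):

```python
import numpy as np

labels = np.array([[0, 1], [2, 3]], dtype=np.int64)  # not uint8/uint16
if labels.dtype not in (np.dtype("uint8"), np.dtype("uint16")):
    labels = np.require(labels, np.uint16)
print(labels.dtype)  # uint16
```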
from __future__ import annotations
from typing import Any, Sequence
import numpy as np
import numpy.typing as npt
import pyarrow as pa
import rerun.log.error_utils
from rerun import bindings
from rerun.components.instance import InstanceArray
from rerun.log.log_decorator import log_decorator
from rerun.recording_stream import RecordingStream
# Fully qualified to avoid circular import
__all__ = [
"_add_extension_components",
"log_extension_components",
]
EXT_PREFIX = "ext."
EXT_COMPONENT_TYPES: dict[str, Any] = {}
def _add_extension_components(
instanced: dict[str, Any],
splats: dict[str, Any],
ext: dict[str, Any],
identifiers: npt.NDArray[np.uint64] | None,
) -> None:
for name, value in ext.items():
# Don't log empty components
if value is None:
continue
# Add the ext prefix, unless it's already there
if not name.startswith(EXT_PREFIX):
name = EXT_PREFIX + name
np_type, pa_type = EXT_COMPONENT_TYPES.get(name, (None, None))
try:
if np_type is not None:
np_value = np.atleast_1d(np.array(value, copy=False, dtype=np_type))
pa_value = pa.array(np_value, type=pa_type)
else:
np_value = np.atleast_1d(np.array(value, copy=False))
pa_value = pa.array(np_value)
EXT_COMPONENT_TYPES[name] = (np_value.dtype, pa_value.type)
except Exception as ex:
rerun.log.error_utils._send_warning(
"Error converting extension data to arrow for component {}. Dropping.\n{}: {}".format(
name, type(ex).__name__, ex
),
1,
)
continue
is_splat = (len(np_value) == 1) and (identifiers is None or len(identifiers) != 1)
if is_splat:
splats[name] = pa_value
else:
instanced[name] = pa_value
@log_decorator
def log_extension_components(
entity_path: str,
ext: dict[str, Any],
*,
identifiers: Sequence[int] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log an arbitrary collection of extension components.
Each item in `ext` will be logged as a separate component.
- The key will be used as the name of the component
- The value must be able to be converted to an array of arrow types. In general, if
you can pass it to [pyarrow.array](https://arrow.apache.org/docs/python/generated/pyarrow.array.html),
you can log it as an extension component.
All values must either have the same length, or be singular in which case they will be
treated as a splat.
Extension components will be prefixed with "ext." to avoid collisions with rerun native components.
You do not need to include this prefix; it will be added for you.
Note: rerun requires that a given component only take on a single type. The first type logged
will be the type that is used for all future logs of that component. The API will make
a best effort to do type conversion if supported by numpy and arrow. Any components that
can't be converted will be dropped.
If you want to inspect how your component will be converted to the underlying
arrow data, the following snippet is what is happening internally:
```
np_value = np.atleast_1d(np.array(value, copy=False))
pa_value = pa.array(np_value)
```
Parameters
----------
entity_path:
Path to the extension components in the space hierarchy.
ext:
A dictionary of extension components.
identifiers:
Optional identifiers for each component. If provided, must be the same length as the components.
timeless:
If true, the components will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
recording = RecordingStream.to_native(recording)
identifiers_np = np.array((), dtype="uint64")
if identifiers:
try:
identifiers = [int(id) for id in identifiers]
identifiers_np = np.array(identifiers, dtype="uint64")
except ValueError:
rerun.log.error_utils._send_warning("Only integer identifiers supported", 1)
instanced: dict[str, Any] = {}
splats: dict[str, Any] = {}
if len(identifiers_np):
instanced["rerun.instance_key"] = InstanceArray.from_numpy(identifiers_np)
_add_extension_components(instanced, splats, ext, identifiers_np)
if splats:
splats["rerun.instance_key"] = InstanceArray.splat()
bindings.log_arrow_msg(entity_path, components=splats, timeless=timeless, recording=recording)
# Always log the primary component last so range-based queries will include the other data. See #1215.
if instanced:
bindings.log_arrow_msg(entity_path, components=instanced, timeless=timeless, recording=recording)
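The splat-vs-instanced decision in `_add_extension_components` boils down to a length check; a standalone sketch (the values are illustrative):

```python
import numpy as np

identifiers = np.array([10, 11, 12], dtype=np.uint64)  # three instances

single = np.atleast_1d(np.array(0.5))               # one value for everything
per_instance = np.atleast_1d(np.array([1, 2, 3]))   # one value per instance

def is_splat(np_value, identifiers):
    # A single value is splatted across all instances, unless there is
    # exactly one identifier (in which case it is just one instance).
    return len(np_value) == 1 and (identifiers is None or len(identifiers) != 1)

print(is_splat(single, identifiers))        # True
print(is_splat(per_instance, identifiers))  # False
```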
from __future__ import annotations
from typing import Optional, Sequence, Union
import numpy as np
import numpy.typing as npt
ColorDtype = Union[np.uint8, np.float32, np.float64]
Color = Union[npt.NDArray[ColorDtype], Sequence[Union[int, float]]]
Colors = Union[Sequence[Color], npt.NDArray[ColorDtype]]
OptionalClassIds = Optional[Union[int, npt.ArrayLike]]
OptionalKeyPointIds = Optional[Union[int, npt.ArrayLike]]
def _to_sequence(array: npt.ArrayLike | None) -> Sequence[float] | None:
return np.require(array, float).tolist() # type: ignore[no-any-return]
def _normalize_colors(colors: Color | Colors | None = None) -> npt.NDArray[np.uint8]:
"""
Normalize flexible colors arrays.
Float colors are assumed to be in 0-1 gamma sRGB space.
All other colors are assumed to be in 0-255 gamma sRGB space.
If there is an alpha, we assume it is in linear space, and separate (NOT pre-multiplied).
"""
if colors is None:
# An empty array represents no colors.
return np.array((), dtype=np.uint8)
else:
colors_array = np.array(colors, copy=False)
# Rust expects colors in 0-255 uint8
if colors_array.dtype.type in [np.float32, np.float64]:
# Assume gamma-space colors
return np.require(np.round(colors_array * 255.0), np.uint8)
return np.require(colors_array, np.uint8)
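For example, the float path of `_normalize_colors` scales 0-1 values to 0-255:

```python
import numpy as np

colors = np.array([0.0, 0.5, 1.0], dtype=np.float32)  # 0-1 gamma sRGB
as_uint8 = np.require(np.round(colors * 255.0), np.uint8)
print(as_uint8.tolist())  # [0, 128, 255]
```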
def _normalize_ids(class_ids: OptionalClassIds = None) -> npt.NDArray[np.uint16]:
"""Normalize flexible class id arrays."""
if class_ids is None:
return np.array((), dtype=np.uint16)
else:
# TODO(andreas): Does this need optimizing for the case where class_ids is already an np array?
return np.atleast_1d(np.array(class_ids, dtype=np.uint16, copy=False))
def _normalize_radii(radii: npt.ArrayLike | None = None) -> npt.NDArray[np.float32]:
"""Normalize flexible radii arrays."""
if radii is None:
return np.array((), dtype=np.float32)
else:
return np.atleast_1d(np.array(radii, dtype=np.float32, copy=False))
def _normalize_labels(labels: str | Sequence[str] | None) -> Sequence[str]:
if labels is None:
return []
else:
return labels
def _normalize_matrix3(matrix: npt.ArrayLike | None) -> npt.ArrayLike:
matrix = np.eye(3) if matrix is None else matrix
matrix = np.array(matrix, dtype=np.float32, order="F")
if matrix.shape != (3, 3):
raise ValueError(f"Expected 3x3 matrix, shape was instead {matrix.shape}")
# Rerun is column major internally, tell numpy to use Fortran order which is just that.
return matrix.flatten(order="F")
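The column-major flattening in `_normalize_matrix3` can be verified directly:

```python
import numpy as np

matrix = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]], dtype=np.float32)
# Fortran order walks columns first, which is what Rerun expects internally.
flat = matrix.flatten(order="F")
print(flat.tolist())  # [1.0, 4.0, 7.0, 2.0, 5.0, 8.0, 3.0, 6.0, 9.0]
```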
from __future__ import annotations
from typing import Any, Iterable, Protocol, Union
import numpy as np
import numpy.typing as npt
from rerun import bindings
from rerun.components.draw_order import DrawOrderArray
from rerun.components.instance import InstanceArray
from rerun.components.tensor import TensorArray
from rerun.log.error_utils import _send_warning
from rerun.log.extension_components import _add_extension_components
from rerun.log.log_decorator import log_decorator
from rerun.recording_stream import RecordingStream
__all__ = [
"log_tensor",
]
class TorchTensorLike(Protocol):
"""Describes what is needed from a Torch Tensor to be loggable to Rerun."""
def numpy(self, force: bool) -> npt.NDArray[Any]:
...
Tensor = Union[npt.ArrayLike, TorchTensorLike]
"""Type helper for a tensor-like object that can be logged to Rerun."""
def _to_numpy(tensor: Tensor) -> npt.NDArray[Any]:
# isinstance is 4x faster than catching AttributeError
if isinstance(tensor, np.ndarray):
return tensor
try:
# Make available to the cpu
return tensor.numpy(force=True) # type: ignore[union-attr]
except AttributeError:
return np.array(tensor, copy=False)
@log_decorator
def log_tensor(
entity_path: str,
tensor: npt.ArrayLike,
*,
names: Iterable[str | None] | None = None,
meter: float | None = None,
ext: dict[str, Any] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log an n-dimensional tensor.
Parameters
----------
entity_path:
Path to the tensor in the space hierarchy.
tensor:
A [Tensor][rerun.log.tensor.Tensor] object.
names:
Optional names for each dimension of the tensor.
meter:
Optional scale of the tensor (e.g. meters per cell).
ext:
Optional dictionary of extension components. See [rerun.log_extension_components][]
timeless:
If true, the tensor will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
"""
_log_tensor(
entity_path,
tensor=_to_numpy(tensor),
names=names,
meter=meter,
ext=ext,
timeless=timeless,
recording=recording,
)
def _log_tensor(
entity_path: str,
tensor: npt.NDArray[Any],
draw_order: float | None = None,
names: Iterable[str | None] | None = None,
meter: float | None = None,
meaning: bindings.TensorDataMeaning = None,
ext: dict[str, Any] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""Log a general tensor, perhaps with named dimensions."""
if names is not None:
names = list(names)
if len(tensor.shape) != len(names):
_send_warning(
(
f"len(tensor.shape) = len({tensor.shape}) = {len(tensor.shape)} != "
+ f"len(names) = len({names}) = {len(names)}. Dropping tensor dimension names."
),
2,
recording=recording,
)
names = None
SUPPORTED_DTYPES: Any = [
np.uint8,
np.uint16,
np.uint32,
np.uint64,
np.int8,
np.int16,
np.int32,
np.int64,
np.float16,
np.float32,
np.float64,
]
if tensor.dtype not in SUPPORTED_DTYPES:
_send_warning(
f"Unsupported dtype: {tensor.dtype}. Expected a numeric type. Skipping this tensor.",
2,
recording=recording,
)
return
instanced: dict[str, Any] = {}
splats: dict[str, Any] = {}
instanced["rerun.tensor"] = TensorArray.from_numpy(tensor, names, meaning, meter)
if draw_order is not None:
instanced["rerun.draw_order"] = DrawOrderArray.splat(draw_order)
if ext:
_add_extension_components(instanced, splats, ext, None)
if splats:
splats["rerun.instance_key"] = InstanceArray.splat()
bindings.log_arrow_msg(
entity_path,
components=splats,
timeless=timeless,
recording=recording,
)
# Always log the primary component last so range-based queries will include the other data. See #1215.
if instanced:
bindings.log_arrow_msg(
entity_path,
components=instanced,
timeless=timeless,
recording=recording,
)
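The dimension-name validation in `_log_tensor` simply compares lengths; a standalone sketch:

```python
import numpy as np

tensor = np.zeros((2, 3, 4))
names = ["batch", "row"]  # only 2 names for a 3-D tensor
if len(tensor.shape) != len(names):
    names = None  # mismatched names are dropped (with a warning in the SDK)
print(names)  # None
```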
from __future__ import annotations
import numpy.typing as npt
from rerun import bindings
from rerun.components.pinhole import Pinhole, PinholeArray
from rerun.log.error_utils import _send_warning
from rerun.log.log_decorator import log_decorator
from rerun.recording_stream import RecordingStream
__all__ = [
"log_pinhole",
]
@log_decorator
def log_pinhole(
entity_path: str,
*,
width: int,
height: int,
focal_length_px: float | npt.ArrayLike | None = None,
principal_point_px: npt.ArrayLike | None = None,
child_from_parent: npt.ArrayLike | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
camera_xyz: str | None = None,
) -> None:
"""
Log a perspective camera model.
This logs the pinhole model that projects points from the parent (camera) space to this space (image) such that:
```
point_image_hom = child_from_parent * point_cam
point_image = point_image_hom[:2] / point_image_hom[2]
```
Where `point_image_hom` is the projected point in the image space expressed in homogeneous coordinates.
Example
-------
```
width = 640
height = 480
f_len = (height * width) ** 0.5
rerun.log_pinhole("world/camera/image",
width = width,
height = height,
focal_length_px = f_len)
# More explicit:
u_cen = width / 2
v_cen = height / 2
rerun.log_pinhole("world/camera/image",
width = width,
height = height,
child_from_parent = [[f_len, 0, u_cen],
[0, f_len, v_cen],
[0, 0, 1 ]],
camera_xyz="RDF")
```
Parameters
----------
entity_path:
Path to the child (image) space in the space hierarchy.
focal_length_px:
The focal length of the camera in pixels.
This is the diagonal of the projection matrix.
Set one value for symmetric cameras, or two values (X=Right, Y=Down) for anamorphic cameras.
principal_point_px:
The center of the camera in pixels.
The default is half the width and height.
This is the last column of the projection matrix.
Expects two values along the dimensions Right and Down
child_from_parent:
Row-major intrinsics matrix for projecting from camera space to image space.
The first two axes are X=Right and Y=Down, respectively.
Projection is done along the positive third (Z=Forward) axis.
This can be specified _instead_ of `focal_length_px` and `principal_point_px`.
width:
Width of the image in pixels.
height:
Height of the image in pixels.
timeless:
If true, the camera will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
camera_xyz:
Sets the view coordinates for the camera. The default is "RDF", i.e. X=Right, Y=Down, Z=Forward,
and this is also the recommended setting.
This means that the camera frustum will point along the positive Z axis of the parent space,
and the cameras "up" direction will be along the negative Y axis of the parent space.
Each letter represents:
* R: Right
* L: Left
* U: Up
* D: Down
* F: Forward
* B: Back
The camera frustum will point along whichever axis is set to `F` (or the opposite of `B`).
When logging a depth image under this entity, this is the direction the point cloud will be projected.
With XYZ=RDF, the default forward is +Z.
The frustum's "up" direction will be whichever axis is set to `U` (or the opposite of `D`).
This will match the negative Y direction of pixel space (all images are assumed to have xyz=RDF).
With RDF, the default up is -Y.
The frustum's "right" direction will be whichever axis is set to `R` (or the opposite of `L`).
This will match the positive X direction of pixel space (all images are assumed to have xyz=RDF).
With RDF, the default right is +X.
Other common formats are "RUB" (X=Right, Y=Up, Z=Back) and "FLU" (X=Forward, Y=Left, Z=Up).
Equivalent to calling [`rerun.log_view_coordinates(entity, xyz=…)`][rerun.log_view_coordinates].
NOTE: setting this to something other than "RDF" (the default) will change the orientation of the camera frustum,
and make the pinhole matrix not match up with the coordinate system of the pinhole entity.
The pinhole matrix (the `child_from_parent` argument) always projects along the third (Z) axis,
but will be re-oriented to project along the forward axis of the `camera_xyz` argument.
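As a minimal, illustrative sketch (image size assumed), this is how the intrinsics matrix is
assembled from a focal length and principal point, mirroring the fallback used when
`child_from_parent` is not given:
```python
width, height = 640, 480  # assumed image size
focal_length_px = (height * width) ** 0.5  # same "reasonable default" as the fallback below
u_cen, v_cen = width / 2, height / 2  # principal point defaults to the image center
child_from_parent = [
    [focal_length_px, 0, u_cen],
    [0, focal_length_px, v_cen],
    [0, 0, 1],
]
```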
"""
matrix: npt.ArrayLike
if child_from_parent is None:
# TODO(emilk): Use a union type for the Pinhole component instead of converting to a matrix here
if focal_length_px is None:
_send_warning("log_pinhole: either child_from_parent or focal_length_px must be set", 1)
focal_length_px = (height * width) ** 0.5 # a reasonable default
if principal_point_px is None:
principal_point_px = [width / 2, height / 2]
if type(focal_length_px) in (int, float):
fl_x = focal_length_px
fl_y = focal_length_px
else:
try:
# TODO(emilk): check that it is 2 elements long
fl_x = focal_length_px[0] # type: ignore[index]
fl_y = focal_length_px[1] # type: ignore[index]
except Exception:
_send_warning("log_pinhole: expected focal_length_px to be one or two floats", 1)
fl_x = width / 2
fl_y = fl_x
try:
# TODO(emilk): check that it is 2 elements long
u_cen = principal_point_px[0] # type: ignore[index]
v_cen = principal_point_px[1] # type: ignore[index]
except Exception:
_send_warning("log_pinhole: expected principal_point_px to be one or two floats", 1)
u_cen = width / 2
v_cen = height / 2
matrix = [[fl_x, 0, u_cen], [0, fl_y, v_cen], [0, 0, 1]] # type: ignore[assignment]
else:
matrix = child_from_parent
if focal_length_px is not None:
_send_warning("log_pinhole: both child_from_parent and focal_length_px set", 1)
if principal_point_px is not None:
_send_warning("log_pinhole: both child_from_parent and principal_point_px set", 1)
instanced = {"rerun.pinhole": PinholeArray.from_pinhole(Pinhole(matrix, [width, height]))}
bindings.log_arrow_msg(entity_path, components=instanced, timeless=timeless)
if camera_xyz:
bindings.log_view_coordinates_xyz(
entity_path,
xyz=camera_xyz,
timeless=timeless,
recording=recording,
) | /rerun_sdk-0.8.1-cp38-abi3-win_amd64.whl/rerun_sdk/rerun/log/camera.py | 0.871324 | 0.718113 | camera.py | pypi |
from __future__ import annotations
import rerun_bindings as bindings # type: ignore[attr-defined]
from rerun.recording_stream import RecordingStream
def new_blueprint(
application_id: str,
*,
blueprint_id: str | None = None,
make_default: bool = False,
make_thread_default: bool = False,
spawn: bool = False,
add_to_app_default_blueprint: bool = False,
default_enabled: bool = True,
) -> RecordingStream:
"""
Creates a new blueprint with a user-chosen application id (name) to configure the appearance of Rerun.
If you only need a single global blueprint, [`rerun.init`][] might be simpler.
Parameters
----------
application_id : str
Your Rerun recordings will be categorized by this application id, so
try to pick a unique one for each application that uses the Rerun SDK.
For example, if you have one application doing object detection
and another doing camera calibration, you could have
`rerun.init("object_detector")` and `rerun.init("calibrator")`.
blueprint_id : Optional[str]
Set the blueprint ID that this process is logging to, as a UUIDv4.
The default blueprint_id is based on `multiprocessing.current_process().authkey`
which means that all processes spawned with `multiprocessing`
will have the same default blueprint_id.
If you are not using `multiprocessing` and still want several different Python
processes to log to the same Rerun instance (and be part of the same blueprint),
you will need to manually assign them all the same blueprint_id.
Any random UUIDv4 will work, or copy the blueprint_id for the parent process.
make_default : bool
If true (_not_ the default), the newly initialized blueprint will replace the current
active one (if any) in the global scope.
make_thread_default : bool
If true (_not_ the default), the newly initialized blueprint will replace the current
active one (if any) in the thread-local scope.
spawn : bool
Spawn a Rerun Viewer and stream logging data to it.
Short for calling `spawn` separately.
If you don't call this, log events will be buffered indefinitely until
you call either `connect`, `show`, or `save`.
add_to_app_default_blueprint
Should the blueprint be appended to the existing app-default blueprint instead of creating a new one?
default_enabled
Should Rerun logging be on by default?
Can be overridden with the `RERUN` env-var, e.g. `RERUN=on` or `RERUN=off`.
Returns
-------
RecordingStream
A handle to the [`rerun.RecordingStream`][]. Use it to log data to Rerun.
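The default `blueprint_id` derivation described above can be sketched roughly as follows
(an illustrative approximation, not the SDK's exact algorithm; a deterministic v5 UUID
stands in for the v4 described above):
```python
import multiprocessing
import uuid

# Hypothetical sketch: derive a stable UUID from the shared multiprocessing
# authkey, so processes spawned via `multiprocessing` agree on the same id.
key = bytes(multiprocessing.current_process().authkey)
derived_blueprint_id = uuid.uuid5(uuid.NAMESPACE_OID, key.hex())
```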
"""
blueprint_id = application_id if add_to_app_default_blueprint else blueprint_id
blueprint = RecordingStream(
bindings.new_blueprint(
application_id=application_id,
blueprint_id=blueprint_id,
make_default=make_default,
make_thread_default=make_thread_default,
default_enabled=default_enabled,
)
)
if spawn:
from rerun.sinks import spawn as _spawn
_spawn(recording=blueprint)
return blueprint
def add_space_view(
*,
origin: str,
name: str | None,
entity_paths: list[str] | None,
blueprint: RecordingStream | None = None,
) -> None:
"""
Add a new space view to the blueprint.
Parameters
----------
origin : str
The EntityPath to use as the origin of this space view. All other entities will be transformed
to be displayed relative to this origin.
name : Optional[str]
The name of the space view to show in the UI. Will default to the origin if not provided.
entity_paths : Optional[List[str]]
The entities to be shown in the space view. If not provided, this will default to `[origin]`.
blueprint : Optional[RecordingStream]
The blueprint to add the space view to. If None, the default global blueprint is used.
"""
if name is None:
name = origin
if entity_paths is None:
entity_paths = [origin]
blueprint = RecordingStream.to_native(blueprint)
bindings.add_space_view(name, origin, entity_paths, blueprint)
def set_panels(
*,
all_expanded: bool | None = None,
blueprint_view_expanded: bool | None = None,
selection_view_expanded: bool | None = None,
timeline_view_expanded: bool | None = None,
blueprint: RecordingStream | None = None,
) -> None:
"""
Change the visibility of the view panels.
Parameters
----------
all_expanded : Optional[bool]
Expand or collapse all panels.
blueprint_view_expanded : Optional[bool]
Expand or collapse the blueprint view panel.
selection_view_expanded : Optional[bool]
Expand or collapse the selection view panel.
timeline_view_expanded : Optional[bool]
Expand or collapse the timeline view panel.
blueprint : Optional[RecordingStream]
The blueprint to add the space view to. If None, the default global blueprint is used.
"""
blueprint = RecordingStream.to_native(blueprint)
bindings.set_panels(
blueprint_view_expanded=blueprint_view_expanded or all_expanded,
selection_view_expanded=selection_view_expanded or all_expanded,
timeline_view_expanded=timeline_view_expanded or all_expanded,
blueprint=blueprint,
)
def set_auto_space_views(
enabled: bool,
blueprint: RecordingStream | None = None,
) -> None:
"""
Change whether or not the blueprint automatically adds space views for all entities.
Parameters
----------
enabled : Optional[bool]
Whether or not to automatically add space views for all entities.
blueprint : Optional[RecordingStream]
The blueprint to add the space view to. If None, the default global blueprint is used.
"""
blueprint = RecordingStream.to_native(blueprint)
bindings.set_auto_space_views(enabled, blueprint) | /rerun_sdk-0.8.1-cp38-abi3-win_amd64.whl/rerun_sdk/rerun/log/experimental/blueprint.py | 0.929352 | 0.375706 | blueprint.py | pypi |
from __future__ import annotations
from typing import Any, Final, Type, cast
import pyarrow as pa
from rerun import bindings
__all__ = [
"annotation",
"arrow",
"box",
"color",
"draw_order",
"experimental",
"label",
"pinhole",
"point",
"quaternion",
"radius",
"rect2d",
"scalar_plot_props",
"scalar",
"tensor",
"text_entry",
"vec",
]
# Component names that are recognized by Rerun.
REGISTERED_COMPONENT_NAMES: Final[dict[str, pa.field]] = bindings.get_registered_component_names()
def ComponentTypeFactory(name: str, array_cls: type[pa.ExtensionArray], field: pa.Field) -> type[pa.ExtensionType]:
"""Build a component type wrapper."""
def __init__(self: type[pa.ExtensionType]) -> None:
pa.ExtensionType.__init__(self, self.storage_type, field.name)
def __arrow_ext_serialize__(self: type[pa.ExtensionType]) -> bytes:
return b""
@classmethod # type: ignore[misc]
def __arrow_ext_deserialize__(
cls: type[pa.ExtensionType], storage_type: Any, serialized: Any
) -> type[pa.ExtensionType]:
"""Return an instance of this subclass given the serialized metadata."""
return cast(Type[pa.ExtensionType], cls())
def __arrow_ext_class__(self: type[pa.ExtensionType]) -> type[pa.ExtensionArray]:
return array_cls
component_type = type(
name,
(pa.ExtensionType,),
{
"storage_type": field.type,
"__init__": __init__,
"__arrow_ext_serialize__": __arrow_ext_serialize__,
"__arrow_ext_deserialize__": __arrow_ext_deserialize__,
"__arrow_ext_class__": __arrow_ext_class__,
},
)
return cast(Type[pa.ExtensionType], component_type)
def union_discriminant_type(data_type: pa.DenseUnionType, discriminant: str) -> pa.DataType:
"""Return the data type of the given discriminant."""
return next(f.type for f in list(data_type) if f.name == discriminant)
def build_dense_union(data_type: pa.DenseUnionType, discriminant: str, child: pa.Array) -> pa.UnionArray:
"""
Build a dense UnionArray given the `data_type`, a discriminant, and the child value array.
If the discriminant string doesn't match any possible value, a `ValueError` is raised.
WARNING: Because of #705, each new union component needs to be handled in `array_to_rust` on the native side.
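The bookkeeping this performs can be sketched in pure Python (no pyarrow needed; values
assumed for illustration):
```python
# If the discriminant resolves to child index 1 and the child holds 3 values,
# every row carries that type id and the value offsets index into the one child.
idx = 1
child_len = 3
type_ids = [idx] * child_len
value_offsets = list(range(child_len))
```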
"""
try:
idx = [f.name for f in list(data_type)].index(discriminant)
type_ids = pa.array([idx] * len(child), type=pa.int8())
value_offsets = pa.array(range(len(child)), type=pa.int32())
children = [pa.nulls(0, type=f.type) for f in list(data_type)]
try:
children[idx] = child.cast(data_type[idx].type, safe=False)
except pa.ArrowInvalid:
# Since we're having issues with nullability in union types (see below),
# the cast sometimes fails but can be skipped.
children[idx] = child
return pa.Array.from_buffers(
type=data_type,
length=len(child),
buffers=[None, type_ids.buffers()[1], value_offsets.buffers()[1]],
children=children,
)
# Cast doesn't work for non-flat unions it seems - we're getting issues about the nullability of union variants.
# It's pointless anyways since on the native side we have to cast the field types
# See https://github.com/rerun-io/rerun/issues/795
# .cast(data_type)
except ValueError as e:
raise ValueError(*e.args) from e
from __future__ import annotations
from typing import SupportsFloat, SupportsInt, overload
import numpy as np
import numpy.typing as npt
__all__ = [
"int_or_none",
"float_or_none",
"bool_or_none",
"str_or_none",
"to_np_uint8",
"to_np_uint16",
"to_np_uint32",
"to_np_uint64",
"to_np_int8",
"to_np_int16",
"to_np_int32",
"to_np_int64",
"to_np_bool",
"to_np_float16",
"to_np_float32",
"to_np_float64",
]
@overload
def int_or_none(data: None) -> None:
...
@overload
def int_or_none(data: SupportsInt) -> int:
...
def int_or_none(data: SupportsInt | None) -> int | None:
if data is None:
return None
return int(data)
@overload
def float_or_none(data: None) -> None:
...
@overload
def float_or_none(data: SupportsFloat) -> float:
...
def float_or_none(data: SupportsFloat | None) -> float | None:
if data is None:
return None
return float(data)
@overload
def bool_or_none(data: None) -> None:
...
@overload
def bool_or_none(data: bool) -> bool:
...
def bool_or_none(data: bool | None) -> bool | None:
if data is None:
return None
return bool(data)
@overload
def str_or_none(data: None) -> None:
...
@overload
def str_or_none(data: str) -> str:
...
def str_or_none(data: str | None) -> str | None:
if data is None:
return None
return str(data)
def to_np_uint8(data: npt.ArrayLike) -> npt.NDArray[np.uint8]:
"""Convert some data to a numpy uint8 array."""
return np.asarray(data, dtype=np.uint8)
def to_np_uint16(data: npt.ArrayLike) -> npt.NDArray[np.uint16]:
"""Convert some data to a numpy uint16 array."""
return np.asarray(data, dtype=np.uint16)
def to_np_uint32(data: npt.ArrayLike) -> npt.NDArray[np.uint32]:
"""Convert some data to a numpy uint32 array."""
return np.asarray(data, dtype=np.uint32)
def to_np_uint64(data: npt.ArrayLike) -> npt.NDArray[np.uint64]:
"""Convert some data to a numpy uint64 array."""
return np.asarray(data, dtype=np.uint64)
def to_np_int8(data: npt.ArrayLike) -> npt.NDArray[np.int8]:
"""Convert some data to a numpy int8 array."""
return np.asarray(data, dtype=np.int8)
def to_np_int16(data: npt.ArrayLike) -> npt.NDArray[np.int16]:
"""Convert some data to a numpy int16 array."""
return np.asarray(data, dtype=np.int16)
def to_np_int32(data: npt.ArrayLike) -> npt.NDArray[np.int32]:
"""Convert some data to a numpy int32 array."""
return np.asarray(data, dtype=np.int32)
def to_np_int64(data: npt.ArrayLike) -> npt.NDArray[np.int64]:
"""Convert some data to a numpy int64 array."""
return np.asarray(data, dtype=np.int64)
def to_np_bool(data: npt.ArrayLike) -> npt.NDArray[np.bool_]:
"""Convert some data to a numpy bool array."""
return np.asarray(data, dtype=np.bool_)
def to_np_float16(data: npt.ArrayLike) -> npt.NDArray[np.float16]:
"""Convert some data to a numpy float16 array."""
return np.asarray(data, dtype=np.float16)
def to_np_float32(data: npt.ArrayLike) -> npt.NDArray[np.float32]:
"""Convert some data to a numpy float32 array."""
return np.asarray(data, dtype=np.float32)
def to_np_float64(data: npt.ArrayLike) -> npt.NDArray[np.float64]:
"""Convert some data to a numpy float64 array."""
return np.asarray(data, dtype=np.float64) | /rerun_sdk-0.8.1-cp38-abi3-win_amd64.whl/rerun_sdk/rerun/_rerun2/_converters.py | 0.902196 | 0.556038 | _converters.py | pypi |
from __future__ import annotations
from typing import Any, Generic, TypeVar, cast
import pyarrow as pa
from attrs import define, fields
T = TypeVar("T")
@define
class Archetype:
"""Base class for all archetypes."""
def __str__(self) -> str:
cls = type(self)
s = f"rr.{cls.__name__}(\n"
for fld in fields(cls):
if "component" in fld.metadata:
comp = getattr(self, fld.name)
datatype = getattr(comp, "type", None)
if datatype:
s += f" {datatype.extension_name}<{datatype.storage_type}>(\n {comp.to_pylist()}\n )\n"
s += ")"
return s
__repr__ = __str__
class BaseExtensionType(pa.ExtensionType): # type: ignore[misc]
"""Extension type for datatypes and non-delegating components."""
_ARRAY_TYPE: type[pa.ExtensionArray] = pa.ExtensionArray
"""The extension array class associated with this class."""
# Note: (de)serialization is not used in the Python SDK
def __arrow_ext_serialize__(self) -> bytes:
return b""
# noinspection PyMethodOverriding
@classmethod
def __arrow_ext_deserialize__(cls, storage_type: Any, serialized: Any) -> pa.ExtensionType:
return cls()
def __arrow_ext_class__(self) -> type[pa.ExtensionArray]:
return self._ARRAY_TYPE
class NamedExtensionArray(pa.ExtensionArray): # type: ignore[misc]
"""Common base class for any extension array that has a name."""
_EXTENSION_NAME = ""
"""The fully qualified name of this class."""
@property
def extension_name(self) -> str:
return self._EXTENSION_NAME
class BaseExtensionArray(NamedExtensionArray, Generic[T]): # type: ignore[misc]
"""Extension array for datatypes and non-delegating components."""
_EXTENSION_TYPE = pa.ExtensionType
"""The extension type class associated with this class."""
@classmethod
def from_similar(cls, data: T | None) -> BaseExtensionArray[T]:
"""
Primary method for creating Arrow arrays for components.
This method must flexibly accept native data (complying with type `T`). Subclasses must provide a type
parameter specifying the type of the native data (this is automatically handled by the code generator).
The actual creation of the Arrow array is delegated to the `_native_to_pa_array()` method, which is not
implemented by default.
Parameters
----------
data : T | None
The data to convert into an Arrow array.
Returns
-------
The Arrow array encapsulating the data.
"""
data_type = cls._EXTENSION_TYPE()
if data is None:
return cast(BaseExtensionArray[T], data_type.wrap_array(pa.array([], type=data_type.storage_type)))
else:
return cast(
BaseExtensionArray[T], data_type.wrap_array(cls._native_to_pa_array(data, data_type.storage_type))
)
@staticmethod
def _native_to_pa_array(data: T, data_type: pa.DataType) -> pa.Array:
"""
Converts native data into an Arrow array.
Subclasses must provide an implementation of this method (via an override) if they are to be used as either
an archetype's field (which should be the case for all components), or a (delegating) component's field (for
datatypes). Datatypes which are used only within other datatypes may omit implementing this method, provided
that the top-level datatype implements it.
A hand-coded override must be provided for the code generator to implement this method. The override must be
named `xxx_native_to_pa_array()`, where `xxx` is the lowercase name of the datatype. The override must be
located in the `_overrides` subpackage and *explicitly* imported by `_overrides/__init__.py` (to be noticed
by the code generator).
`color_native_to_pa_array()` in `_overrides/color.py` is a good example of how to implement this method, in
conjunction with the native type's converter (see `color_converter()`, used to construct the native `Color`
object).
Parameters
----------
data : T
The data to convert into an Arrow array.
data_type : pa.DataType
The Arrow data type of the data.
Returns
-------
The Arrow array encapsulating the data.
"""
raise NotImplementedError
class BaseDelegatingExtensionType(pa.ExtensionType): # type: ignore[misc]
"""Extension type for delegating components."""
_TYPE_NAME = ""
"""The fully qualified name of the component."""
_ARRAY_TYPE = pa.ExtensionArray
"""The extension array class associated with this component."""
_DELEGATED_EXTENSION_TYPE = BaseExtensionType
"""The extension type class associated with this component's datatype."""
def __init__(self) -> None:
# TODO(ab, cmc): we unwrap the type here because we can't have two layers of extension types for now
pa.ExtensionType.__init__(self, self._DELEGATED_EXTENSION_TYPE().storage_type, self._TYPE_NAME)
# Note: (de)serialization is not used in the Python SDK
def __arrow_ext_serialize__(self) -> bytes:
return b""
# noinspection PyMethodOverriding
@classmethod
def __arrow_ext_deserialize__(cls, storage_type: Any, serialized: Any) -> pa.ExtensionType:
return cls()
def __arrow_ext_class__(self) -> type[pa.ExtensionArray]:
return self._ARRAY_TYPE # type: ignore[no-any-return]
class BaseDelegatingExtensionArray(BaseExtensionArray[T]): # type: ignore[misc]
"""Extension array for delegating components."""
_DELEGATED_ARRAY_TYPE = BaseExtensionArray[T] # type: ignore[valid-type]
"""The extension array class associated with this component's datatype."""
@classmethod
def from_similar(cls, data: T | None) -> BaseDelegatingExtensionArray[T]:
arr = cls._DELEGATED_ARRAY_TYPE.from_similar(data)
# TODO(ab, cmc): we unwrap the type here because we can't have two layers of extension types for now
return cast(BaseDelegatingExtensionArray[T], cls._EXTENSION_TYPE().wrap_array(arr.storage)) | /rerun_sdk-0.8.1-cp38-abi3-win_amd64.whl/rerun_sdk/rerun/_rerun2/_baseclasses.py | 0.916601 | 0.331444 | _baseclasses.py | pypi |
from __future__ import annotations
from typing import Any, Callable, Iterable, Union, cast
import numpy as np
import numpy.typing as npt
import pyarrow as pa
from attrs import fields
from .. import RecordingStream, bindings
from ..log import error_utils
from . import archetypes as arch
from . import components as cmp
from . import datatypes as dt
from ._baseclasses import Archetype, NamedExtensionArray
__all__ = ["log"]
EXT_PREFIX = "ext."
ext_component_types: dict[str, Any] = {}
# adapted from rerun.log._add_extension_components
def _add_extension_components(
instanced: dict[str, pa.ExtensionArray],
splats: dict[str, pa.ExtensionArray],
ext: dict[str, Any],
identifiers: npt.NDArray[np.uint64] | None,
) -> None:
global ext_component_types
for name, value in ext.items():
# Don't log empty components
if value is None:
continue
# Add the ext prefix, unless it's already there
if not name.startswith(EXT_PREFIX):
name = EXT_PREFIX + name
np_type, pa_type = ext_component_types.get(name, (None, None))
try:
if np_type is not None:
np_value = np.atleast_1d(np.array(value, copy=False, dtype=np_type))
pa_value = pa.array(np_value, type=pa_type)
else:
np_value = np.atleast_1d(np.array(value, copy=False))
pa_value = pa.array(np_value)
ext_component_types[name] = (np_value.dtype, pa_value.type)
except Exception as ex:
error_utils._send_warning(
"Error converting extension data to arrow for component {}. Dropping.\n{}: {}".format(
name, type(ex).__name__, ex
),
1,
)
continue
is_splat = (len(np_value) == 1) and (len(identifiers or []) != 1)
if is_splat:
splats[name] = pa_value # noqa
else:
instanced[name] = pa_value # noqa
def _extract_components(entity: Archetype) -> Iterable[tuple[NamedExtensionArray, bool]]:
"""Extract the components from an entity, yielding (component, is_primary) tuples."""
for fld in fields(type(entity)):
if "component" in fld.metadata:
comp = getattr(entity, fld.name)
if comp is not None:
yield getattr(entity, fld.name), fld.metadata["component"] == "primary"
def _splat() -> cmp.InstanceKeyArray:
"""Helper to generate a splat InstanceKeyArray."""
_MAX_U64 = 2**64 - 1
return pa.array([_MAX_U64], type=cmp.InstanceKeyType().storage_type) # type: ignore[no-any-return]
Loggable = Union[Archetype, dt.Transform3DLike]
"""All the things that `rr.log()` can accept and log."""
_UPCASTING_RULES: dict[type[Loggable], Callable[[Any], Archetype]] = {
dt.TranslationRotationScale3D: arch.Transform3D,
dt.TranslationAndMat3x3: arch.Transform3D,
dt.Transform3D: arch.Transform3D,
}
def _upcast_entity(entity: Loggable) -> Archetype:
from .. import strict_mode
if type(entity) in _UPCASTING_RULES:
entity = _UPCASTING_RULES[type(entity)](entity)
if strict_mode():
if not isinstance(entity, Archetype):
raise TypeError(f"Expected Archetype, got {type(entity)}")
return cast(Archetype, entity)
def log(
entity_path: str,
entity: Loggable,
ext: dict[str, Any] | None = None,
timeless: bool = False,
recording: RecordingStream | None = None,
) -> None:
"""
Log an entity.
Parameters
----------
entity_path:
Path to the entity in the space hierarchy.
entity: Archetype
The archetype object representing the entity.
ext:
Optional dictionary of extension components. See [rerun.log_extension_components][]
timeless:
If true, the entity will be timeless (default: False).
recording:
Specifies the [`rerun.RecordingStream`][] to use.
If left unspecified, defaults to the current active data recording, if there is one.
See also: [`rerun.init`][], [`rerun.set_global_data_recording`][].
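The splatting behavior applied below can be sketched as follows (an illustrative
stand-in, not the actual implementation): a secondary component carrying a single
value is broadcast ("splatted") across every instance of the primary component.
```python
# Hypothetical helper mirroring the classification logic in the function body.
def classify(component_len: int, primary: bool, archetype_length: int) -> str:
    if primary:
        return "instanced"
    if component_len == 1 and archetype_length > 1:
        return "splat"
    return "instanced" if component_len >= 1 else "skipped"
```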
"""
archetype = _upcast_entity(entity)
instanced: dict[str, NamedExtensionArray] = {}
splats: dict[str, NamedExtensionArray] = {}
# find the canonical length of this entity based on the longest primary component
archetype_length = max(len(comp) for comp, primary in _extract_components(archetype) if primary)
for comp, primary in _extract_components(archetype):
if primary:
instanced[comp.extension_name] = comp.storage
elif len(comp) == 1 and archetype_length > 1:
splats[comp.extension_name] = comp.storage
elif len(comp) >= 1:
instanced[comp.extension_name] = comp.storage
# TODO(#2825): For now we just don't log anything for unspecified components, to match the
# historical behavior.
# From the PoV of the high-level API, this is incorrect though: logging an archetype should
# give the user the guarantee that past state cannot leak into their data.
# else: # len == 0
# instanced[comp.extension_name] = comp.storage
if ext:
_add_extension_components(instanced, splats, ext, None)
if splats:
splats["rerun.instance_key"] = _splat()
bindings.log_arrow_msg( # pyright: ignore[reportGeneralTypeIssues]
entity_path,
components=splats,
timeless=timeless,
recording=recording,
)
# Always log the primary component last so range-based queries will include the other data. See #1215.
bindings.log_arrow_msg( # pyright: ignore[reportGeneralTypeIssues]
entity_path,
components=instanced,
timeless=timeless,
recording=recording,
) | /rerun_sdk-0.8.1-cp38-abi3-win_amd64.whl/rerun_sdk/rerun/_rerun2/log.py | 0.85493 | 0.21767 | log.py | pypi |
from __future__ import annotations
from attrs import define, field
from .. import components
from .._baseclasses import (
Archetype,
)
__all__ = ["Points3D"]
@define(str=False, repr=False)
class Points3D(Archetype):
"""
A 3D point cloud with positions and optional colors, radii, labels, etc.
Example
-------
```python
import rerun as rr
import rerun.experimental as rr2
rr.init("points", spawn=True)
rr2.log("simple", rr2.Points3D([[0, 0, 0], [1, 1, 1]]))
```
"""
points: components.Point3DArray = field(
metadata={"component": "primary"},
converter=components.Point3DArray.from_similar, # type: ignore[misc]
)
"""
All the actual 3D points that make up the point cloud.
"""
radii: components.RadiusArray | None = field(
metadata={"component": "secondary"},
default=None,
converter=components.RadiusArray.from_similar, # type: ignore[misc]
)
"""
Optional radii for the points, effectively turning them into circles.
"""
colors: components.ColorArray | None = field(
metadata={"component": "secondary"},
default=None,
converter=components.ColorArray.from_similar, # type: ignore[misc]
)
"""
Optional colors for the points.
The colors are interpreted as RGB or RGBA in sRGB gamma-space,
as either 0-1 floats or 0-255 integers, with separate alpha.
"""
labels: components.LabelArray | None = field(
metadata={"component": "secondary"},
default=None,
converter=components.LabelArray.from_similar, # type: ignore[misc]
)
"""
Optional text labels for the points.
"""
draw_order: components.DrawOrderArray | None = field(
metadata={"component": "secondary"},
default=None,
converter=components.DrawOrderArray.from_similar, # type: ignore[misc]
)
"""
An optional floating point value that specifies the 3D drawing order.
Objects with higher values are drawn on top of those with lower values.
The default for 3D points is 30.0.
"""
class_ids: components.ClassIdArray | None = field(
metadata={"component": "secondary"},
default=None,
converter=components.ClassIdArray.from_similar, # type: ignore[misc]
)
"""
Optional class IDs for the points.
The class ID provides colors and labels if not specified explicitly.
"""
keypoint_ids: components.KeypointIdArray | None = field(
metadata={"component": "secondary"},
default=None,
converter=components.KeypointIdArray.from_similar, # type: ignore[misc]
)
"""
Optional keypoint IDs for the points, identifying them within a class.
If keypoint IDs are passed in but no class IDs were specified, the class ID will
default to 0.
This is useful to identify points within a single classification (which is identified
with `class_id`).
E.g. the classification might be 'Person' and the keypoints refer to joints on a
detected skeleton.
"""
instance_keys: components.InstanceKeyArray | None = field(
metadata={"component": "secondary"},
default=None,
converter=components.InstanceKeyArray.from_similar, # type: ignore[misc]
)
"""
Unique identifiers for each individual point in the batch.
"""
__str__ = Archetype.__str__
__repr__ = Archetype.__repr__ | /rerun_sdk-0.8.1-cp38-abi3-win_amd64.whl/rerun_sdk/rerun/_rerun2/archetypes/points3d.py | 0.861203 | 0.787809 | points3d.py | pypi |
from __future__ import annotations
from attrs import define, field
from .. import components
from .._baseclasses import (
Archetype,
)
__all__ = ["Points2D"]
@define(str=False, repr=False)
class Points2D(Archetype):
"""
A 2D point cloud with positions and optional colors, radii, labels, etc.
Example
-------
```python
import rerun as rr
import rerun.experimental as rr2
rr.init("points", spawn=True)
rr2.log("simple", rr2.Points2D([[0, 0], [1, 1]]))
# Log an extra rect to set the view bounds
rr.log_rect("bounds", [0, 0, 4, 3], rect_format=rr.RectFormat.XCYCWH)
```
"""
points: components.Point2DArray = field(
metadata={"component": "primary"},
converter=components.Point2DArray.from_similar, # type: ignore[misc]
)
"""
All the actual 2D points that make up the point cloud.
"""
radii: components.RadiusArray | None = field(
metadata={"component": "secondary"},
default=None,
converter=components.RadiusArray.from_similar, # type: ignore[misc]
)
"""
Optional radii for the points, effectively turning them into circles.
"""
colors: components.ColorArray | None = field(
metadata={"component": "secondary"},
default=None,
converter=components.ColorArray.from_similar, # type: ignore[misc]
)
"""
Optional colors for the points.
The colors are interpreted as RGB or RGBA in sRGB gamma-space,
as either 0-1 floats or 0-255 integers, with separate alpha.
"""
labels: components.LabelArray | None = field(
metadata={"component": "secondary"},
default=None,
converter=components.LabelArray.from_similar, # type: ignore[misc]
)
"""
Optional text labels for the points.
"""
draw_order: components.DrawOrderArray | None = field(
metadata={"component": "secondary"},
default=None,
converter=components.DrawOrderArray.from_similar, # type: ignore[misc]
)
"""
An optional floating point value that specifies the 2D drawing order.
Objects with higher values are drawn on top of those with lower values.
The default for 2D points is 30.0.
"""
class_ids: components.ClassIdArray | None = field(
metadata={"component": "secondary"},
default=None,
converter=components.ClassIdArray.from_similar, # type: ignore[misc]
)
"""
Optional class IDs for the points.
The class ID provides colors and labels if not specified explicitly.
"""
keypoint_ids: components.KeypointIdArray | None = field(
metadata={"component": "secondary"},
default=None,
converter=components.KeypointIdArray.from_similar, # type: ignore[misc]
)
"""
Optional keypoint IDs for the points, identifying them within a class.
If keypoint IDs are passed in but no class IDs were specified, the class ID will
default to 0.
This is useful to identify points within a single classification (which is identified
with `class_id`).
E.g. the classification might be 'Person' and the keypoints refer to joints on a
detected skeleton.
"""
instance_keys: components.InstanceKeyArray | None = field(
metadata={"component": "secondary"},
default=None,
converter=components.InstanceKeyArray.from_similar, # type: ignore[misc]
)
"""
Unique identifiers for each individual point in the batch.
"""
__str__ = Archetype.__str__
__repr__ = Archetype.__repr__ | /rerun_sdk-0.8.1-cp38-abi3-win_amd64.whl/rerun_sdk/rerun/_rerun2/archetypes/points2d.py | 0.861261 | 0.815049 | points2d.py | pypi |
from __future__ import annotations
from typing import TYPE_CHECKING, Any, Sequence, Union
import pyarrow as pa
from attrs import define, field
from .. import datatypes
from .._baseclasses import (
BaseExtensionArray,
BaseExtensionType,
)
from ._overrides import scale3d_inner_converter # noqa: F401
__all__ = ["Scale3D", "Scale3DArray", "Scale3DArrayLike", "Scale3DLike", "Scale3DType"]
@define
class Scale3D:
"""
3D scaling factor, part of a transform representation.
Example
-------
```python
# uniform scaling
scale = rr.dt.Scale3D(3.)
# non-uniform scaling
scale = rr.dt.Scale3D([1, 1, -1])
scale = rr.dt.Scale3D(rr.dt.Vec3D([1, 1, -1]))
```
"""
inner: datatypes.Vec3D | float = field(converter=scale3d_inner_converter)
"""
ThreeD (datatypes.Vec3D):
Individual scaling factors for each axis, distorting the original object.
Uniform (float):
        Uniform scaling factor along all axes.
"""
if TYPE_CHECKING:
Scale3DLike = Union[Scale3D, datatypes.Vec3D, float, datatypes.Vec3DLike]
Scale3DArrayLike = Union[
Scale3D,
datatypes.Vec3D,
float,
Sequence[Scale3DLike],
]
else:
Scale3DLike = Any
Scale3DArrayLike = Any
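The `TYPE_CHECKING` split above is worth spelling out: static type checkers see the precise `Union`, while at runtime the alias collapses to `Any`, so type-only references never need to resolve. A minimal stand-alone sketch of the same pattern (names are illustrative, not from the SDK):

```python
from typing import TYPE_CHECKING, Any, Union

# Static checkers evaluate the TYPE_CHECKING branch; the interpreter never
# does, so at runtime the alias is plain Any and no type-only imports are
# required to exist.
if TYPE_CHECKING:
    ScaleLike = Union[float, "list[float]"]
else:
    ScaleLike = Any
```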
# --- Arrow support ---
class Scale3DType(BaseExtensionType):
def __init__(self) -> None:
pa.ExtensionType.__init__(
self,
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field("ThreeD", pa.list_(pa.field("item", pa.float32(), False, {}), 3), False, {}),
pa.field("Uniform", pa.float32(), False, {}),
]
),
"rerun.datatypes.Scale3D",
)
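The `pa.dense_union` schemas built throughout these files share one addressing scheme: every slot stores a variant tag plus an offset into that variant's child array. A plain-Python sketch of that layout (illustrative only, not pyarrow itself):

```python
# Dense-union addressing in miniature: one child list per variant, and per
# slot a tag naming the variant plus an offset into that variant's child.
children = {"ThreeD": [[1.0, 1.0, -1.0]], "Uniform": [3.0, 2.0]}
tags = ["Uniform", "ThreeD", "Uniform"]
offsets = [0, 0, 1]

def union_value(i):
    """Resolve slot i to its concrete value via (tag, offset)."""
    return children[tags[i]][offsets[i]]
```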
class Scale3DArray(BaseExtensionArray[Scale3DArrayLike]):
_EXTENSION_NAME = "rerun.datatypes.Scale3D"
_EXTENSION_TYPE = Scale3DType
@staticmethod
def _native_to_pa_array(data: Scale3DArrayLike, data_type: pa.DataType) -> pa.Array:
raise NotImplementedError
Scale3DType._ARRAY_TYPE = Scale3DArray
# TODO(cmc): bring back registration to pyarrow once legacy types are gone
# pa.register_extension_type(Scale3DType())
from __future__ import annotations
from typing import TYPE_CHECKING, Any, Sequence, Union
import pyarrow as pa
from attrs import define, field
from .. import datatypes
from .._baseclasses import (
BaseExtensionArray,
BaseExtensionType,
)
from ._overrides import transform3d_native_to_pa_array # noqa: F401
__all__ = ["Transform3D", "Transform3DArray", "Transform3DArrayLike", "Transform3DLike", "Transform3DType"]
@define
class Transform3D:
"""Representation of a 3D affine transform."""
inner: datatypes.TranslationAndMat3x3 | datatypes.TranslationRotationScale3D = field()
"""
TranslationAndMat3x3 (datatypes.TranslationAndMat3x3):
TranslationRotationScale (datatypes.TranslationRotationScale3D):
"""
if TYPE_CHECKING:
Transform3DLike = Union[
Transform3D,
datatypes.TranslationAndMat3x3,
datatypes.TranslationRotationScale3D,
]
Transform3DArrayLike = Union[
Transform3D,
datatypes.TranslationAndMat3x3,
datatypes.TranslationRotationScale3D,
Sequence[Transform3DLike],
]
else:
Transform3DLike = Any
Transform3DArrayLike = Any
# --- Arrow support ---
class Transform3DType(BaseExtensionType):
def __init__(self) -> None:
pa.ExtensionType.__init__(
self,
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field(
"TranslationAndMat3x3",
pa.struct(
[
pa.field(
"translation", pa.list_(pa.field("item", pa.float32(), False, {}), 3), True, {}
),
pa.field("matrix", pa.list_(pa.field("item", pa.float32(), False, {}), 9), True, {}),
pa.field("from_parent", pa.bool_(), False, {}),
]
),
False,
{},
),
pa.field(
"TranslationRotationScale",
pa.struct(
[
pa.field(
"translation", pa.list_(pa.field("item", pa.float32(), False, {}), 3), True, {}
),
pa.field(
"rotation",
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field(
"Quaternion",
pa.list_(pa.field("item", pa.float32(), False, {}), 4),
False,
{},
),
pa.field(
"AxisAngle",
pa.struct(
[
pa.field(
"axis",
pa.list_(pa.field("item", pa.float32(), False, {}), 3),
False,
{},
),
pa.field(
"angle",
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field("Radians", pa.float32(), False, {}),
pa.field("Degrees", pa.float32(), False, {}),
]
),
False,
{},
),
]
),
False,
{},
),
]
),
True,
{},
),
pa.field(
"scale",
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field(
"ThreeD",
pa.list_(pa.field("item", pa.float32(), False, {}), 3),
False,
{},
),
pa.field("Uniform", pa.float32(), False, {}),
]
),
True,
{},
),
pa.field("from_parent", pa.bool_(), False, {}),
]
),
False,
{},
),
]
),
"rerun.datatypes.Transform3D",
)
class Transform3DArray(BaseExtensionArray[Transform3DArrayLike]):
_EXTENSION_NAME = "rerun.datatypes.Transform3D"
_EXTENSION_TYPE = Transform3DType
@staticmethod
def _native_to_pa_array(data: Transform3DArrayLike, data_type: pa.DataType) -> pa.Array:
return transform3d_native_to_pa_array(data, data_type)
Transform3DType._ARRAY_TYPE = Transform3DArray
# TODO(cmc): bring back registration to pyarrow once legacy types are gone
# pa.register_extension_type(Transform3DType())
from __future__ import annotations
from typing import TYPE_CHECKING, Any, Literal, Sequence, Union
import numpy as np
import numpy.typing as npt
import pyarrow as pa
from attrs import define, field
from .. import datatypes
from .._baseclasses import (
BaseExtensionArray,
BaseExtensionType,
)
from .._converters import (
bool_or_none,
float_or_none,
str_or_none,
to_np_float32,
)
__all__ = [
"AffixFuzzer1",
"AffixFuzzer1Array",
"AffixFuzzer1ArrayLike",
"AffixFuzzer1Like",
"AffixFuzzer1Type",
"AffixFuzzer2",
"AffixFuzzer2Array",
"AffixFuzzer2ArrayLike",
"AffixFuzzer2Like",
"AffixFuzzer2Type",
"AffixFuzzer3",
"AffixFuzzer3Array",
"AffixFuzzer3ArrayLike",
"AffixFuzzer3Like",
"AffixFuzzer3Type",
"AffixFuzzer4",
"AffixFuzzer4Array",
"AffixFuzzer4ArrayLike",
"AffixFuzzer4Like",
"AffixFuzzer4Type",
"AffixFuzzer5",
"AffixFuzzer5Array",
"AffixFuzzer5ArrayLike",
"AffixFuzzer5Like",
"AffixFuzzer5Type",
"FlattenedScalar",
"FlattenedScalarArray",
"FlattenedScalarArrayLike",
"FlattenedScalarLike",
"FlattenedScalarType",
]
@define
class FlattenedScalar:
value: float = field(converter=float)
def __array__(self, dtype: npt.DTypeLike = None) -> npt.NDArray[Any]:
return np.asarray(self.value, dtype=dtype)
def __float__(self) -> float:
return float(self.value)
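The `__array__` and `__float__` hooks above let numpy and `float()` treat the wrapper as a plain scalar. The `__float__` half of the pattern, in a minimal stdlib-only stand-in (the class name is hypothetical):

```python
class ScalarWrapper:
    """Minimal stand-in for a scalar datatype: coerces to a plain float."""

    def __init__(self, value):
        self.value = float(value)

    def __float__(self):
        # float(obj) delegates here, so the wrapper drops in anywhere a
        # plain float is expected.
        return self.value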
FlattenedScalarLike = FlattenedScalar
FlattenedScalarArrayLike = Union[
FlattenedScalar,
Sequence[FlattenedScalarLike],
]
# --- Arrow support ---
class FlattenedScalarType(BaseExtensionType):
def __init__(self) -> None:
pa.ExtensionType.__init__(
self, pa.struct([pa.field("value", pa.float32(), False, {})]), "rerun.testing.datatypes.FlattenedScalar"
)
class FlattenedScalarArray(BaseExtensionArray[FlattenedScalarArrayLike]):
_EXTENSION_NAME = "rerun.testing.datatypes.FlattenedScalar"
_EXTENSION_TYPE = FlattenedScalarType
@staticmethod
def _native_to_pa_array(data: FlattenedScalarArrayLike, data_type: pa.DataType) -> pa.Array:
raise NotImplementedError
FlattenedScalarType._ARRAY_TYPE = FlattenedScalarArray
# TODO(cmc): bring back registration to pyarrow once legacy types are gone
# pa.register_extension_type(FlattenedScalarType())
def _affixfuzzer1_almost_flattened_scalar_converter(x: datatypes.FlattenedScalarLike) -> datatypes.FlattenedScalar:
if isinstance(x, datatypes.FlattenedScalar):
return x
else:
return datatypes.FlattenedScalar(x)
@define
class AffixFuzzer1:
single_string_required: str = field(converter=str)
many_strings_required: list[str] = field()
flattened_scalar: float = field(converter=float)
almost_flattened_scalar: datatypes.FlattenedScalar = field(
converter=_affixfuzzer1_almost_flattened_scalar_converter
)
single_float_optional: float | None = field(default=None, converter=float_or_none)
single_string_optional: str | None = field(default=None, converter=str_or_none)
many_floats_optional: npt.NDArray[np.float32] | None = field(default=None, converter=to_np_float32)
many_strings_optional: list[str] | None = field(default=None)
from_parent: bool | None = field(default=None, converter=bool_or_none)
AffixFuzzer1Like = AffixFuzzer1
AffixFuzzer1ArrayLike = Union[
AffixFuzzer1,
Sequence[AffixFuzzer1Like],
]
# --- Arrow support ---
class AffixFuzzer1Type(BaseExtensionType):
def __init__(self) -> None:
pa.ExtensionType.__init__(
self,
pa.struct(
[
pa.field("single_float_optional", pa.float32(), True, {}),
pa.field("single_string_required", pa.utf8(), False, {}),
pa.field("single_string_optional", pa.utf8(), True, {}),
pa.field("many_floats_optional", pa.list_(pa.field("item", pa.float32(), True, {})), True, {}),
pa.field("many_strings_required", pa.list_(pa.field("item", pa.utf8(), False, {})), False, {}),
pa.field("many_strings_optional", pa.list_(pa.field("item", pa.utf8(), True, {})), True, {}),
pa.field("flattened_scalar", pa.float32(), False, {}),
pa.field(
"almost_flattened_scalar", pa.struct([pa.field("value", pa.float32(), False, {})]), False, {}
),
pa.field("from_parent", pa.bool_(), True, {}),
]
),
"rerun.testing.datatypes.AffixFuzzer1",
)
class AffixFuzzer1Array(BaseExtensionArray[AffixFuzzer1ArrayLike]):
_EXTENSION_NAME = "rerun.testing.datatypes.AffixFuzzer1"
_EXTENSION_TYPE = AffixFuzzer1Type
@staticmethod
def _native_to_pa_array(data: AffixFuzzer1ArrayLike, data_type: pa.DataType) -> pa.Array:
raise NotImplementedError
AffixFuzzer1Type._ARRAY_TYPE = AffixFuzzer1Array
# TODO(cmc): bring back registration to pyarrow once legacy types are gone
# pa.register_extension_type(AffixFuzzer1Type())
@define
class AffixFuzzer2:
single_float_optional: float | None = field(default=None, converter=float_or_none)
def __array__(self, dtype: npt.DTypeLike = None) -> npt.NDArray[Any]:
return np.asarray(self.single_float_optional, dtype=dtype)
AffixFuzzer2Like = AffixFuzzer2
AffixFuzzer2ArrayLike = Union[
AffixFuzzer2,
Sequence[AffixFuzzer2Like],
]
# --- Arrow support ---
class AffixFuzzer2Type(BaseExtensionType):
def __init__(self) -> None:
pa.ExtensionType.__init__(self, pa.float32(), "rerun.testing.datatypes.AffixFuzzer2")
class AffixFuzzer2Array(BaseExtensionArray[AffixFuzzer2ArrayLike]):
_EXTENSION_NAME = "rerun.testing.datatypes.AffixFuzzer2"
_EXTENSION_TYPE = AffixFuzzer2Type
@staticmethod
def _native_to_pa_array(data: AffixFuzzer2ArrayLike, data_type: pa.DataType) -> pa.Array:
raise NotImplementedError
AffixFuzzer2Type._ARRAY_TYPE = AffixFuzzer2Array
# TODO(cmc): bring back registration to pyarrow once legacy types are gone
# pa.register_extension_type(AffixFuzzer2Type())
@define
class AffixFuzzer3:
inner: float | list[datatypes.AffixFuzzer1] | npt.NDArray[np.float32] = field()
"""
degrees (float):
radians (float):
craziness (list[datatypes.AffixFuzzer1]):
fixed_size_shenanigans (npt.NDArray[np.float32]):
"""
kind: Literal["degrees", "radians", "craziness", "fixed_size_shenanigans"] = field(default="degrees")
if TYPE_CHECKING:
AffixFuzzer3Like = Union[
AffixFuzzer3,
float,
list[datatypes.AffixFuzzer1],
npt.NDArray[np.float32],
]
AffixFuzzer3ArrayLike = Union[
AffixFuzzer3,
float,
list[datatypes.AffixFuzzer1],
npt.NDArray[np.float32],
Sequence[AffixFuzzer3Like],
]
else:
AffixFuzzer3Like = Any
AffixFuzzer3ArrayLike = Any
# --- Arrow support ---
class AffixFuzzer3Type(BaseExtensionType):
def __init__(self) -> None:
pa.ExtensionType.__init__(
self,
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field("degrees", pa.float32(), False, {}),
pa.field("radians", pa.float32(), False, {}),
pa.field(
"craziness",
pa.list_(
pa.field(
"item",
pa.struct(
[
pa.field("single_float_optional", pa.float32(), True, {}),
pa.field("single_string_required", pa.utf8(), False, {}),
pa.field("single_string_optional", pa.utf8(), True, {}),
pa.field(
"many_floats_optional",
pa.list_(pa.field("item", pa.float32(), True, {})),
True,
{},
),
pa.field(
"many_strings_required",
pa.list_(pa.field("item", pa.utf8(), False, {})),
False,
{},
),
pa.field(
"many_strings_optional",
pa.list_(pa.field("item", pa.utf8(), True, {})),
True,
{},
),
pa.field("flattened_scalar", pa.float32(), False, {}),
pa.field(
"almost_flattened_scalar",
pa.struct([pa.field("value", pa.float32(), False, {})]),
False,
{},
),
pa.field("from_parent", pa.bool_(), True, {}),
]
),
False,
{},
)
),
False,
{},
),
pa.field(
"fixed_size_shenanigans", pa.list_(pa.field("item", pa.float32(), False, {}), 3), False, {}
),
]
),
"rerun.testing.datatypes.AffixFuzzer3",
)
class AffixFuzzer3Array(BaseExtensionArray[AffixFuzzer3ArrayLike]):
_EXTENSION_NAME = "rerun.testing.datatypes.AffixFuzzer3"
_EXTENSION_TYPE = AffixFuzzer3Type
@staticmethod
def _native_to_pa_array(data: AffixFuzzer3ArrayLike, data_type: pa.DataType) -> pa.Array:
raise NotImplementedError
AffixFuzzer3Type._ARRAY_TYPE = AffixFuzzer3Array
# TODO(cmc): bring back registration to pyarrow once legacy types are gone
# pa.register_extension_type(AffixFuzzer3Type())
@define
class AffixFuzzer4:
inner: datatypes.AffixFuzzer3 | list[datatypes.AffixFuzzer3] = field()
"""
single_required (datatypes.AffixFuzzer3):
many_required (list[datatypes.AffixFuzzer3]):
many_optional (list[datatypes.AffixFuzzer3]):
"""
kind: Literal["single_required", "many_required", "many_optional"] = field(default="single_required")
if TYPE_CHECKING:
AffixFuzzer4Like = Union[
AffixFuzzer4,
datatypes.AffixFuzzer3,
list[datatypes.AffixFuzzer3],
]
AffixFuzzer4ArrayLike = Union[
AffixFuzzer4,
datatypes.AffixFuzzer3,
list[datatypes.AffixFuzzer3],
Sequence[AffixFuzzer4Like],
]
else:
AffixFuzzer4Like = Any
AffixFuzzer4ArrayLike = Any
# --- Arrow support ---
class AffixFuzzer4Type(BaseExtensionType):
def __init__(self) -> None:
pa.ExtensionType.__init__(
self,
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field(
"single_required",
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field("degrees", pa.float32(), False, {}),
pa.field("radians", pa.float32(), False, {}),
pa.field(
"craziness",
pa.list_(
pa.field(
"item",
pa.struct(
[
pa.field("single_float_optional", pa.float32(), True, {}),
pa.field("single_string_required", pa.utf8(), False, {}),
pa.field("single_string_optional", pa.utf8(), True, {}),
pa.field(
"many_floats_optional",
pa.list_(pa.field("item", pa.float32(), True, {})),
True,
{},
),
pa.field(
"many_strings_required",
pa.list_(pa.field("item", pa.utf8(), False, {})),
False,
{},
),
pa.field(
"many_strings_optional",
pa.list_(pa.field("item", pa.utf8(), True, {})),
True,
{},
),
pa.field("flattened_scalar", pa.float32(), False, {}),
pa.field(
"almost_flattened_scalar",
pa.struct([pa.field("value", pa.float32(), False, {})]),
False,
{},
),
pa.field("from_parent", pa.bool_(), True, {}),
]
),
False,
{},
)
),
False,
{},
),
pa.field(
"fixed_size_shenanigans",
pa.list_(pa.field("item", pa.float32(), False, {}), 3),
False,
{},
),
]
),
False,
{},
),
pa.field(
"many_required",
pa.list_(
pa.field(
"item",
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field("degrees", pa.float32(), False, {}),
pa.field("radians", pa.float32(), False, {}),
pa.field(
"craziness",
pa.list_(
pa.field(
"item",
pa.struct(
[
pa.field("single_float_optional", pa.float32(), True, {}),
pa.field("single_string_required", pa.utf8(), False, {}),
pa.field("single_string_optional", pa.utf8(), True, {}),
pa.field(
"many_floats_optional",
pa.list_(pa.field("item", pa.float32(), True, {})),
True,
{},
),
pa.field(
"many_strings_required",
pa.list_(pa.field("item", pa.utf8(), False, {})),
False,
{},
),
pa.field(
"many_strings_optional",
pa.list_(pa.field("item", pa.utf8(), True, {})),
True,
{},
),
pa.field("flattened_scalar", pa.float32(), False, {}),
pa.field(
"almost_flattened_scalar",
pa.struct([pa.field("value", pa.float32(), False, {})]),
False,
{},
),
pa.field("from_parent", pa.bool_(), True, {}),
]
),
False,
{},
)
),
False,
{},
),
pa.field(
"fixed_size_shenanigans",
pa.list_(pa.field("item", pa.float32(), False, {}), 3),
False,
{},
),
]
),
False,
{},
)
),
False,
{},
),
pa.field(
"many_optional",
pa.list_(
pa.field(
"item",
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field("degrees", pa.float32(), False, {}),
pa.field("radians", pa.float32(), False, {}),
pa.field(
"craziness",
pa.list_(
pa.field(
"item",
pa.struct(
[
pa.field("single_float_optional", pa.float32(), True, {}),
pa.field("single_string_required", pa.utf8(), False, {}),
pa.field("single_string_optional", pa.utf8(), True, {}),
pa.field(
"many_floats_optional",
pa.list_(pa.field("item", pa.float32(), True, {})),
True,
{},
),
pa.field(
"many_strings_required",
pa.list_(pa.field("item", pa.utf8(), False, {})),
False,
{},
),
pa.field(
"many_strings_optional",
pa.list_(pa.field("item", pa.utf8(), True, {})),
True,
{},
),
pa.field("flattened_scalar", pa.float32(), False, {}),
pa.field(
"almost_flattened_scalar",
pa.struct([pa.field("value", pa.float32(), False, {})]),
False,
{},
),
pa.field("from_parent", pa.bool_(), True, {}),
]
),
False,
{},
)
),
False,
{},
),
pa.field(
"fixed_size_shenanigans",
pa.list_(pa.field("item", pa.float32(), False, {}), 3),
False,
{},
),
]
),
True,
{},
)
),
False,
{},
),
]
),
"rerun.testing.datatypes.AffixFuzzer4",
)
class AffixFuzzer4Array(BaseExtensionArray[AffixFuzzer4ArrayLike]):
_EXTENSION_NAME = "rerun.testing.datatypes.AffixFuzzer4"
_EXTENSION_TYPE = AffixFuzzer4Type
@staticmethod
def _native_to_pa_array(data: AffixFuzzer4ArrayLike, data_type: pa.DataType) -> pa.Array:
raise NotImplementedError
AffixFuzzer4Type._ARRAY_TYPE = AffixFuzzer4Array
# TODO(cmc): bring back registration to pyarrow once legacy types are gone
# pa.register_extension_type(AffixFuzzer4Type())
def _affixfuzzer5_single_optional_union_converter(
x: datatypes.AffixFuzzer4Like | None,
) -> datatypes.AffixFuzzer4 | None:
if x is None:
return None
elif isinstance(x, datatypes.AffixFuzzer4):
return x
else:
return datatypes.AffixFuzzer4(x)
@define
class AffixFuzzer5:
single_optional_union: datatypes.AffixFuzzer4 | None = field(
default=None, converter=_affixfuzzer5_single_optional_union_converter
)
AffixFuzzer5Like = AffixFuzzer5
AffixFuzzer5ArrayLike = Union[
AffixFuzzer5,
Sequence[AffixFuzzer5Like],
]
# --- Arrow support ---
class AffixFuzzer5Type(BaseExtensionType):
def __init__(self) -> None:
pa.ExtensionType.__init__(
self,
pa.struct(
[
pa.field(
"single_optional_union",
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field(
"single_required",
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field("degrees", pa.float32(), False, {}),
pa.field("radians", pa.float32(), False, {}),
pa.field(
"craziness",
pa.list_(
pa.field(
"item",
pa.struct(
[
pa.field(
"single_float_optional", pa.float32(), True, {}
),
pa.field(
"single_string_required", pa.utf8(), False, {}
),
pa.field("single_string_optional", pa.utf8(), True, {}),
pa.field(
"many_floats_optional",
pa.list_(pa.field("item", pa.float32(), True, {})),
True,
{},
),
pa.field(
"many_strings_required",
pa.list_(pa.field("item", pa.utf8(), False, {})),
False,
{},
),
pa.field(
"many_strings_optional",
pa.list_(pa.field("item", pa.utf8(), True, {})),
True,
{},
),
pa.field("flattened_scalar", pa.float32(), False, {}),
pa.field(
"almost_flattened_scalar",
pa.struct(
[pa.field("value", pa.float32(), False, {})]
),
False,
{},
),
pa.field("from_parent", pa.bool_(), True, {}),
]
),
False,
{},
)
),
False,
{},
),
pa.field(
"fixed_size_shenanigans",
pa.list_(pa.field("item", pa.float32(), False, {}), 3),
False,
{},
),
]
),
False,
{},
),
pa.field(
"many_required",
pa.list_(
pa.field(
"item",
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field("degrees", pa.float32(), False, {}),
pa.field("radians", pa.float32(), False, {}),
pa.field(
"craziness",
pa.list_(
pa.field(
"item",
pa.struct(
[
pa.field(
"single_float_optional",
pa.float32(),
True,
{},
),
pa.field(
"single_string_required",
pa.utf8(),
False,
{},
),
pa.field(
"single_string_optional",
pa.utf8(),
True,
{},
),
pa.field(
"many_floats_optional",
pa.list_(
pa.field("item", pa.float32(), True, {})
),
True,
{},
),
pa.field(
"many_strings_required",
pa.list_(
pa.field("item", pa.utf8(), False, {})
),
False,
{},
),
pa.field(
"many_strings_optional",
pa.list_(
pa.field("item", pa.utf8(), True, {})
),
True,
{},
),
pa.field(
"flattened_scalar", pa.float32(), False, {}
),
pa.field(
"almost_flattened_scalar",
pa.struct(
[
pa.field(
"value", pa.float32(), False, {}
)
]
),
False,
{},
),
pa.field("from_parent", pa.bool_(), True, {}),
]
),
False,
{},
)
),
False,
{},
),
pa.field(
"fixed_size_shenanigans",
pa.list_(pa.field("item", pa.float32(), False, {}), 3),
False,
{},
),
]
),
False,
{},
)
),
False,
{},
),
pa.field(
"many_optional",
pa.list_(
pa.field(
"item",
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field("degrees", pa.float32(), False, {}),
pa.field("radians", pa.float32(), False, {}),
pa.field(
"craziness",
pa.list_(
pa.field(
"item",
pa.struct(
[
pa.field(
"single_float_optional",
pa.float32(),
True,
{},
),
pa.field(
"single_string_required",
pa.utf8(),
False,
{},
),
pa.field(
"single_string_optional",
pa.utf8(),
True,
{},
),
pa.field(
"many_floats_optional",
pa.list_(
pa.field("item", pa.float32(), True, {})
),
True,
{},
),
pa.field(
"many_strings_required",
pa.list_(
pa.field("item", pa.utf8(), False, {})
),
False,
{},
),
pa.field(
"many_strings_optional",
pa.list_(
pa.field("item", pa.utf8(), True, {})
),
True,
{},
),
pa.field(
"flattened_scalar", pa.float32(), False, {}
),
pa.field(
"almost_flattened_scalar",
pa.struct(
[
pa.field(
"value", pa.float32(), False, {}
)
]
),
False,
{},
),
pa.field("from_parent", pa.bool_(), True, {}),
]
),
False,
{},
)
),
False,
{},
),
pa.field(
"fixed_size_shenanigans",
pa.list_(pa.field("item", pa.float32(), False, {}), 3),
False,
{},
),
]
),
True,
{},
)
),
False,
{},
),
]
),
True,
{},
)
]
),
"rerun.testing.datatypes.AffixFuzzer5",
)
class AffixFuzzer5Array(BaseExtensionArray[AffixFuzzer5ArrayLike]):
_EXTENSION_NAME = "rerun.testing.datatypes.AffixFuzzer5"
_EXTENSION_TYPE = AffixFuzzer5Type
@staticmethod
def _native_to_pa_array(data: AffixFuzzer5ArrayLike, data_type: pa.DataType) -> pa.Array:
raise NotImplementedError
AffixFuzzer5Type._ARRAY_TYPE = AffixFuzzer5Array
# TODO(cmc): bring back registration to pyarrow once legacy types are gone
# pa.register_extension_type(AffixFuzzer5Type())
from __future__ import annotations
from typing import TYPE_CHECKING, Any, Literal, Sequence, Union
import pyarrow as pa
from attrs import define, field
from .._baseclasses import (
BaseExtensionArray,
BaseExtensionType,
)
from ._overrides import angle_init # noqa: F401
__all__ = ["Angle", "AngleArray", "AngleArrayLike", "AngleLike", "AngleType"]
@define(init=False)
class Angle:
"""Angle in either radians or degrees."""
def __init__(self, *args, **kwargs): # type: ignore[no-untyped-def]
angle_init(self, *args, **kwargs)
inner: float = field(converter=float)
"""
Radians (float):
3D rotation angle in radians. Only one of `degrees` or `radians` should be set.
Degrees (float):
3D rotation angle in degrees. Only one of `degrees` or `radians` should be set.
"""
kind: Literal["radians", "degrees"] = field(default="radians")
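The `inner` + `kind` pair models a tagged union: one payload slot and a discriminant selecting the variant, defaulting to radians. A hedged, stdlib-only sketch of the same shape (class and method names are illustrative, not the SDK's API):

```python
import math
from dataclasses import dataclass
from typing import Literal

@dataclass
class AngleSketch:
    """Hypothetical stand-in: raw value plus a tag naming the union arm."""

    inner: float
    kind: Literal["radians", "degrees"] = "radians"

    def as_radians(self) -> float:
        # Normalize either variant to radians.
        return self.inner if self.kind == "radians" else math.radians(self.inner)
```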
if TYPE_CHECKING:
AngleLike = Union[
Angle,
float,
]
AngleArrayLike = Union[
Angle,
float,
Sequence[AngleLike],
]
else:
AngleLike = Any
AngleArrayLike = Any
# --- Arrow support ---
class AngleType(BaseExtensionType):
def __init__(self) -> None:
pa.ExtensionType.__init__(
self,
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field("Radians", pa.float32(), False, {}),
pa.field("Degrees", pa.float32(), False, {}),
]
),
"rerun.datatypes.Angle",
)
class AngleArray(BaseExtensionArray[AngleArrayLike]):
_EXTENSION_NAME = "rerun.datatypes.Angle"
_EXTENSION_TYPE = AngleType
@staticmethod
def _native_to_pa_array(data: AngleArrayLike, data_type: pa.DataType) -> pa.Array:
raise NotImplementedError
AngleType._ARRAY_TYPE = AngleArray
# TODO(cmc): bring back registration to pyarrow once legacy types are gone
# pa.register_extension_type(AngleType())
from __future__ import annotations
from typing import Sequence, Union
import pyarrow as pa
from attrs import define, field
from .. import datatypes
from .._baseclasses import (
BaseExtensionArray,
BaseExtensionType,
)
from ._overrides import translationrotationscale3d_init # noqa: F401
__all__ = [
"TranslationRotationScale3D",
"TranslationRotationScale3DArray",
"TranslationRotationScale3DArrayLike",
"TranslationRotationScale3DLike",
"TranslationRotationScale3DType",
]
def _translationrotationscale3d_translation_converter(x: datatypes.Vec3DLike | None) -> datatypes.Vec3D | None:
if x is None:
return None
elif isinstance(x, datatypes.Vec3D):
return x
else:
return datatypes.Vec3D(x)
def _translationrotationscale3d_rotation_converter(x: datatypes.Rotation3DLike | None) -> datatypes.Rotation3D | None:
if x is None:
return None
elif isinstance(x, datatypes.Rotation3D):
return x
else:
return datatypes.Rotation3D(x)
def _translationrotationscale3d_scale_converter(x: datatypes.Scale3DLike | None) -> datatypes.Scale3D | None:
if x is None:
return None
elif isinstance(x, datatypes.Scale3D):
return x
else:
return datatypes.Scale3D(x)
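All three converters above follow one normalization pattern: `None` passes through, values that already have the target type are returned unchanged, and anything else is wrapped. A self-contained sketch of that pattern with a hypothetical datatype:

```python
from dataclasses import dataclass
from typing import Optional, Sequence, Union

@dataclass
class Vec3Sketch:
    """Hypothetical 3-vector datatype used only for this illustration."""

    xyz: Sequence[float]

def vec3_converter(x: Union[Vec3Sketch, Sequence[float], None]) -> Optional[Vec3Sketch]:
    # Same shape as the generated converters: pass None through, keep
    # already-typed values as-is, wrap everything else.
    if x is None:
        return None
    if isinstance(x, Vec3Sketch):
        return x
    return Vec3Sketch(x)
```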
@define(init=False)
class TranslationRotationScale3D:
"""Representation of an affine transform via separate translation, rotation & scale."""
def __init__(self, *args, **kwargs): # type: ignore[no-untyped-def]
translationrotationscale3d_init(self, *args, **kwargs)
from_parent: bool = field(converter=bool)
"""
If true, the transform maps from the parent space to the space where the transform was logged.
Otherwise, the transform maps from the space to its parent.
"""
translation: datatypes.Vec3D | None = field(
default=None, converter=_translationrotationscale3d_translation_converter
)
"""
3D translation vector, applied last.
"""
rotation: datatypes.Rotation3D | None = field(
default=None, converter=_translationrotationscale3d_rotation_converter
)
"""
3D rotation, applied second.
"""
scale: datatypes.Scale3D | None = field(default=None, converter=_translationrotationscale3d_scale_converter)
"""
3D scale, applied first.
"""
TranslationRotationScale3DLike = TranslationRotationScale3D
TranslationRotationScale3DArrayLike = Union[
TranslationRotationScale3D,
Sequence[TranslationRotationScale3DLike],
]
# --- Arrow support ---
class TranslationRotationScale3DType(BaseExtensionType):
def __init__(self) -> None:
pa.ExtensionType.__init__(
self,
pa.struct(
[
pa.field("translation", pa.list_(pa.field("item", pa.float32(), False, {}), 3), True, {}),
pa.field(
"rotation",
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field(
"Quaternion", pa.list_(pa.field("item", pa.float32(), False, {}), 4), False, {}
),
pa.field(
"AxisAngle",
pa.struct(
[
pa.field(
"axis",
pa.list_(pa.field("item", pa.float32(), False, {}), 3),
False,
{},
),
pa.field(
"angle",
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field("Radians", pa.float32(), False, {}),
pa.field("Degrees", pa.float32(), False, {}),
]
),
False,
{},
),
]
),
False,
{},
),
]
),
True,
{},
),
pa.field(
"scale",
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field("ThreeD", pa.list_(pa.field("item", pa.float32(), False, {}), 3), False, {}),
pa.field("Uniform", pa.float32(), False, {}),
]
),
True,
{},
),
pa.field("from_parent", pa.bool_(), False, {}),
]
),
"rerun.datatypes.TranslationRotationScale3D",
)
class TranslationRotationScale3DArray(BaseExtensionArray[TranslationRotationScale3DArrayLike]):
_EXTENSION_NAME = "rerun.datatypes.TranslationRotationScale3D"
_EXTENSION_TYPE = TranslationRotationScale3DType
@staticmethod
def _native_to_pa_array(data: TranslationRotationScale3DArrayLike, data_type: pa.DataType) -> pa.Array:
raise NotImplementedError
TranslationRotationScale3DType._ARRAY_TYPE = TranslationRotationScale3DArray
# TODO(cmc): bring back registration to pyarrow once legacy types are gone
# pa.register_extension_type(TranslationRotationScale3DType())
from __future__ import annotations
from typing import Sequence, Union
import pyarrow as pa
from attrs import define, field
from .. import datatypes
from .._baseclasses import (
BaseExtensionArray,
BaseExtensionType,
)
from ._overrides import rotationaxisangle_angle_converter # noqa: F401
__all__ = [
"RotationAxisAngle",
"RotationAxisAngleArray",
"RotationAxisAngleArrayLike",
"RotationAxisAngleLike",
"RotationAxisAngleType",
]
def _rotationaxisangle_axis_converter(x: datatypes.Vec3DLike) -> datatypes.Vec3D:
if isinstance(x, datatypes.Vec3D):
return x
else:
return datatypes.Vec3D(x)
@define
class RotationAxisAngle:
"""3D rotation represented by a rotation around a given axis."""
axis: datatypes.Vec3D = field(converter=_rotationaxisangle_axis_converter)
"""
Axis to rotate around.
This is not required to be normalized.
If normalization fails (typically because the vector is length zero), the rotation is silently
ignored.
"""
angle: datatypes.Angle = field(converter=rotationaxisangle_angle_converter)
"""
How much to rotate around the axis.
"""
RotationAxisAngleLike = RotationAxisAngle
RotationAxisAngleArrayLike = Union[
RotationAxisAngle,
Sequence[RotationAxisAngleLike],
]
# --- Arrow support ---
class RotationAxisAngleType(BaseExtensionType):
def __init__(self) -> None:
pa.ExtensionType.__init__(
self,
pa.struct(
[
pa.field("axis", pa.list_(pa.field("item", pa.float32(), False, {}), 3), False, {}),
pa.field(
"angle",
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field("Radians", pa.float32(), False, {}),
pa.field("Degrees", pa.float32(), False, {}),
]
),
False,
{},
),
]
),
"rerun.datatypes.RotationAxisAngle",
)
class RotationAxisAngleArray(BaseExtensionArray[RotationAxisAngleArrayLike]):
_EXTENSION_NAME = "rerun.datatypes.RotationAxisAngle"
_EXTENSION_TYPE = RotationAxisAngleType
@staticmethod
def _native_to_pa_array(data: RotationAxisAngleArrayLike, data_type: pa.DataType) -> pa.Array:
raise NotImplementedError
RotationAxisAngleType._ARRAY_TYPE = RotationAxisAngleArray
# TODO(cmc): bring back registration to pyarrow once legacy types are gone
# pa.register_extension_type(RotationAxisAngleType())
from __future__ import annotations
from typing import TYPE_CHECKING, Any, Sequence, SupportsFloat, Union
import pyarrow as pa
from attrs import define, field
from .. import datatypes
from .._baseclasses import (
BaseExtensionArray,
BaseExtensionType,
)
from ._overrides import rotation3d_inner_converter # noqa: F401
__all__ = ["Rotation3D", "Rotation3DArray", "Rotation3DArrayLike", "Rotation3DLike", "Rotation3DType"]
@define
class Rotation3D:
"""A 3D rotation."""
inner: datatypes.Quaternion | datatypes.RotationAxisAngle = field(converter=rotation3d_inner_converter)
"""
Quaternion (datatypes.Quaternion):
Rotation defined by a quaternion.
AxisAngle (datatypes.RotationAxisAngle):
Rotation defined with an axis and an angle.
"""
if TYPE_CHECKING:
Rotation3DLike = Union[Rotation3D, datatypes.Quaternion, datatypes.RotationAxisAngle, Sequence[SupportsFloat]]
Rotation3DArrayLike = Union[
Rotation3D,
datatypes.Quaternion,
datatypes.RotationAxisAngle,
Sequence[Rotation3DLike],
]
else:
Rotation3DLike = Any
Rotation3DArrayLike = Any
# --- Arrow support ---
class Rotation3DType(BaseExtensionType):
def __init__(self) -> None:
pa.ExtensionType.__init__(
self,
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field("Quaternion", pa.list_(pa.field("item", pa.float32(), False, {}), 4), False, {}),
pa.field(
"AxisAngle",
pa.struct(
[
pa.field("axis", pa.list_(pa.field("item", pa.float32(), False, {}), 3), False, {}),
pa.field(
"angle",
pa.dense_union(
[
pa.field("_null_markers", pa.null(), True, {}),
pa.field("Radians", pa.float32(), False, {}),
pa.field("Degrees", pa.float32(), False, {}),
]
),
False,
{},
),
]
),
False,
{},
),
]
),
"rerun.datatypes.Rotation3D",
)
class Rotation3DArray(BaseExtensionArray[Rotation3DArrayLike]):
_EXTENSION_NAME = "rerun.datatypes.Rotation3D"
_EXTENSION_TYPE = Rotation3DType
@staticmethod
def _native_to_pa_array(data: Rotation3DArrayLike, data_type: pa.DataType) -> pa.Array:
raise NotImplementedError
Rotation3DType._ARRAY_TYPE = Rotation3DArray
# TODO(cmc): bring back registration to pyarrow once legacy types are gone
# pa.register_extension_type(Rotation3DType()) | /rerun_sdk-0.8.1-cp38-abi3-win_amd64.whl/rerun_sdk/rerun/_rerun2/datatypes/rotation3d.py | 0.863852 | 0.416233 | rotation3d.py | pypi |
from __future__ import annotations
from typing import Sequence, Union
import pyarrow as pa
from attrs import define, field
from .. import datatypes
from .._baseclasses import (
BaseExtensionArray,
BaseExtensionType,
)
from ._overrides import translationandmat3x3_init # noqa: F401
__all__ = [
"TranslationAndMat3x3",
"TranslationAndMat3x3Array",
"TranslationAndMat3x3ArrayLike",
"TranslationAndMat3x3Like",
"TranslationAndMat3x3Type",
]
def _translationandmat3x3_translation_converter(x: datatypes.Vec3DLike | None) -> datatypes.Vec3D | None:
if x is None:
return None
elif isinstance(x, datatypes.Vec3D):
return x
else:
return datatypes.Vec3D(x)
def _translationandmat3x3_matrix_converter(x: datatypes.Mat3x3Like | None) -> datatypes.Mat3x3 | None:
if x is None:
return None
elif isinstance(x, datatypes.Mat3x3):
return x
else:
return datatypes.Mat3x3(x)
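The two converters above follow the same None-passthrough pattern that attrs `field(converter=...)` hooks commonly use: pass `None` through, keep already-converted instances, and wrap anything else. A minimal self-contained sketch of that pattern (the `Vec3D` class here is a stand-in for illustration, not the SDK's own):

```python
def make_converter(target_cls):
    # Pass None through, keep already-converted instances, wrap anything else
    def convert(x):
        if x is None:
            return None
        if isinstance(x, target_cls):
            return x
        return target_cls(x)
    return convert

class Vec3D:  # stand-in for datatypes.Vec3D
    def __init__(self, xyz):
        self.xyz = tuple(xyz)

convert_vec3 = make_converter(Vec3D)
assert convert_vec3(None) is None
v = convert_vec3([1.0, 2.0, 3.0])
assert v.xyz == (1.0, 2.0, 3.0)
assert convert_vec3(v) is v  # already-converted values pass through unchanged
```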
@define(init=False)
class TranslationAndMat3x3:
"""
Representation of an affine transform via a 3x3 affine matrix paired with a translation.
First applies the matrix, then the translation.
"""
def __init__(self, *args, **kwargs): # type: ignore[no-untyped-def]
translationandmat3x3_init(self, *args, **kwargs)
from_parent: bool = field(converter=bool)
"""
If true, the transform maps from the parent space to the space where the transform was logged.
Otherwise, the transform maps from the space to its parent.
"""
translation: datatypes.Vec3D | None = field(default=None, converter=_translationandmat3x3_translation_converter)
"""
3D translation, applied after the matrix.
"""
matrix: datatypes.Mat3x3 | None = field(default=None, converter=_translationandmat3x3_matrix_converter)
"""
3x3 matrix for scale, rotation & shear.
"""
TranslationAndMat3x3Like = TranslationAndMat3x3
TranslationAndMat3x3ArrayLike = Union[
TranslationAndMat3x3,
Sequence[TranslationAndMat3x3Like],
]
# --- Arrow support ---
class TranslationAndMat3x3Type(BaseExtensionType):
def __init__(self) -> None:
pa.ExtensionType.__init__(
self,
pa.struct(
[
pa.field("translation", pa.list_(pa.field("item", pa.float32(), False, {}), 3), True, {}),
pa.field("matrix", pa.list_(pa.field("item", pa.float32(), False, {}), 9), True, {}),
pa.field("from_parent", pa.bool_(), False, {}),
]
),
"rerun.datatypes.TranslationAndMat3x3",
)
class TranslationAndMat3x3Array(BaseExtensionArray[TranslationAndMat3x3ArrayLike]):
_EXTENSION_NAME = "rerun.datatypes.TranslationAndMat3x3"
_EXTENSION_TYPE = TranslationAndMat3x3Type
@staticmethod
def _native_to_pa_array(data: TranslationAndMat3x3ArrayLike, data_type: pa.DataType) -> pa.Array:
raise NotImplementedError
TranslationAndMat3x3Type._ARRAY_TYPE = TranslationAndMat3x3Array
# TODO(cmc): bring back registration to pyarrow once legacy types are gone
# pa.register_extension_type(TranslationAndMat3x3Type()) | /rerun_sdk-0.8.1-cp38-abi3-win_amd64.whl/rerun_sdk/rerun/_rerun2/datatypes/translation_and_mat3x3.py | 0.863852 | 0.433502 | translation_and_mat3x3.py | pypi |
from __future__ import annotations
from .angle import Angle, AngleArray, AngleArrayLike, AngleLike, AngleType
from .fuzzy import (
AffixFuzzer1,
AffixFuzzer1Array,
AffixFuzzer1ArrayLike,
AffixFuzzer1Like,
AffixFuzzer1Type,
AffixFuzzer2,
AffixFuzzer2Array,
AffixFuzzer2ArrayLike,
AffixFuzzer2Like,
AffixFuzzer2Type,
AffixFuzzer3,
AffixFuzzer3Array,
AffixFuzzer3ArrayLike,
AffixFuzzer3Like,
AffixFuzzer3Type,
AffixFuzzer4,
AffixFuzzer4Array,
AffixFuzzer4ArrayLike,
AffixFuzzer4Like,
AffixFuzzer4Type,
AffixFuzzer5,
AffixFuzzer5Array,
AffixFuzzer5ArrayLike,
AffixFuzzer5Like,
AffixFuzzer5Type,
FlattenedScalar,
FlattenedScalarArray,
FlattenedScalarArrayLike,
FlattenedScalarLike,
FlattenedScalarType,
)
from .mat3x3 import Mat3x3, Mat3x3Array, Mat3x3ArrayLike, Mat3x3Like, Mat3x3Type
from .mat4x4 import Mat4x4, Mat4x4Array, Mat4x4ArrayLike, Mat4x4Like, Mat4x4Type
from .point2d import Point2D, Point2DArray, Point2DArrayLike, Point2DLike, Point2DType
from .point3d import Point3D, Point3DArray, Point3DArrayLike, Point3DLike, Point3DType
from .quaternion import Quaternion, QuaternionArray, QuaternionArrayLike, QuaternionLike, QuaternionType
from .rotation3d import Rotation3D, Rotation3DArray, Rotation3DArrayLike, Rotation3DLike, Rotation3DType
from .rotation_axis_angle import (
RotationAxisAngle,
RotationAxisAngleArray,
RotationAxisAngleArrayLike,
RotationAxisAngleLike,
RotationAxisAngleType,
)
from .scale3d import Scale3D, Scale3DArray, Scale3DArrayLike, Scale3DLike, Scale3DType
from .transform3d import Transform3D, Transform3DArray, Transform3DArrayLike, Transform3DLike, Transform3DType
from .translation_and_mat3x3 import (
TranslationAndMat3x3,
TranslationAndMat3x3Array,
TranslationAndMat3x3ArrayLike,
TranslationAndMat3x3Like,
TranslationAndMat3x3Type,
)
from .translation_rotation_scale3d import (
TranslationRotationScale3D,
TranslationRotationScale3DArray,
TranslationRotationScale3DArrayLike,
TranslationRotationScale3DLike,
TranslationRotationScale3DType,
)
from .vec2d import Vec2D, Vec2DArray, Vec2DArrayLike, Vec2DLike, Vec2DType
from .vec3d import Vec3D, Vec3DArray, Vec3DArrayLike, Vec3DLike, Vec3DType
from .vec4d import Vec4D, Vec4DArray, Vec4DArrayLike, Vec4DLike, Vec4DType
__all__ = [
"AffixFuzzer1",
"AffixFuzzer1Array",
"AffixFuzzer1ArrayLike",
"AffixFuzzer1Like",
"AffixFuzzer1Type",
"AffixFuzzer2",
"AffixFuzzer2Array",
"AffixFuzzer2ArrayLike",
"AffixFuzzer2Like",
"AffixFuzzer2Type",
"AffixFuzzer3",
"AffixFuzzer3Array",
"AffixFuzzer3ArrayLike",
"AffixFuzzer3Like",
"AffixFuzzer3Type",
"AffixFuzzer4",
"AffixFuzzer4Array",
"AffixFuzzer4ArrayLike",
"AffixFuzzer4Like",
"AffixFuzzer4Type",
"AffixFuzzer5",
"AffixFuzzer5Array",
"AffixFuzzer5ArrayLike",
"AffixFuzzer5Like",
"AffixFuzzer5Type",
"Angle",
"AngleArray",
"AngleArrayLike",
"AngleLike",
"AngleType",
"FlattenedScalar",
"FlattenedScalarArray",
"FlattenedScalarArrayLike",
"FlattenedScalarLike",
"FlattenedScalarType",
"Mat3x3",
"Mat3x3Array",
"Mat3x3ArrayLike",
"Mat3x3Like",
"Mat3x3Type",
"Mat4x4",
"Mat4x4Array",
"Mat4x4ArrayLike",
"Mat4x4Like",
"Mat4x4Type",
"Point2D",
"Point2DArray",
"Point2DArrayLike",
"Point2DLike",
"Point2DType",
"Point3D",
"Point3DArray",
"Point3DArrayLike",
"Point3DLike",
"Point3DType",
"Quaternion",
"QuaternionArray",
"QuaternionArrayLike",
"QuaternionLike",
"QuaternionType",
"Rotation3D",
"Rotation3DArray",
"Rotation3DArrayLike",
"Rotation3DLike",
"Rotation3DType",
"RotationAxisAngle",
"RotationAxisAngleArray",
"RotationAxisAngleArrayLike",
"RotationAxisAngleLike",
"RotationAxisAngleType",
"Scale3D",
"Scale3DArray",
"Scale3DArrayLike",
"Scale3DLike",
"Scale3DType",
"Transform3D",
"Transform3DArray",
"Transform3DArrayLike",
"Transform3DLike",
"Transform3DType",
"TranslationAndMat3x3",
"TranslationAndMat3x3Array",
"TranslationAndMat3x3ArrayLike",
"TranslationAndMat3x3Like",
"TranslationAndMat3x3Type",
"TranslationRotationScale3D",
"TranslationRotationScale3DArray",
"TranslationRotationScale3DArrayLike",
"TranslationRotationScale3DLike",
"TranslationRotationScale3DType",
"Vec2D",
"Vec2DArray",
"Vec2DArrayLike",
"Vec2DLike",
"Vec2DType",
"Vec3D",
"Vec3DArray",
"Vec3DArrayLike",
"Vec3DLike",
"Vec3DType",
"Vec4D",
"Vec4DArray",
"Vec4DArrayLike",
"Vec4DLike",
"Vec4DType",
] | /rerun_sdk-0.8.1-cp38-abi3-win_amd64.whl/rerun_sdk/rerun/_rerun2/datatypes/__init__.py | 0.803405 | 0.347247 | __init__.py | pypi |
from inspect import signature, Parameter
from functools import wraps
from time import sleep
class MaxRetryError(Exception):
"""Maximum retries have been exceeded."""
pass
class _FunctionSignature:
"""Flags for parameters accepted by function signature."""
NORMAL = 1 << 0
ARGS = 1 << 1
KWARGS = 1 << 2
class rerun:
"""Retry decorator.
Wraps a function and retries it depending on the return value or an exception that is raised. Different
algorithms can be used to implement varying delays after each retry. See `constant`, `linear`, `exponential`,
and `fibonacci`.
Each of the functions that can be passed in (`on_delay`, `on_error`, `on_return`, and `on_retry`) can
either accept 0 parameters, the number of parameters as described below, or the wrapped function's args and
kwargs in addition to the parameters as described below.
For usage examples, see https://github.com/JaredLGillespie/rerun.me.
:param on_delay:
If iterable or callable function, should generate the time delays between successive retries. Each iteration
should yield an integer value representing the time delay in milliseconds. The iterable should ideally have
an internal limit in which to stop, see :func:`exponential` for an example.
If an integer or float, should represent a single delay in milliseconds.
If None type, no retries are performed.
:param on_error:
If a callable, should accept a single value (the error) which is raised and return a boolean value. True
denotes that the error should be handled and the function retried, while False allows it to bubble up
without continuing to retry the function.
If an iterable, should be a sequence of Exception types that can be handled. Exceptions that are not one of
these types cause the error to bubble up without continuing to retry the function.
If an Exception type, an exception that occurs of this type is handled. All others are bubbled up without
continuing to retry the function.
If None type, no errors are handled.
:param on_return:
        If a callable, should accept a single value (the return value) and return a boolean value. True
        denotes that the return value should result in the function being retried, while False allows the
        return value to be returned from the function.
If an iterable, should be a sequence of values that can be handled (i.e. the function is retried). Values
that are not equal to one of these are returned from the function.
        If a single value, any return value that is equal to it is handled. All others are returned from the
        function.
If None type, no return values are handled. Note that if the None type is actually desired to be handled, it
should be given as a sequence like so: `on_return=[None]`.
:param on_retry:
A callback that is called each time the function is retried. Two arguments are passed, the current delay and
the number of retries thus far.
:param retry_after_delay:
A boolean value indicating whether to call the `on_retry` callback before or after the delay is issued.
True indicates after, while False indicates before.
:type on_delay: iterable or callable or int or float or None
:type on_error: iterable or callable or Exception or None
:type on_return: iterable or callable or object or None
:type on_retry: callable or None
:type retry_after_delay: bool
    :raises MaxRetryError:
        If the number of retries has been exceeded, i.e. the `on_delay` generator has been exhausted.
"""
def __init__(self, on_delay=None, on_error=None, on_return=None, on_retry=None, retry_after_delay=False):
self._on_delay = on_delay
self._on_error = on_error
self._on_return = on_return
self._on_retry = on_retry
self._retry_after_delay = retry_after_delay
# Signatures
self._sig_delay = None if not callable(on_delay) else self._define_function_signature(on_delay)
self._sig_error = None if not self._error_is_callable() else self._define_function_signature(on_error)
self._sig_return = None if not callable(on_return) else self._define_function_signature(on_return)
self._sig_retry = None if not callable(on_retry) else self._define_function_signature(on_retry)
def __call__(self, func):
@wraps(func)
def func_wrapper(*args, **kwargs):
return self.run(func, *args, **kwargs)
return func_wrapper
def run(self, func, *args, **kwargs):
"""Executes a function using this as the wrapper.
:param func:
A function to wrap and call.
:param args:
Arguments to pass to the function.
:param kwargs:
Keyword arguments to pass to the function.
:type func: function
"""
try:
ret = func(*args, **kwargs)
if not self._should_handle_return(ret, *args, **kwargs):
return ret
except Exception as e:
if not self._should_handle_error(e, *args, **kwargs):
raise
if self._on_delay is None:
raise MaxRetryError('Maximum number of retries exceeded for {0}'.format(self._get_func_name(func)))
retries = 0
for delay in self._get_delay_sequence(*args, **kwargs):
retries += 1
if self._should_handle_retry(False):
self._call_with_sig(self._on_retry, self._sig_retry, (delay, retries), *args, **kwargs)
sleep(delay / 1000)
if self._should_handle_retry(True):
self._call_with_sig(self._on_retry, self._sig_retry, (delay, retries), *args, **kwargs)
try:
ret = func(*args, **kwargs)
if not self._should_handle_return(ret, *args, **kwargs):
return ret
except Exception as e:
if not self._should_handle_error(e, *args, **kwargs):
raise
raise MaxRetryError('Maximum number of retries exceeded for {0}'.format(self._get_func_name(func)))
def _call_with_sig(self, func, sig, internal_args, *args, **kwargs):
if not sig:
return func()
elif sig & (_FunctionSignature.ARGS | _FunctionSignature.KWARGS):
return func(*(internal_args + args), **kwargs)
else:
return func(*internal_args)
def _define_function_signature(self, func):
sig = None
for param in signature(func).parameters.values():
            if param.kind == Parameter.POSITIONAL_OR_KEYWORD:
                sig = (sig or 0) | _FunctionSignature.NORMAL
            elif param.kind == Parameter.VAR_KEYWORD:
                sig = (sig or 0) | _FunctionSignature.KWARGS
            elif param.kind == Parameter.VAR_POSITIONAL:
                sig = (sig or 0) | _FunctionSignature.ARGS
return sig
def _error_is_callable(self):
return callable(self._on_error) and \
(not isinstance(self._on_error, type) or not issubclass(self._on_error, Exception))
def _get_delay_sequence(self, *args, **kwargs):
if callable(self._on_delay):
return self._call_with_sig(self._on_delay, self._sig_delay, (), *args, **kwargs)
elif self._is_iterable(self._on_delay):
return self._on_delay
return [self._on_delay]
    def _get_func_name(self, func):
        # Partials and other callables may lack __name__; fall back to repr
        return getattr(func, '__name__', repr(func))
def _is_iterable(self, obj):
try:
iter(obj)
return True
except TypeError:
return False
def _should_handle_error(self, error, *args, **kwargs):
if self._on_error is not None:
            # Exception classes are themselves callable, so confirm a callable
            # `on_error` is not actually an Exception subclass before treating
            # it as a predicate
if self._error_is_callable():
return self._call_with_sig(self._on_error, self._sig_error, (error,), *args, **kwargs)
elif self._is_iterable(self._on_error):
return isinstance(error, tuple(self._on_error))
else:
return isinstance(error, self._on_error)
return False
def _should_handle_return(self, value, *args, **kwargs):
if self._on_return is not None:
if callable(self._on_return):
return self._call_with_sig(self._on_return, self._sig_return, (value,), *args, **kwargs)
elif self._is_iterable(self._on_return):
return value in self._on_return
else:
return value == self._on_return
return False
def _should_handle_retry(self, after_delay):
if self._on_retry is not None:
return self._retry_after_delay == after_delay
return False
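The control flow of `run` — one initial call, then one retry per yielded delay, with unhandled errors bubbling up immediately — can be sketched in miniature. This is a simplified illustration of the loop's semantics, not the class's full signature-dispatch logic:

```python
from time import sleep

def run_with_retries(func, delays_ms, handle_error):
    """Simplified sketch of rerun.run: call once, then retry per delay."""
    try:
        return func()
    except Exception as e:
        if not handle_error(e):
            raise  # unhandled errors bubble up without retrying
    for delay in delays_ms:
        sleep(delay / 1000)  # delays are in milliseconds, as in rerun
        try:
            return func()
        except Exception as e:
            if not handle_error(e):
                raise
    raise RuntimeError('Maximum number of retries exceeded')

# A call that fails twice, then succeeds:
calls = {'n': 0}
def flaky():
    calls['n'] += 1
    if calls['n'] < 3:
        raise ConnectionError('transient')
    return 'ok'

result = run_with_retries(flaky, [1, 1, 1], lambda e: isinstance(e, ConnectionError))
assert result == 'ok' and calls['n'] == 3  # succeeded on the third call
```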
def constant(delay, limit):
"""Constant delay generator.
:param delay:
The delay in milliseconds.
:param limit:
The number of delays to yield.
:type delay: int or float
:type limit: int
:return:
A generator function which yields a sequence of delays.
:rtype: function
"""
def func():
if delay < 0:
raise ValueError('delay must be non-negative')
if limit < 0:
raise ValueError('limit must be non-negative')
for _ in range(limit):
yield delay
return func
def linear(start, increment, limit):
"""Linear delay generator.
Creates a function generator that yields a constant delay at each iteration.
:param start:
The starting delay in milliseconds.
:param increment:
The amount to increment the delay after each iteration.
:param limit:
The number of delays to yield.
:type start: int or float
:type increment: int or float
:type limit: int
:return:
A generator function which yields a sequence of delays.
:rtype: function
"""
def func():
if start < 0:
raise ValueError('start must be non-negative')
if limit < 0:
raise ValueError('limit must be non-negative')
if increment < 0 and start + increment * limit < 0:
raise ValueError('parameters will yield negative result')
delay = start
for _ in range(limit):
yield delay
delay += increment
return func
def exponential(base, multiplier, limit):
"""Exponential delay generator.
Creates a function generator that yields an exponentially increasing delay at each iteration.
:param base:
The base to raise to a power.
:param multiplier:
The amount to multiply the delay by for each iteration.
:param limit:
The number of delays to yield.
:type base: int or float
:type multiplier: int or float
:type limit: int
:return:
A generator function which yields a sequence of delays.
:rtype: function
"""
def func():
if base < 0:
raise ValueError('base must be non-negative')
if multiplier < 0:
raise ValueError('multiplier must be non-negative')
if limit < 0:
raise ValueError('limit must be non-negative')
        for exp in range(limit):
            yield base**exp * multiplier
return func
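For example, a doubling backoff starting at 100 ms looks like this (the generator is reimplemented inline so the snippet is self-contained):

```python
def exponential(base, multiplier, limit):
    # Mirrors the generator above: yields base**exp * multiplier
    def func():
        for exp in range(limit):
            yield base ** exp * multiplier
    return func

delays = list(exponential(2, 100, 5)())
print(delays)  # [100, 200, 400, 800, 1600]
```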
def fibonacci(multiplier, limit):
"""Fibonacci delay generator.
Creates a function generator that yields a fibonacci delay sequence.
:param multiplier:
The amount to multiply the delay by for each iteration.
:param limit:
The number of delays to yield.
:type multiplier: int or float
:type limit: int
:return:
A generator function which yields a sequence of delays.
:rtype: function
"""
def func():
if multiplier < 0:
raise ValueError('multiplier must be non-negative')
if limit < 0:
raise ValueError('limit must be non-negative')
a, b = 0, 1
for _ in range(limit):
a, b = b, a + b
yield a * multiplier
return func | /rerun.me-1.0.0-py3-none-any.whl/rerunme/__init__.py | 0.933051 | 0.555496 | __init__.py | pypi |
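Similarly, the fibonacci generator scales the sequence 1, 1, 2, 3, 5, 8, … by the multiplier (reimplemented inline for a self-contained check):

```python
def fibonacci(multiplier, limit):
    # Mirrors the generator above: yields fib(n) * multiplier
    def func():
        a, b = 0, 1
        for _ in range(limit):
            a, b = b, a + b
            yield a * multiplier
    return func

delays = list(fibonacci(100, 6)())
print(delays)  # [100, 100, 200, 300, 500, 800]
```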
import torch
import torch.nn as nn
from torchvision.models import resnet50
try:
from torch.hub import load_state_dict_from_url
except ImportError:
from torch.utils.model_zoo import load_url as load_state_dict_from_url
model_urls = dict(
auc93='https://github.com/tobysuwindra/birdsimilarity/releases/download/BirdSimilarityModel/model_auc93.pth'
)
def load_state(arch, progress=True):
state = load_state_dict_from_url(model_urls.get(arch), progress=progress)
return state
def model_auc93(pretrained=True, progress=True):
model = BirdNetModel()
if pretrained:
state = load_state('auc93', progress)
model.load_state_dict(state['state_dict'])
return model
class Flatten(nn.Module):
def forward(self, x):
return x.view(x.size(0), -1)
class BirdNetModel(nn.Module):
def __init__(self, pretrained=False):
super(BirdNetModel, self).__init__()
self.model = resnet50(pretrained)
embedding_size = 128
num_classes = 500
self.cnn = nn.Sequential(
self.model.conv1,
self.model.bn1,
self.model.relu,
self.model.maxpool,
self.model.layer1,
self.model.layer2,
self.model.layer3,
self.model.layer4)
self.model.fc = nn.Sequential(
Flatten(),
nn.Linear(100352, embedding_size))
self.model.classifier = nn.Linear(embedding_size, num_classes)
def l2_norm(self, input):
input_size = input.size()
buffer = torch.pow(input, 2)
normp = torch.sum(buffer, 1).add_(1e-10)
norm = torch.sqrt(normp)
_output = torch.div(input, norm.view(-1, 1).expand_as(input))
output = _output.view(input_size)
return output
def freeze_all(self):
for param in self.model.parameters():
param.requires_grad = False
def unfreeze_all(self):
for param in self.model.parameters():
param.requires_grad = True
def freeze_fc(self):
for param in self.model.fc.parameters():
param.requires_grad = False
def unfreeze_fc(self):
for param in self.model.fc.parameters():
param.requires_grad = True
def freeze_only(self, freeze):
for name, child in self.model.named_children():
if name in freeze:
for param in child.parameters():
param.requires_grad = False
else:
for param in child.parameters():
param.requires_grad = True
def unfreeze_only(self, unfreeze):
for name, child in self.model.named_children():
if name in unfreeze:
for param in child.parameters():
param.requires_grad = True
else:
for param in child.parameters():
param.requires_grad = False
def forward(self, x):
x = self.cnn(x)
x = self.model.fc(x)
features = self.l2_norm(x)
alpha = 10
features = features * alpha
return features
def forward_classifier(self, x):
features = self.forward(x)
res = self.model.classifier(features)
return res | /res-birdnet-0.0.7.tar.gz/res-birdnet-0.0.7/res_birdnet/models.py | 0.886224 | 0.392803 | models.py | pypi |
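The `l2_norm` step divides each embedding row by its Euclidean length (with a small epsilon for numerical safety), placing features on the unit hypersphere before the `alpha` scaling. A framework-free sketch of the same math (plain Python here for illustration; the model itself uses the torch ops above):

```python
import math

def l2_normalize(rows, eps=1e-10):
    # Divide each row by sqrt(sum(x^2) + eps), as in BirdNetModel.l2_norm
    out = []
    for row in rows:
        norm = math.sqrt(sum(x * x for x in row) + eps)
        out.append([x / norm for x in row])
    return out

unit = l2_normalize([[3.0, 4.0]])[0]
print(unit)  # approximately [0.6, 0.8]
```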