Robots Exclusion Protocol Parser for Python
===========================================
`Robots.txt` parsing in Python.
Goals
=====
- __Fetching__ -- helper utilities for fetching and parsing `robots.txt`s, including
checking `cache-control` and `expires` headers
- __Support for newer features__ -- like `Crawl-Delay` and `Sitemaps`
- __Wildcard matching__ -- without using regexes, no less
- __Performance__ -- with >100k parses per second, >1M URL checks per second once parsed
- __Caching__ -- utilities to help with the caching of `robots.txt` responses
Installation
============
`reppy` is available on `pypi`:
```bash
pip install reppy
```
When installing from source, there are submodule dependencies that must also be fetched:
```bash
git submodule update --init --recursive
make install
```
Usage
=====
Checking when pages are allowed
-------------------------------
Two classes answer questions about whether a URL is allowed: `Robots` and
`Agent`:
```python
from reppy.robots import Robots
# This utility uses `requests` to fetch the content
robots = Robots.fetch('http://example.com/robots.txt')
robots.allowed('http://example.com/some/path/', 'my-user-agent')
# Get the rules for a specific agent
agent = robots.agent('my-user-agent')
agent.allowed('http://example.com/some/path/')
```
The `Robots` class also exposes properties `expired` and `ttl` to describe how
long the response should be considered valid. A `reppy.ttl` policy is used to
determine what that should be:
```python
from reppy.ttl import HeaderWithDefaultPolicy
# Use the `cache-control` or `expires` headers, defaulting to 30 minutes and
# ensuring the TTL is at least 10 minutes
policy = HeaderWithDefaultPolicy(default=1800, minimum=600)
robots = Robots.fetch('http://example.com/robots.txt', ttl_policy=policy)
```
Customizing fetch
-----------------
The `fetch` method accepts `*args` and `**kwargs` that are passed on to `requests.get`,
allowing you to customize the way the `fetch` is executed:
```python
robots = Robots.fetch('http://example.com/robots.txt', headers={...})
```
Matching Rules and Wildcards
----------------------------
Both `*` and `$` are supported for wildcard matching.
This library follows the matching described by the
[1996 RFC](http://www.robotstxt.org/norobots-rfc.txt). In the case where multiple
rules match a query, the longest rule wins, as it is presumed to be the most specific.
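To picture how wildcard matching can work without regexes, here is a small self-contained sketch of the longest-match rule. This is an illustration only, not `reppy`'s actual implementation; the `matches` and `allowed` helpers are made-up names:

```python
def matches(pattern, path):
    """True if a robots.txt-style `pattern` matches `path`.

    `*` matches any run of characters; a trailing `$` anchors the
    pattern to the end of the path. No regexes involved.
    """
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    first, *rest = pattern.split("*")
    # The path must start with the chunk before the first `*`.
    if not path.startswith(first):
        return False
    pos = len(first)
    end = len(path)
    if anchored:
        if rest:
            # The chunk after the last `*` must sit at the very end.
            last = rest.pop()
            if not path.endswith(last) or len(path) - len(last) < pos:
                return False
            end = len(path) - len(last)
        elif pos != len(path):
            return False  # no wildcard at all: `$` demands an exact match
    # Each remaining chunk must appear, in order, between pos and end.
    for part in rest:
        idx = path.find(part, pos, end)
        if idx == -1:
            return False
        pos = idx + len(part)
    return True


def allowed(rules, path):
    """Longest matching pattern wins; `rules` maps pattern -> bool (allowed)."""
    best_len, best_allowed = -1, True  # no matching rule means allowed
    for pattern, is_allowed in rules.items():
        if matches(pattern, path) and len(pattern) > best_len:
            best_len, best_allowed = len(pattern), is_allowed
    return best_allowed
```

Given a query, the longest matching pattern decides the outcome, mirroring the longest-rule-wins behavior described above.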
Checking sitemaps
-----------------
The `Robots` class also lists the sitemaps found in a `robots.txt`:
```python
# This property holds a list of URL strings of all the sitemaps listed
robots.sitemaps
```
Delay
-----
The `Crawl-Delay` directive is per agent and can be accessed through that class. If
none was specified, it's `None`:
```python
# What's the delay my-user-agent should use?
robots.agent('my-user-agent').delay
```
Determining the `robots.txt` URL
--------------------------------
Given a URL, there's a utility to determine the URL of the corresponding `robots.txt`.
It preserves the scheme and hostname, as well as the port (if it's not the
default port for the scheme).
```python
# Get robots.txt URL for http://userinfo@example.com:8080/path;params?query#fragment
# It's http://example.com:8080/robots.txt
Robots.robots_url('http://userinfo@example.com:8080/path;params?query#fragment')
```
Caching
=======
There are two cache classes provided -- `RobotsCache`, which caches entire `reppy.Robots`
objects, and `AgentCache`, which only caches the `reppy.Agent` relevant to a client. These
caches duck-type the class that they cache for the purposes of checking if a URL is
allowed:
```python
from reppy.cache import RobotsCache
cache = RobotsCache(capacity=100)
cache.allowed('http://example.com/foo/bar', 'my-user-agent')
from reppy.cache import AgentCache
cache = AgentCache(agent='my-user-agent', capacity=100)
cache.allowed('http://example.com/foo/bar')
```
Like `reppy.Robots.fetch`, the cache constructors accept a `ttl_policy` to inform the
expiration of the fetched `Robots` objects, as well as `*args` and `**kwargs` to be passed
to `reppy.Robots.fetch`.
Caching Failures
----------------
There's a piece of classic caching advice: "don't cache failures." However, it is not
always appropriate. For example, if the failure is a timeout, clients may want to cache
this result so that every check doesn't take a very long time.
To this end, the `cache` module provides a notion of a cache policy. It determines what
to do in the case of an exception. The default is to cache a form of a disallowed response
for 10 minutes, but you can configure it as you see fit:
```python
# Do not cache failures (note the `ttl=0`):
from reppy.cache.policy import ReraiseExceptionPolicy
cache = AgentCache('my-user-agent', cache_policy=ReraiseExceptionPolicy(ttl=0))
# Cache and reraise failures for 10 minutes (note the `ttl=600`):
cache = AgentCache('my-user-agent', cache_policy=ReraiseExceptionPolicy(ttl=600))
# Treat failures as being disallowed
from reppy.cache.policy import DefaultObjectPolicy
from reppy.robots import Agent
cache = AgentCache(
    'my-user-agent',
    cache_policy=DefaultObjectPolicy(ttl=600, factory=lambda _: Agent().disallow('/')))
```
Development
===========
A `Vagrantfile` is provided to bootstrap a development environment:
```bash
vagrant up
```
Alternatively, development can be conducted using a `virtualenv`:
```bash
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt
```
Tests
=====
To launch the `vagrant` image, run `vagrant up` (though you may have to provide
a `--provider` flag):
```bash
vagrant up
```
With a running `vagrant` instance, you can log in and run the tests with the
top-level `Makefile`:
```bash
vagrant ssh
make test
```
PRs
===
These are not all hard-and-fast rules, but in general PRs have the following expectations:
- __pass Travis__ -- or more generally, whatever CI is used for the particular project
- __be a complete unit__ -- whether a bug fix or feature, it should appear as a complete
unit before consideration.
- __maintain code coverage__ -- some projects may include code coverage requirements as
part of the build as well
- __maintain the established style__ -- this means the existing style of established
projects, the established conventions of the team for a given language on new
projects, and the guidelines of the community of the relevant languages and
frameworks.
- __include failing tests__ -- in the case of bugs, failing tests demonstrating the bug
should be included as one commit, followed by a commit making the test succeed. This
allows us to jump to a world with a bug included, and prove that our test in fact
exercises the bug.
- __be reviewed by one or more developers__ -- not all feedback has to be accepted, but
it should all be considered.
- __avoid 'addressed PR feedback' commits__ -- in general, PR feedback should be rebased
  back into the appropriate commits that introduced the change. In cases where this
  is burdensome, PR feedback commits may be used but should still describe the changes
  contained therein.
PR reviews consider the design, organization, and functionality of the submitted code.
Commits
=======
Certain types of changes should be made in their own commits to improve readability. When
too many different types of changes happen simultaneously in a single commit, the purpose of
each change is muddled. By giving each commit a single logical purpose, it is implicitly
clear why changes in that commit took place.
- __updating / upgrading dependencies__ -- this is especially true for invocations like
`bundle update` or `berks update`.
- __introducing a new dependency__ -- often preceded by a commit updating existing
dependencies, this should only include the changes for the new dependency.
- __refactoring__ -- these commits should preserve all the existing functionality and
merely update how it's done.
- __utility components to be used by a new feature__ -- if introducing an auxiliary class
in support of a subsequent commit, add this new class (and its tests) in its own
commit.
- __config changes__ -- when adjusting configuration in isolation
- __formatting / whitespace commits__ -- when adjusting code only for stylistic purposes.
New Features
------------
Small new features (where small refers to the size and complexity of the change, not the
impact) are often introduced in a single commit. Larger features or components might be
built up piecewise, with each commit containing a single part of it (and its corresponding
tests).
Bug Fixes
---------
In general, bug fixes should come in two-commit pairs: a commit adding a failing test
demonstrating the bug, and a commit making that failing test pass.
Tagging and Versioning
======================
Whenever the version included in `setup.py` is changed (and it should be changed when
appropriate, following [semantic versioning](http://semver.org/)), a corresponding tag
should be created with the same version number (formatted `v<version>`).
```bash
git tag -a v0.1.0 -m 'Version 0.1.0
This release contains an initial working version of the `crawl` and `parse`
utilities.'
git push --tags origin
```
```
# repr-llm
<img src="https://github.com/rgbkrk/repr_llm/assets/836375/f1b8b252-7e70-4897-bfbf-e87d48bb46bd" height="64px" />
Create lightweight representations of Python objects for Large Language Model consumption
## Background
In Python, we have a way to represent our objects within interpreters: `repr`.
In IPython, it goes even further. We can register rich representations of plots, tables, and all kinds of objects. As an example, developers can augment their objects with a `_repr_html_` method to expose a rich HTML version of their object. The most common example Pythonistas know is `pandas` showing tables for their data inside notebooks.
```python
import pandas as pd
df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
df
```
| | a | b |
| --: | --: | --: |
| 0 | 1 | 4 |
| 1 | 2 | 5 |
| 2 | 3 | 6 |
This is a great way to show data in a notebook for humans. What if there was a way to provide a compact yet rich representation for Large Language Models?
## The Idea
The `repr_llm` package introduces the idea that Python objects can emit a rich representation for LLM consumption. With the advent of [OpenAI's Code Interpreter](https://openai.com/blog/chatgpt-plugins#code-interpreter), [Noteable plugin for ChatGPT](https://noteable.io/chatgpt-plugin-for-notebook/), and [LangChain's Python REPL Tool](https://github.com/hwchase17/langchain/blob/fcb3a647997c6275e3d341abb032e5106ea39cac/langchain/tools/python/tool.py#L42C1-L42C1), we have a massive opportunity to create rich visualizations for humans and rich text for models.
Let's begin by creating a `Book` class that has a regular `__repr__` and a `_repr_llm_`:
```python
class Book:
def __init__(self, title, author, year, genre):
self.title = title
self.author = author
self.year = year
self.genre = genre
def __repr__(self):
return f"Book('{self.title}', '{self.author}', {self.year}, '{self.genre}')"
def _repr_llm_(self):
return (f"A Book object representing '{self.title}' by {self.author}, "
f"published in the year {self.year}. Genre: {self.genre}. "
f"Instantiated with `{repr(self)}`"
)
from repr_llm import register_llm_formatter
ip = get_ipython() # Current IPython shell
register_llm_formatter(ip)
# This is how IPython creates the Out[*] prompt in the notebook
data, _ = ip.display_formatter.format(
Book('Attack of the Black Rectangles', 'Amy Sarig King', 2022, "Middle Grade")
)
data
```
```json
{
"text/plain": "Book('Attack of the Black Rectangles', 'Amy Sarig King', 2022, 'Middle Grade')",
"text/llm+plain": "A Book object representing 'Attack of the Black Rectangles' by Amy Sarig King, published in the year 2022. Genre: Middle Grade. Instantiated with `Book('Attack of the Black Rectangles', 'Amy Sarig King', 2022, 'Middle Grade')`"
}
```
## How it works
The `repr_llm` package provides a `register_llm_formatter` function that takes an IPython shell and registers a new formatter for the `text/llm+plain` mimetype.
When IPython goes to display an object, it will first check if the object has a `_repr_llm_` method. If it does, it will call that method and include the result as part of the representation for the object.
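The dispatch can be pictured with a small self-contained sketch (illustrative only; the real logic lives inside IPython's display formatter, and `llm_repr` is a made-up helper name):

```python
def llm_repr(obj):
    """Return the best text representation of `obj` for an LLM:
    prefer `_repr_llm_` when the object provides it, otherwise
    fall back to the ordinary `repr`.
    """
    method = getattr(obj, "_repr_llm_", None)
    if callable(method):
        return method()
    return repr(obj)


class Point:
    """Toy object with both a plain repr and an LLM-oriented one."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    def __repr__(self):
        return f"Point({self.x}, {self.y})"

    def _repr_llm_(self):
        return f"A 2-D point at x={self.x}, y={self.y}."
```

Objects without `_repr_llm_` (plain ints, lists, and so on) keep their usual `repr`, so nothing breaks for existing code.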
## FAQ
### Why not just use `_repr_markdown_`? (or `__repr__`)
The `_repr_markdown_` method is a great way to show rich text in a notebook, but it's meant for humans to read. Large Language Models can read it too. However, there are going to be times when `Markdown` is too big for the model (token limit) or too complex (too many tokens to understand).
Originally I was going to suggest more package authors use `_repr_markdown_` (and they should!). Then [Peter Wang](https://github.com/pzwang) suggested that we have a version written for the models, just like OpenAI's `ai-plugin.json` does, especially since the LLMs _can_ be a more advanced reader.
Any LLM user can still use `_repr_markdown_` to show rich text to the model. This provides an extra option that is _explicit_. It's a way for developers to say "this is what I want the model to see".
### Where does the `text/llm+plain` mimetype come from?
The mimetype is a convention being proposed, a way to say "this is a plain text representation for a Large Language Model". It's not a standard (yet!), but it lets developers be explicit about what they want the model to see while we explore this space.
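Concretely, the representation travels as one more entry in a Jupyter-style display bundle keyed by mimetype. A consumer can then prefer the LLM-specific key and fall back to `text/plain`. Here's a sketch with made-up data; `pick_for_llm` is not part of `repr_llm`:

```python
# A display bundle as IPython produces it: mimetype -> representation.
bundle = {
    "text/plain": "Book('Dune', 'Frank Herbert', 1965, 'Sci-Fi')",
    "text/llm+plain": "A Book object representing 'Dune' by Frank Herbert, "
                      "published in 1965. Genre: Sci-Fi.",
}


def pick_for_llm(bundle):
    """Prefer the LLM-specific representation; fall back to plain text."""
    return bundle.get("text/llm+plain") or bundle.get("text/plain", "")
```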
### Are there examples of this in the wild?
ChatGPT plugins provide something very similar to `__repr__` vs `_repr_llm_` with the `ai-plugin.json` and OpenAPI specs, especially since there's a delineation of `description_for_human` and `description_for_model`.
### How did this originate?
While experimenting with Large Language Models directly [💬](https://platform.openai.com/docs/api-reference/chat/create) [🤗](https://huggingface.co/) and with tools like [genai](https://github.com/noteable-io/genai), [dangermode](https://github.com/rgbkrk/dangermode), and [langchain](https://github.com/hwchase17/langchain) I've been naturally converting representations of my data or text to markdown as a format that GPT models can understand and converse with a user about.
## What's next?
Convince as many library authors as possible, just as we did with `_repr_html_`, to use `_repr_llm_` to provide a lightweight representation of their objects that is:
- Deterministic - no side effects
- Navigable - states what other functions can be run to get more information, create better plots, etc.
- Lightweight - take into account how much text a GPT model can read
- Safe - no secrets, no PII, no sensitive data
## Credit
Thank you to [Dave Shoup](https://github.com/shouples) for the conversations about pandas representation and how we can keep improving what we send for `Out[*]` to Large Language Models.
# This module is an optional import due to importing numpy and pandas
PANDAS_INSTALLED = False
try:
import numpy as np
import pandas as pd
PANDAS_INSTALLED = True
except ImportError:
pass
def summarize_dataframe(df, sample_rows=5, sample_columns=20):
"""Create a summary of a Pandas DataFrame for Large Language Model Consumption.
Args:
df (Pandas DataFrame): The dataframe to be summarized.
sample_rows (int): The number of rows to sample
sample_columns (int): The number of columns to sample
Returns:
A markdown string with a summary of the dataframe
"""
if not PANDAS_INSTALLED:
raise ImportError("Pandas must be installed to use summarize_dataframe")
num_rows, num_cols = df.shape
# # Column Summary
# ## Missing value summary for all columns
missing_values = pd.DataFrame(df.isnull().sum(), columns=["Missing Values"])
missing_values["% Missing"] = missing_values["Missing Values"] / num_rows * 100
# ## Data type summary for all columns
column_info = pd.concat([df.dtypes, missing_values], axis=1).reset_index()
column_info.columns = ["Column Name", "Data Type", "Missing Values", "% Missing"]
column_info["Data Type"] = column_info["Data Type"].astype(str)
# TODO: Bring these back once we can ensure `describe` does not fail on some tables
# # Basic summary statistics for numerical and categorical columns
# # get basic statistical information for each column
# numerical_summary = df.describe(include=[np.number]).T.reset_index().rename(columns={'index': 'Column Name'})
has_categoricals = any(df.select_dtypes(include=['category', 'datetime', 'timedelta']).columns)
if has_categoricals:
categorical_describe = df.describe(include=['category', 'datetime', 'timedelta'])
categorical_summary = categorical_describe.T.reset_index().rename(columns={'index': 'Column Name'})
else:
categorical_summary = pd.DataFrame(columns=['Column Name'])
# Ignore any columns that do not have `:@computed_region` in the name, which are derived data not useful for
# summarization of sample data
filtered_columns = [col for col in df.columns if ":@computed_region" not in col]
# adjust sample size with filtered dataframe shape
sample_columns = min(sample_columns, len(filtered_columns))
sample_rows = min(sample_rows, df.shape[0])
sampled = df[filtered_columns].sample(sample_columns, axis=1).sample(sample_rows, axis=0)
tablefmt = "github"
# create the markdown string for output
output = (
f"## Dataframe Summary\n\n"
f"Number of Rows: {num_rows:,}\n\n"
f"Number of Columns: {num_cols:,}\n\n"
f"### Column Information\n\n{column_info.to_markdown(tablefmt=tablefmt)}\n\n"
# f"### Numerical Summary\n\n{numerical_summary.to_markdown(tablefmt=tablefmt)}\n\n"
f"### Categorical Summary\n\n{categorical_summary.to_markdown(tablefmt=tablefmt)}\n\n"
f"### Sample Data ({sample_rows}x{sample_columns})\n\n{sampled.to_markdown(tablefmt=tablefmt)}"
)
return output
def summarize_series(series, sample_size=5):
"""Create a summary of a Pandas Series for Large Language Model Consumption.
Args:
series (pd.Series): The series to be summarized.
sample_size (int): The number of values to sample
Returns:
A markdown string with a summary of the series
"""
if not PANDAS_INSTALLED:
raise ImportError("Pandas must be installed to use summarize_series")
# Get basic series information
num_values = len(series)
data_type = series.dtype
num_missing = series.isnull().sum()
percent_missing = num_missing / num_values * 100
# Get summary statistics based on the data type
if np.issubdtype(data_type, np.number):
summary_statistics = series.describe().to_frame().T
elif pd.api.types.is_string_dtype(data_type):
summary_statistics = series.describe(datetime_is_numeric=True).to_frame().T
else:
summary_statistics = series.describe().to_frame().T
# Sample data
sampled = series.sample(min(sample_size, num_values))
tablefmt = "github"
# Create the markdown string for output
output = (
f"## Series Summary\n\n"
f"Number of Values: {num_values:,}\n\n"
f"Data Type: {data_type}\n\n"
f"Missing Values: {num_missing:,} ({percent_missing:.2f}%)\n\n"
f"### Summary Statistics\n\n{summary_statistics.to_markdown(tablefmt=tablefmt)}\n\n"
f"### Sample Data ({sample_size})\n\n{sampled.to_frame().to_markdown(tablefmt=tablefmt)}"
)
return output
def format_dataframe_for_llm(df):
"""Format a dataframe for Large Language Model Consumption.
Args:
df (Pandas DataFrame): The dataframe to be formatted.
Returns:
A markdown string with a formatted dataframe
"""
if not PANDAS_INSTALLED:
raise ImportError("Pandas must be installed to use format_dataframe_for_llm")
with pd.option_context('display.max_rows', 5, 'display.html.table_schema', False, 'display.max_columns', 20):
num_columns = min(pd.options.display.max_columns, df.shape[1])
num_rows = min(pd.options.display.max_rows, df.shape[0])
return summarize_dataframe(df, sample_rows=num_rows, sample_columns=num_columns)
def format_series_for_llm(series):
"""Format a series for Large Language Model Consumption.
Args:
series (Pandas Series): The series to be formatted.
Returns:
A markdown string with a formatted series
"""
if not PANDAS_INSTALLED:
raise ImportError("Pandas must be installed to use format_series_for_llm")
with pd.option_context('display.max_rows', 5, 'display.html.table_schema', False, 'display.max_columns', 20):
num_cols = min(pd.options.display.max_columns, series.shape[0])
        return summarize_series(series, sample_size=num_cols)
from collections.abc import Mapping, Sequence
from dataclasses import dataclass
from functools import partial
from typing import List as TypingList
from typing import Union
from repr_utils._base import ReprBase
from repr_utils._util import indent_each_line
@dataclass
class List(ReprBase):
values: Union[TypingList, Mapping]
numbered: bool = False
def __repr__(self) -> str:
return self._repr_markdown_()
def get_value_repr(self, item) -> ReprBase:
if isinstance(item, ReprBase):
return item
return List(item, self.numbered)
def _repr_html_(self) -> str:
if isinstance(self.values, Mapping):
values_string = "\n".join(
[
f"<li>{key}: {value}</li>"
if not self.instance_of_any(value, [Mapping, Sequence, ReprBase])
else f"<li>{key}</li>\n{self.get_value_repr(value)._repr_html_()}"
for key, value in self.values.items()
]
)
else:
values_string = "\n".join(
[
f"<li>{item}</li>"
if not self.instance_of_any(item, [Mapping, TypingList, ReprBase])
else self.get_value_repr(item)._repr_html_()
for item in self.values
]
)
return "<{type}>\n{values}\n</{type}>".format(
type="ol" if self.numbered else "ul", values=values_string
)
def __handle_nested_markdown(self, value):
return indent_each_line(self.get_value_repr(value)._repr_markdown_())
def __get_md_start(self, index: int):
if self.numbered:
return f"{index + 1}."
return "-"
def _repr_markdown_(self) -> str:
if isinstance(self.values, Mapping):
return "\n".join(
[
f" {self.__get_md_start(i)} {key}: {value}"
if not self.instance_of_any(value, [Mapping, Sequence, ReprBase])
else (
f" {self.__get_md_start(i)}"
f" {key}\n{self.__handle_nested_markdown(value)}"
)
for i, (key, value) in enumerate(self.values.items())
]
)
return "\n".join(
[
f" {self.__get_md_start(i)} {item}"
if not self.instance_of_any(item, [Mapping, TypingList, ReprBase])
else self.__handle_nested_markdown(item)
for i, item in enumerate(self.values)
]
)
def _repr_latex_(self) -> str:
if isinstance(self.values, Mapping):
values_string = "\n".join(
[
f"\\item {key}: {value}"
if not self.instance_of_any(value, [Mapping, Sequence, ReprBase])
else f"\\item {key}\n{self.get_value_repr(value)._repr_latex_()}"
for key, value in self.values.items()
]
)
else:
values_string = "\n".join(
[
f"\\item {item}"
if not self.instance_of_any(item, [Mapping, TypingList, ReprBase])
else self.get_value_repr(item)._repr_latex_()
for item in self.values
]
)
return "\\begin{{{type}}}\n{values}\n\\end{{{type}}}".format(
type="itemize" if self.numbered else "enumerate", values=values_string
)
@classmethod
def instance_of_any(cls, obj, types: TypingList[type], false_on_string=True):
if isinstance(obj, str) and false_on_string:
return False
type_matcher = partial(isinstance, obj)
        return any(map(type_matcher, types))
from __future__ import absolute_import, division, print_function, unicode_literals
import argparse
def string_to_set(string_value):
return set(open(string_value).read().splitlines())
class Operator(object):
def __init__(self, name_of_set_operation, aliases):
self.name_of_set_operation = name_of_set_operation
self.aliases = aliases
def execute(self, set1, set2):
return getattr(set1, self.name_of_set_operation)(set2)
class OperatorReturningSet(Operator):
pass
class OperatorReturningBool(Operator):
pass
operators = [
# OperatorReturningBool('isdisjoint', []),
# OperatorReturningBool('issubset', []),
# OperatorReturningBool('issuperset', []),
OperatorReturningSet('union', ['|', '+', 'or']),
OperatorReturningSet('intersection', ['&', 'and']),
OperatorReturningSet('difference', ['-', 'minus']),
OperatorReturningSet('symmetric_difference', ['^']),
]
def string_to_operator(string_value):
for operator in operators:
if string_value == operator.name_of_set_operation or string_value in operator.aliases:
return operator
raise argparse.ArgumentTypeError('Unknown operator: %r' % string_value)
def description():
return '''Operators: \n%s
Examples
# Show all files in directory "a" which are not in directory "b":
setops <(cd a; find ) - <(cd b; find )
# Create some files for testing
echo foo > foo.txt
echo bar > bar.txt
echo foobar > foobar.txt
# All files minus files containing "foo"
user@host$ setops <(ls *.txt) - <(grep -l foo *.txt)
# All files containing "foo" or "bar" minus files which contain "foobar"
setops <(setops <(grep -l bar *.txt) + <(grep -l foo *.txt)) - <(grep -l foobar *.txt)
''' % ('\n'.join([' %s Aliases: %s' % (
operator.name_of_set_operation,
' '.join(operator.aliases)) for operator in operators]))
def main():
parser = argparse.ArgumentParser(description=description(),
formatter_class=argparse.RawTextHelpFormatter)
parser.add_argument('set1', type=string_to_set)
parser.add_argument('operator', type=string_to_operator)
parser.add_argument('set2', type=string_to_set)
args = parser.parse_args()
for item in sorted(args.operator.execute(args.set1, args.set2)):
        print(item)
import mimetypes
import tempfile
from typing import Optional
from contracts import contract
from reprep import mime_implies_unicode_representation, RepRepDefaults
from zuper_commons.types import ZException
from .constants import MIME_JPG, MIME_PDF, MIME_PNG, MIME_SVG, mime_to_ext
from .datanode import DataNode
from .mpl import get_pylab_instance
from .node import Node
__all__ = ["PylabAttacher", "Attacher"]
from .types import MimeType, NID
class Attacher:
node: Node
nid: NID
mime: Optional[MimeType]
caption: Optional[str]
def __init__(
self, node: Node, nid: NID, mime: Optional[MimeType], caption: Optional[str]
):
self.node = node
self.nid = nid
self.mime = mime
self.caption = caption
if node.has_child(nid):
msg = "Node %s (id = %r) already has child %r" % (node, node.nid, nid)
raise ValueError(msg)
if self.mime is not None:
if self.mime in mime_to_ext:
suffix = "." + mime_to_ext[self.mime]
else:
suffix = mimetypes.guess_extension(self.mime)
if not suffix:
msg = "Cannot guess extension for MIME %r." % mime
raise ZException(msg)
if suffix == ".svgz":
suffix = ".svg"
else:
suffix = ".bin"
self.temp_file = tempfile.NamedTemporaryFile(suffix=suffix)
def __enter__(self):
return self.temp_file.name
def __exit__(self, _a, _b, _c):
data = open(self.temp_file.name, "rb").read()
if mime_implies_unicode_representation(self.mime):
data = data.decode("utf-8")
self.node.data(nid=self.nid, data=data, mime=self.mime, caption=self.caption)
self.temp_file.close()
class PylabAttacher:
node: Node
nid: NID
mime: Optional[MimeType]
caption: Optional[str]
@contract(node=Node, nid="valid_id", mime="None|unicode", caption="None|unicode")
def __init__(
self,
node: Node,
nid: NID,
mime: Optional[MimeType],
caption: Optional[str],
**figure_args
):
self.node = node
self.nid = nid
self.mime = mime
self.caption = caption
if self.mime is None:
self.mime = RepRepDefaults.default_image_format
if node.has_child(nid):
raise ValueError("Node %s already has child %r" % (node, nid))
suffix = mimetypes.guess_extension(self.mime)
if not suffix:
msg = "Cannot guess extension for MIME %r." % mime
raise ValueError(msg)
self.temp_file = tempfile.NamedTemporaryFile(suffix=suffix)
self.pylab = get_pylab_instance()
self.figure = self.pylab.figure(**figure_args)
def __enter__(self):
return self.pylab
def __exit__(self, exc_type, exc_value, traceback): # @UnusedVariable
if exc_type is not None:
# an error occurred. Close the figure and return false.
self.pylab.close()
return False
if not self.figure.axes:
raise Exception("You did not draw anything in the image.")
self.pylab.savefig(self.temp_file.name, **RepRepDefaults.savefig_params)
with open(self.temp_file.name, "rb") as f:
data = f.read()
self.temp_file.close()
image_node = DataNode(
nid=self.nid, data=data, mime=self.mime, caption=self.caption
)
# save other versions if needed
if (self.mime != MIME_PNG) and RepRepDefaults.save_extra_png:
with image_node.data_file("png", mime=MIME_PNG) as f2:
self.pylab.savefig(f2, **RepRepDefaults.savefig_params)
if (self.mime != MIME_SVG) and RepRepDefaults.save_extra_svg:
with image_node.data_file("svg", mime=MIME_SVG) as f2:
self.pylab.savefig(f2, **RepRepDefaults.savefig_params)
if (self.mime != MIME_PDF) and RepRepDefaults.save_extra_pdf:
with image_node.data_file("pdf", mime=MIME_PDF) as f2:
self.pylab.savefig(f2, **RepRepDefaults.savefig_params)
self.pylab.close()
self.node.add_child(image_node)
self.node.add_to_autofigure(image_node)
@contract(parent=Node, nid="valid_id", rgb="array[HxWx(3|4)]")
def data_rgb_imp(
parent: Node, nid: NID, rgb, mime=MIME_PNG, caption: Optional[str] = None
):
from .graphics import Image_from_array, rgb_zoom
# zoom images smaller than 50
if max(rgb.shape[0], rgb.shape[1]) < 50: # XXX config
rgb = rgb_zoom(rgb, 10)
pil_image = Image_from_array(rgb)
with parent.data_file(nid=nid, mime=mime, caption=caption) as f:
if mime == MIME_PNG:
params = {}
if mime == MIME_JPG:
params = dict(quality=95, optimize=True)
pil_image.save(f, **params)
    return parent[nid]
from .reduction import Reduction
from .reduction_display import ReductionDisplay
from .storage import RepRepStats
from .with_description import WithDescription
from contracts import contract, new_contract
from reprep.report_utils.storing import StoreResultsDict
__all__ = [
"DataView",
]
class DataView(WithDescription):
""" This class defines how we view the data. """
NOT_AVAILABLE = "n/a"
@contract(source=WithDescription, reduction=Reduction, display=ReductionDisplay)
def __init__(self, source, reduction, display, *args, **kwargs):
"""
:param name: ID for this view
:param source: the source field in the data
        :param reduction: the reduction function (one defined in RepRepStats)
:param display: the display function (from number to string) (one defined in RepRepStats)
:param symbol: A LaTeX expression.
:param desc: A free-form string.
"""
super(DataView, self).__init__(*args, **kwargs)
self.source = source
self.reduction = reduction
self.display = display
def __repr__(self):
return "DataView(%r,%r,%r)" % (self.source, self.reduction, self.display)
@contract(samples=StoreResultsDict, returns="tuple(*,*,*)")
def reduce(self, samples):
"""
Returns all stages: raw_data, reduction, display.
"""
field = self.source.get_name()
data = list(samples.field_or_value_field(field))
reduced = self.reduction.function(data)
if reduced is None:
display = DataView.NOT_AVAILABLE
else:
display = self.display.function(reduced)
return data, reduced, display
@staticmethod
def from_string(s, source_fields={}):
"""
Accepts the formats:
- source = source/one/string
- source/reduction = source/reduction/string
- source/reduction/display
- source//display => source/one/display
"""
tokens = s.split("/")
if len(tokens) == 1:
source = tokens[0]
reduction = "one"
display = "string"
elif len(tokens) == 2:
source = tokens[0]
reduction = tokens[1]
if len(reduction) == 0:
reduction = "one"
display = "string"
elif len(tokens) == 3:
source = tokens[0]
reduction = tokens[1]
if len(reduction) == 0:
reduction = "one"
display = tokens[2]
else:
msg = "Wrong format %r" % s
raise ValueError(msg)
name = "%s_%s" % (source, reduction)
source = source_fields[source]
reduction = RepRepStats.get_reduction(reduction)
display = RepRepStats.get_display(display)
symbol = reduction.get_symbol() % source.get_symbol()
sdesc = source.get_desc()
if sdesc:
sdesc = sdesc[0].lower() + sdesc[1:]
desc = reduction.get_desc() % sdesc
return DataView(
name=name,
source=source,
reduction=reduction,
display=display,
desc=desc,
symbol=symbol,
)
new_contract("DataView", DataView)
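The `from_string` format documented above can be exercised as a standalone parser; the function below is an illustrative sketch (not part of the reprep API) that mirrors the token-splitting and defaulting rules:

```python
def parse_view_spec(s):
    """Split a view spec into (source, reduction, display), filling defaults."""
    tokens = s.split("/")
    if len(tokens) == 1:
        source, reduction, display = tokens[0], "one", "string"
    elif len(tokens) == 2:
        source, reduction = tokens
        display = "string"
    elif len(tokens) == 3:
        source, reduction, display = tokens
    else:
        raise ValueError("Wrong format %r" % s)
    # An empty reduction token (as in "source//display") falls back to "one".
    return source, reduction or "one", display
```

For example, `parse_view_spec("time//perc")` yields `("time", "one", "perc")`.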
from contracts import contract
from reprep import MIME_RST, Report, logger
from reprep.report_utils.storing.store_results import StoreResults
from reprep.report_utils.storing.store_results_dict import StoreResultsDict
from reprep.report_utils.statistics.structures.with_description import WithDescription
from reprep.report_utils.statistics.structures.data_view import DataView
@contract(
samples=StoreResults,
rows_field="unicode",
cols_fields="list[>=1](unicode)",
source_descs="dict(unicode:WithDescription)",
)
def table_by_rows(id_report, samples, rows_field, cols_fields, source_descs):
samples2 = StoreResultsDict(samples)
class Missing(dict):
def __missing__(self, key):
logger.warning("Description for %r missing." % key)
d = WithDescription(name=key, symbol="\\text{%s}" % key, desc=None)
self[key] = d
return d
source_descs = Missing(source_descs)
r = Report(id_report)
data_views = [DataView.from_string(x, source_descs) for x in cols_fields]
# data: list of list of list
rows_field, data, reduced, display = summarize_data(
samples2, rows_field, data_views
)
rows = ["$%s$" % source_descs[x].get_symbol() for x in rows_field]
cols = ["$%s$" % x.get_symbol() for x in data_views]
r.table("table", data=display, cols=cols, rows=rows)
r.data("table_data", data=reduced, caption="Data without presentation applied.")
r.data("table_data_source", data=data, caption="Source data, before reduction.")
row_desc = "\n".join(
[
"- $%s$: %s" % (x.get_symbol(), x.get_desc())
for x in list(map(source_descs.__getitem__, rows_field))
]
)
col_desc = "\n".join(
["- $%s$: %s" % (x.get_symbol(), x.get_desc()) for x in data_views]
)
r.text("row_desc", rst_escape_slash(row_desc), mime=MIME_RST)
r.text("col_desc", rst_escape_slash(col_desc), mime=MIME_RST)
return r
@contract(
samples=StoreResultsDict,
rows_field="unicode",
cols_fields="list[C](DataView)",
returns="tuple( list[R], list[R](list[C]), "
"list[R](list[C]), list[R](list[C]) )",
)
def summarize_data(samples, rows_field, cols_fields):
"""
returns rows, data, reduced, display
"""
def reduce_data(data_view, samples):
try:
return data_view.reduce(samples)
        except Exception:
            msg = "Error while applying the view\n\t%s\nto the samples\n\t%s" % (
                data_view,
                samples,
            )
logger.error(msg)
raise
rows = []
alldata = []
for row, row_samples in samples.groups_by_field_value(rows_field):
row_data = [reduce_data(view, row_samples) for view in cols_fields]
alldata.append(row_data)
rows.append(row)
data = [[x[0] for x in row] for row in alldata]
reduced = [[x[1] for x in row] for row in alldata]
display = [[x[2] for x in row] for row in alldata]
return rows, data, reduced, display
def rst_escape_slash(s):
    """ Escape backslashes by doubling them, useful for RST. """
    return s.replace("\\", "\\\\")
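The escaping helper above is small enough to check directly; this self-contained copy shows the doubling behavior on a LaTeX-flavored row description (the input string is made up for illustration):

```python
def rst_escape_slash(s):
    """Double each backslash so RST does not swallow LaTeX commands."""
    return s.replace("\\", "\\\\")

escaped = rst_escape_slash("- $\\mu$: mean value")
```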
import numpy as np
def plot_horizontal_line(pylab, y, *args, **kwargs):
""" Plots an horizontal line across the plot using current bounds. """
a = pylab.axis()
pylab.plot([a[0], a[1]], [y, y], *args, **kwargs)
def plot_vertical_line(pylab, x, *args, **kwargs):
""" Plots a vertical line across the plot using current bounds. """
a = pylab.axis()
pylab.plot([x, x], [a[2], a[3]], *args, **kwargs)
def y_axis_balanced(pylab, extra_space=0.1, show0=True):
a = pylab.axis()
y_max = a[3]
y_min = a[2]
D = np.max([np.abs(y_max), np.abs(y_min)])
D *= 1 + extra_space
pylab.axis((a[0], a[1], -D, +D))
if show0:
plot_horizontal_line(pylab, 0, "k--")
def y_axis_positive(pylab, extra_space=0.1, show0=True):
a = pylab.axis()
y_max = a[3]
y_min = -y_max * extra_space
y_max *= 1 + extra_space
pylab.axis((a[0], a[1], y_min, y_max))
if show0:
plot_horizontal_line(pylab, 0, "k--")
def x_axis_extra_space_right(pylab, fraction=0.1):
a = pylab.axis()
D = a[1] - a[0]
extra = D * fraction
pylab.axis((a[0], a[1] + extra, a[2], a[3]))
def y_axis_extra_space(pylab, extra_space=0.1):
a = pylab.axis()
D = a[3] - a[2]
extra = D * extra_space
pylab.axis((a[0], a[1], a[2] - extra, a[3] + extra))
def x_axis_extra_space(pylab, extra_space=0.1):
a = pylab.axis()
D = a[1] - a[0]
extra = D * extra_space
pylab.axis((a[0] - extra, a[1] + extra, a[2], a[3]))
def x_axis_balanced(pylab, extra_space=0.1):
a = pylab.axis()
D = a[1] - a[0]
extra = D * extra_space
pylab.axis((a[0] - extra, a[1] + extra, a[2], a[3]))
def x_axis_set(pylab, xmin, xmax):
a = pylab.axis()
pylab.axis((xmin, xmax, a[2], a[3]))
def y_axis_set(pylab, ymin, ymax):
a = pylab.axis()
pylab.axis((a[0], a[1], ymin, ymax))
def y_axis_set_min(pylab, ymin):
a = pylab.axis()
pylab.axis((a[0], a[1], ymin, a[3]))
def turn_all_axes_off(pylab):
""" Turns everything off. (TODO) """
axes = pylab.gca()
axes.set_frame_on(False)
pylab.setp(axes.get_xticklabels(), visible=False)
pylab.setp(axes.get_yticklabels(), visible=False)
axes.xaxis.offsetText.set_visible(False)
axes.yaxis.offsetText.set_visible(False)
axes.xaxis.set_ticks_position("none")
axes.yaxis.set_ticks_position("none")
for _, spine in axes.spines.items():
        spine.set_color("none")
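The bound arithmetic used by helpers such as `y_axis_balanced` can be checked without matplotlib; the sketch below reproduces only the computation of the new axis tuple (the `pylab` plumbing and the optional zero line are omitted):

```python
def balanced_y_bounds(axis, extra_space=0.1):
    """Return (x0, x1, -D, +D), where D spans |ymin| and |ymax| plus a margin."""
    x0, x1, y0, y1 = axis
    D = max(abs(y0), abs(y1)) * (1 + extra_space)
    return (x0, x1, -D, +D)
```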
from reprep.plot_utils.spines import set_spines_look_A
ieee_colsize = 1.57 * 2
def ieee_spines(pylab):
set_spines_look_A(
pylab, outward_offset=5, linewidth=1, markersize=2, markeredgewidth=0.5
)
def ieee_fonts(pylab):
# See http://matplotlib.sourceforge.net
# /users/customizing.html#matplotlibrc-sample
params = {
"axes.labelsize": 8,
# 'text.fontsize': 8,
"font.size": 8,
"legend.fontsize": 8,
"xtick.labelsize": 6,
"ytick.labelsize": 6,
"lines.markersize": 1,
"lines.markeredgewidth": 0,
# 'axes.color_cycle': ['k', 'm', 'g', 'c', 'm', 'y', 'k'],
"legend.fancybox": True,
"legend.frameon": False,
"legend.numpoints": 1,
"legend.markerscale": 2,
"legend.labelspacing": 0.2,
"legend.columnspacing": 1,
"legend.borderaxespad": 0.1
# 'font.family': 'Times New Roman',
# 'font.serif': ['Times New Roman', 'Times'],
# 'font.size': 8
# 'text.usetex': True
}
pylab.rcParams.update(params)
from matplotlib import rc
# cmr10 works but no '-' sign
rc(
"font",
**{
"family": "serif",
"serif": ["Bitstream Vera Serif", "Times New Roman", "Palatino"],
"size": 8.0,
}
)
# rc('font', **{'family': 'cmr10',
# 'serif': ['cmr10', 'Times New Roman', 'Palatino'],
# 'size': 8.0})
def style_ieee_halfcol_xy(pylab, ratio=3.0 / 4):
"""
Note: not sure if should be called before plotting, or after.
Find out and write it here.
ratio=1 to have a square one
"""
f = pylab.gcf()
f.set_size_inches((ieee_colsize / 2, ieee_colsize / 2 * ratio))
ieee_fonts(pylab)
ieee_spines(pylab)
def style_ieee_fullcol_xy(pylab, ratio=3.0 / 4):
f = pylab.gcf()
f.set_size_inches((ieee_colsize, ieee_colsize * ratio))
ieee_fonts(pylab)
ieee_spines(pylab)
# # update the font size of the x and y axes
# fontsize=16
# pylab.plot([1,2,3],[4,5,6])
# ax = pylab.gca()
# for tick in ax.xaxis.get_major_ticks():
# tick.label1.set_fontsize(fontsize)
# for tick in ax.yaxis.get_major_ticks():
# tick.label1.set_fontsize(fontsize)
# Can also update the tick label font using set_fontname('Helvetica')
# See also the Text class in the matplotlib api doc.
from contracts import contract
import numpy as np
@contract(a="array", top_percent=">=0,<=90")
def skim_top(a, top_percent):
""" Cuts off the top percentile """
threshold = np.percentile(a.flat, 100 - top_percent)
return np.minimum(a, threshold)
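The same top-skimming idea can be sketched in pure Python; note this uses a simple nearest-rank percentile, so the clipping threshold may differ slightly from `np.percentile`, which interpolates:

```python
def skim_top_list(values, top_percent):
    """Clip values above the (100 - top_percent) nearest-rank percentile."""
    ordered = sorted(values)
    rank = int(round((100 - top_percent) / 100.0 * (len(ordered) - 1)))
    threshold = ordered[max(0, min(len(ordered) - 1, rank))]
    return [min(v, threshold) for v in values]
```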
@contract(
value="array[HxW],H>0,W>0",
max_value="None|number",
min_value="None|number",
skim=">=0,<=90",
)
def get_scaled_values(value, min_value=None, max_value=None, skim=0):
"""
Returns dictionary with entries:
- value01 Values in [0, 1], no inf, nan, where it is set 0.5.
- isnan Values were NaN.
- isinf Values were +-Inf
- isfin Values weren't Inf or NaN
- clipped_ub Values were clipped
- clipped_lb
- flat Boolean if there wasn't a range
- min_value
- max_value
"""
value = value.copy().astype("float32")
isfin = np.isfinite(value)
isnan = np.isnan(value)
isinf = np.isinf(value)
if skim != 0:
# TODO: skim bottom?
value = skim_top(value, skim)
    if max_value is None or min_value is None:
        # Compute the data range ignoring +-Inf entries.
        finite_only = value.copy()
        finite_only[isinf] = np.nan
        vmax = np.nanmax(finite_only)
        vmin = np.nanmin(finite_only)
        bounds = (vmin, vmax)
if max_value is None:
max_value = bounds[1]
if min_value is None:
min_value = bounds[0]
# but what about +- inf?
assert np.isfinite(min_value)
assert np.isfinite(max_value)
# Put values for filling in
a_value = min_value
value[isinf] = a_value
value[isnan] = a_value
if max_value == min_value:
scaled = np.empty_like(value)
scaled.fill(a_value)
flat = True
else:
scaled = (value - min_value) * (1.0 / (max_value - min_value))
flat = False
clipped_ub = scaled > 1
clipped_lb = scaled < 0
# Cut at the thresholds
scaled01 = np.maximum(scaled, 0)
scaled01 = np.minimum(scaled01, 1)
return dict(
scaled01=scaled01,
isnan=isnan,
isinf=isinf,
isfin=isfin,
flat=flat,
min_value=min_value,
max_value=max_value,
clipped_ub=clipped_ub,
clipped_lb=clipped_lb,
    )
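At its core, the function above performs an affine rescaling into [0, 1] with clipping; the sketch below shows just that step for plain lists, leaving out the NaN/Inf bookkeeping (the flat case here maps to 0 rather than to `min_value`):

```python
def scale01(values, min_value=None, max_value=None):
    """Rescale values into [0, 1], clipping anything outside the bounds."""
    lo = min(values) if min_value is None else min_value
    hi = max(values) if max_value is None else max_value
    if hi == lo:
        # Degenerate ("flat") range: there is no spread to scale.
        return [0.0 for _ in values]
    scaled = [(v - lo) / float(hi - lo) for v in values]
    return [min(1.0, max(0.0, s)) for s in scaled]
```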
from contracts import contract
from numpy import maximum, minimum, zeros
import numpy as np
from .scaling import skim_top
__all__ = [
"posneg",
"posneg_hinton",
]
@contract(
value="array[HxW],H>0,W>0",
max_value="None|float",
skim=">=0,<=90",
nan_color="color_spec",
)
def posneg_hinton(
value, max_value=None, skim=0, nan_color=[0.5, 0.5, 0.5], properties=None
):
    """
    Converts a 2D value to a Hinton-style grayscale display
    (white=positive, black=negative, gray=zero).
    """
    value = value.astype("float32")
# value = value.squeeze().copy() # squeeze: (1,1) -> ()
value = value.copy()
isfinite = np.isfinite(value)
isnan = np.logical_not(isfinite)
# set nan to 0
value[isnan] = 0
if max_value is None:
abs_value = abs(value)
if skim != 0:
abs_value = skim_top(abs_value, skim)
max_value = np.max(abs_value)
from reprep.graphics.filter_scale import scale
rgb_p = scale(
value,
min_value=0,
max_value=max_value,
min_color=[0.5, 0.5, 0.5],
max_color=[1.0, 1.0, 1.0],
nan_color=[1, 0.6, 0.6],
flat_color=[0.5, 0.5, 0.5],
)
rgb_n = scale(
value,
min_value=-max_value,
max_value=0,
max_color=[0.5, 0.5, 0.5],
min_color=[0.0, 0.0, 0.0],
nan_color=[1, 0.6, 0.6],
flat_color=[0.5, 0.5, 0.5],
)
w_p = value >= 0
# w_z = value == 0
w_n = value <= 0
H, W = value.shape
rgb = np.zeros((H, W, 3), "uint8")
for i in range(3):
rgb[w_p, i] = rgb_p[w_p, i]
rgb[w_n, i] = rgb_n[w_n, i]
        rgb[isnan, i] = int(nan_color[i] * 255)
# rgb[w_z, i] = 128
return rgb
@contract(
value="array[HxW],H>0,W>0",
max_value="None|float",
skim=">=0,<=90",
nan_color="color_spec",
)
def posneg(value, max_value=None, skim=0, nan_color=[0.5, 0.5, 0.5], properties=None):
"""
Converts a 2D value to normalized display.
(red=positive, blue=negative)
"""
value = value.astype("float32")
# value = value.squeeze().copy() # squeeze: (1,1) -> ()
value = value.copy()
isfinite = np.isfinite(value)
isnan = np.logical_not(isfinite)
# set nan to 0
value[isnan] = 0
if max_value is None:
abs_value = abs(value)
if skim != 0:
abs_value = skim_top(abs_value, skim)
max_value = np.max(abs_value)
if max_value == 0:
# In this case, it means that all is 0
max_value = 1 # don't divide by 0 later
assert np.isfinite(max_value)
positive = minimum(maximum(value, 0), max_value) / max_value
negative = maximum(minimum(value, 0), -max_value) / -max_value
positive_part = (positive * 255).astype("uint8")
negative_part = (negative * 255).astype("uint8")
result = zeros((value.shape[0], value.shape[1], 3), dtype="uint8")
anysign = maximum(positive_part, negative_part)
R = 255 - negative_part[:, :]
G = 255 - anysign
B = 255 - positive_part[:, :]
# remember the nans
R[isnan] = nan_color[0] * 255
G[isnan] = nan_color[1] * 255
B[isnan] = nan_color[2] * 255
result[:, :, 0] = R
result[:, :, 1] = G
result[:, :, 2] = B
# TODO: colorbar
if properties is not None:
properties["min_value"] = -max_value
properties["max_value"] = max_value
properties["nan_color"] = nan_color
bar_shape = (512, 128)
bar = np.vstack([np.linspace(-1, 1, bar_shape[0])] * bar_shape[1]).T
properties["color_bar"] = posneg(bar)
    return result
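Per pixel, the red/blue mapping reduces to a few channel subtractions; this illustrative helper (not part of the module) applies the same formulas to a single value:

```python
def posneg_pixel(value, max_value):
    """Map a signed value to an (R, G, B) triple: red=positive, blue=negative."""
    pos = int(min(max(value, 0.0), max_value) / max_value * 255)
    neg = int(min(max(-value, 0.0), max_value) / max_value * 255)
    any_sign = max(pos, neg)
    # Positive magnitude drains green and blue; negative drains green and red.
    return (255 - neg, 255 - any_sign, 255 - pos)
```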
from __future__ import unicode_literals
from django.db import models, migrations
import django.contrib.gis.db.models.fields
class JSONField(models.TextField):
"""Mocks jsonfield 0.92's column-type behaviour"""
def db_type(self, connection):
if connection.vendor == 'postgresql' and connection.pg_version >= 90300:
return 'json'
else:
return super(JSONField, self).db_type(connection)
class Migration(migrations.Migration):
dependencies = [
]
operations = [
migrations.CreateModel(
name='Boundary',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('set_name', models.CharField(max_length=100, help_text='A generic singular name for the boundary.')),
('slug', models.SlugField(max_length=200, help_text="The boundary's unique identifier within the set, used as a path component in URLs.")),
('external_id', models.CharField(max_length=64, help_text='An identifier of the boundary, which should be unique within the set.')),
('name', models.CharField(db_index=True, max_length=192, help_text='The name of the boundary.')),
('metadata', JSONField(default=dict, help_text='The attributes of the boundary from the shapefile, as a dictionary.', blank=True)),
('shape', django.contrib.gis.db.models.fields.MultiPolygonField(srid=4326, help_text='The geometry of the boundary in EPSG:4326.')),
('simple_shape', django.contrib.gis.db.models.fields.MultiPolygonField(srid=4326, help_text='The simplified geometry of the boundary in EPSG:4326.')),
('centroid', django.contrib.gis.db.models.fields.PointField(srid=4326, help_text='The centroid of the boundary in EPSG:4326.', null=True)),
('extent', JSONField(blank=True, help_text='The bounding box of the boundary as a list like [xmin, ymin, xmax, ymax] in EPSG:4326.', null=True)),
('label_point', django.contrib.gis.db.models.fields.PointField(spatial_index=False, srid=4326, blank=True, help_text='The point at which to place a label for the boundary in EPSG:4326, used by represent-maps.', null=True)),
],
options={
'verbose_name_plural': 'boundaries',
'verbose_name': 'boundary',
},
bases=(models.Model,),
),
migrations.CreateModel(
name='BoundarySet',
fields=[
('slug', models.SlugField(primary_key=True, help_text="The boundary set's unique identifier, used as a path component in URLs.", serialize=False, max_length=200, editable=False)),
('name', models.CharField(max_length=100, help_text='The plural name of the boundary set.', unique=True)),
('singular', models.CharField(max_length=100, help_text='A generic singular name for a boundary in the set.')),
('authority', models.CharField(max_length=256, help_text='The entity responsible for publishing the data.')),
('domain', models.CharField(max_length=256, help_text='The geographic area covered by the boundary set.')),
('last_updated', models.DateField(help_text='The most recent date on which the data was updated.')),
('source_url', models.URLField(help_text='A URL to the source of the data.', blank=True)),
('notes', models.TextField(help_text='Free-form text notes, often used to describe changes that were made to the original source data.', blank=True)),
('licence_url', models.URLField(help_text='A URL to the licence under which the data is made available.', blank=True)),
('extent', JSONField(blank=True, help_text="The set's boundaries' bounding box as a list like [xmin, ymin, xmax, ymax] in EPSG:4326.", null=True)),
('start_date', models.DateField(blank=True, help_text="The date from which the set's boundaries are in effect.", null=True)),
('end_date', models.DateField(blank=True, help_text="The date until which the set's boundaries are in effect.", null=True)),
('extra', JSONField(blank=True, help_text='Any additional metadata.', null=True)),
],
options={
'ordering': ('name',),
'verbose_name_plural': 'boundary sets',
'verbose_name': 'boundary set',
},
bases=(models.Model,),
),
migrations.AddField(
model_name='boundary',
name='set',
field=models.ForeignKey(related_name='boundaries', to='boundaries.BoundarySet', on_delete=models.CASCADE, help_text='The set to which the boundary belongs.'),
preserve_default=True,
),
migrations.AlterUniqueTogether(
name='boundary',
unique_together=set([('slug', 'set')]),
),
    ]
.. _examples:
Examples
========
In the following example we will create a forward controller and apply it to the rocketball gym environment.
We will perform retrospective context inference to infer which type of rocket is currently controlled.
Furthermore, we will perform prospective action inference to infer which actions are suitable in order for the rocket to reach a certain target.
Let's start with some imports:
.. code-block:: python
>>> import numpy as np
>>> import torch
>>> import gym
>>> from reprise.action_inference import ActionInference
>>> from reprise.context_inference import ContextInference
>>> from reprise.gym.rocketball.agent import Agent
We want to be able to reproduce this example:
.. code-block:: python
>>> np.random.seed(123)
>>> torch.manual_seed(123) # doctest: +ELLIPSIS
<torch._C.Generator object at ...>
Set the context size, which is the number of context neurons our neural network will have.
Furthermore, set the sizes of an action and an observation, which are 4 and 2 for the rocketball environment, respectively (4 thrusts on each rocket; x and y coordinates).
Since our neural network will be a forward controller, its input size is given by the sum of the context, action, and observation size.
Its output size is determined by the observation size.
We want our neural network to have a hidden size of 8.
.. code-block:: python
>>> context_size = 2
>>> action_size = 4
>>> observation_size = 2
>>> input_size = context_size + action_size + observation_size
>>> hidden_size = 8
Now we define our neural network.
In this case, we will use a very simple definition for an LSTM:
.. code-block:: python
>>> class LSTM(torch.nn.Module):
... def __init__(self, input_size, hidden_size,
... num_layers, observation_size):
... super(LSTM, self).__init__()
... self.lstm = torch.nn.LSTM(
... input_size=input_size,
... hidden_size=hidden_size,
... num_layers=num_layers)
... self.fc = torch.nn.Linear(hidden_size, observation_size)
...
... def forward(self, x, state=None):
... x, state = self.lstm(x, state)
... x = self.fc(x)
... return x, state
We can already create an instance of it together with some initial hidden and cell states.
We create a second pair of hidden and cell states for context inference.
This could also be the time to load saved weights into the model, which we will skip here.
.. code-block:: python
>>> model = LSTM(input_size, hidden_size, 1, observation_size)
>>> lstm_h = torch.zeros(1, 1, hidden_size)
>>> lstm_c = torch.zeros(1, 1, hidden_size)
>>> lstm_state = [lstm_h, lstm_c]
>>> lstm_state_ci = [lstm_h.clone(), lstm_c.clone()]
Before we create the action and context inference objects, we need to define proper loss functions.
During context inference, we will compare past predictions with actual past observations.
During action inference, we will compare future predictions with our desired target.
In this example, we assume that the model was trained on deltas of observations.
Therefore, the inputs and outputs of the model are deltas and we need to accumulate the outputs in the loss function for action inference:
.. code-block:: python
>>> criterion = torch.nn.MSELoss()
>>>
>>> def ci_loss(outputs, observations):
... return criterion(torch.cat(outputs, dim=0),
... torch.cat(observations, dim=0))
>>>
>>> def ai_loss(outputs, targets):
... return criterion(torch.cumsum(
... torch.cat(outputs, dim=0), dim=0), targets)
Now we can create an action inference object.
We first define an initial policy and the optimizer which shall be used to optimize this policy.
Together with the action inference loss function, these objects are passed to the action inference constructor.
.. code-block:: python
>>> ai_horizon = 10
>>> policy = torch.rand([ai_horizon, 1, action_size])
>>> optimizer = torch.optim.Adam(
... [policy], lr=0.1, betas=(0.9, 0.999))
>>> ai = ActionInference(
... model=model,
... policy=policy,
... optimizer=optimizer,
... inference_cycles=3,
... criterion=ai_loss)
Initialization of context inference works similar.
First, we create an initial context.
Usually, during context inference, the hidden state furthest in the past is adapted as well.
The opt accessor function tells the context inference algorithm exactly which parts of the state should be optimized.
Here, we only use the hidden state ([state[0]]), but we could also optimize both the hidden and cell state ([state[0], state[1]]).
After creating the optimizer, we pass everything to the context inference constructor.
.. code-block:: python
>>> context = torch.zeros([1, 1, context_size])
>>> def opt_accessor(state):
... return [state[0]]
>>> params = [{'params': [context], 'lr': 0.1},
... {'params': opt_accessor(lstm_state), 'lr': 0.0001}]
>>> optimizer = torch.optim.Adam(params)
>>> ci = ContextInference(
... model=model,
... initial_model_state=lstm_state_ci,
... context=context,
... optimizer=optimizer,
... inference_length=5,
... inference_cycles=5,
... criterion=ci_loss,
... opt_accessor=opt_accessor)
Now we define an initial position, delta, and a tensor representing the randomly chosen target of an agent.
We use gym to create the environment and add our agent.
.. code-block:: python
>>> position = torch.Tensor([[[0, 1]]])
>>> targets = torch.cat(
... ai_horizon *
... [torch.Tensor(
... [[np.random.uniform(-1.5, 1.5),
... np.random.uniform(0, 2)]])])
>>> targets = targets[:, None, :]
>>> delta = torch.zeros([1, 1, 2])
>>> env = gym.make('reprise.gym:rocketball-v0')
>>> env.reset()
>>> agent = Agent(id='foo', mode=0, init_pos=np.array([0, 1]), color='black')
>>> agent.update_target(targets[0][0].numpy())
>>> env.add_agent(agent)
>>> action = torch.zeros([4])
Now everything is in place and we can actually loop over the environment to control our rocket.
.. code-block:: python
>>> for t in range(50):
... observation = env.step([action.numpy()])
... position_old = position.clone()
... position = torch.Tensor(observation[0][0][1])
... position = position[None, None, :]
... delta_old = delta.clone()
... delta = position - position_old
...
... x_t = torch.zeros([1, 1, input_size])
... x_t[0, 0, :context_size] = context.detach()
... x_t[0, 0, context_size:context_size + action_size] = action
... x_t[0, 0, -observation_size:] = delta_old
...
... with torch.no_grad():
... y_t, lstm_state = model.forward(x_t, lstm_state)
... context, _, states = ci.infer_contexts(
... x_t[:, :, context_size:], delta)
... lstm_state = (
... states[-1][0].clone().detach(),
... states[-1][1].clone().detach())
... policy, _, _ = ai.infer_actions(
... delta, lstm_state, context.clone().detach().repeat(
... policy.shape[0], 1, 1), targets - position)
... action = policy[0][0].detach()
To look into the context and policy of the last time step you can do:
.. code-block:: python
>>> print(context) # doctest: +ELLIPSIS
tensor([[[7.8..., 9.1...]]], requires_grad=True)
>>> print(policy) # doctest: +ELLIPSIS
tensor([[[ 6..., -7..., -6..., 7...]],
...
... [[ 4..., -7..., -6..., 7...]]], grad_fn=<CloneBackward>)
from collections.abc import Iterable
import os
import json
import itertools
import tkinter as tk
def Reprlist_inster_use(a, i, lo):
if len(i) > 1 and len(set(i)) == 1:
j = i[0].ljust(lo)
if len(j) > lo:
i.append(a)
else:
i.append(j)
else:
i.append(a)
def iterable(obj):
return isinstance(obj, Iterable)
def shape(obj):
if isinstance(obj, str):
        return ((len(obj),),)
obj = list(obj)
if len(obj) == 0:
        return (0,)
r = ()
try:
n = [len(i) for i in obj]
except TypeError:
if all(not iterable(i) or isinstance(i, str) for i in obj):
return (tuple(len(str(i)) for i in obj),)
else:
ts = {type(i) for i in obj}
if ts == {str}:
return (tuple(n),)
if len(ts) == 1:
if len(set(n)) == 1:
r += (len(n),)
return r + shape(obj[0])
for i in obj:
for j in i:
            if iterable(j) or isinstance(j, str):
l = {sum(len(str(j)) for j in i) for i in obj}
if len(l) == 1:
return r + (len(n),) + (tuple(l),)
else:
raise ValueError("Irregular iterable got.")
else:
raise ValueError("Irregular iterable got.")
def reshape(val, nd):
if all(iterable(i) for i in val):
if all(isinstance(i, str) for i in val):
s = p = 0
r = []
val = ''.join(val)
for i in nd:
p = p + i
r.append(val[s:p])
s = p
return r
else:
r = []
for i in val:
r.append(reshape(i, nd))
return r
else:
return reshape([str(i) for i in val], nd)
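The leaf case of `reshape` — joining strings and re-slicing the result by a tuple of lengths — can be exercised on its own; this sketch mirrors that branch:

```python
def reslice(pieces, lengths):
    """Join the pieces as strings, then cut the result into chunks of the given lengths."""
    joined = "".join(str(p) for p in pieces)
    out, start = [], 0
    for n in lengths:
        out.append(joined[start:start + n])
        start += n
    return out
```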
def get_global_config():
with open(os.path.join(os.path.dirname(__file__), 'config_global.json'), encoding="utf-8") as fp:
return json.load(fp)
def _check_rule_value(old):
if any(not ((isinstance(i, int) and i > 3) or i is None)
for i in (old['maxstring'],
old['maxline'])):
raise ValueError("Key 'maxstring','maxline' need be int and > 3.")
if not isinstance(old['startswith'], str):
        raise TypeError("Key 'startswith' must be str, got",
                        type(old['startswith']))
def check_rule_value(dic):
_check_rule_value(dic['r'])
_check_rule_value(dic['s'])
_check_rule_value(dic['f'])
if not isinstance(dic['f']['linebreaks'], str):
        raise TypeError('Line breaks must be str, got',
                        type(dic['f']['linebreaks']))
def _set_maxstring(n):
if n is None:
def gen(iterable):
'A func complexed by `complex_rule`.'
return iterable
else:
n1 = (n + 1) // 2
n2 = n1 - n
def gen(iterable):
'A func complexed by `complex_rule`.'
if len(iterable) > n:
return '...'.join([iterable[:n1], iterable[n2:]])
else:
return iterable
return gen
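The abbreviation closure built above keeps the first ⌈n/2⌉ and last ⌊n/2⌋ characters around an ellipsis (the validators elsewhere in this module require n > 3); a direct, non-closure sketch:

```python
def abbreviate(s, n):
    """Shorten s past n characters, keeping head and tail around '...'."""
    if len(s) <= n:
        return s
    head = (n + 1) // 2  # ceil(n / 2) characters from the front
    tail = n - head      # the remainder from the back
    return s[:head] + "..." + s[-tail:]
```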
def _set_maxline(n):
if n is None:
def gen(iterable):
'A func complexed by `complex_rule`.'
return enumerate(iterable)
else:
n1 = (n + 1) // 2
n2 = n1 - n
def gen(iterable):
'A func complexed by `complex_rule`.'
l = len(iterable)
if l < n:
yield from enumerate(iterable)
else:
yield from enumerate(iterable[:n1])
yield from (('.', '',),) * 3
yield from enumerate(iterable[n2:], l - n1)
return gen
def comp_rule(dic):
rst = []
for k in 'rs':
rst.append(_set_maxstring(dic[k]['maxstring']))
rst.append(_set_maxline(dic[k]['maxline']))
else:
        return rst
from collections.abc import Iterable
import json, os, tkinter as tk, itertools
def Reprlist_inster_use(a, i, lo):
if len(i) > 1 and len(set(i)) == 1:
j = i[0].ljust(lo)
if len(j) > lo:
i.append(a)
else:
i.append(j)
else:
i.append(a)
def iterable(obj):
return isinstance(obj, Iterable)
def shape(obj):
if isinstance(obj, str):
return ((len(obj),),)
obj = list(obj)
if len(obj) == 0:
return (0,)
r = ()
try:
n = [len(i) for i in obj]
except TypeError:
if all(not iterable(i) or isinstance(i, str) \
for i in obj):
return (tuple(len(str(i)) for i in obj),)
else:
ts = {type(i) for i in obj}
if ts == {str}:
return (tuple(n),)
if len(ts) == 1:
if len(set(n)) == 1:
r += (len(n),)
return r + shape(obj[0])
for i in obj:
for j in i:
            if iterable(j) or isinstance(j, str):
l = {sum(len(str(j)) for j in i) for i in obj}
if len(l) == 1:
return r + (len(n),) + (tuple(l),)
else:
raise ValueError("Irregular iterable got.")
else:
raise ValueError("Irregular iterable got.")
def reshape(val, nd):
if all(iterable(i) for i in val):
if all(isinstance(i, str) for i in val):
s = p = 0
r = []
val = ''.join(val)
for i in nd:
p = p + i
r.append(val[s:p])
s = p
return r
else:
r = []
for i in val:
r.append(reshape(i, nd))
return r
else:
return reshape([str(i) for i in val], nd)
def get_global_rule():
    with open(os.path.join(os.path.dirname(__file__), 'globalrule.json'), encoding="utf-8") as fp:
        return json.load(fp)
def _check_rule_value(old):
if any(not ((isinstance(i, int) and i > 3) or i is None)
for i in (old['maxstring'], old['maxline'])):
raise ValueError("Key 'maxstring','maxline' need be int and > 3.")
if not isinstance(old['startswith'], str):
raise TypeError("key 'startswith' need str, got", type(old['startswith']))
def check_rule_value(dic):
_check_rule_value(dic['r'])
_check_rule_value(dic['s'])
_check_rule_value(dic['f'])
if not isinstance(dic['f']['linebreaks'], str):
        raise TypeError('Line breaks must be str, got', type(dic['f']['linebreaks']))
def _set_maxstring(n):
if n is None:
def gen(iterable):
'A func complexed by `complex_rule`.'
return iterable
else:
n1 = (n + 1) // 2
n2 = n1 - n
def gen(iterable):
'A func complexed by `complex_rule`.'
if len(iterable) > n:
return '...'.join([iterable[:n1], iterable[n2:]])
else:
return iterable
return gen
def _set_maxline(n):
if n is None:
def gen(iterable):
'A func complexed by `complex_rule`.'
return enumerate(iterable)
else:
n1 = (n + 1) // 2
n2 = n1 - n
def gen(iterable):
'A func complexed by `complex_rule`.'
l = len(iterable)
if l < n:
yield from enumerate(iterable)
else:
yield from enumerate(iterable[:n1])
yield from (('.', '',),) * 3
yield from enumerate(iterable[n2:], l - n1)
return gen
def comp_rule(dic):
rst = []
for k in 'rs':
rst.append(_set_maxstring(dic[k]['maxstring']))
rst.append(_set_maxline(dic[k]['maxline']))
else:
        return rst
import calendar
import os
import stat
import struct
import time
import zipfile
from dataclasses import dataclass
from typing import Callable, Optional
# FIXME: support more types, needs test cases
# https://sources.debian.org/src/unzip/6.0-27/zipinfo.c/#L1887
SYS_FAT, SYS_UNX, SYS_NTF = (0, 3, 11)
SYSTEM = {SYS_FAT: "fat", SYS_UNX: "unx", SYS_NTF: "ntf"}
# https://sources.debian.org/src/unzip/6.0-27/zipinfo.c/#L2086
EXE_EXTS = {"com", "exe", "btm", "cmd", "bat"}
# https://sources.debian.org/src/unzip/6.0-27/zipinfo.c/#L1896
COMPRESS_TYPE = {
zipfile.ZIP_STORED: "stor",
zipfile.ZIP_DEFLATED: "def", # DEFLATE_TYPE char is appended
zipfile.ZIP_BZIP2: "bzp2",
zipfile.ZIP_LZMA: "lzma",
}
# https://sources.debian.org/src/unzip/6.0-27/zipinfo.c/#L1886
# normal, maximum, fast, superfast
DEFLATE_TYPE = "NXFS"
EXTRA_DATA_INFO = {
# extra, data descriptor
(False, False): "-",
(False, True): "l",
(True, False): "x",
(True, True): "X",
}
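Two of the lookups above decode directly from ZIP header bits: the deflate variant lives in bits 1-2 of the general-purpose flags, and bit 3 marks a trailing data descriptor. A small self-contained sketch of both (names are illustrative):

```python
DEFLATE_VARIANTS = "NXFS"  # normal, maximum, fast, superfast

def deflate_char(flag_bits):
    """Bits 1-2 of the general-purpose flags select the deflate variant."""
    return DEFLATE_VARIANTS[(flag_bits >> 1) & 3]

def extra_marker(has_extra, flag_bits):
    """'x'/'X' when an extra field is present; uppercase adds a data descriptor."""
    has_descriptor = bool(flag_bits & 0x08)
    return {(False, False): "-", (False, True): "l",
            (True, False): "x", (True, True): "X"}[(has_extra, has_descriptor)]
```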
@dataclass(frozen=True)
class Time:
"""Unix time from extra field (UT or UX)."""
mtime: int
atime: Optional[int]
ctime: Optional[int]
class Error(RuntimeError):
pass
# FIXME
# https://sources.debian.org/src/unzip/6.0-27/zipinfo.c/#L1097
# https://sources.debian.org/src/zip/3.0-12/zip.h/#L211
def format_info(info: zipfile.ZipInfo, *, extended: bool = False,
long: bool = False) -> str:
r"""
Format ZIP entry info.
>>> zf = zipfile.ZipFile("test/data/crlf.apk")
>>> info1 = zf.getinfo("META-INF/")
>>> info2 = zf.getinfo("resources.arsc")
>>> info3 = zf.getinfo("LICENSE.GPLv3")
>>> format_info(info1)
'-rw---- 2.0 fat 0 bx defN 17-May-15 11:25 META-INF/'
>>> format_info(info1, extended=True)
'drw---- 2.0 fat 0 bx 2 defN 2017-05-15 11:25:18 00000000 META-INF/'
>>> format_info(info2)
'-rw---- 1.0 fat 896 b- stor 09-Jan-01 00:00 resources.arsc'
>>> format_info(info3)
'-rw------- 3.0 unx 35823 t- defN 80-Jan-01 00:00 LICENSE.GPLv3'
>>> info3.external_attr |= 0o644 << 16
>>> format_info(info3, long=True)
'-rw-r--r-- 3.0 unx 35823 t- 12289 defN 80-Jan-01 00:00 LICENSE.GPLv3'
>>> format_info(info3, extended=True)
'-rw-r--r-- 3.0 unx 35823 t- 12289 defN 1980-01-01 00:00:00 cece3b93 LICENSE.GPLv3'
>>> info4 = zipfile.ZipInfo("foo\n.com")
>>> info4.file_size = info4.compress_size = 0
>>> format_info(info4)
'-rwx--- 2.0 unx 0 b- stor 80-Jan-01 00:00 foo^J.com'
>>> info4.extra = b'UT\x05\x00\x01\x00\x00\x00\x00'
>>> format_info(info4)
'-rwx--- 2.0 unx 0 bx stor 70-Jan-01 00:00 foo^J.com'
>>> info4.extra = b'UX\x08\x00\x80h\x9e>|h\x9e>'
>>> format_info(info4)
'-rwx--- 2.0 unx 0 bx stor 03-Apr-17 08:40 foo^J.com'
"""
if t := extra_field_time(info.extra):
date_time = tuple(time.localtime(t.mtime))[:6]
else:
date_time = info.date_time
perm = format_permissions(info)
if extended and info.filename.endswith("/"):
perm = "d" + perm[1:] # directory
vers = "{}.{}".format(info.create_version // 10,
info.create_version % 10)
syst = SYSTEM.get(info.create_system, "???")
xinf = "t" if info.internal_attr == 1 else "b" # text/binary
if info.flag_bits & 1:
xinf = xinf.upper() # encrypted
xinf += EXTRA_DATA_INFO[(bool(info.extra), bool(info.flag_bits & 0x08))]
comp = COMPRESS_TYPE.get(info.compress_type, "????")
if info.compress_type == zipfile.ZIP_DEFLATED:
comp += DEFLATE_TYPE[(info.flag_bits >> 1) & 3]
if extended:
dt = "{}-{:02d}-{:02d}".format(*date_time[:3])
tm = "{:02d}:{:02d}:{:02d}".format(*date_time[3:])
else:
dt = "{:02d}-{}-{:02d}".format(
date_time[0] % 100,
calendar.month_abbr[date_time[1]] or "000",
date_time[2]
)
tm = "{:02d}:{:02d}".format(*date_time[3:5])
fields = [f"{perm:<11}", vers, syst, f"{info.file_size:>8}", xinf]
if long or extended:
fields.append(f"{info.compress_size:>8}")
fields += [comp, dt, tm]
if extended:
fields.append(f"{info.CRC:08x}")
fields.append(printable_filename(info.filename))
return " ".join(fields)
# FIXME
# https://sources.debian.org/src/unzip/6.0-27/zipinfo.c/#L2064
def format_permissions(info: zipfile.ZipInfo) -> str:
"""
Format ZIP entry Unix or FAT permissions.
>>> zf = zipfile.ZipFile("test/data/crlf.apk")
>>> info1 = zf.getinfo("META-INF/")
>>> info2 = zf.getinfo("resources.arsc")
>>> info3 = zf.getinfo("LICENSE.GPLv3")
>>> format_permissions(info1)
'-rw----'
>>> format_permissions(info2)
'-rw----'
>>> format_permissions(info3)
'-rw-------'
>>> info3.external_attr = info3.external_attr | (0o644 << 16)
>>> format_permissions(info3)
'-rw-r--r--'
>>> info4 = zipfile.ZipInfo("foo.com")
>>> format_permissions(info4)
'-rwx---'
"""
hi = info.external_attr >> 16
if hi and info.create_system in (SYS_UNX, SYS_FAT):
return stat.filemode(hi)
exe = os.path.splitext(info.filename)[1][1:].lower() in EXE_EXTS
xat = info.external_attr & 0xFF
return "".join((
'd' if xat & 0x10 else '-',
'r',
'-' if xat & 0x01 else 'w',
'x' if xat & 0x10 or exe else '-',
'a' if xat & 0x20 else '-',
'h' if xat & 0x02 else '-',
's' if xat & 0x04 else '-',
))
# https://sources.debian.org/src/zip/3.0-12/zip.h/#L217
# https://sources.debian.org/src/zip/3.0-12/zipfile.c/#L6544
def extra_field_time(extra: bytes, local: bool = False) -> Optional[Time]:
r"""
Get unix time from extra field (UT or UX).
>>> t = extra_field_time(b'UT\x05\x00\x01\x00\x00\x00\x00')
>>> t
Time(mtime=0, atime=None, ctime=None)
>>> tuple(time.localtime(t.mtime))
(1970, 1, 1, 0, 0, 0, 3, 1, 0)
>>> t = extra_field_time(b'UT\x05\x00\x01\xda\xe9\xe36')
>>> t
Time(mtime=920906202, atime=None, ctime=None)
>>> tuple(time.localtime(t.mtime))
(1999, 3, 8, 15, 16, 42, 0, 67, 0)
>>> t = extra_field_time(b'UX\x08\x00\x80h\x9e>|h\x9e>')
>>> t
Time(mtime=1050568828, atime=1050568832, ctime=None)
>>> tuple(time.localtime(t.mtime))
(2003, 4, 17, 8, 40, 28, 3, 107, 0)
>>> tuple(time.localtime(t.atime))
(2003, 4, 17, 8, 40, 32, 3, 107, 0)
>>> t = extra_field_time(b'UX\x08\x00\xda\xe9\xe36\xda\xe9\xe36')
>>> t
Time(mtime=920906202, atime=920906202, ctime=None)
>>> tuple(time.localtime(t.mtime))
(1999, 3, 8, 15, 16, 42, 0, 67, 0)
>>> tuple(time.localtime(t.atime))
(1999, 3, 8, 15, 16, 42, 0, 67, 0)
"""
while len(extra) >= 4:
hdr_id, size = struct.unpack("<HH", extra[:4])
if size > len(extra) - 4:
break
if hdr_id == 0x5455 and size >= 1:
flags = extra[4]
if flags & 0x1 and size >= 5:
mtime = int.from_bytes(extra[5:9], "little")
atime = ctime = None
if local:
if flags & 0x2 and size >= 9:
atime = int.from_bytes(extra[9:13], "little")
if flags & 0x4 and size >= 13:
ctime = int.from_bytes(extra[13:17], "little")
return Time(mtime, atime, ctime)
elif hdr_id == 0x5855 and size >= 8:
atime = int.from_bytes(extra[4:8], "little")
mtime = int.from_bytes(extra[8:12], "little")
return Time(mtime, atime, None)
extra = extra[size + 4:]
return None
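As a quick illustration of the UT (0x5455) layout parsed above, this standalone sketch builds a minimal extra field with only the mtime flag set and decodes it the same way the loop does (made-up mtime value; the field ID and layout follow the Info-ZIP extended-timestamp extension):

```python
import struct

# Build a UT (0x5455) extra field carrying only an mtime (flag bit 0 set),
# then decode it the way extra_field_time does.
mtime = 920906202                                     # arbitrary example time
payload = struct.pack("<BI", 0x01, mtime)             # flags byte + 32-bit mtime
extra = struct.pack("<HH", 0x5455, len(payload)) + payload

hdr_id, size = struct.unpack("<HH", extra[:4])        # header id + data size
assert hdr_id == 0x5455 and size == 5
flags = extra[4]
parsed = int.from_bytes(extra[5:9], "little") if flags & 0x1 else None
```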
def printable_filename(s: str) -> str:
r"""
Replace ASCII control characters with caret notation (e.g. ^M, ^J) and other
non-printable characters with backslash escapes like repr().
>>> printable_filename("foo bar.baz")
'foo bar.baz'
>>> printable_filename("foo\r\n\x7fbar\x82\u0fff.baz")
'foo^M^J^?bar\\x82\\u0fff.baz'
"""
t = []
for c in s:
if c.isprintable():
t.append(c)
elif (i := ord(c)) in range(32):
t.append("^" + chr(i + 64))
elif i == 127:
t.append("^?")
else:
t.append(repr(c)[1:-1])
return "".join(t)
def zipinfo(zip_file: str, *, extended: bool = False, long: bool = False,
fmt: Callable[..., str] = format_info) -> None:
"""List ZIP entries, like Info-ZIP's zipinfo(1)."""
with zipfile.ZipFile(zip_file) as zf:
size = os.path.getsize(zip_file)
ents = len(zf.infolist())
tot_u = tot_c = 0
print(f"Archive: {printable_filename(zip_file)}")
print(f"Zip file size: {size} bytes, number of entries: {ents}")
if ents:
for info in zf.infolist():
tot_u += info.file_size
tot_c += info.compress_size
print(fmt(info, extended=extended, long=long))
if info.flag_bits & 1: # encrypted
tot_c -= 12 # don't count extra 12 header bytes
s = "" if ents == 1 else "s"
r = _cfactor(tot_u, tot_c)
print(f"{ents} file{s}, {tot_u} bytes uncompressed, "
f"{tot_c} bytes compressed: {r}")
else:
print("Empty zipfile.")
# https://sources.debian.org/src/unzip/6.0-27/list.c/#L708
# https://sources.debian.org/src/unzip/6.0-27/zipinfo.c/#L913
def _cfactor(u: int, c: int) -> str:
"""
Compression factor as reported by Info-ZIP's zipinfo(1).
>>> _cfactor(0, 0)
'0.0%'
>>> _cfactor(400, 300)
'25.0%'
>>> _cfactor(300, 400)
'-33.3%'
>>> _cfactor(84624808, 25825672)
'69.5%'
>>> _cfactor(938, 492)
'47.5%'
>>> _cfactor(3070914, 1205461)
'60.8%'
"""
if not u:
r, s = 0, ""
else:
f, d = (1, u // 1000) if u > 2000000 else (1000, u)
if u >= c:
r, s = (f * (u - c) + (d >> 1)) // d, ""
else:
r, s = (f * (c - u) + (d >> 1)) // d, "-"
return f"{s}{r//10}.{r%10}%"
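For intuition, the integer arithmetic above is (away from exact rounding boundaries and the >2 MB rescaling branch) equivalent to this floating-point sketch, which is illustration only:

```python
def compression_factor(uncompressed: int, compressed: int) -> str:
    """Floating-point approximation of _cfactor (illustration only; the
    integer version rounds half-up and rescales for very large inputs)."""
    if not uncompressed:
        return "0.0%"
    pct = (uncompressed - compressed) / uncompressed * 100
    return f"{pct:.1f}%"
```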
def zip_filenames(zip_file: str) -> None:
"""List ZIP entry filenames, one per line."""
with zipfile.ZipFile(zip_file) as zf:
for name in zf.namelist():
print(printable_filename(name))
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(prog="zipinfo.py")
parser.add_argument("-1", "--filenames-only", action="store_true",
help="only print filenames, one per line")
parser.add_argument("-e", "--extended", action="store_true",
help="use extended output format")
parser.add_argument("-l", "--long", action="store_true",
help="use long output format")
parser.add_argument("zipfile", metavar="ZIPFILE")
args = parser.parse_args()
if args.filenames_only and not (args.extended or args.long):
zip_filenames(args.zipfile)
else:
zipinfo(args.zipfile, extended=args.extended, long=args.long)
# vim: set tw=80 sw=4 sts=4 et fdm=marker :
import struct
import zipfile
import zlib
from fnmatch import fnmatch
from typing import Any, Dict
ATTRS = ("compress_type", "create_system", "create_version", "date_time",
"external_attr", "extract_version", "flag_bits")
LEVELS = (9, 6, 4, 1)
class Error(RuntimeError):
pass
# FIXME: is there a better alternative?
class ReproducibleZipInfo(zipfile.ZipInfo):
"""Reproducible ZipInfo hack."""
_compresslevel: int
_override: Dict[str, Any] = {}
def __init__(self, zinfo: zipfile.ZipInfo, **override: Any) -> None:
# pylint: disable=W0231
if override:
self._override = {**self._override, **override}
for k in self.__slots__:
if hasattr(zinfo, k):
setattr(self, k, getattr(zinfo, k))
def __getattribute__(self, name: str) -> Any:
if name != "_override":
try:
return self._override[name]
except KeyError:
pass
return object.__getattribute__(self, name)
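The attribute-override mechanism above can be seen in isolation with a toy class (illustrative only, not part of the tool): lookups consult `_override` first, then fall back to normal attribute access.

```python
class Overridable:
    """Toy version of the ReproducibleZipInfo lookup order."""
    _override = {}

    def __init__(self, **override):
        # Per-instance copy so overrides never leak between instances.
        self._override = {**self._override, **override}
        self.x = 1

    def __getattribute__(self, name):
        if name != "_override":
            try:
                return self._override[name]
            except KeyError:
                pass
        return object.__getattribute__(self, name)

plain = Overridable()       # x resolves through normal attribute access
forced = Overridable(x=42)  # x resolves through _override
```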
def fix_compresslevel(input_apk: str, output_apk: str, compresslevel: int,
*patterns: str, verbose: bool = False) -> None:
if not patterns:
raise ValueError("No patterns")
with open(input_apk, "rb") as fh_raw:
with zipfile.ZipFile(input_apk) as zf_in:
with zipfile.ZipFile(output_apk, "w") as zf_out:
for info in zf_in.infolist():
attrs = {attr: getattr(info, attr) for attr in ATTRS}
zinfo = ReproducibleZipInfo(info, **attrs)
tofix = any(fnmatch(info.filename, p) for p in patterns)
level = None
if info.compress_type == 8:
fh_raw.seek(info.header_offset)
n, m = struct.unpack("<HH", fh_raw.read(30)[26:30])
fh_raw.seek(info.header_offset + 30 + m + n)
ccrc = 0
size = info.compress_size
while size > 0:
ccrc = zlib.crc32(fh_raw.read(min(size, 4096)), ccrc)
size -= 4096
with zf_in.open(info) as fh_in:
comps = {lvl: zlib.compressobj(lvl, 8, -15) for lvl in LEVELS}
ccrcs = {lvl: 0 for lvl in LEVELS}
while True:
data = fh_in.read(4096)
if not data:
break
for lvl in LEVELS:
ccrcs[lvl] = zlib.crc32(comps[lvl].compress(data), ccrcs[lvl])
for lvl in LEVELS:
if ccrc == zlib.crc32(comps[lvl].flush(), ccrcs[lvl]):
level = lvl
break
else:
raise Error(f"Unable to determine compresslevel for {info.filename!r}")
elif tofix or info.compress_type != 0:
raise Error(f"Unsupported compress_type {info.compress_type}")
if tofix:
print(f"fixing {info.filename!r}...")
zinfo._compresslevel = compresslevel
else:
if verbose:
print(f"copying {info.filename!r}...")
if level is not None:
zinfo._compresslevel = level
if verbose and level is not None:
print(f" compresslevel={level}")
with zf_in.open(info) as fh_in:
with zf_out.open(zinfo, "w") as fh_out:
while True:
data = fh_in.read(4096)
if not data:
break
fh_out.write(data)
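The level-detection loop above can be exercised standalone: deflate a payload at each candidate level and keep the one whose raw-stream CRC matches the reference (a sketch with made-up data; in the real function the reference CRC comes from the archive's stored compressed bytes):

```python
import zlib

def deflate(data: bytes, level: int) -> bytes:
    # Raw deflate stream (wbits=-15), matching zipfile's internal settings.
    c = zlib.compressobj(level, zlib.DEFLATED, -15)
    return c.compress(data) + c.flush()

payload = b"The quick brown fox jumps over the lazy dog. " * 200
ref_crc = zlib.crc32(deflate(payload, 6))  # stand-in for the archive's stream

detected = next((lvl for lvl in (9, 6, 4, 1)
                 if zlib.crc32(deflate(payload, lvl)) == ref_crc), None)
```

Note the candidate order matters: if two levels produce byte-identical streams, the first match wins, which is fine here since any matching level reproduces the archive.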
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(prog="fix-compresslevel.py")
parser.add_argument("-v", "--verbose", action="store_true")
parser.add_argument("input_apk", metavar="INPUT_APK")
parser.add_argument("output_apk", metavar="OUTPUT_APK")
parser.add_argument("compresslevel", metavar="COMPRESSLEVEL", type=int)
parser.add_argument("patterns", metavar="PATTERN", nargs="+")
args = parser.parse_args()
fix_compresslevel(args.input_apk, args.output_apk, args.compresslevel,
*args.patterns, verbose=args.verbose)
# vim: set tw=80 sw=4 sts=4 et fdm=marker :
import struct
import zipfile
import zlib
from typing import Any, Dict, Tuple
# https://android.googlesource.com/platform/tools/base
# profgen/profgen/src/main/kotlin/com/android/tools/profgen/ArtProfileSerializer.kt
PROF_MAGIC = b"pro\x00"
PROFM_MAGIC = b"prm\x00"
PROF_001_N = b"001\x00"
PROF_005_O = b"005\x00"
PROF_009_O_MR1 = b"009\x00"
PROF_010_P = b"010\x00"
PROF_015_S = b"015\x00"
PROFM_001_N = b"001\x00"
PROFM_002 = b"002\x00"
ASSET_PROF = "assets/dexopt/baseline.prof"
ASSET_PROFM = "assets/dexopt/baseline.profm"
ATTRS = ("compress_type", "create_system", "create_version", "date_time",
"external_attr", "extract_version", "flag_bits")
LEVELS = (9, 6, 4, 1)
class Error(RuntimeError):
pass
# FIXME: is there a better alternative?
class ReproducibleZipInfo(zipfile.ZipInfo):
"""Reproducible ZipInfo hack."""
_compresslevel: int
_override: Dict[str, Any] = {}
def __init__(self, zinfo: zipfile.ZipInfo, **override: Any) -> None:
# pylint: disable=W0231
if override:
self._override = {**self._override, **override}
for k in self.__slots__:
if hasattr(zinfo, k):
setattr(self, k, getattr(zinfo, k))
def __getattribute__(self, name: str) -> Any:
if name != "_override":
try:
return self._override[name]
except KeyError:
pass
return object.__getattribute__(self, name)
def sort_baseline(input_file: str, output_file: str) -> None:
with open(input_file, "rb") as fhi:
data = _sort_baseline(fhi.read())
with open(output_file, "wb") as fho:
fho.write(data)
def sort_baseline_apk(input_apk: str, output_apk: str) -> None:
with open(input_apk, "rb") as fh_raw:
with zipfile.ZipFile(input_apk) as zf_in:
with zipfile.ZipFile(output_apk, "w") as zf_out:
for info in zf_in.infolist():
attrs = {attr: getattr(info, attr) for attr in ATTRS}
zinfo = ReproducibleZipInfo(info, **attrs)
if info.compress_type == 8:
fh_raw.seek(info.header_offset)
n, m = struct.unpack("<HH", fh_raw.read(30)[26:30])
fh_raw.seek(info.header_offset + 30 + m + n)
ccrc = 0
size = info.compress_size
while size > 0:
ccrc = zlib.crc32(fh_raw.read(min(size, 4096)), ccrc)
size -= 4096
with zf_in.open(info) as fh_in:
comps = {lvl: zlib.compressobj(lvl, 8, -15) for lvl in LEVELS}
ccrcs = {lvl: 0 for lvl in LEVELS}
while True:
data = fh_in.read(4096)
if not data:
break
for lvl in LEVELS:
ccrcs[lvl] = zlib.crc32(comps[lvl].compress(data), ccrcs[lvl])
for lvl in LEVELS:
if ccrc == zlib.crc32(comps[lvl].flush(), ccrcs[lvl]):
zinfo._compresslevel = lvl
break
else:
raise Error(f"Unable to determine compresslevel for {info.filename!r}")
elif info.compress_type != 0:
raise Error(f"Unsupported compress_type {info.compress_type}")
if info.filename == ASSET_PROFM:
print(f"replacing {info.filename!r}...")
zf_out.writestr(zinfo, _sort_baseline(zf_in.read(info)))
else:
with zf_in.open(info) as fh_in:
with zf_out.open(zinfo, "w") as fh_out:
while True:
data = fh_in.read(4096)
if not data:
break
fh_out.write(data)
# FIXME
# Supported .prof: none
# Supported .profm: 002
# Unsupported .profm: 001 N
def _sort_baseline(data: bytes) -> bytes:
magic, data = _split(data, 4)
version, data = _split(data, 4)
if magic == PROF_MAGIC:
raise Error(f"Unsupported prof version {version!r}")
elif magic == PROFM_MAGIC:
if version == PROFM_002:
return PROFM_MAGIC + PROFM_002 + sort_profm_002(data)
else:
raise Error(f"Unsupported profm version {version!r}")
else:
raise Error(f"Unsupported magic {magic!r}")
def sort_profm_002(data: bytes) -> bytes:
num_dex_files, uncompressed_data_size, compressed_data_size, data = _unpack("<HII", data)
profiles = []
if len(data) != compressed_data_size:
raise Error("Compressed data size does not match")
data = zlib.decompress(data)
if len(data) != uncompressed_data_size:
raise Error("Uncompressed data size does not match")
for _ in range(num_dex_files):
profile = data[:4]
profile_idx, profile_key_size, data = _unpack("<HH", data)
profile_key, data = _split(data, profile_key_size)
profile += profile_key + data[:6]
num_type_ids, num_class_ids, data = _unpack("<IH", data)
class_ids, data = _split(data, num_class_ids * 2)
profile += class_ids
profiles.append((profile_key, profile))
if data:
raise Error("Expected end of data")
srtd = b"".join(int.to_bytes(i, 2, "little") + p[1][2:]
for i, p in enumerate(sorted(profiles)))
cdata = zlib.compress(srtd, 1)
hdr = struct.pack("<HII", num_dex_files, uncompressed_data_size, len(cdata))
return hdr + cdata
def _unpack(fmt: str, data: bytes) -> Any:
assert all(c in "<BHI" for c in fmt)
size = fmt.count("B") + 2 * fmt.count("H") + 4 * fmt.count("I")
return struct.unpack(fmt, data[:size]) + (data[size:],)
def _split(data: bytes, size: int) -> Tuple[bytes, bytes]:
return data[:size], data[size:]
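The two helpers above implement a consume-from-the-front pattern over a byte buffer; a minimal usage sketch with made-up data:

```python
import struct

data = struct.pack("<HI", 7, 1000) + b"rest"

# Mirror _unpack("<HI", data): decode a fixed-size prefix, keep the remainder.
size = struct.calcsize("<HI")   # 6 bytes with "<" (no padding)
(a, b), rest = struct.unpack("<HI", data[:size]), data[size:]
```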
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(prog="sort-baseline.py")
parser.add_argument("--apk", action="store_true")
parser.add_argument("input_prof_or_apk", metavar="INPUT_PROF_OR_APK")
parser.add_argument("output_prof_or_apk", metavar="OUTPUT_PROF_OR_APK")
args = parser.parse_args()
if args.apk:
sort_baseline_apk(args.input_prof_or_apk, args.output_prof_or_apk)
else:
sort_baseline(args.input_prof_or_apk, args.output_prof_or_apk)
# vim: set tw=80 sw=4 sts=4 et fdm=marker :
import struct
import zipfile
import zlib
from fnmatch import fnmatch
from typing import Any, Dict, Tuple
ATTRS = ("compress_type", "create_system", "create_version", "date_time",
"external_attr", "extract_version", "flag_bits")
LEVELS = (9, 6, 4, 1)
class Error(RuntimeError):
pass
# FIXME: is there a better alternative?
class ReproducibleZipInfo(zipfile.ZipInfo):
"""Reproducible ZipInfo hack."""
_compresslevel: int
_override: Dict[str, Any] = {}
def __init__(self, zinfo: zipfile.ZipInfo, **override: Any) -> None:
# pylint: disable=W0231
if override:
self._override = {**self._override, **override}
for k in self.__slots__:
if hasattr(zinfo, k):
setattr(self, k, getattr(zinfo, k))
def __getattribute__(self, name: str) -> Any:
if name != "_override":
try:
return self._override[name]
except KeyError:
pass
return object.__getattribute__(self, name)
def fix_newlines(input_apk: str, output_apk: str, *patterns: str,
replace: Tuple[str, str] = ("\n", "\r\n"), verbose: bool = False) -> None:
if not patterns:
raise ValueError("No patterns")
with open(input_apk, "rb") as fh_raw:
with zipfile.ZipFile(input_apk) as zf_in:
with zipfile.ZipFile(output_apk, "w") as zf_out:
for info in zf_in.infolist():
attrs = {attr: getattr(info, attr) for attr in ATTRS}
zinfo = ReproducibleZipInfo(info, **attrs)
if info.compress_type == 8:
fh_raw.seek(info.header_offset)
n, m = struct.unpack("<HH", fh_raw.read(30)[26:30])
fh_raw.seek(info.header_offset + 30 + m + n)
ccrc = 0
size = info.compress_size
while size > 0:
ccrc = zlib.crc32(fh_raw.read(min(size, 4096)), ccrc)
size -= 4096
with zf_in.open(info) as fh_in:
comps = {lvl: zlib.compressobj(lvl, 8, -15) for lvl in LEVELS}
ccrcs = {lvl: 0 for lvl in LEVELS}
while True:
data = fh_in.read(4096)
if not data:
break
for lvl in LEVELS:
ccrcs[lvl] = zlib.crc32(comps[lvl].compress(data), ccrcs[lvl])
for lvl in LEVELS:
if ccrc == zlib.crc32(comps[lvl].flush(), ccrcs[lvl]):
zinfo._compresslevel = lvl
break
else:
raise Error(f"Unable to determine compresslevel for {info.filename!r}")
elif info.compress_type != 0:
raise Error(f"Unsupported compress_type {info.compress_type}")
if any(fnmatch(info.filename, p) for p in patterns):
print(f"fixing {info.filename!r}...")
zf_out.writestr(zinfo, zf_in.read(info).decode().replace(*replace))
else:
if verbose:
print(f"copying {info.filename!r}...")
with zf_in.open(info) as fh_in:
with zf_out.open(zinfo, "w") as fh_out:
while True:
data = fh_in.read(4096)
if not data:
break
fh_out.write(data)
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(prog="fix-newlines.py")
parser.add_argument("--from-crlf", action="store_true")
parser.add_argument("--to-crlf", dest="from_crlf", action="store_false")
parser.add_argument("-v", "--verbose", action="store_true")
parser.add_argument("input_apk", metavar="INPUT_APK")
parser.add_argument("output_apk", metavar="OUTPUT_APK")
parser.add_argument("patterns", metavar="PATTERN", nargs="+")
args = parser.parse_args()
replace = ("\r\n", "\n") if args.from_crlf else ("\n", "\r\n")
fix_newlines(args.input_apk, args.output_apk, *args.patterns,
replace=replace, verbose=args.verbose)
# vim: set tw=80 sw=4 sts=4 et fdm=marker :
import os
import json
import csv
from itertools import chain
import hashlib
import git
from git import InvalidGitRepositoryError, RepositoryDirtyError
from .utils import prune_files
def hash_file(filepath, m=None):
'''
Hash the contents of a file
Parameters
----------
filepath : str
A string pointing to the file you want to hash
m : hashlib hash object, optional (default is None to create a new object)
hash_file updates m with the contents of filepath and returns m
Returns
-------
hashlib hash object
'''
assert os.path.exists(filepath), "Path {} does not exist".format(filepath)
if m is None:
m = hashlib.sha512()
with open(filepath, 'rb') as f:
# The following construction lets us read f in chunks,
# instead of loading an arbitrary file in all at once.
while True:
b = f.read(2**10)
if not b:
break
m.update(b)
return m
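The chunked read above yields the same digest as hashing the data in one pass, which is easy to verify in memory:

```python
import hashlib

data = b"x" * 5000  # larger than one 2**10-byte chunk

m = hashlib.sha512()
for i in range(0, len(data), 2**10):
    m.update(data[i:i + 2**10])   # feed the hash chunk by chunk

assert m.hexdigest() == hashlib.sha512(data).hexdigest()
```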
def modified_walk(folder, ignore_subdirs=[], ignore_exts=[], ignore_dot_files=True):
'''
A wrapper on os.walk() to return a list of paths inside directory "folder"
that do not meet the ignore criteria.
Parameters
----------
folder : str
a filepath
ignore_subdirs : list of str, optional
a list of subdirectories to ignore. Must include folder in the filepath.
ignore_exts : list of str, optional
a list of file extensions to ignore.
ignore_dot_files : bool
Returns
-------
list[str]
A list of accepted paths
'''
assert os.path.exists(folder), "Path {} does not exist".format(folder)
path_list = []
for path, directories, files in os.walk(folder):
# loop over files in the top directory
for f in sorted(files):
root, ext = os.path.splitext(f)
if not (
(ext in ignore_exts) or
(ignore_dot_files and root.startswith(".")) or
(path in ignore_subdirs)
):
path_list.append(os.path.join(path, f))
return path_list
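The ignore logic can be checked with a throwaway directory (illustration only; note the dot-file check looks at the *root* of the filename, so `.hidden` is skipped but `file.hidden` would not be):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    for name in ("keep.txt", ".hidden", "skip.log"):
        open(os.path.join(d, name), "w").close()

    kept = []
    for path, directories, files in os.walk(d):
        for f in sorted(files):
            root, ext = os.path.splitext(f)
            # Same filter as modified_walk with ignore_exts=[".log"]
            if ext not in (".log",) and not root.startswith("."):
                kept.append(f)
```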
def hash_dir_by_file(folder, **kwargs):
'''
Create a dictionary mapping filepaths to hashes. Includes all files
inside folder unless they meet some ignore criteria. See modified_walk
for details.
Parameters
----------
folder : str
filepath
**kwargs : dict
passed through to modified_walk
Returns
-------
dict (str : str)
'''
assert os.path.exists(folder), "Path {} does not exist".format(folder)
assert os.path.isdir(folder), "Provided input {} not a directory".format(folder)
hashes = {}
for path in modified_walk(folder, **kwargs):
hashes[path] = hash_file(path).hexdigest()
return hashes
def hash_dir_full(folder, **kwargs):
'''
Creates a hash and sequentially updates it with each file in folder.
Includes all files inside folder unless they meet some ignore criteria
detailed in :func:`modified_walk`.
Parameters
----------
folder : str
filepath
**kwargs : dict
passed through to modified_walk
Returns
-------
str
'''
assert os.path.exists(folder), "Path {} does not exist".format(folder)
assert os.path.isdir(folder), "Provided input {} not a directory".format(folder)
m = hashlib.sha512()
for path in sorted(modified_walk(folder, **kwargs)):
m = hash_file(path, m)
return m.hexdigest()
def hash_input(input_data):
"""
Hash directory with input data.
Parameters
----------
input_data: str
Path to directory with input data.
Returns
-------
str
Hash of the directory.
"""
if os.path.isdir(input_data):
return hash_dir_full(input_data)
elif os.path.isfile(input_data):
return hash_file(input_data).hexdigest()
else:
raise AssertionError("Provided input {} is not a file or directory".format(input_data))
def hash_output(output_data):
"""
Hash analysis output files.
Parameters
----------
output_data:
Path to output data directory.
Returns
-------
dict (str : str)
"""
if os.path.isdir(output_data):
return hash_dir_by_file(output_data)
elif os.path.isfile(output_data):
return {output_data: hash_file(output_data).hexdigest()}
else:
raise AssertionError("Provided input {} is not a file or directory".format(output_data))
def hash_code(repo_path, catalogue_dir):
"""
Get commit digest for current HEAD commit
Returns the current HEAD commit digest for the code that is run.
If the current working directory is dirty (or has untracked files other
than those held in `catalogue_dir`), it raises a `RepositoryDirtyError`.
Parameters
----------
repo_path: str
Path to analysis directory git repository.
catalogue_dir: str
Path to directory with catalogue output files.
Returns
-------
str
Git commit digest for the current HEAD commit of the git repository
"""
try:
repo = git.Repo(repo_path, search_parent_directories=True)
except InvalidGitRepositoryError:
raise InvalidGitRepositoryError("provided code directory is not a valid git repository")
untracked = prune_files(repo.untracked_files, catalogue_dir)
if repo.is_dirty() or len(untracked) != 0:
raise RepositoryDirtyError(repo, "git repository contains uncommitted changes")
return repo.head.commit.hexsha
def construct_dict(timestamp, args):
"""
Create dictionary with hashes of input files.
Parameters
----------
timestamp : str
Datetime.
args : obj
Command line input arguments (argparse.Namespace).
Returns
-------
dict
A dictionary with hashes of all inputs.
"""
results = {
"timestamp": {
args.command: timestamp
},
"input_data": {
args.input_data : hash_input(args.input_data)
},
"code": {
args.code : hash_code(args.code, args.catalogue_results)
}
}
    if hasattr(args, 'output_data'):
        results["output_data"] = {args.output_data: hash_output(args.output_data)}
return results
def store_hash(hash_dict, timestamp, store, ext="json"):
"""
Save hash information to <timestamp.ext> file.
Parameters
----------
hash_dict: dict { str: dict }
hash dictionary after completing analysis
timestamp: str
timestamp (will be used as name of file)
store: str
        directory in which to store the file
ext: str
the extension of the file to store the hash info in, default is "json"
Returns
-------
None
"""
os.makedirs(store, exist_ok=True)
with open(os.path.join(store, "{}.{}".format(timestamp, ext)),"w") as f:
json.dump(hash_dict, f)
def load_hash(filepath):
"""
Load hashes from json file.
Parameters
----------
filepath : str
path to json file to be loaded
Returns
-------
dict { str : dict }
"""
with open(filepath, "r") as f:
return json.load(f)
def save_csv(hash_dict, timestamp, store):
"""
Save hash information to CSV file
Dumps the relevant hash information into a line in a CSV file. If the file does not
exist, a new file is created. If the file exists, it appends the record to the existing
file as long as the header information is consistent with the desired output format.
Parameters
----------
hash_dict: dict { str: dict }
hash dictionary after completing analysis
timestamp: str
timestamp (will be used as an id for this run)
store: str
        path to CSV file where the record will be appended
Returns
-------
None
"""
    headers = ["id", "disengage", "engage", "input_data", "input_hash",
               "code", "code_hash", "output_data", "output_file1", "output_hash1"]
os.makedirs(os.path.dirname(store), exist_ok=True)
try:
needs_header = False
with open(store, 'r') as f:
line = f.readline().strip().split(",")
assert line == headers, "Existing CSV file header is not formatted correctly"
except FileNotFoundError:
needs_header = True
finally:
with open(store, 'a') as f:
fwriter = csv.writer(f)
if needs_header:
fwriter.writerow(headers)
output_key = list(hash_dict["output_data"].keys())[0]
fwriter.writerow([timestamp, hash_dict["timestamp"]["disengage"], hash_dict["timestamp"]["engage"]] +
list(hash_dict["input_data"].keys()) + list(hash_dict["input_data"].values()) +
list(hash_dict["code"].keys()) + list(hash_dict["code"].values()) +
[ output_key ] +
list(chain.from_iterable((i, j) for (i, j) in zip(hash_dict["output_data"][output_key].keys(),
hash_dict["output_data"][output_key].values()))))
def load_csv(filepath, timestamp):
"""
Load hashes from a specific time stamp from a CSV file
Load hash information from a CSV file from a specific time stamp. Returns a hash
dictionary of the standard form outlined above.
The timestamp must be a 15 character timestamp string. If the specific entry is not found
in the CSV file, an EOFError is thrown. Also performs a number of checks of the length
of the existing record, and confirms that the timestamps and hashes are of the correct
length.
Parameters
----------
filepath : str
path to CSV file to be loaded
timestamp : str
timestamp of desired analysis to be loaded. Must be a 15 character string of the form
"%Y%m%d-%H%M%S"
Returns
-------
dict { str : dict }
"""
assert isinstance(timestamp, str)
assert len(timestamp) == 15, "bad format for timestamp"
found_record = None
with open(filepath, "r") as f:
freader = csv.reader(f)
for line in freader:
if line[0] == timestamp:
found_record = list(line)
break
if found_record is None:
raise EOFError("Unable to find desired record in {}".format(filepath))
assert len(found_record) >= 9, "bad length for record {} in {}".format(timestamp, filepath)
assert len(found_record) % 2 == 0, "bad length for record {} in {}".format(timestamp, filepath)
for i in range(3):
assert len(found_record[i]) == 15
for i in [4] + list(range(9, len(found_record), 2)):
assert len(found_record[i]) == 128
assert len(found_record[6]) == 40
result = {
"timestamp": {
"disengage": found_record[1],
"engage" : found_record[2]
},
"input_data": {
found_record[3] : found_record[4]
},
"code": {
found_record[5] : found_record[6]
},
"output_data": {
found_record[7]: { found_record[i]: found_record[i + 1] for i in range(8,len(found_record), 2)}
}
}
    return result
import os
import json
import git
from git import InvalidGitRepositoryError
from . import catalogue as ct
from .compare import compare_hashes, print_comparison
from .utils import create_timestamp, check_paths_exists, prune_files
def git_query(repo_path, catalogue_dir, commit_changes=False):
"""
Check status of a git repository
Checks the git status of the repository on the provided path
    - if clean, returns `True`
    - if there are uncommitted changes (including untracked files other than
      those held in `catalogue_dir`) and the `commit_changes` flag is `True`,
      offers the user the option to stage and commit all tracked (and
      untracked) files and continue (returns `True`); otherwise, returns
      `False`
If the `commit_changes` flag is `True` and the user accepts the offer
to commit changes, a new branch is created with the name
`"catalogue-%Y%m%d-%H%M%S"` and all tracked files that have been
changed & any untracked files will be staged and committed.
Parameters:
------------
repo_path : str
path to the code directory
catalogue_dir : str
path to directory with catalogue output files
commit_changes : bool
boolean indicating if the user should be prompted to stage and commit changes
(optional, default is False)
Returns:
---------
Boolean indicating if git directory is clean
"""
try:
repo = git.Repo(repo_path, search_parent_directories=True)
except InvalidGitRepositoryError:
raise InvalidGitRepositoryError("provided code directory is not a valid git repository")
untracked = prune_files(repo.untracked_files, catalogue_dir)
if repo.is_dirty() or (len(untracked) != 0):
if commit_changes:
print("Working directory contains uncommitted changes.")
print("Do you want to stage and commit all changes? (y/[n])")
user_choice = input().strip().lower()
if user_choice == "y" or user_choice == "yes":
timestamp = create_timestamp()
new_branch = repo.create_head("catalogue-" + timestamp)
new_branch.checkout()
changed_files = [ item.a_path for item in repo.index.diff(None) ]
changed_files.extend(untracked)
repo.index.add(changed_files)
repo.index.commit("repro-catalogue tool auto commit at " + timestamp)
return True
elif user_choice == "n" or user_choice == "no" or user_choice == "":
return False
else:
                print("Unrecognized response, leaving repository unchanged")
return False
else:
return False
else:
return True
def engage(args):
"""
The `catalogue engage` command.
The engage command is used prior to running an analysis. The `args` contain
paths to `input_data` and `code` (which must be a git repo).
The engage command:
- does a `git_query()` check of the `code` repo
(engage does not proceed unless the repo is clean)
- gets hashes for the input_data and code (from `construct_dict()`)
- saves the hashes to a `.lock` file
Once engaged (a `.lock` file exists), the command cannot be run again until
`disengage` has been run.
Parameters:
------------
args : obj
Command line input arguments (argparse.Namespace).
Returns:
---------
None
"""
assert check_paths_exists(args), 'Not all provided filepaths exist.'
if git_query(args.code, args.catalogue_results, True):
try:
assert not os.path.exists(os.path.join(args.catalogue_results, ".lock"))
except AssertionError:
print("Already engaged (.lock file exists). To disengage run 'catalogue disengage...'")
print("See 'catalogue disengage --help' for details")
else:
hash_dict = ct.construct_dict(create_timestamp(), args)
ct.store_hash(hash_dict, "", args.catalogue_results, ext="lock")
print("'catalogue engage' succeeded. Proceed with analysis")
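The engage/disengage pairing above is essentially a lock-file guard: `engage` refuses to proceed while a `.lock` file exists, and `disengage` removes it. A minimal standalone sketch of that pattern (function names and file contents here are illustrative, not the tool's actual API):

```python
import os
import tempfile

def acquire_lock(results_dir):
    """Create a .lock file; return False if one already exists (already engaged)."""
    lock_path = os.path.join(results_dir, ".lock")
    if os.path.exists(lock_path):
        return False
    with open(lock_path, "w") as f:
        f.write("engaged")
    return True

def release_lock(results_dir):
    """Remove the .lock file; return False if there is nothing to release."""
    lock_path = os.path.join(results_dir, ".lock")
    if not os.path.exists(lock_path):
        return False
    os.remove(lock_path)
    return True

# Demonstrate the engage -> engage (refused) -> disengage cycle.
workdir = tempfile.mkdtemp()
first = acquire_lock(workdir)   # succeeds: no lock yet
second = acquire_lock(workdir)  # refused: already engaged
released = release_lock(workdir)
os.rmdir(workdir)
```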
def disengage(args):
"""
The `catalogue disengage` command.
The disengage command is run just after finishing an analysis. It cannot be run
unless `engage` was run first.
The `args` contain paths to `input_data`, `code` (must be a git repo) and `output_data`.
The disengage command:
- reads hashes stored in the `.lock` file created during `engage`
- gets hashes for the `input_data`, `code` and `output_data` (from `construct_dict()`)
- compares the two sets of hashes
(if `input_data` and `code` hashes match, saves the hashes to a file)
- prints the results of the comparison
Parameters:
------------
args : obj
Command line input arguments (argparse.Namespace).
Returns:
---------
None
"""
assert check_paths_exists(args), 'Not all provided filepaths exist.'
timestamp = create_timestamp()
try:
LOCK_FILE_PATH = os.path.join(args.catalogue_results, ".lock")
lock_dict = ct.load_hash(LOCK_FILE_PATH)
os.remove(LOCK_FILE_PATH)
except FileNotFoundError:
print("Not currently engaged (could not find .lock file). To engage run 'catalogue engage...'")
print("See 'catalogue engage --help' for details")
else:
hash_dict = ct.construct_dict(timestamp, args)
compare = compare_hashes(hash_dict, lock_dict)
# check if 'input_data' and 'code' were in matches
assert "matches" in compare.keys(), "Error in constructing comparison dictionary"
if 'input_data' in compare["matches"] and 'code' in compare["matches"]:
# add engage timestamp to hash_dict
hash_dict["timestamp"].update({"engage": lock_dict["timestamp"]["engage"]})
if args.csv is None:
ct.store_hash(hash_dict, timestamp, args.catalogue_results)
else:
ct.save_csv(hash_dict, timestamp, os.path.join(args.catalogue_results, args.csv))
        print_comparison(compare)
import argparse
import textwrap
from .engage import engage, disengage
from .compare import compare
def main():
"""
Main function
This is the main function that is called when running the tool. The function parses
the arguments supplied and calls the appropriate function (`engage`, `disengage`,
or `compare`). The details of each of these functions is described in
the appropriate docstrings.
engage
-------
When running in `engage` mode, two options can be set: `--input_data` and `--code`.
    These arguments should be strings specifying the directories where the input data
and code reside, respectively. Defaults are assumed to be `"data"` (relative to the
current working directory) for the input data argument, and the current directory
for the code. The code argument must be a git repository, or a directory whose
parent directory contains a git repository.
disengage
---------
When running in `disengage` mode, three options can be set: `--input_data`, `--code`
and `--output_data`. The meaning and defaults for `--input_data` and `--code` are the
same as when in `engage` mode (see above). The `--output_data` argument should also
be a string, specifying the directory with the analysis results. The default for
the `output_data` argument is `"results"` (relative to the current working directory).
Optionally, to save results in a CSV file set `--csv` to the desired filename (will
create a new file or append to an existing one).
compare
-------
    The `compare` mode is used to check if two hashes are identical, or if the current
state of the input, code, and output match the state from a previous run. The
`compare` mode accepts either 1 or 2 unnamed input arguments which are strings holding the
path of existing json files output from the code. If 2 inputs are given, `compare` checks the
two inputs, while if 1 input is given that input is compared to the current state.
If 1 input is given, the usual flags for input, code, and output paths apply.
Comparisons can also be made from a CSV file -- set the --csv flag to the desired CSV
file where results are saved and then provide one or two timestamps as standard input.
Note that if `compare` mode is used with 1 input, any use of flags to set data or code
paths must come before the hash file due to how arguments are parsed.
"""
parser = argparse.ArgumentParser(
description="",
formatter_class=argparse.RawTextHelpFormatter)
# declare shared arguments here
common_parser = argparse.ArgumentParser(add_help=False)
common_parser.add_argument(
'--input_data',
type=str,
metavar='input_data',
help=textwrap.dedent("This argument should be the path (full or relative) to the directory" +
" containing the input data. Default value is data."),
default='data')
common_parser.add_argument(
'--code',
type=str,
metavar='code',
help=textwrap.dedent("This argument should be the path (full or relative) to the code directory." +
" The code directory must be a git repository, or must have a parent directory" +
" that is a git repository. Default is the current working directory."),
default='.')
common_parser.add_argument(
'--catalogue_results',
type=str,
metavar='catalogue_results',
help=textwrap.dedent("This argument should be the path (full or relative) to the directory where any" +
" files created by catalogue should be stored. It cannot be the same as the `code`" +
" directory. Default is catalogue_results."),
default='catalogue_results'
)
output_parser = argparse.ArgumentParser(add_help=False)
output_parser.add_argument(
'--output_data',
type=str,
metavar='output_data',
help=textwrap.dedent("This argument should be the path (full or relative) to the directory" +
" containing the analysis output data. Default value is results."),
default="results")
output_parser.add_argument(
"--csv",
type=str,
metavar="csv",
help=textwrap.dedent("If output to CSV is desired, set this to the desired filename (the file " +
"will be placed in the 'catalogue_results' directory). Optional, default is None " +
"for no CSV output"),
default=None)
# create subparsers
subparsers = parser.add_subparsers(dest="command")
engage_parser = subparsers.add_parser(
"engage", parents=[common_parser], description="", help=""
)
engage_parser.set_defaults(func=engage)
compare_parser = subparsers.add_parser("compare", parents=[common_parser, output_parser],
description="", help="")
compare_parser.set_defaults(func=compare)
compare_parser.add_argument("hashes", type=str, nargs='+', help="")
disengage_parser = subparsers.add_parser(
"disengage", parents=[common_parser, output_parser], description="", help=""
)
disengage_parser.set_defaults(func=disengage)
args = parser.parse_args()
assert args.code != args.catalogue_results, "The 'catalogue_results' and 'code' paths cannot be the same"
args.func(args)
if __name__ == "__main__":
    main()
import itertools
from collections import OrderedDict
import numpy as np
from repro_eval.config import TRIM_THRESH, exclude
def trim(run, thresh=TRIM_THRESH):
"""
    Use this function to trim each topic's ranking in a run to the number of documents specified by thresh.
@param run: The run to be trimmed.
@param thresh: The threshold value of the run length.
"""
for topic, docs in run.items():
run[topic] = dict(list(run[topic].items())[:thresh])
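As a toy illustration of the trimming behavior (assuming pytrec_eval-style nested dicts, with invented docids and scores), a topic with three documents trimmed to the top two:

```python
def trim_run(run, thresh=2):
    # Keep only the first `thresh` documents of each topic's ranking,
    # mirroring the trim() helper above (dicts preserve insertion order).
    for topic in run:
        run[topic] = dict(list(run[topic].items())[:thresh])

run = {"301": {"d1": 3.0, "d2": 2.0, "d3": 1.0}}
trim_run(run, thresh=2)
```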
def arp(topic_scores):
"""
This function computes the Average Retrieval Performance (ARP) according to the following paper:
Timo Breuer, Nicola Ferro, Norbert Fuhr, Maria Maistro, Tetsuya Sakai, Philipp Schaer, Ian Soboroff.
How to Measure the Reproducibility of System-oriented IR Experiments.
Proceedings of SIGIR, pages 349-358, 2020.
The ARP score is defined by the mean across the different topic scores of a run.
@param topic_scores: Topic scores of an evaluated run.
@return: The ARP score.
"""
return np.array(list(topic_scores.values())).mean()
def _arp_scores(run):
"""
    Helper function returning a generator for determining the Average Retrieval Performance (ARP) scores.
@param run: The run to be evaluated.
@return: Generator with ARP scores for each trec_eval evaluation measure.
"""
measures_all = list(list(run.values())[0].keys())
measures_valid = [m for m in measures_all if m not in exclude]
topics = run.keys()
for measure in measures_valid:
yield measure, np.array(list([run.get(topic).get(measure) for topic in topics])).mean()
def arp_scores(run):
"""
This function computes the Average Retrieval Performance (ARP) scores according to the following paper:
Timo Breuer, Nicola Ferro, Norbert Fuhr, Maria Maistro, Tetsuya Sakai, Philipp Schaer, Ian Soboroff.
How to Measure the Reproducibility of System-oriented IR Experiments.
Proceedings of SIGIR, pages 349-358, 2020.
The ARP score is defined by the mean across the different topic scores of a run.
For all measures outputted by trec_eval, the ARP scores will be determined.
@param run: The run to be evaluated.
@return: Dictionary containing the ARP scores for every measure outputted by trec_eval.
"""
return dict(_arp_scores(run))
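To make the ARP definition concrete, here is a small worked example with hypothetical per-topic scores for a single measure (a pure-Python mean; the library itself uses numpy):

```python
from statistics import mean

# Hypothetical per-topic scores for one measure (e.g. 'map').
topic_scores = {"301": 0.2, "302": 0.4, "303": 0.6}

# ARP is simply the mean across the topic scores.
arp_value = mean(topic_scores.values())
```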
def _topic_scores(run_scores):
"""
    Helper function returning a generator for determining the topic scores for each measure.
@param run_scores: The run scores of the previously evaluated run.
@return: Generator with topic scores for each trec_eval evaluation measure.
"""
measures_all = list(list(run_scores.values())[0].keys())
measures_valid = [m for m in measures_all if m not in exclude]
topics = run_scores.keys()
for measure in measures_valid:
yield measure, [run_scores.get(topic).get(measure) for topic in topics]
def topic_scores(run_scores):
"""
    Use this function to obtain a dictionary that contains the topic scores for each measure outputted by trec_eval.
@param run_scores: The run scores of the previously evaluated run.
@return: Dictionary containing the topic scores for every measure outputted by trec_eval.
"""
return dict(_topic_scores(run_scores))
def print_base_adv(measure_topic, repro_measure, base_value, adv_value=None):
"""
Pretty print output in trec_eval inspired style. Use this for printing baseline and/or advanced results.
@param measure_topic: The topic number.
@param repro_measure: Name of the reproduction/replication measure.
@param base_value: Value of the evaluated baseline run.
@param adv_value: Value of the evaluated advanced run.
"""
if adv_value:
fill = ('{:3s}' if base_value < 0 else '{:4s}')
print(('{:25s}{:8s}{:8s}{:.4f}' + fill + '{:8s}{:.4f}').format(measure_topic, repro_measure,
'BASE', base_value, ' ', 'ADV', adv_value))
else:
print('{:25s}{:8s}{:8s}{:.4f}'.format(measure_topic, repro_measure, 'BASE', base_value))
def print_simple_line(measure, repro_measure, value):
"""
Use this for printing lines with trec_eval and reproduction/replication measures.
Pretty print output in trec_eval inspired style.
@param measure: Name of the trec_eval measure.
@param repro_measure: Name of the reproduction/replication measure.
@param value: Value of the evaluated run.
    """
print('{:25s}{:8s}{:.4f}'.format(measure, repro_measure, value))
def break_ties(run):
"""
Use this function to break score ties like it is implemented in trec_eval.
Documents with the same score will be sorted in reverse alphabetical order.
    @param run: Run with score ties. Nested dictionary structure (cf. pytrec_eval).
    @return: Reordered run.
"""
for topic, ranking in run.items():
docid_score_tuple = list(ranking.items())
reordered_ranking = []
for k, v in itertools.groupby(docid_score_tuple, lambda item: item[1]):
reordered_ranking.extend(sorted(v, reverse=True))
run[topic] = OrderedDict(reordered_ranking)
    return run
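A quick check of the tie-breaking rule: documents sharing a score are reordered reverse-alphabetically by docid, as trec_eval does. A toy sketch of the groupby logic for a single topic (docids are made up; this assumes the run is already sorted by score, so tied scores are consecutive):

```python
import itertools

ranking = {"a": 2.0, "b": 2.0, "c": 1.0}  # 'a' and 'b' tie at score 2.0
reordered = []
for _, group in itertools.groupby(ranking.items(), key=lambda item: item[1]):
    # Sorting (docid, score) tuples in reverse yields reverse-alphabetical docids.
    reordered.extend(sorted(group, reverse=True))
order = [doc for doc, _ in reordered]
```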
import pytrec_eval
from repro_eval.util import trim, break_ties
from repro_eval.measure.statistics import ttest
from repro_eval.measure.overall_effects import ER, deltaRI
from repro_eval.measure.document_order import ktau_union as ktu, RBO
from repro_eval.measure.effectiveness import rmse as RMSE, nrmse as nRMSE
from repro_eval.config import ERR_MSG
class Evaluator(object):
"""
An abstract evaluator that holds the original baseline and advanced run as well as
the reproduced/replicated baseline and advanced run.
"""
def __init__(self, **kwargs):
self.qrel_orig_path = kwargs.get('qrel_orig_path', None)
self.run_b_orig_path = kwargs.get('run_b_orig_path', None)
self.run_a_orig_path = kwargs.get('run_a_orig_path', None)
self.run_b_rep_path = kwargs.get('run_b_rep_path', None)
self.run_a_rep_path = kwargs.get('run_a_rep_path', None)
self.run_b_orig = None
self.run_a_orig = None
self.run_b_rep = None
self.run_a_rep = None
self.run_b_orig_score = None
self.run_a_orig_score = None
self.run_b_rep_score = None
self.run_a_rep_score = None
if self.qrel_orig_path:
with open(self.qrel_orig_path, 'r') as f_qrel:
qrel_orig = pytrec_eval.parse_qrel(f_qrel)
self.rel_eval = pytrec_eval.RelevanceEvaluator(qrel_orig, pytrec_eval.supported_measures)
if self.run_b_orig_path:
with open(self.run_b_orig_path, 'r') as f_run:
self.run_b_orig = pytrec_eval.parse_run(f_run)
self.run_b_orig = {t: self.run_b_orig[t] for t in sorted(self.run_b_orig)}
if self.run_a_orig_path:
with open(self.run_a_orig_path, 'r') as f_run:
self.run_a_orig = pytrec_eval.parse_run(f_run)
self.run_a_orig = {t: self.run_a_orig[t] for t in sorted(self.run_a_orig)}
if self.run_b_rep_path:
with open(self.run_b_rep_path, 'r') as f_run:
self.run_b_rep = pytrec_eval.parse_run(f_run)
self.run_b_rep = {t: self.run_b_rep[t] for t in sorted(self.run_b_rep)}
if self.run_a_rep_path:
with open(self.run_a_rep_path, 'r') as f_run:
self.run_a_rep = pytrec_eval.parse_run(f_run)
self.run_a_rep = {t: self.run_a_rep[t] for t in sorted(self.run_a_rep)}
def trim(self, t=None, run=None):
"""
Trims all runs of the Evaluator to the length specified by the threshold value t.
@param t: Threshold parameter or number of top-k documents to be considered.
@param run: If run is not None, only the provided run will be trimmed.
"""
if run:
run = break_ties(run)
if t:
trim(run, thresh=t)
else:
trim(run)
return
if self.run_b_orig:
self.run_b_orig = break_ties(self.run_b_orig)
if t:
trim(self.run_b_orig, thresh=t)
else:
trim(self.run_b_orig)
if self.run_a_orig:
self.run_a_orig = break_ties(self.run_a_orig)
if t:
trim(self.run_a_orig, thresh=t)
else:
trim(self.run_a_orig)
if self.run_b_rep:
self.run_b_rep = break_ties(self.run_b_rep)
if t:
trim(self.run_b_rep, thresh=t)
else:
trim(self.run_b_rep)
if self.run_a_rep:
self.run_a_rep = break_ties(self.run_a_rep)
if t:
trim(self.run_a_rep, thresh=t)
else:
trim(self.run_a_rep)
def evaluate(self, run=None):
"""
Evaluates the original baseline and advanced run if available.
        @param run: Reproduced or replicated run to be evaluated; unused in the base class (see subclasses).
"""
if self.run_b_orig:
self.run_b_orig = break_ties(self.run_b_orig)
self.run_b_orig_score = self.rel_eval.evaluate(self.run_b_orig)
if self.run_a_orig:
self.run_a_orig = break_ties(self.run_a_orig)
self.run_a_orig_score = self.rel_eval.evaluate(self.run_a_orig)
def er(self, run_b_score=None, run_a_score=None, run_b_path=None, run_a_path=None, print_feedback=False):
"""
Determines the Effect Ratio (ER) according to the following paper:
Timo Breuer, Nicola Ferro, Norbert Fuhr, Maria Maistro, Tetsuya Sakai, Philipp Schaer, Ian Soboroff.
How to Measure the Reproducibility of System-oriented IR Experiments.
Proceedings of SIGIR, pages 349-358, 2020.
The ER value is determined by the ratio between the mean improvements
of the original and reproduced/replicated experiments.
@param run_b_score: Scores of the baseline run,
if not provided the scores of the RpdEvaluator object will be used instead.
@param run_a_score: Scores of the advanced run,
if not provided the scores of the RpdEvaluator object will be used instead.
@param print_feedback: Boolean value indicating if feedback on progress should be printed.
@return: Dictionary containing the ER values for the specified run combination.
"""
if print_feedback:
print('Determining Effect Ratio (ER)')
if self.run_b_orig_score and self.run_a_orig_score and run_b_path and run_a_path:
with open(run_b_path, 'r') as b_run, open(run_a_path, 'r') as a_run:
run_b_rep = pytrec_eval.parse_run(b_run)
run_b_rep = {t: run_b_rep[t] for t in sorted(run_b_rep)}
run_b_rep_score = self.rel_eval_rpl.evaluate(run_b_rep) if hasattr(self, 'rel_eval_rpl') else self.rel_eval.evaluate(run_b_rep)
run_a_rep = pytrec_eval.parse_run(a_run)
run_a_rep = {t: run_a_rep[t] for t in sorted(run_a_rep)}
run_a_rep_score = self.rel_eval_rpl.evaluate(run_a_rep) if hasattr(self, 'rel_eval_rpl') else self.rel_eval.evaluate(run_a_rep)
return ER(orig_score_b=self.run_b_orig_score, orig_score_a=self.run_a_orig_score,
rep_score_b=run_b_rep_score, rep_score_a=run_a_rep_score, pbar=print_feedback)
if self.run_b_orig_score and self.run_a_orig_score and run_b_score and run_a_score:
return ER(orig_score_b=self.run_b_orig_score, orig_score_a=self.run_a_orig_score,
rep_score_b=run_b_score, rep_score_a=run_a_score, pbar=print_feedback)
if self.run_b_orig_score and self.run_a_orig_score and self.run_b_rep_score and self.run_a_rep_score:
return ER(orig_score_b=self.run_b_orig_score, orig_score_a=self.run_a_orig_score,
rep_score_b=self.run_b_rep_score, rep_score_a=self.run_a_rep_score, pbar=print_feedback)
else:
print(ERR_MSG)
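The Effect Ratio compares the mean per-topic improvement (advanced minus baseline) of the reproduced experiment against that of the original. A hedged toy computation for a single measure (all scores are invented; the library's `ER` helper operates on full pytrec_eval score dictionaries):

```python
from statistics import mean

# Hypothetical per-topic scores for one measure, original vs. reproduced.
orig_base = {"301": 0.20, "302": 0.30}
orig_adv  = {"301": 0.30, "302": 0.50}
rep_base  = {"301": 0.25, "302": 0.30}
rep_adv   = {"301": 0.30, "302": 0.45}

# Mean per-topic improvement of the advanced over the baseline run.
delta_orig = mean(orig_adv[t] - orig_base[t] for t in orig_base)
delta_rep = mean(rep_adv[t] - rep_base[t] for t in rep_base)

# ER = reproduced improvement / original improvement (1.0 is the ideal).
effect_ratio = delta_rep / delta_orig
```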
def dri(self, run_b_score=None, run_a_score=None, run_b_path=None, run_a_path=None, print_feedback=False):
"""
Determines the Delta Relative Improvement (DeltaRI) according to the following paper:
Timo Breuer, Nicola Ferro, Norbert Fuhr, Maria Maistro, Tetsuya Sakai, Philipp Schaer, Ian Soboroff.
How to Measure the Reproducibility of System-oriented IR Experiments.
Proceedings of SIGIR, pages 349-358, 2020.
The DeltaRI value is determined by the difference between the relative improvements
of the original and reproduced/replicated experiments.
@param run_b_score: Scores of the baseline run,
if not provided the scores of the RpdEvaluator object will be used instead.
@param run_a_score: Scores of the advanced run,
if not provided the scores of the RpdEvaluator object will be used instead.
@param print_feedback: Boolean value indicating if feedback on progress should be printed.
@return: Dictionary containing the DRI values for the specified run combination.
"""
if print_feedback:
print('Determining Delta Relative Improvement (DRI)')
if self.run_b_orig_score and self.run_a_orig_score and run_b_path and run_a_path:
with open(run_b_path, 'r') as b_run, open(run_a_path, 'r') as a_run:
run_b_rep = pytrec_eval.parse_run(b_run)
run_b_rep = {t: run_b_rep[t] for t in sorted(run_b_rep)}
run_b_rep_score = self.rel_eval_rpl.evaluate(run_b_rep) if hasattr(self, 'rel_eval_rpl') else self.rel_eval.evaluate(run_b_rep)
run_a_rep = pytrec_eval.parse_run(a_run)
run_a_rep = {t: run_a_rep[t] for t in sorted(run_a_rep)}
run_a_rep_score = self.rel_eval_rpl.evaluate(run_a_rep) if hasattr(self, 'rel_eval_rpl') else self.rel_eval.evaluate(run_a_rep)
return deltaRI(orig_score_b=self.run_b_orig_score, orig_score_a=self.run_a_orig_score,
rep_score_b=run_b_rep_score, rep_score_a=run_a_rep_score, pbar=print_feedback)
if self.run_b_orig_score and self.run_a_orig_score and run_b_score and run_a_score:
return deltaRI(orig_score_b=self.run_b_orig_score, orig_score_a=self.run_a_orig_score,
rep_score_b=run_b_score, rep_score_a=run_a_score, pbar=print_feedback)
if self.run_b_orig_score and self.run_a_orig_score and self.run_b_rep_score and self.run_a_rep_score:
return deltaRI(orig_score_b=self.run_b_orig_score, orig_score_a=self.run_a_orig_score,
rep_score_b=self.run_b_rep_score, rep_score_a=self.run_a_rep_score, pbar=print_feedback)
else:
print(ERR_MSG)
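Delta Relative Improvement contrasts the relative (rather than absolute) gains of the two experiments; a value near zero means the reproduced run preserves the original's relative improvement. A toy sketch with invented scores, following the docstring's definition as the difference between the two relative improvements:

```python
from statistics import mean

orig_base = {"301": 0.20, "302": 0.30}
orig_adv  = {"301": 0.30, "302": 0.50}
rep_base  = {"301": 0.25, "302": 0.30}
rep_adv   = {"301": 0.30, "302": 0.45}

# Relative improvement: (ARP_adv - ARP_base) / ARP_base per experiment.
ri_orig = (mean(orig_adv.values()) - mean(orig_base.values())) / mean(orig_base.values())
ri_rep = (mean(rep_adv.values()) - mean(rep_base.values())) / mean(rep_base.values())

delta_ri = ri_orig - ri_rep
```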
def _ttest(self, rpd=True, run_b_score=None, run_a_score=None, print_feedback=False):
"""
Conducts either a paired (reproducibility) or unpaired (replicability) two-sided t-test according to the following paper:
Timo Breuer, Nicola Ferro, Norbert Fuhr, Maria Maistro, Tetsuya Sakai, Philipp Schaer, Ian Soboroff.
How to Measure the Reproducibility of System-oriented IR Experiments.
Proceedings of SIGIR, pages 349-358, 2020.
@param rpd: Boolean indicating if the evaluated runs are reproduced.
@param run_b_score: Scores of the baseline run,
if not provided the scores of the RpdEvaluator object will be used instead.
@param run_a_score: Scores of the advanced run,
if not provided the scores of the RpdEvaluator object will be used instead.
@param print_feedback: Boolean value indicating if feedback on progress should be printed.
@return: Dictionary with p-values that compare the score distributions of the baseline and advanced run.
"""
if self.run_b_orig_score and (self.run_b_rep_score or run_b_score):
if run_b_score and run_a_score:
if print_feedback:
print('Determining p-values of t-test for baseline and advanced run.')
return {'baseline': ttest(self.run_b_orig_score, run_b_score, rpd=rpd, pbar=print_feedback),
'advanced': ttest(self.run_a_orig_score, run_a_score, rpd=rpd, pbar=print_feedback)}
if run_b_score:
if print_feedback:
print('Determining p-values of t-test for baseline run.')
return {'baseline': ttest(self.run_b_orig_score, run_b_score, rpd=rpd, pbar=print_feedback)}
if self.run_a_orig_score and self.run_a_rep_score:
if print_feedback:
print('Determining p-values of t-test for baseline and advanced run.')
return {'baseline': ttest(self.run_b_orig_score, self.run_b_rep_score, rpd=rpd, pbar=print_feedback),
'advanced': ttest(self.run_a_orig_score, self.run_a_rep_score, rpd=rpd, pbar=print_feedback)}
else:
if print_feedback:
print('Determining p-values of t-test for baseline run.')
return {'baseline': ttest(self.run_b_orig_score, self.run_b_rep_score, rpd=rpd, pbar=print_feedback)}
else:
print(ERR_MSG)
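The paired (reproducibility) case tests whether the per-topic score differences between the original and reproduced run are centred on zero. A hand-rolled paired t-statistic on made-up scores, as a sketch of the idea only (the library's `ttest` helper wraps a proper statistical test and returns p-values):

```python
import math
from statistics import mean, stdev

# Hypothetical per-topic scores of the original and reproduced run.
orig = [0.30, 0.50, 0.40, 0.60]
rep = [0.28, 0.47, 0.41, 0.55]

diffs = [o - r for o, r in zip(orig, rep)]
n = len(diffs)
# Paired t-statistic: mean difference divided by its standard error.
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(n))
```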
class RpdEvaluator(Evaluator):
"""
The Reproducibility Evaluator is used for quantifying the different levels of reproduction for runs that were
derived from the same test collection used in the original experiment.
"""
def evaluate(self, run=None):
"""
Evaluates the scores of the original and reproduced baseline and advanced runs.
If a (reproduced) run is provided only this one will be evaluated and a dictionary with the corresponding
scores is returned.
        @param run: A reproduced run. If not specified, the original and reproduced runs of the RpdEvaluator will
be used instead.
@return: If run is specified, a dictionary with the corresponding scores is returned.
"""
if run:
return self.rel_eval.evaluate(run)
super(RpdEvaluator, self).evaluate()
if self.run_b_rep:
self.run_b_rep = break_ties(self.run_b_rep)
self.run_b_rep_score = self.rel_eval.evaluate(self.run_b_rep)
if self.run_a_rep:
self.run_a_rep = break_ties(self.run_a_rep)
self.run_a_rep_score = self.rel_eval.evaluate(self.run_a_rep)
def ktau_union(self, run_b_rep=None, run_a_rep=None, run_b_path=None, run_a_path=None, print_feedback=False):
"""
Determines Kendall's tau Union (KTU) between the original and reproduced document orderings
according to the following paper:
Timo Breuer, Nicola Ferro, Norbert Fuhr, Maria Maistro, Tetsuya Sakai, Philipp Schaer, Ian Soboroff.
How to Measure the Reproducibility of System-oriented IR Experiments.
Proceedings of SIGIR, pages 349-358, 2020.
@param run_b_rep: Scores of the baseline run,
if not provided the scores of the RpdEvaluator object will be used instead.
@param run_a_rep: Scores of the advanced run,
if not provided the scores of the RpdEvaluator object will be used instead.
@param run_b_path: Path to another reproduced baseline run,
if not provided the reproduced baseline run of the RpdEvaluator object will be used instead.
@param run_a_path: Path to another reproduced advanced run,
if not provided the reproduced advanced run of the RpdEvaluator object will be used instead.
@param print_feedback: Boolean value indicating if feedback on progress should be printed.
@return: Dictionary with KTU values that compare the document orderings of the original and reproduced runs.
"""
if self.run_b_orig and run_b_path:
if self.run_a_orig and run_a_path:
if print_feedback:
print("Determining Kendall's tau Union (KTU) for baseline and advanced run.")
with open(run_b_path, 'r') as b_run, open(run_a_path, 'r') as a_run:
run_b_rep = pytrec_eval.parse_run(b_run)
run_b_rep = {t: run_b_rep[t] for t in sorted(run_b_rep)}
run_a_rep = pytrec_eval.parse_run(a_run)
run_a_rep = {t: run_a_rep[t] for t in sorted(run_a_rep)}
return {'baseline': ktu(self.run_b_orig, run_b_rep, pbar=print_feedback),
'advanced': ktu(self.run_a_orig, run_a_rep, pbar=print_feedback)}
else:
if print_feedback:
print("Determining Kendall's tau Union (KTU) for baseline run.")
with open(run_b_path, 'r') as b_run:
run_b_rep = pytrec_eval.parse_run(b_run)
run_b_rep = {t: run_b_rep[t] for t in sorted(run_b_rep)}
return {'baseline': ktu(self.run_b_orig, run_b_rep, pbar=print_feedback)}
if self.run_b_orig and run_b_rep:
if self.run_a_orig and run_a_rep:
if print_feedback:
print("Determining Kendall's tau Union (KTU) for baseline and advanced run.")
return {'baseline': ktu(self.run_b_orig, run_b_rep, pbar=print_feedback),
'advanced': ktu(self.run_a_orig, run_a_rep, pbar=print_feedback)}
else:
if print_feedback:
print("Determining Kendall's tau Union (KTU) for baseline run.")
return {'baseline': ktu(self.run_b_orig, run_b_rep, pbar=print_feedback)}
if self.run_b_orig and self.run_b_rep:
if self.run_a_orig and self.run_a_rep:
if print_feedback:
print("Determining Kendall's tau Union (KTU) for baseline and advanced run.")
return {'baseline': ktu(self.run_b_orig, self.run_b_rep, pbar=print_feedback),
'advanced': ktu(self.run_a_orig, self.run_a_rep, pbar=print_feedback)}
else:
if print_feedback:
print("Determining Kendall's tau Union (KTU) for baseline run.")
return {'baseline': ktu(self.run_b_orig, self.run_b_rep, pbar=print_feedback)}
else:
print(ERR_MSG)
def rbo(self, run_b_rep=None, run_a_rep=None, run_b_path=None, run_a_path=None, print_feedback=False, misinfo=True):
"""
Determines the Rank-Biased Overlap (RBO) between the original and reproduced document orderings
according to the following paper:
Timo Breuer, Nicola Ferro, Norbert Fuhr, Maria Maistro, Tetsuya Sakai, Philipp Schaer, Ian Soboroff.
How to Measure the Reproducibility of System-oriented IR Experiments.
Proceedings of SIGIR, pages 349-358, 2020.
@param run_b_rep: Scores of the baseline run,
if not provided the scores of the RpdEvaluator object will be used instead.
@param run_a_rep: Scores of the advanced run,
if not provided the scores of the RpdEvaluator object will be used instead.
@param run_b_path: Path to another reproduced baseline run,
if not provided the reproduced baseline run of the RpdEvaluator object will be used instead.
@param run_a_path: Path to another reproduced advanced run,
if not provided the reproduced advanced run of the RpdEvaluator object will be used instead.
@param print_feedback: Boolean value indicating if feedback on progress should be printed.
@param misinfo: Use the RBO implementation that is also used in the TREC Health Misinformation Track.
See also: https://github.com/claclark/Compatibility
@return: Dictionary with RBO values that compare the document orderings of the original and reproduced runs.
"""
if self.run_b_orig and run_b_path:
if self.run_a_orig and run_a_path:
if print_feedback:
print("Determining Rank-biased Overlap (RBO) for baseline and advanced run.")
with open(run_b_path, 'r') as b_run, open(run_a_path, 'r') as a_run:
run_b_rep = pytrec_eval.parse_run(b_run)
run_b_rep = {t: run_b_rep[t] for t in sorted(run_b_rep)}
run_a_rep = pytrec_eval.parse_run(a_run)
run_a_rep = {t: run_a_rep[t] for t in sorted(run_a_rep)}
return {'baseline': RBO(self.run_b_orig, run_b_rep, pbar=print_feedback, misinfo=misinfo),
'advanced': RBO(self.run_a_orig, run_a_rep, pbar=print_feedback, misinfo=misinfo)}
else:
if print_feedback:
print("Determining Rank-biased Overlap (RBO) for baseline run.")
with open(run_b_path, 'r') as b_run:
run_b_rep = pytrec_eval.parse_run(b_run)
run_b_rep = {t: run_b_rep[t] for t in sorted(run_b_rep)}
return {'baseline': RBO(self.run_b_orig, run_b_rep, pbar=print_feedback, misinfo=misinfo)}
if self.run_b_orig and run_b_rep:
if self.run_a_orig and run_a_rep:
if print_feedback:
print("Determining Rank-biased Overlap (RBO) for baseline and advanced run.")
return {'baseline': RBO(self.run_b_orig, run_b_rep, pbar=print_feedback, misinfo=misinfo),
'advanced': RBO(self.run_a_orig, run_a_rep, pbar=print_feedback, misinfo=misinfo)}
else:
if print_feedback:
print("Determining Rank-biased Overlap (RBO) for baseline run.")
return {'baseline': RBO(self.run_b_orig, run_b_rep, pbar=print_feedback, misinfo=misinfo)}
if self.run_b_orig and self.run_b_rep:
if self.run_a_orig and self.run_a_rep:
if print_feedback:
print("Determining Rank-biased Overlap (RBO) for baseline and advanced run.")
return {'baseline': RBO(self.run_b_orig, self.run_b_rep, pbar=print_feedback, misinfo=misinfo),
'advanced': RBO(self.run_a_orig, self.run_a_rep, pbar=print_feedback, misinfo=misinfo)}
else:
if print_feedback:
print("Determining Rank-biased Overlap (RBO) for baseline run.")
return {'baseline': RBO(self.run_b_orig, self.run_b_rep, pbar=print_feedback, misinfo=misinfo)}
else:
print(ERR_MSG)
def rmse(self, run_b_score=None, run_a_score=None, run_b_path=None, run_a_path=None, print_feedback=False):
"""
Determines the Root Mean Square Error (RMSE) according to the following paper:
Timo Breuer, Nicola Ferro, Norbert Fuhr, Maria Maistro, Tetsuya Sakai, Philipp Schaer, Ian Soboroff.
How to Measure the Reproducibility of System-oriented IR Experiments.
Proceedings of SIGIR, pages 349-358, 2020.
@param run_b_score: Scores of the baseline run,
if not provided the scores of the RpdEvaluator object will be used instead.
@param run_a_score: Scores of the advanced run,
if not provided the scores of the RpdEvaluator object will be used instead.
@param run_b_path: Path to another reproduced baseline run,
if not provided the reproduced baseline run of the RpdEvaluator object will be used instead.
@param run_a_path: Path to another reproduced advanced run,
if not provided the reproduced advanced run of the RpdEvaluator object will be used instead.
@param print_feedback: Boolean value indicating if feedback on progress should be printed.
@return: Dictionary with RMSE values that measure the closeness
between the topics scores of the original and reproduced runs.
"""
if self.run_b_orig and run_b_path:
if self.run_a_orig and run_a_path:
if print_feedback:
print("Determining Root Mean Square Error (RMSE) for baseline and advanced run.")
with open(run_b_path, 'r') as b_run, open(run_a_path, 'r') as a_run:
run_b_rep = pytrec_eval.parse_run(b_run)
run_b_rep = {t: run_b_rep[t] for t in sorted(run_b_rep)}
run_b_rep_score = self.rel_eval.evaluate(run_b_rep)
run_a_rep = pytrec_eval.parse_run(a_run)
run_a_rep = {t: run_a_rep[t] for t in sorted(run_a_rep)}
run_a_rep_score = self.rel_eval.evaluate(run_a_rep)
return {'baseline': RMSE(self.run_b_orig_score, run_b_rep_score, pbar=print_feedback),
'advanced': RMSE(self.run_a_orig_score, run_a_rep_score, pbar=print_feedback)}
else:
if print_feedback:
print("Determining Root Mean Square Error (RMSE) for baseline run.")
with open(run_b_path, 'r') as b_run:
run_b_rep = pytrec_eval.parse_run(b_run)
run_b_rep = {t: run_b_rep[t] for t in sorted(run_b_rep)}
run_b_rep_score = self.rel_eval.evaluate(run_b_rep)
return {'baseline': RMSE(self.run_b_orig_score, run_b_rep_score, pbar=print_feedback)}
if self.run_b_orig_score and run_b_score:
if self.run_a_orig_score and run_a_score:
if print_feedback:
print("Determining Root Mean Square Error (RMSE) for baseline and advanced run.")
return {'baseline': RMSE(self.run_b_orig_score, run_b_score, pbar=print_feedback),
'advanced': RMSE(self.run_a_orig_score, run_a_score, pbar=print_feedback)}
else:
if print_feedback:
print("Determining Root Mean Square Error (RMSE) for baseline run.")
return {'baseline': RMSE(self.run_b_orig_score, run_b_score, pbar=print_feedback)}
if self.run_b_orig_score and self.run_b_rep_score:
if self.run_a_orig_score and self.run_a_rep_score:
if print_feedback:
print("Determining Root Mean Square Error (RMSE) for baseline and advanced run.")
return {'baseline': RMSE(self.run_b_orig_score, self.run_b_rep_score, pbar=print_feedback),
'advanced': RMSE(self.run_a_orig_score, self.run_a_rep_score, pbar=print_feedback)}
else:
if print_feedback:
print("Determining Root Mean Square Error (RMSE) for baseline run.")
return {'baseline': RMSE(self.run_b_orig_score, self.run_b_rep_score, pbar=print_feedback)}
else:
print(ERR_MSG)
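The per-topic closeness measure used above can be sketched in isolation. This is a minimal standalone version of an RMSE over topic scores (the real helper is the `RMSE` function imported from `repro_eval.measure`); the name `rmse_sketch`, the measure key, and the example scores are illustrative only:

```python
import math

def rmse_sketch(orig_score, rep_score, measure='map'):
    """Root mean square error between original and reproduced per-topic scores."""
    topics = sorted(orig_score)
    squared_errors = [(orig_score[t][measure] - rep_score[t][measure]) ** 2
                      for t in topics]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# A perfect reproduction yields an RMSE of zero.
orig = {'301': {'map': 0.25}, '302': {'map': 0.40}}
rep = {'301': {'map': 0.20}, '302': {'map': 0.40}}
```

Lower values indicate topic score distributions closer to the original run; `nrmse` below additionally normalizes these values.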
def nrmse(self, run_b_score=None, run_a_score=None, run_b_path=None, run_a_path=None, print_feedback=False):
"""
Determines the normalized Root Mean Square Error (nRMSE).
@param run_b_score: Scores of the baseline run,
if not provided the scores of the RpdEvaluator object will be used instead.
@param run_a_score: Scores of the advanced run,
if not provided the scores of the RpdEvaluator object will be used instead.
@param run_b_path: Path to another reproduced baseline run,
if not provided the reproduced baseline run of the RpdEvaluator object will be used instead.
@param run_a_path: Path to another reproduced advanced run,
if not provided the reproduced advanced run of the RpdEvaluator object will be used instead.
@param print_feedback: Boolean value indicating if feedback on progress should be printed.
@return: Dictionary with nRMSE values that measure the closeness
between the topic scores of the original and reproduced runs.
"""
if self.run_b_orig and run_b_path:
if self.run_a_orig and run_a_path:
if print_feedback:
print("Determining normalized Root Mean Square Error (RMSE) for baseline and advanced run.")
with open(run_b_path, 'r') as b_run, open(run_a_path, 'r') as a_run:
run_b_rep = pytrec_eval.parse_run(b_run)
run_b_rep = {t: run_b_rep[t] for t in sorted(run_b_rep)}
run_b_rep_score = self.rel_eval.evaluate(run_b_rep)
run_a_rep = pytrec_eval.parse_run(a_run)
run_a_rep = {t: run_a_rep[t] for t in sorted(run_a_rep)}
run_a_rep_score = self.rel_eval.evaluate(run_a_rep)
return {'baseline': nRMSE(self.run_b_orig_score, run_b_rep_score, pbar=print_feedback),
'advanced': nRMSE(self.run_a_orig_score, run_a_rep_score, pbar=print_feedback)}
else:
if print_feedback:
print("Determining normalized Root Mean Square Error (RMSE) for baseline run.")
with open(run_b_path, 'r') as b_run:
run_b_rep = pytrec_eval.parse_run(b_run)
run_b_rep = {t: run_b_rep[t] for t in sorted(run_b_rep)}
run_b_rep_score = self.rel_eval.evaluate(run_b_rep)
return {'baseline': nRMSE(self.run_b_orig_score, run_b_rep_score, pbar=print_feedback)}
if self.run_b_orig_score and run_b_score:
if self.run_a_orig_score and run_a_score:
if print_feedback:
print("Determining normalized Root Mean Square Error (RMSE) for baseline and advanced run.")
return {'baseline': nRMSE(self.run_b_orig_score, run_b_score, pbar=print_feedback),
'advanced': nRMSE(self.run_a_orig_score, run_a_score, pbar=print_feedback)}
else:
if print_feedback:
print("Determining normalized Root Mean Square Error (RMSE) for baseline run.")
return {'baseline': nRMSE(self.run_b_orig_score, run_b_score, pbar=print_feedback)}
if self.run_b_orig_score and self.run_b_rep_score:
if self.run_a_orig_score and self.run_a_rep_score:
if print_feedback:
print("Determining normalized Root Mean Square Error (RMSE) for baseline and advanced run.")
return {'baseline': nRMSE(self.run_b_orig_score, self.run_b_rep_score, pbar=print_feedback),
'advanced': nRMSE(self.run_a_orig_score, self.run_a_rep_score, pbar=print_feedback)}
else:
if print_feedback:
print("Determining normalized Root Mean Square Error (RMSE) for baseline run.")
return {'baseline': nRMSE(self.run_b_orig_score, self.run_b_rep_score, pbar=print_feedback)}
else:
print(ERR_MSG)
def ttest(self, run_b_score=None, run_a_score=None, run_b_path=None, run_a_path=None, print_feedback=False):
"""
Conducts a paired two-tailed t-test for reproduced runs that were derived from the same test collection
as in the original experiment.
@param run_b_score: Scores of the baseline run,
if not provided the scores of the RpdEvaluator object will be used instead.
@param run_a_score: Scores of the advanced run,
if not provided the scores of the RpdEvaluator object will be used instead.
@param run_b_path: Path to another reproduced baseline run,
if not provided the reproduced baseline run of the RpdEvaluator object will be used instead.
@param run_a_path: Path to another reproduced advanced run,
if not provided the reproduced advanced run of the RpdEvaluator object will be used instead.
@param print_feedback: Boolean value indicating if feedback on progress should be printed.
@return: Dictionary with p-values that compare the score distributions of the baseline and advanced run.
"""
if run_b_path:
if run_a_path:
with open(run_b_path, 'r') as b_run, open(run_a_path, 'r') as a_run:
run_b_rep = pytrec_eval.parse_run(b_run)
run_b_rep = {t: run_b_rep[t] for t in sorted(run_b_rep)}
run_b_rep_score = self.rel_eval.evaluate(run_b_rep)
run_a_rep = pytrec_eval.parse_run(a_run)
run_a_rep = {t: run_a_rep[t] for t in sorted(run_a_rep)}
run_a_rep_score = self.rel_eval.evaluate(run_a_rep)
return self._ttest(run_b_score=run_b_rep_score, run_a_score=run_a_rep_score, print_feedback=print_feedback)
else:
with open(run_b_path, 'r') as b_run:
run_b_rep = pytrec_eval.parse_run(b_run)
run_b_rep = {t: run_b_rep[t] for t in sorted(run_b_rep)}
run_b_rep_score = self.rel_eval.evaluate(run_b_rep)
return self._ttest(run_b_score=run_b_rep_score, run_a_score=None, print_feedback=print_feedback)
return self._ttest(run_b_score=run_b_score, run_a_score=run_a_score, print_feedback=print_feedback)
class RplEvaluator(Evaluator):
"""
The Replicability Evaluator is used for quantifying the different levels of replication for runs that were
derived from a test collection not used in the original experiment.
"""
def __init__(self, **kwargs):
super(RplEvaluator, self).__init__(**kwargs)
self.qrel_rpl_path = kwargs.get('qrel_rpl_path', None)
if self.qrel_rpl_path:
with open(self.qrel_rpl_path, 'r') as f_qrel:
qrel_rpl = pytrec_eval.parse_qrel(f_qrel)
self.rel_eval_rpl = pytrec_eval.RelevanceEvaluator(qrel_rpl, pytrec_eval.supported_measures)
def evaluate(self, run=None):
"""
Evaluates the scores of the original and replicated baseline and advanced runs.
If a (replicated) run is provided, only this one will be evaluated and a dictionary with the corresponding
scores is returned.
@param run: A replicated run. If not specified, the original and replicated runs of the RplEvaluator will
be used instead.
@return: If run is specified, a dictionary with the corresponding scores is returned.
"""
if run:
return self.rel_eval_rpl.evaluate(run)
super(RplEvaluator, self).evaluate()
if self.run_b_rep:
self.run_b_rep = break_ties(self.run_b_rep)
self.run_b_rep_score = self.rel_eval_rpl.evaluate(self.run_b_rep)
if self.run_a_rep:
self.run_a_rep = break_ties(self.run_a_rep)
self.run_a_rep_score = self.rel_eval_rpl.evaluate(self.run_a_rep)
def ttest(self, run_b_score=None, run_a_score=None, run_b_path=None, run_a_path=None, print_feedback=False):
"""
Conducts an unpaired two-tailed t-test for replicated runs that were derived from a test collection
not used in the original experiment.
@param run_b_score: Scores of the baseline run,
if not provided the scores of the RplEvaluator object will be used instead.
@param run_a_score: Scores of the advanced run,
if not provided the scores of the RplEvaluator object will be used instead.
@param run_b_path: Path to another replicated baseline run,
if not provided the replicated baseline run of the RplEvaluator object will be used instead.
@param run_a_path: Path to another replicated advanced run,
if not provided the replicated advanced run of the RplEvaluator object will be used instead.
@param print_feedback: Boolean value indicating if feedback on progress should be printed.
@return: Dictionary with p-values that compare the score distributions of the baseline and advanced run.
"""
if run_b_path:
if run_a_path:
with open(run_b_path, 'r') as b_run, open(run_a_path, 'r') as a_run:
run_b_rep = pytrec_eval.parse_run(b_run)
run_b_rep = {t: run_b_rep[t] for t in sorted(run_b_rep)}
run_b_rep_score = self.rel_eval_rpl.evaluate(run_b_rep)
run_a_rep = pytrec_eval.parse_run(a_run)
run_a_rep = {t: run_a_rep[t] for t in sorted(run_a_rep)}
run_a_rep_score = self.rel_eval_rpl.evaluate(run_a_rep)
return self._ttest(rpd=False, run_b_score=run_b_rep_score, run_a_score=run_a_rep_score, print_feedback=print_feedback)
else:
with open(run_b_path, 'r') as b_run:
run_b_rep = pytrec_eval.parse_run(b_run)
run_b_rep = {t: run_b_rep[t] for t in sorted(run_b_rep)}
run_b_rep_score = self.rel_eval_rpl.evaluate(run_b_rep)
return self._ttest(rpd=False, run_b_score=run_b_rep_score, run_a_score=None, print_feedback=print_feedback)
return self._ttest(rpd=False, run_b_score=run_b_score, run_a_score=run_a_score, print_feedback=print_feedback)
import os
import json
import platform
import pkg_resources
import warnings
from collections import defaultdict
from io import BytesIO, TextIOWrapper
import cpuinfo
import pytrec_eval
import git
from ruamel.yaml import YAML
from repro_eval import Evaluator
META_START = '# ir_metadata.start'
META_END = '# ir_metadata.end'
class PrimadExperiment:
"""
The PrimadExperiment is used to determine the reproducibility measures
between a reference run and a set of one or more reproduced run files.
Depending on the type of the PRIMAD experiment, several reproducibility
measures can be determined.
@param ref_base_path: Path to a single run file that corresponds to the
original (or reference) baseline of the experiments.
@param ref_adv_path: Path to a single run file that corresponds to the
original (or reference) advanced run of the experiments.
@param primad: String with lower and upper case letters depending on which
PRIMAD components have changed in the experiments, e.g.,
"priMad" when only the Method changes due to parameter sweeps.
@param rep_base: List containing paths to run files that reproduce the
original (or reference) baseline run.
@param rpd_qrels: Qrels file that is used to evaluate the reproducibility of
the experiments, i.e., it is used to evaluate runs that are
derived from the same test collection.
@param rep_adv: List containing paths to run files that reproduce the
original (or reference) advanced run.
@param rpl_qrels: Qrels file that is used to evaluate the replicability of
the experiments, i.e., it is used to evaluate runs that are
derived from a different test collection. Please note that
"rpd_qrels" has to be provided too.
"""
def __init__(self, **kwargs):
self.ref_base_path = kwargs.get('ref_base_path', None)
if self.ref_base_path:
self.ref_base_run = MetadataHandler.strip_metadata(self.ref_base_path)
else:
self.ref_base_run = None
self.ref_adv_path = kwargs.get('ref_adv_path', None)
if self.ref_adv_path:
self.ref_adv_run = MetadataHandler.strip_metadata(self.ref_adv_path)
else:
self.ref_adv_run = None
self.primad = kwargs.get('primad', None)
self.rep_base = kwargs.get('rep_base', None)
self.rpd_qrels = kwargs.get('rpd_qrels', None)
self.rep_adv = kwargs.get('rep_adv', None)
self.rpl_qrels = kwargs.get('rpl_qrels', None)
if self.rpl_qrels:
self.rep_eval = Evaluator.RplEvaluator(qrel_orig_path=self.rpd_qrels,
qrel_rpl_path=self.rpl_qrels)
with open(self.rpd_qrels, 'r') as f_rpd_qrels, open(self.rpl_qrels, 'r') as f_rpl_qrels:
qrels = pytrec_eval.parse_qrel(f_rpd_qrels)
self.rpd_rel_eval = pytrec_eval.RelevanceEvaluator(qrels, pytrec_eval.supported_measures)
qrels = pytrec_eval.parse_qrel(f_rpl_qrels)
self.rpl_rel_eval = pytrec_eval.RelevanceEvaluator(qrels, pytrec_eval.supported_measures)
elif self.primad[-1].islower(): # check if data component is the same
self.rep_eval = Evaluator.RpdEvaluator(qrel_orig_path=self.rpd_qrels)
with open(self.rpd_qrels, 'r') as f_qrels:
qrels = pytrec_eval.parse_qrel(f_qrels)
self.rpd_rel_eval = pytrec_eval.RelevanceEvaluator(qrels, pytrec_eval.supported_measures)
else:
raise ValueError('Please provide a correct combination of qrels and PRIMAD type.')
def get_primad_type(self):
"""
This method returns a string that identifies the type of the
PRIMAD experiment.
@return: String with lower and upper case letters depending on which
PRIMAD components have changed in the experiments, e.g.,
"priMad" when only the Method changes due to parameter sweeps.
"""
return self.primad
def evaluate(self):
"""
This method evaluates the PRIMAD experiment in accordance with the given
"primad" identifier. Currently, the following experiment types are supported:
- priMad: Parameter sweep
- PRIMAd: Reproducibility evaluation on the same test collection
- PRIMAD: Generalizability evaluation
@return: Dictionary containing the average retrieval performance and
the reproducibility measures for each run.
"""
if self.primad == 'priMad':
if self.ref_adv_run is None and self.rep_adv is None:
evaluations = {}
self.rep_eval.run_b_orig = self.ref_base_run
self.rep_eval.evaluate()
for rep_run_path in self.rep_base + [self.ref_base_path]:
run_evaluations = {}
rep_run = MetadataHandler.strip_metadata(rep_run_path)
scores = self.rpd_rel_eval.evaluate(rep_run)
run_evaluations['arp'] = scores
run_evaluations['ktu'] = self.rep_eval.ktau_union(run_b_rep=rep_run).get('baseline')
run_evaluations['rbo'] = self.rep_eval.rbo(run_b_rep=rep_run).get('baseline')
run_evaluations['rmse'] = self.rep_eval.nrmse(run_b_score=scores).get('baseline')
run_evaluations['pval'] = self.rep_eval.ttest(run_b_score=scores).get('baseline')
run_name = os.path.basename(rep_run_path)
evaluations[run_name] = run_evaluations
return evaluations
if self.primad == 'PRIMAd':
evaluations = {}
self.rep_eval.run_b_orig = self.ref_base_run
self.rep_eval.run_a_orig = self.ref_adv_run
self.rep_eval.trim(t=1000)
self.rep_eval.evaluate()
pairs = self._find_pairs(rep_base=self.rep_base, rep_adv=self.rep_adv)
pairs = pairs + [{'base': self.ref_base_path, 'adv': self.ref_adv_path}]
for pair in pairs:
pair_evaluations = {}
rep_run_base = MetadataHandler.strip_metadata(pair.get('base'))
rep_meta_base = MetadataHandler.read_metadata(pair.get('base'))
rep_run_adv = MetadataHandler.strip_metadata(pair.get('adv'))
rep_meta_adv = MetadataHandler.read_metadata(pair.get('adv'))
self.rep_eval.trim(t=1000, run=rep_run_base)
self.rep_eval.trim(t=1000, run=rep_run_adv)
scores_base = self.rpd_rel_eval.evaluate(rep_run_base)
scores_adv = self.rpd_rel_eval.evaluate(rep_run_adv)
arp = {'baseline': scores_base, 'advanced': scores_adv}
pair_evaluations['arp'] = arp
pair_evaluations['ktu'] = self.rep_eval.ktau_union(run_b_rep=rep_run_base, run_a_rep=rep_run_adv)
pair_evaluations['rbo'] = self.rep_eval.rbo(run_b_rep=rep_run_base, run_a_rep=rep_run_adv)
pair_evaluations['rmse'] = self.rep_eval.nrmse(run_b_score=scores_base, run_a_score=scores_adv)
pair_evaluations['er'] = self.rep_eval.er(run_b_score=scores_base, run_a_score=scores_adv)
pair_evaluations['dri'] = self.rep_eval.dri(run_b_score=scores_base, run_a_score=scores_adv)
pair_evaluations['pval'] = self.rep_eval.ttest(run_b_score=scores_base, run_a_score=scores_adv)
if rep_meta_base.get('actor').get('team') == rep_meta_adv.get('actor').get('team'):
expid = rep_meta_base.get('actor').get('team')
else:
expid = '_'.join([rep_meta_base.get('tag'), rep_meta_adv.get('tag')])
evaluations[expid] = pair_evaluations
return evaluations
if self.primad == 'PRIMAD':
evaluations = {}
self.rep_eval.run_b_orig = self.ref_base_run
self.rep_eval.run_a_orig = self.ref_adv_run
self.rep_eval.trim(t=1000)
self.rep_eval.evaluate()
pairs = self._find_pairs(rep_base=self.rep_base, rep_adv=self.rep_adv)
for pair in pairs:
pair_evaluations = {}
rep_run_base = MetadataHandler.strip_metadata(pair.get('base'))
rep_meta_base = MetadataHandler.read_metadata(pair.get('base'))
rep_run_adv = MetadataHandler.strip_metadata(pair.get('adv'))
rep_meta_adv = MetadataHandler.read_metadata(pair.get('adv'))
self.rep_eval.trim(t=1000, run=rep_run_base)
self.rep_eval.trim(t=1000, run=rep_run_adv)
scores_base = self.rpl_rel_eval.evaluate(rep_run_base)
scores_adv = self.rpl_rel_eval.evaluate(rep_run_adv)
arp = {'baseline': scores_base, 'advanced': scores_adv}
pair_evaluations['arp'] = arp
pair_evaluations['er'] = self.rep_eval.er(run_b_score=scores_base, run_a_score=scores_adv)
pair_evaluations['dri'] = self.rep_eval.dri(run_b_score=scores_base, run_a_score=scores_adv)
pair_evaluations['pval'] = self.rep_eval.ttest(run_b_score=scores_base, run_a_score=scores_adv)
expid = '_'.join([rep_meta_base.get('tag'), rep_meta_adv.get('tag')])
evaluations[expid] = pair_evaluations
return evaluations
else:
raise ValueError('The specified type of the PRIMAD experiments is not supported yet.')
def _find_pairs(self, rep_base, rep_adv):
"""
This method finds pairs between lists of baseline and advanced runs.
A pair is defined by the highest number of matching PRIMAD components.
@param rep_base: List with baseline runs.
@param rep_adv: List with advanced runs.
@return: List with dictionaries containing paths to a baseline and an
advanced run.
"""
pairs = []
for brp in rep_base:
br = MetadataHandler.read_metadata(run_path=brp)
arp = None
cnt = 0
for _arp in rep_adv:
_cnt = 0
ar = MetadataHandler.read_metadata(run_path=_arp)
for k,v in br.items():
if v == ar.get(k):
_cnt += 1
if _cnt > cnt:
cnt = _cnt
arp = _arp
pairs.append({'base': brp, 'adv': arp})
return pairs
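The pairing criterion implemented in `_find_pairs` — the advanced run sharing the most metadata components with a baseline run wins — can be illustrated with plain dictionaries in place of annotated run files (all component values below are made up):

```python
def best_match(base_meta, adv_metas):
    """Return the candidate metadata sharing the most components with base_meta."""
    best, best_cnt = None, 0
    for candidate in adv_metas:
        cnt = sum(1 for k, v in base_meta.items() if candidate.get(k) == v)
        if cnt > best_cnt:
            best, best_cnt = candidate, cnt
    return best

base = {'platform': 'linux', 'method': 'bm25', 'data': 'robust04'}
candidates = [
    {'platform': 'linux', 'method': 'bm25+rm3', 'data': 'robust04'},  # two matches
    {'platform': 'macos', 'method': 'dpr', 'data': 'msmarco'},        # no match
]
```

With these inputs the first candidate wins with two matching components; an empty candidate list yields no pair.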
class MetadataAnalyzer:
"""
The MetadataAnalyzer is used to analyze a set of different run files in
reference to a run that has to be provided upon instantiation. The
analyze_directory() method returns a dictionary with PRIMAD identifiers as
keys and lists with the corresponding run paths as values.
@param run_path: Path to the reference run file.
"""
def __init__(self, run_path):
self.reference_run_path = run_path
self.reference_run = MetadataHandler.strip_metadata(run_path)
self.reference_metadata = MetadataHandler.read_metadata(run_path)
def set_reference(self, run_path):
"""
Use this method to set a new reference run.
@param run_path: Path to the new reference run file.
"""
self.reference_run_path = run_path
self.reference_run = MetadataHandler.strip_metadata(run_path)
self.reference_metadata = MetadataHandler.read_metadata(run_path)
def analyze_directory(self, dir_path):
"""
Use this method to analyze the specified directory in comparison to the
reference run.
@param dir_path: Path to the directory.
"""
components = ['platform', 'research goal', 'implementation', 'method', 'actor', 'data']
primad = {}
files = os.listdir(dir_path)
for _file in files:
file_path = os.path.join(dir_path, _file)
if file_path == self.reference_run_path:
continue
_metadata = MetadataHandler.read_metadata(file_path)
primad_str = ''
for component in components:
if self.reference_metadata[component] != _metadata[component]:
primad_str += component[0].upper()
else:
primad_str += component[0]
primad[file_path] = primad_str
experiments = defaultdict(list)
for k, v in primad.items():
experiments[v].append(k)
return experiments
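The PRIMAD identifier construction above — upper-casing the initial of every component that differs from the reference — can be shown in isolation (the component values are invented for the example):

```python
COMPONENTS = ['platform', 'research goal', 'implementation', 'method', 'actor', 'data']

def primad_id(ref_meta, other_meta):
    """Upper-case the initial of every component that differs from the reference."""
    return ''.join(c[0].upper() if ref_meta.get(c) != other_meta.get(c) else c[0]
                   for c in COMPONENTS)

ref = {c: 'unchanged' for c in COMPONENTS}
swept = dict(ref, method='new-parameters')  # only the Method component differs
```

An identical run maps to `'primad'`, while a parameter sweep that changes only the Method maps to `'priMad'`.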
@staticmethod
def filter_by_baseline(ref_run, runs):
"""
Use this method to filter a list of runs with respect to the baseline that is
specified under "research goal/evaluation/baseline" of a given reference run.
@param ref_run: The reference with the baseline.
@param runs: A list of run paths that is filtered.
"""
run_tag = MetadataHandler.read_metadata(ref_run).get('tag')
filtered_list = []
for run in runs:
_metadata = MetadataHandler.read_metadata(run)
baseline = _metadata.get('research goal').get('evaluation').get('baseline')[0]
if baseline == run_tag:
filtered_list.append(run)
return filtered_list
@staticmethod
def filter_by_test_collection(test_collection, runs):
"""
Use this method to filter a list of runs with respect to the test collection
specified under "data/test_collection".
@param test_collection: Name of the test collection.
@param runs: A list of run paths that is filtered.
"""
filtered_list = []
for run in runs:
_metadata = MetadataHandler.read_metadata(run)
name = _metadata.get('data').get('test collection').get('name')
if test_collection == name:
filtered_list.append(run)
return filtered_list
class MetadataHandler:
"""
Use the MetadataHandler for in- and output operations of annotated run files.
@param run_path: Path to the run file without metadata annotations. It is also
possible to load an already annotated run and modify it with
the MetadataHandler.
@param metadata_path: Path to the YAML file containing the metadata that
should be added to the run file.
"""
def __init__(self, run_path, metadata_path=None):
self.run_path = run_path
if metadata_path:
self._metadata = MetadataHandler.read_metadata_template(metadata_path)
else:
self._metadata = MetadataHandler.read_metadata(run_path)
def get_metadata(self):
"""
Use this method to get the currently set metadata annotations.
@return: Nested dictionary containing the metadata annotations.
"""
return self._metadata
def set_metadata(self, metadata_dict=None, metadata_path=None):
"""
Use this method to set/update the metadata. It can be provided either with a
dictionary or with a path to a YAML file.
@param metadata_dict: Nested dictionary containing the metadata annotations.
@param metadata_path: Path to the YAML file with metadata.
"""
if metadata_path:
self._metadata = MetadataHandler.read_metadata_template(metadata_path)
if metadata_dict:
self._metadata = metadata_dict
def dump_metadata(self, dump_path=None, complete_metadata=False, repo_path='.'):
"""
Use this method to dump the current metadata into a YAML file.
The filename is a concatenation of the run tag and the "_dump.yaml" suffix.
@param dump_path: Path to the directory where the metadata is dumped.
@param complete_metadata: If true, the Platform and Implementation will
be added automatically, if not already provided.
@param repo_path: Path to the git repository of the Implementation that
underlies the run file. This path is needed for the
automatic completion.
"""
if complete_metadata:
self.complete_metadata(repo_path=repo_path)
if self._metadata:
tag = self._metadata['tag']
f_out_name = '_'.join([tag, 'dump.yaml'])
f_out_path = os.path.join(dump_path, f_out_name)
with open(f_out_path, 'wb') as f_out:
bytes_io = BytesIO()
yaml = YAML()
yaml.width = 4096
yaml.dump(self._metadata, bytes_io)
f_out.write(bytes_io.getvalue())
def write_metadata(self, run_path=None, complete_metadata=False, repo_path='.'):
"""
This method writes the metadata into the run file.
@param run_path: Path to the annotated run file.
@param complete_metadata: If true, the Platform and Implementation will
be added automatically, if not already provided.
@param repo_path: Path to the git repository of the Implementation that
underlies the run file. This path is needed for the
automatic completion.
"""
if complete_metadata:
self.complete_metadata(repo_path=repo_path)
bytes_io = BytesIO()
yaml = YAML()
yaml.width = 4096
yaml.dump(self._metadata, bytes_io)
byte_str = bytes_io.getvalue().decode('UTF-8')
lines = byte_str.split('\n')
if run_path is None:
f_out_path = '_'.join([self.run_path, 'annotated'])
else:
f_out_path = run_path
with open(f_out_path, 'w') as f_out:
f_out.write(''.join([META_START, '\n']))
for line in lines[:-1]:
f_out.write(' '.join(['#', line, '\n']))
f_out.write(''.join([META_END, '\n']))
with open(self.run_path, 'r') as f_in:
for run_line in f_in.readlines():
f_out.write(run_line)
def complete_metadata(self, repo_path='.'):
"""
This method automatically adds metadata about the Platform and
the Implementation component.
@param repo_path: Path to the git repository of the Implementation that
underlies the run file. If not specified this method
assumes that the program is executed from the root
directory of the git repository.
"""
if self._metadata.get('platform') is None:
platform_dict = {
'hardware': {
'cpu': self._get_cpu(),
'ram': self._get_ram(),
},
'operating system': self._get_os(),
'software': self._get_libs(),
}
self._metadata['platform'] = platform_dict
if self._metadata.get('implementation') is None:
self._metadata['implementation'] = self._get_src(repo_path=repo_path)
@staticmethod
def strip_metadata(annotated_run):
'''
Strips off the metadata and returns a dict version of the run, parsed with pytrec_eval.
@param annotated_run: Path to the annotated run file.
@return: defaultdict that can be used with pytrec_eval or repro_eval.
'''
with TextIOWrapper(buffer=BytesIO(), encoding='utf-8', line_buffering=True) as text_io_wrapper:
with open(annotated_run, 'r') as f_in:
lines = f_in.readlines()
for line in lines:
if line[0] != '#':
text_io_wrapper.write(line)
text_io_wrapper.seek(0,0)
run = pytrec_eval.parse_run(text_io_wrapper)
return run
@staticmethod
def read_metadata(run_path):
'''
Reads the metadata out of an annotated run and returns a dict containing the metadata.
@param run_path: Path to the run file.
@return: Dictionary containing the metadata information of the annotated
run file.
'''
_metadata = None
with open(run_path, 'r') as f_in:
lines = f_in.readlines()
if lines[0].strip('\n') == META_START:
metadata_str = ''
yaml=YAML(typ='safe')
for line in lines[1:]:
if line.strip('\n') != META_END:
metadata_str += line.strip('#')
else:
break
_metadata = yaml.load(metadata_str)
return _metadata
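The annotated run format handled by `strip_metadata` and `read_metadata` embeds YAML between the `# ir_metadata.start` and `# ir_metadata.end` markers, followed by ordinary TREC run lines. A minimal sketch of the line handling (no YAML parsing, made-up metadata and document IDs):

```python
annotated = """# ir_metadata.start
# tag: my_run
# ir_metadata.end
301 Q0 doc1 1 10.5 my_run
301 Q0 doc2 2 9.8 my_run
"""

lines = annotated.splitlines()
# Metadata lines carry a '# ' prefix; the start/end markers are dropped.
meta_lines = [l[2:] for l in lines if l.startswith('#') and 'ir_metadata' not in l]
# Everything else is a regular TREC run line.
run_lines = [l for l in lines if not l.startswith('#')]
```

In the real methods, `meta_lines` would be fed to a YAML parser and `run_lines` to `pytrec_eval.parse_run`.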
@staticmethod
def read_metadata_template(metadata_path):
"""
This method reads in a YAML file containing the metadata.
@param metadata_path: Path to the metadata YAML file.
@return: Nested dictionary containing the metadata.
"""
with open(metadata_path, 'r') as f_in:
yaml = YAML(typ='safe')
return yaml.load(f_in)
def _get_cpu(self):
"""
Reads out metadata information about the CPU including the model name,
the architecture, the operation mode, and the number of available cores.
"""
cpu = cpuinfo.get_cpu_info()
return {
'model': cpu['brand_raw'],
'architecture': platform.machine(),
'operation mode': '-'.join([str(cpu['bits']), 'bit']),
'number of cores': cpu['count'],
}
def _get_os(self):
"""
Reads out metadata information about the operating system including
the platform (e.g. Linux), the kernel release version,
and the distribution's name.
"""
try:
with open("/etc/os-release") as f_in:
os_info = {}
for line in f_in:
k,v = line.rstrip().split('=')
os_info[k] = v.strip('"')
distribution = os_info['PRETTY_NAME']
except (OSError, KeyError):
warnings.warn('/etc/os-release not found. Using the available information of the platform package instead.')
distribution = platform.version()
return {
'platform': platform.system(),
'kernel': platform.release(),
'distribution': distribution,
}
def _get_ram(self):
"""
Reads out the available RAM and returns the size in GB.
"""
memory_bytes = os.sysconf('SC_PAGE_SIZE') * os.sysconf('SC_PHYS_PAGES')
memory_gb = memory_bytes/(1024.0 ** 3)
return ' '.join([str(round(memory_gb, 2)),'GB'])
def _get_libs(self):
"""
Reads out all installed Python packages of the active environment.
"""
installed_packages = [d.project_name for d in pkg_resources.working_set]
return {'libraries': {'python': installed_packages}}
def _get_src(self, repo_path='.'):
"""
Reads out information from the specified repository.
@param repo_path: Path to the git repository of the Implementation that
underlies the run file. If not specified this method
assumes that the program is executed from the root
directory of the git repository.
"""
extensions_path = pkg_resources.resource_filename(__name__, 'resources/extensions.json')
repo = git.Repo(repo_path)
with open(extensions_path, 'r') as input_file:
extensions = json.load(input_file)
languages = set()
for _, _, files in os.walk(repo_path):
for name in files:
_, file_extension = os.path.splitext(name)
language = extensions.get(file_extension[1:])
if language:
languages.add(language)
return {
'repository': repo.remote().url,
'commit': str(repo.head.commit),
'lang': list(languages),
}
from repro_eval.config import TRIM_THRESH, PHI
from scipy.stats import kendalltau
from tqdm import tqdm
from repro_eval.measure.external.rbo import rbo
from repro_eval.util import break_ties
def _rbo(run, ideal, p, depth):
# Implementation taken from the TREC Health Misinformation Track with modifications
# see also: https://github.com/claclark/Compatibility
run_set = set()
ideal_set = set()
score = 0.0
normalizer = 0.0
weight = 1.0
for i in range(depth):
if i < len(run):
run_set.add(run[i])
if i < len(ideal):
ideal_set.add(ideal[i])
score += weight*len(ideal_set.intersection(run_set))/(i + 1)
normalizer += weight
weight *= p
return score/normalizer
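As a worked example of the accumulation above, consider two depth-3 rankings that agree on the top document but swap ranks 2 and 3. `rbo_sketch` restates `_rbo` in standalone form with illustrative inputs:

```python
def rbo_sketch(run, ideal, p, depth):
    """Standalone restatement of _rbo: weighted average of set overlaps per depth."""
    run_set, ideal_set = set(), set()
    score = normalizer = 0.0
    weight = 1.0
    for i in range(depth):
        if i < len(run):
            run_set.add(run[i])
        if i < len(ideal):
            ideal_set.add(ideal[i])
        score += weight * len(ideal_set & run_set) / (i + 1)
        normalizer += weight
        weight *= p
    return score / normalizer

# Agreement at rank 1, ranks 2 and 3 swapped:
value = rbo_sketch(['d1', 'd2', 'd3'], ['d1', 'd3', 'd2'], p=0.9, depth=3)
```

With overlaps 1/1, 1/2, and 3/3 and weights 1, 0.9, and 0.81, this evaluates to (1 + 0.45 + 0.81) / 2.71 ≈ 0.834; identical rankings score 1.0.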
def _ktau_union(orig_run, rep_run, trim_thresh=TRIM_THRESH, pbar=False):
"""
Helping function returning a generator to determine Kendall's tau Union (KTU) for all topics.
@param orig_run: The original run.
@param rep_run: The reproduced/replicated run.
@param trim_thresh: Threshold value for the number of documents to be compared.
@param pbar: Boolean value indicating if progress bar should be printed.
@return: Generator with KTU values.
"""
generator = tqdm(rep_run.items()) if pbar else rep_run.items()
for topic, docs in generator:
orig_docs = list(orig_run.get(topic).keys())[:trim_thresh]
rep_docs = list(rep_run.get(topic).keys())[:trim_thresh]
union = list(sorted(set(orig_docs + rep_docs)))
orig_idx = [union.index(doc) for doc in orig_docs]
rep_idx = [union.index(doc) for doc in rep_docs]
yield topic, round(kendalltau(orig_idx, rep_idx).correlation, 14)
def ktau_union(orig_run, rep_run, trim_thresh=TRIM_THRESH, pbar=False):
"""
Determines the Kendall's tau Union (KTU) between the original and reproduced document orderings
according to the following paper:
Timo Breuer, Nicola Ferro, Norbert Fuhr, Maria Maistro, Tetsuya Sakai, Philipp Schaer, Ian Soboroff.
How to Measure the Reproducibility of System-oriented IR Experiments.
Proceedings of SIGIR, pages 349-358, 2020.
@param orig_run: The original run.
@param rep_run: The reproduced/replicated run.
@param trim_thresh: Threshold value for the number of documents to be compared.
@param pbar: Boolean value indicating if progress bar should be printed.
@return: Dictionary with KTU values that compare the document orderings of the original and reproduced runs.
"""
# Safety check for runs that are not added via pytrec_eval
orig_run = break_ties(orig_run)
rep_run = break_ties(rep_run)
return dict(_ktau_union(orig_run, rep_run, trim_thresh=trim_thresh, pbar=pbar))
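The index construction behind KTU — projecting both rankings onto positions in their sorted union before correlating — can be shown without `scipy` (the document IDs are illustrative; the correlation step itself is omitted):

```python
orig_docs = ['d1', 'd2', 'd4']  # top-ranked documents of the original run
rep_docs = ['d1', 'd3', 'd2']   # top-ranked documents of the reproduced run

# Map both rankings onto indices in their sorted union of document IDs.
union = sorted(set(orig_docs + rep_docs))
orig_idx = [union.index(doc) for doc in orig_docs]
rep_idx = [union.index(doc) for doc in rep_docs]
```

Here `union` is `['d1', 'd2', 'd3', 'd4']`, so the two index lists are `[0, 1, 3]` and `[0, 2, 1]`; `kendalltau` is then applied to these per-topic index lists.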
def _RBO(orig_run, rep_run, phi, trim_thresh=TRIM_THRESH, pbar=False, misinfo=True):
"""
Helping function returning a generator to determine the Rank-Biased Overlap (RBO) for all topics.
@param orig_run: The original run.
@param rep_run: The reproduced/replicated run.
@param phi: Parameter for top-heaviness of the RBO.
@param trim_thresh: Threshold value for the number of documents to be compared.
@param pbar: Boolean value indicating if progress bar should be printed.
@param misinfo: Use the RBO implementation that is also used in the TREC Health Misinformation Track.
See also: https://github.com/claclark/Compatibility
@return: Generator with RBO values.
"""
generator = tqdm(rep_run.items()) if pbar else rep_run.items()
if misinfo:
for topic, docs in generator:
yield topic, _rbo(list(rep_run.get(topic).keys())[:trim_thresh],
list(orig_run.get(topic).keys())[:trim_thresh],
p=phi,
depth=trim_thresh)
else:
for topic, docs in generator:
yield topic, rbo(list(rep_run.get(topic).keys())[:trim_thresh],
list(orig_run.get(topic).keys())[:trim_thresh],
p=phi).ext
def RBO(orig_run, rep_run, phi=PHI, trim_thresh=TRIM_THRESH, pbar=False, misinfo=True):
"""
Determines the Rank-Biased Overlap (RBO) between the original and reproduced document orderings
according to the following paper:
Timo Breuer, Nicola Ferro, Norbert Fuhr, Maria Maistro, Tetsuya Sakai, Philipp Schaer, Ian Soboroff.
How to Measure the Reproducibility of System-oriented IR Experiments.
Proceedings of SIGIR, pages 349-358, 2020.
@param orig_run: The original run.
@param rep_run: The reproduced/replicated run.
@param phi: Parameter for top-heaviness of the RBO.
@param trim_thresh: Threshold values for the number of documents to be compared.
@param pbar: Boolean value indicating if progress bar should be printed.
@param misinfo: Use the RBO implementation that is also used in the TREC Health Misinformation Track.
See also: https://github.com/claclark/Compatibility
@return: Dictionary with RBO values that compare the document orderings of the original and reproduced runs.
"""
# Safety check for runs that are not added via pytrec_eval
orig_run = break_ties(orig_run)
rep_run = break_ties(rep_run)
return dict(_RBO(orig_run, rep_run, phi=phi, trim_thresh=trim_thresh, pbar=pbar, misinfo=misinfo))
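# A minimal, self-contained sketch of the truncated RBO sum: at each depth d,
# the overlap between the two prefixes of length d, discounted by phi**(d-1).
# This toy helper and its data are illustrative only and are NOT the
# implementation used above (which follows Clarke's misinfo code or the
# external `rbo` package).
def _toy_rbo(list_a, list_b, phi, depth):
    """Truncated (non-extrapolated) RBO of two ranked lists."""
    score = 0.0
    for d in range(1, depth + 1):
        # Fraction of the top-d sets that the two rankings share
        overlap = len(set(list_a[:d]) & set(list_b[:d]))
        score += phi ** (d - 1) * overlap / d
    return (1 - phi) * score

# Identical rankings give 1 - phi**depth; disjoint rankings give 0.0
assert abs(_toy_rbo(['d1', 'd2', 'd3'], ['d1', 'd2', 'd3'], 0.5, 3) - 0.875) < 1e-12
assert _toy_rbo(['a', 'b'], ['c', 'd'], 0.5, 2) == 0.0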
import numpy as np
from copy import deepcopy
from tqdm import tqdm
from repro_eval.config import exclude
def diff(topic_score_a, topic_score_b):
"""
Use this function to get a generator with absolute differences
between the topic scores of the baseline and advanced runs.
@param topic_score_a: Topic scores of the advanced run.
@param topic_score_b: Topic scores of the baseline run.
@return: Generator with absolute differences between the topic scores.
"""
for measure, value in topic_score_a.items():
if measure not in exclude:
yield measure, value - topic_score_b.get(measure)
def topic_diff(run_a, run_b):
"""
Use this function to get a generator with absolute differences
between the topic scores of the baseline and advanced runs for each measure.
@param run_a: The advanced run.
@param run_b: The baseline run.
@return: Generator with absolute differences between the topic scores for each measure.
"""
run_a_cp = deepcopy(run_a)
run_b_cp = deepcopy(run_b)
for topic, measures in run_a_cp.items():
yield topic, dict(diff(measures, run_b_cp.get(topic)))
def _mean_improvement(run_a, run_b):
"""
Helper function returning a generator for determining the mean improvements.
@param run_a: The advanced run.
@param run_b: The baseline run.
@return: Generator with mean improvements.
"""
measures_all = list(list(run_a.values())[0].keys())
measures_valid = [m for m in measures_all if m not in exclude]
topics = run_a.keys()
delta = dict(topic_diff(run_a, run_b))
for measure in measures_valid:
yield measure, np.array([delta.get(topic).get(measure) for topic in topics]).mean()
def mean_improvement(run_a, run_b):
"""
Determines the mean per-topic improvement that is used to derive the Effect Ratio (ER).
@param run_a: The advanced run.
@param run_b: The baseline run.
@return: Dictionary with mean improvements for each measure.
"""
return dict(_mean_improvement(run_a, run_b))
def _er(orig_score_a, orig_score_b, rep_score_a, rep_score_b, pbar=False):
"""
Helper function returning a generator for determining the Effect Ratio (ER).
@param orig_score_a: Scores of the original advanced run.
@param orig_score_b: Scores of the original baseline run.
@param rep_score_a: Scores of the reproduced/replicated advanced run.
@param rep_score_b: Scores of the reproduced/replicated baseline run.
@param pbar: Boolean value indicating if progress bar should be printed.
@return: Generator with ER scores.
"""
mi_orig = mean_improvement(orig_score_a, orig_score_b)
mi_rep = mean_improvement(rep_score_a, rep_score_b)
generator = tqdm(mi_rep.items()) if pbar else mi_rep.items()
for measure, value in generator:
yield measure, value / mi_orig.get(measure)
def ER(orig_score_a, orig_score_b, rep_score_a, rep_score_b, pbar=False):
"""
Determines the Effect Ratio (ER) according to the following paper:
Timo Breuer, Nicola Ferro, Norbert Fuhr, Maria Maistro, Tetsuya Sakai, Philipp Schaer, Ian Soboroff.
How to Measure the Reproducibility of System-oriented IR Experiments.
Proceedings of SIGIR, pages 349-358, 2020.
The ER value is determined by the ratio between the mean improvements
of the original and reproduced/replicated experiments.
@param orig_score_a: Scores of the original advanced run.
@param orig_score_b: Scores of the original baseline run.
@param rep_score_a: Scores of the reproduced/replicated advanced run.
@param rep_score_b: Scores of the reproduced/replicated baseline run.
@param pbar: Boolean value indicating if progress bar should be printed.
@return: Dictionary containing the ER values for the specified run combination.
"""
return dict(_er(orig_score_a, orig_score_b, rep_score_a, rep_score_b, pbar=pbar))
def _mean_score(scores):
"""
Helper function to determine the mean scores across the topics for each measure.
@param scores: Run scores.
@return: Generator with mean scores.
"""
measures_all = list(list(scores.values())[0].keys())
measures_valid = [m for m in measures_all if m not in exclude]
topics = scores.keys()
for measure in measures_valid:
yield measure, np.array([scores.get(topic).get(measure) for topic in topics]).mean()
def mean_score(scores):
"""
Use this function to compute the mean scores across the topics for each measure.
@param scores: Run scores.
@return: Dictionary containing the mean scores for each measure.
"""
return dict(_mean_score(scores))
def _rel_improve(scores_a, scores_b):
"""
Helper function returning a generator for determining the relative improvements.
@param scores_a: Scores of the advanced run.
@param scores_b: Scores of the baseline run.
@return: Generator with relative improvements.
"""
mean_scores_a = mean_score(scores_a)
mean_scores_b = mean_score(scores_b)
for measure, mean in mean_scores_a.items():
yield measure, (mean - mean_scores_b.get(measure)) / mean_scores_b.get(measure)
def rel_improve(scores_a, scores_b):
"""
Determines the relative improvement that is used to derive the Delta Relative Improvement (DeltaRI).
@param scores_a: Scores of the advanced run.
@param scores_b: Scores of the baseline run.
@return: Dictionary with relative improvements for each measure.
"""
return dict(_rel_improve(scores_a, scores_b))
def _deltaRI(orig_score_a, orig_score_b, rep_score_a, rep_score_b, pbar=False):
"""
Helper function returning a generator for determining the Delta Relative Improvement (DeltaRI).
@param orig_score_a: Scores of the original advanced run.
@param orig_score_b: Scores of the original baseline run.
@param rep_score_a: Scores of the reproduced/replicated advanced run.
@param rep_score_b: Scores of the reproduced/replicated baseline run.
@param pbar: Boolean value indicating if progress bar should be printed.
@return: Generator with DeltaRI scores.
"""
rel_improve_orig = rel_improve(orig_score_a, orig_score_b)
rel_improve_rep = rel_improve(rep_score_a, rep_score_b)
generator = tqdm(rel_improve_orig.items()) if pbar else rel_improve_orig.items()
for measure, ri in generator:
yield measure, ri - rel_improve_rep.get(measure)
def deltaRI(orig_score_a, orig_score_b, rep_score_a, rep_score_b, pbar=False):
"""
Determines the Delta Relative Improvement (DeltaRI) according to the following paper:
Timo Breuer, Nicola Ferro, Norbert Fuhr, Maria Maistro, Tetsuya Sakai, Philipp Schaer, Ian Soboroff.
How to Measure the Reproducibility of System-oriented IR Experiments.
Proceedings of SIGIR, pages 349-358, 2020.
The DeltaRI value is determined by the difference between the relative improvements
of the original and reproduced/replicated experiments.
@param orig_score_a: Scores of the original advanced run.
@param orig_score_b: Scores of the original baseline run.
@param rep_score_a: Scores of the reproduced/replicated advanced run.
@param rep_score_b: Scores of the reproduced/replicated baseline run.
@param pbar: Boolean value indicating if progress bar should be printed.
@return: Dictionary containing the DeltaRI values for the specified run combination.
"""
return dict(_deltaRI(orig_score_a, orig_score_b, rep_score_a, rep_score_b, pbar=pbar))
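# Self-contained toy illustration (hypothetical per-topic scores, not from
# any real run) of the quantity defined above: ER is the ratio of the mean
# per-topic improvement (advanced minus baseline) in the reproduced
# experiment to that of the original experiment; ER == 1 means the effect
# was fully reproduced.
def _toy_effect_ratio(orig_a, orig_b, rep_a, rep_b):
    mean_imp_orig = sum(a - b for a, b in zip(orig_a, orig_b)) / len(orig_a)
    mean_imp_rep = sum(a - b for a, b in zip(rep_a, rep_b)) / len(rep_a)
    return mean_imp_rep / mean_imp_orig

# Original improvement of 0.5 per topic, reproduced improvement of 0.25 -> ER = 0.5
assert _toy_effect_ratio([1.0, 1.0], [0.5, 0.5], [0.75, 0.75], [0.5, 0.5]) == 0.5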
import numpy as np
from math import sqrt
from copy import deepcopy
from tqdm import tqdm
from repro_eval.config import exclude
def _rmse(orig_score, rep_score, pbar=False):
"""
Helper function returning a generator to determine the Root Mean Square Error (RMSE) for all topics.
@param orig_score: The original scores.
@param rep_score: The reproduced/replicated scores.
@param pbar: Boolean value indicating if progress bar should be printed.
@return: Generator with RMSE values.
"""
orig_cp = deepcopy(orig_score)
rep_cp = deepcopy(rep_score)
measures_all = list(list(orig_cp.values())[0].keys())
topics = orig_cp.keys()
measures_valid = [m for m in measures_all if m not in exclude]
measures = tqdm(measures_valid) if pbar else measures_valid
for measure in measures:
orig_measure = np.array([orig_cp.get(topic).get(measure) for topic in topics])
rpl_measure = np.array([rep_cp.get(topic).get(measure) for topic in topics])
diff = orig_measure - rpl_measure
yield measure, sqrt(sum(np.square(diff))/len(diff))
def rmse(orig_score, rep_score, pbar=False):
"""
Determines the Root Mean Square Error (RMSE) between the original and reproduced topic scores
according to the following paper:
Timo Breuer, Nicola Ferro, Norbert Fuhr, Maria Maistro, Tetsuya Sakai, Philipp Schaer, Ian Soboroff.
How to Measure the Reproducibility of System-oriented IR Experiments.
Proceedings of SIGIR, pages 349-358, 2020.
@param orig_score: The original scores.
@param rep_score: The reproduced/replicated scores.
@param pbar: Boolean value indicating if progress bar should be printed.
@return: Dictionary with RMSE values that measure the closeness between the original and reproduced topic scores.
"""
return dict(_rmse(orig_score, rep_score, pbar=pbar))
def _maxrmse(orig_score, pbar=False):
"""
Helper function returning a generator to determine the maximum Root Mean Square Error (RMSE) for all topics.
@param orig_score: The original scores.
@param pbar: Boolean value indicating if progress bar should be printed.
@return: Generator with RMSE values.
"""
orig_cp = deepcopy(orig_score)
measures_all = list(list(orig_cp.values())[0].keys())
topics = orig_cp.keys()
measures_valid = [m for m in measures_all if m not in exclude]
measures = tqdm(measures_valid) if pbar else measures_valid
for measure in measures:
orig_measure = np.array([orig_cp.get(topic).get(measure) for topic in topics])
_max = np.vectorize(lambda x: max(x, 1 - x))
maxdiff = _max(orig_measure)
yield measure, sqrt(sum(np.square(maxdiff))/len(maxdiff))
def nrmse(orig_score, rep_score, pbar=False):
"""
Determines the normalized Root Mean Square Error (RMSE) between the original and reproduced topic scores.
@param orig_score: The original scores.
@param rep_score: The reproduced/replicated scores.
@param pbar: Boolean value indicating if progress bar should be printed.
@return: Dictionary with RMSE values that measure the closeness between the original and reproduced topic scores.
"""
rmse = dict(_rmse(orig_score, rep_score, pbar=pbar))
maxrmse = dict(_maxrmse(orig_score, pbar=pbar))
return {measure: score / maxrmse.get(measure) for measure, score in rmse.items()}
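# Self-contained toy check (synthetic scores) of the RMSE defined above:
# the root of the mean squared per-topic difference between original and
# reproduced scores; 0 means the topic scores were reproduced exactly.
from math import sqrt

def _toy_rmse(orig, rep):
    diffs = [o - r for o, r in zip(orig, rep)]
    return sqrt(sum(d * d for d in diffs) / len(diffs))

assert _toy_rmse([0.5, 0.5], [0.5, 0.5]) == 0.0
assert _toy_rmse([1.0, 0.0], [0.0, 1.0]) == 1.0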
# repro-zipfile
[](https://pypi.org/project/repro-zipfile/)
[](https://pypi.org/project/repro-zipfile/)
[](https://github.com/drivendataorg/repro-zipfile/actions/workflows/tests.yml?query=branch%3Amain)
[](https://codecov.io/gh/drivendataorg/repro-zipfile)
**A tiny, zero-dependency replacement for Python's `zipfile.ZipFile` for creating reproducible/deterministic ZIP archives.**
"Reproducible" or "deterministic" in this context means that the binary content of the ZIP archive is identical if you add files with identical binary content in the same order. This Python package provides a `ReproducibleZipFile` class that works exactly like [`zipfile.ZipFile`](https://docs.python.org/3/library/zipfile.html#zipfile-objects) from the Python standard library, except that all files written to the archive have their last-modified timestamps set to a fixed value.
## Installation
repro-zipfile is available from PyPI. To install, run:
```bash
pip install repro-zipfile
```
## Usage
Simply import `ReproducibleZipFile` and use it in the same way you would use [`zipfile.ZipFile`](https://docs.python.org/3/library/zipfile.html#zipfile-objects) from the Python standard library.
```python
from repro_zipfile import ReproducibleZipFile
with ReproducibleZipFile("archive.zip", "w") as zp:
# Use write to add a file to the archive
zp.write("examples/data.txt", arcname="data.txt")
# Or writestr to write data to the archive
zp.writestr("lore.txt", data="goodbye")
```
Note that files must be written to the archive in the same order to reproduce an identical archive. Be aware that functions like `os.listdir`, `glob.glob`, `Path.iterdir`, and `Path.glob` return files in a nondeterministic order, so call `sorted` on their returned values first.
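For instance, a directory can be archived deterministically by sorting the paths before writing them. The sketch below uses the standard library's `zipfile` so it is self-contained; substituting `ReproducibleZipFile` works the same way:

```python
import tempfile
import zipfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    # Create files in an arbitrary order
    (Path(tmp) / "b.txt").write_text("beta")
    (Path(tmp) / "a.txt").write_text("alpha")

    archive = Path(tmp) / "archive.zip"
    with zipfile.ZipFile(archive, "w") as zp:
        # sorted() makes the write order deterministic regardless of
        # how the filesystem happens to list the directory
        for path in sorted(Path(tmp).glob("*.txt")):
            zp.write(path, arcname=path.name)

    with zipfile.ZipFile(archive) as zp:
        names = zp.namelist()  # ["a.txt", "b.txt"]
```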
See [`examples/usage.py`](./examples/usage.py) for an example script that you can run, and [`examples/demo_vs_zipfile.py`](./examples/demo_vs_zipfile.py) for a demonstration in contrast with the standard library's zipfile module.
### Set timestamp value with SOURCE_DATE_EPOCH
repro_zipfile supports setting the fixed timestamp value using the `SOURCE_DATE_EPOCH` environment variable. This should be an integer corresponding to the [Unix epoch time](https://en.wikipedia.org/wiki/Unix_time) of the timestamp you want to set. `SOURCE_DATE_EPOCH` is a [standard](https://reproducible-builds.org/docs/source-date-epoch/) created by the [Reproducible Builds project](https://reproducible-builds.org/).
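For example, to pin archive timestamps to 2021-01-01 00:00 UTC (an arbitrary illustrative value), set the variable before writing the archive:

```python
import datetime
import os

# 2021-01-01 00:00:00 UTC as a Unix timestamp
epoch = 1609459200
os.environ["SOURCE_DATE_EPOCH"] = str(epoch)

# Archives written by ReproducibleZipFile in this process should now
# carry this fixed timestamp instead of the 1980-01-01 default
stamp = datetime.datetime.fromtimestamp(epoch, tz=datetime.timezone.utc)
assert stamp.timetuple()[:6] == (2021, 1, 1, 0, 0, 0)
```

From a shell, the equivalent is `SOURCE_DATE_EPOCH=1609459200 python your_script.py`.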
## How does repro-zipfile work?
The primary reason that ZIP archives aren't automatically reproducible is because they include last-modified timestamps of files. This means that files with identical content but with different last-modified times cause the resulting ZIP archive to be different. `repro_zipfile.ReproducibleZipFile` is a subclass of `zipfile.ZipFile` that overrides the `write` and `writestr` methods to set the modified timestamp of all files written to the archive to a fixed value. By default, this value is 1980-01-01 0:00 UTC, which is the earliest timestamp that is supported by the ZIP format. You can customize this value as documented in the previous section. Note that repro-zipfile does not modify the original files—only the metadata written to the archive.
You can effectively reproduce what `ReproducibleZipFile` does with something like this:
```python
from zipfile import ZipFile
with ZipFile("archive.zip", "w") as zp:
# Use write to add a file to the archive
zp.write("examples/data.txt", arcname="data.txt")
zinfo = zp.getinfo("data.txt")
zinfo.date_time = (1980, 1, 1, 0, 0, 0)
# Or writestr to write data to the archive
zp.writestr("lore.txt", data="goodbye")
zinfo = zp.getinfo("lore.txt")
zinfo.date_time = (1980, 1, 1, 0, 0, 0)
```
It's not hard to do, but we believe `ReproducibleZipFile` is sufficiently more convenient to justify a small package!
## Why care about reproducible ZIP archives?
ZIP archives are often useful when dealing with a set of multiple files, especially if the files are large and can be compressed. Creating reproducible ZIP archives is often useful for:
- **Building a software package.** This is a development best practice to make it easier to verify distributed software packages. See the [Reproducible Builds project](https://reproducible-builds.org/) for more explanation.
- **Working with data.** Verify that your data pipeline produced the same outputs, and avoid further reprocessing of identical data.
- **Packaging machine learning model artifacts.** Manage model artifact packages more effectively by knowing when they contain identical models.
## Related Tools and Alternatives
- https://diffoscope.org/
- Can do a rich comparison of archive files and show what specifically differs
- https://github.com/timo-reymann/deterministic-zip
- Command-line program written in Go that matches zip's interface but strips nondeterministic metadata when adding files
- https://github.com/bboe/deterministic_zip
- Command-line program written in Python that creates deterministic ZIP archives
- https://salsa.debian.org/reproducible-builds/strip-nondeterminism
- Perl library for removing nondeterministic metadata from file archives
- https://github.com/Code0x58/python-stripzip
- Python command-line program that removes file metadata from existing ZIP archives
# Repro

Repro is a library for reproducing results from research papers.
For now, it is focused on making predictions with pre-trained models as easy as possible.
Running pre-trained models can currently be difficult.
Some models require specific versions of dependencies, require complicated preprocessing steps, have their own input and output formats, are poorly documented, etc.
Repro addresses these problems by packaging each of the pre-trained models in its own Docker container, which includes the pre-trained models themselves as well as all of the code, dependencies, and environment setup required to run them.
Then, Repro provides lightweight Python code to read the input data, pass the data to a Docker container, run prediction in the container, and return the output to the user.
Since the complicated model-specific code is isolated within Docker, the user does not need to worry about setting up the environment correctly or know how the model is implemented at all.
**As long as you have a working Docker installation, you can run every model included in Repro with no additional effort.**
It should "just work" (at least that is the goal).
## Installation Instructions
First, you need to have a working Docker installation.
See [here](tutorials/docker.md) for installation instructions as well as scripts to verify your setup is working.
Then, we recommend creating a conda environment specific to repro before installing the library:
```shell script
conda create -n repro python=3.6
conda activate repro
```
For users:
```shell script
pip install repro
```
For developers:
```shell script
git clone https://github.com/danieldeutsch/repro
cd repro
pip install --editable .
pip install -r dev-requirements.txt
```
## Example Usage
Here is an example of how Repro can be used, highlighting how simple it is to run a complex model pipeline.
We will demonstrate how to generate summaries of a document with three different models
- BertSumExtAbs from [Liu & Lapata (2019)](https://arxiv.org/abs/1908.08345) ([docs](models/liu2019/Readme.md))
- BART from [Lewis et al. (2020)](https://arxiv.org/abs/1910.13461) ([docs](models/lewis2020/Readme.md))
- GSum from [Dou et al. (2021)](https://arxiv.org/abs/2010.08014) ([docs](models/dou2021/Readme.md))
and then evaluate those summaries with three different text generation evaluation metrics
- ROUGE from [Lin (2004)](https://aclanthology.org/W04-1013/) ([docs](models/lin2004/Readme.md))
- BLEURT from [Sellam et al. (2020)](https://arxiv.org/abs/2004.04696) ([docs](models/sellam2020/Readme.md))
- QAEval from [Deutsch et al. (2021)](https://arxiv.org/abs/2010.00490) ([docs](models/deutsch2021/Readme.md))
Once you have Docker and Repro installed, all you have to do is instantiate the classes and run `predict`:
```python
from repro.models.liu2019 import BertSumExtAbs
from repro.models.lewis2020 import BART
from repro.models.dou2021 import SentenceGSumModel
# Each of these classes uses the pre-trained weights that we want to use
# by default, but you can specify others if you want to
liu2019 = BertSumExtAbs()
lewis2020 = BART()
dou2021 = SentenceGSumModel()
# Here's the document we want to summarize (it's not very long,
# but you get the point)
document = (
"Joseph Robinette Biden Jr. was elected the 46th president of the United States "
"on Saturday, promising to restore political normalcy and a spirit of national "
"unity to confront raging health and economic crises, and making Donald J. Trump "
"a one-term president after four years of tumult in the White House."
)
# Now, run `predict` to generate the summaries from the models
summary1 = liu2019.predict(document)
summary2 = lewis2020.predict(document)
summary3 = dou2021.predict(document)
# Import the evaluation metrics. We call them "models" even though
# they are metrics
from repro.models.lin2004 import ROUGE
from repro.models.sellam2020 import BLEURT
from repro.models.deutsch2021 import QAEval
# Like the summarization models, each of these classes take parameters,
# but we just use the defaults
rouge = ROUGE()
bleurt = BLEURT()
qaeval = QAEval()
# Here is the reference summary we will use
reference = (
"Joe Biden was elected president of the United States after defeating Donald Trump."
)
# Then evaluate the summaries
for summary in [summary1, summary2, summary3]:
metrics1 = rouge.predict(summary, [reference])
metrics2 = bleurt.predict(summary, [reference])
metrics3 = qaeval.predict(summary, [reference])
```
Behind the scenes, Repro is running each model and metric in its own Docker container.
`BertSumExtAbs` is tokenizing and sentence splitting the input document with Stanford CoreNLP, then running BERT with `torch==1.1.0` and `transformers==1.2.0`.
`BLEURT` is running `tensorflow==2.2.2` to score the summary with a learned metric.
`QAEval` is chaining together pretrained question generation and question answering models with `torch==1.6.0` to evaluate the model outputs.
**But you don't need to know about any of that to run the models!**
All of the complex logic and environment details are taken care of by the Docker container, so all you have to do is call `predict()`.
It's that simple!
Abstracting the implementation details away in a Docker image is really useful for chaining together a complex NLP pipeline.
In this example, we summarize a document, ask a question about the summary, and then evaluate how likely it is that the QA prediction and the expected answer mean the same thing.
The models used are:
- BART from [Lewis et al. (2020)](https://arxiv.org/abs/1910.13461) ([docs](models/lewis2020/Readme.md))
- A neural module network QA model from [Gupta et al. (2020)](https://arxiv.org/abs/1912.04971) ([docs](models/gupta2020/Readme.md))
- LERC from [Chen et al. (2020)](https://arxiv.org/abs/2010.03636) ([docs](models/chen2020/Readme.md))
```python
from repro.models.chen2020 import LERC
from repro.models.gupta2020 import NeuralModuleNetwork
from repro.models.lewis2020 import BART
document = (
"Roger Federer is a Swiss professional tennis player. He is ranked "
"No. 9 in the world by the Association of Tennis Professionals (ATP). "
"He has won 20 Grand Slam men's singles titles, an all-time record "
"shared with Rafael Nadal and Novak Djokovic. Federer has been world "
"No. 1 in the ATP rankings a total of 310 weeks – including a record "
"237 consecutive weeks – and has finished as the year-end No. 1 five times."
)
# First, summarize the document
bart = BART()
summary = bart.predict(document)
# Now, ask a question using the summary
question = "How many grand slam titles has Roger Federer won?"
answer = "twenty"
nmn = NeuralModuleNetwork()
prediction = nmn.predict(summary, question)
# Check to see if the expected answer ("twenty") and prediction ("20") mean the
# same thing in the summary
lerc = LERC()
score = lerc.predict(summary, question, answer, prediction)
```
More details on how to use the models implemented in Repro can be found [here](tutorials/using-models.md).
## Models Implemented in Repro
See the [`models`](models) directory or [this file](Papers.md) to see the list of papers with models currently supported by Repro.
Each model contains information in its Readme about how to use it as well as whether or not it currently reproduces the results reported in its respective paper or if it hasn't been tested yet.
If it has been tested, the code to reproduce the results is also included.
## Contributing a Model
See the tutorial [here](tutorials/adding-a-model.md) for instructions on how to add a new model.
import numpy as np
from numpy import ma
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.figure import Figure
from matplotlib.backends.backend_agg import FigureCanvasAgg
from skimage.feature import canny
from skimage.transform import probabilistic_hough_line
from skimage import morphology as morph
import scipy.ndimage as nd
from shapely.geometry import LineString
from . import utils
def compute_earthshine(cube, dq, time, sl_left=slice(40,240), sl_right=slice(760,960)):
"""
Check for scattered earthshine light as the difference between median
values of the left and right column-averages of an image.
`cube`, `dq` and `time` come from `split_multiaccum`, run on an IMA file.
"""
diff = ma.masked_array(np.diff(cube, axis=0), mask=(dq[1:,:,:] > 0))
column_average = ma.median(diff, axis=1)
left = ma.median(column_average[:, sl_left], axis=1)/np.diff(time)
right = ma.median(column_average[:, sl_right], axis=1)/np.diff(time)
return left-right
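# Toy illustration (synthetic data; not used by the pipeline) of the
# statistic computed above: scattered earthshine appears as a left/right
# asymmetry in the column medians of the count-rate difference image.
# The helper and its 4x6 array are hypothetical examples.
def _earthshine_toy_example():
    from statistics import median
    # 4x6 "difference image" whose left half is brighter by 2 counts/s
    toy = [[2.0, 2.0, 2.0, 0.0, 0.0, 0.0] for _ in range(4)]
    col_med = [median(row[c] for row in toy) for c in range(6)]
    left = median(col_med[0:3])
    right = median(col_med[3:6])
    return left - right  # nonzero -> asymmetric illumination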
def test():
import glob
import os
import astropy.io.fits as pyfits
import wfc3tools
from . import utils, reprocess_wfc3
from .reprocess_wfc3 import split_multiaccum
from . import anomalies
files = sorted(glob.glob('*ima.fits'))
for file in files:
if not os.path.exists(file.replace('_raw','_ima')):
try:
os.remove(file.replace('_raw','_flt'))
os.remove(file.replace('_raw','_ima'))
except:
pass
ima = pyfits.open(file, mode='update')
ima[0].header['CRCORR'] = 'PERFORM'
ima.flush()
wfc3tools.calwf3(file, log_func=reprocess_wfc3.silent_log)
ima = pyfits.open(file.replace('_raw','_ima'))
cube, dq, time, NS = split_multiaccum(ima, scale_flat=False)
is_grism = ima[0].header['FILTER'] in ['G102','G141']
if is_grism:
params = [LINE_PARAM_GRISM_LONG, LINE_PARAM_GRISM_SHORT]
else:
params = [LINE_PARAM_IMAGING_LONG, LINE_PARAM_IMAGING_SHORT]
out = trails_in_cube(cube, dq, time,
line_params=params[0],
subtract_column=is_grism)
image, edges, lines = out
if len(lines) == 0:
out = trails_in_cube(cube, dq, time,
line_params=params[1],
subtract_column=is_grism)
image, edges, lines = out
root = ima.filename().split('_')[0]
print(root, len(lines))
if len(lines) > 0:
fig = sat_trail_figure(image, edges, lines, label=root)
#fig.savefig('{0}_trails.png'.format(root))
canvas = FigureCanvasAgg(fig)
canvas.print_figure(root+'_trails.png', dpi=200)
reg = anomalies.segments_to_mask(lines, params[0]['NK'],
image.shape[1],
buf=params[0]['NK']*(1+is_grism))
fpr = open('{0}_trails.reg'.format(root),'w')
fpr.writelines(reg)
fpr.close()
# Imaging
LINE_PARAM_IMAGING_LONG = {'sn_thresh': 4, 'line_length': 700, 'line_thresh': 2, 'med_size': [12,1], 'use_canny': True, 'lo': 5, 'hi': 10, 'NK': 3, 'line_gap': 7}
LINE_PARAM_IMAGING_SHORT = {'sn_thresh': 4, 'line_length': 100, 'line_thresh': 2, 'med_size': [12,1], 'use_canny': True, 'lo': 9, 'hi': 14, 'NK': 3, 'line_gap': 1}
# # Grism
# LINE_PARAM_GRISM_SHORT = {'line_length': 400, 'line_thresh': 2, 'NK': 30, 'med_size': 5, 'use_canny': False, 'hi': 7, 'lo': 3, 'line_gap': 1, 'sn_thresh':4}
#
# LINE_PARAM_GRISM_LONG = {'line_length': 600, 'line_thresh': 2, 'NK': 30, 'med_size': 5, 'use_canny': False, 'hi': 7, 'lo': 3, 'line_gap': 1, 'sn_thresh':2}
LINE_PARAM_GRISM_LONG = {'sn_thresh': 2, 'line_length': 600, 'line_thresh': 2, 'use_canny': True, 'med_size': [12,1], 'lo': 3, 'hi': 7, 'NK': 30, 'line_gap': 3}
LINE_PARAM_GRISM_SHORT = {'sn_thresh': 2, 'line_length': 250, 'line_thresh': 2, 'use_canny': True, 'med_size': [12,1], 'lo': 10, 'hi': 15, 'NK': 30, 'line_gap': 1}
def auto_flag_trails(cube, dq, time, is_grism=False, root='satellite', earthshine_mask=False, pop_reads=[]):
"""
Automatically flag satellite trails
"""
#is_grism = ima[0].header['FILTER'] in ['G102','G141']
print('reprocess_wfc3.anomalies: {0}, is_grism={1}'.format(root, is_grism))
if is_grism:
params = [LINE_PARAM_GRISM_LONG, LINE_PARAM_GRISM_SHORT]
else:
params = [LINE_PARAM_IMAGING_LONG, LINE_PARAM_IMAGING_SHORT]
print('reprocess_wfc3.anomalies: {0}, Long trail params\n {1}'.format(root, params[0]))
out = trails_in_cube(cube, dq, time,
line_params=params[0],
subtract_column=is_grism,
earthshine_mask=earthshine_mask,
pop_reads=pop_reads)
image, edges, lines = out
is_short=False
if len(lines) == 0:
is_short=True
print('reprocess_wfc3.anomalies: {0}, Short trail params\n {1}'.format(root, params[1]))
#print('Try trail params: {0}'.format(params[1]))
out = trails_in_cube(cube, dq, time,
line_params=params[1],
subtract_column=is_grism,
earthshine_mask=earthshine_mask,
pop_reads=pop_reads)
image, edges, lines = out
#root = ima.filename().split('_')[0]
print('reprocess_wfc3: {0} has {1} satellite trails'.format(root, len(lines)))
fig = sat_trail_figure(image, edges, lines, label=root)
canvas = FigureCanvasAgg(fig)
canvas.print_figure(root+'_trails.png', dpi=200)
# fig.savefig('{0}_trails.png'.format(root))
# plt.close(fig)
if len(lines) > 0:
reg = segments_to_mask(lines, params[0]['NK'], image.shape[1],
buf=params[0]['NK']*4)
fpr = open('{0}.01.mask.reg'.format(root),'a')
fpr.writelines(reg)
fpr.close()
def trails_in_cube(cube, dq, time, line_params=LINE_PARAM_IMAGING_LONG, subtract_column=True, earthshine_mask=False, pop_reads=[]):
"""
Find satellite trails in MultiAccum sequence
Parameters
----------
`cube`, `dq` and `time` come from `split_multiaccum`, run on an IMA file.
line_params : dict
Line-finding parameters, see `~reprocess_wfc3.anomalies.LINE_PARAM_IMAGING`.
subtract_column : bool
Subtract column average from the smoothed image.
Returns
-------
image : `~numpy.ndarray`
The image from which the linear features were detected.
edges : `~numpy.ndarray`
The `skimage.feature.canny` edge image.
lines : (N,2,2)
`N` linear segments found on the edge image with
`skimage.transform.probabilistic_hough_line`.
"""
from .reprocess_wfc3 import split_multiaccum
utils.set_warnings()
## Line parameters
lp = LINE_PARAM_IMAGING_LONG
for k in line_params:
lp[k] = line_params[k]
NK = lp['NK']
lo = lp['lo']
hi = lp['hi']
line_thresh = lp['line_thresh']
line_length = lp['line_length']
line_gap = lp['line_gap']
med_size = lp['med_size']
use_canny = lp['use_canny']
sn_thresh = lp['sn_thresh']
# Parse FITS file if necessary
if hasattr(cube, 'filename'):
cube, dq, time, NS = split_multiaccum(cube, scale_flat=False)
# DQ masking in the cube
cdq = (dq - (dq & 8192)) == 0
cube[~cdq] = 0
# Difference images
dt = np.diff(time)[1:]
arr = (np.diff(cube, axis=0)[1:].T/dt).T
dq0 = (dq[-1,:,:] - (dq[-1,:,:] & 8192)) == 0
# Max diff image minus median
#diff = arr.max(axis=0) - np.median(arr, axis=0)
arr_so = np.sort(arr, axis=0)
# This will except out if too few reads
diff = arr_so[-1,:,:] - arr_so[-3,:,:]
# Global median
med = np.median(diff[5:-5,5:-5])
# Median filter
medfilt = nd.median_filter(diff*dq0-med, size=med_size)
# wagon wheel
medfilt[:300,900:] = 0
medfilt[380:600,810:950] = 0
medfilt[990:,] = 0
medfilt[:16,:] = 0
medfilt[:,:16] = 0
medfilt[:,-16:] = 0
medfilt = (medfilt*dq0)[5:-5,5:-5]
# Smoothed image
kern = np.ones((NK, NK))/NK**2
mk = nd.convolve(medfilt, kern)[NK//2::NK,NK//2::NK][1:-1,1:-1]
mask = mk != 0
# Apply earthshine mask
if earthshine_mask:
yp, xp = np.indices((1014,1014))
x0, y0 = [578, 398], [0,1014]
cline = np.polyfit(x0, y0, 1)
yi = np.polyval(cline, xp)
emask = yp > yi
#medfilt *= emask
mask &= emask[NK//2::NK,NK//2::NK][1:-1,1:-1]
else:
# Mask center to only get trails that extend from one edge to another
if line_length < 240:
#print('xxx mask center', line_length, mask.sum())
sl = slice(2*line_length//NK,-2*line_length//NK)
mask[sl,sl] = False
#print('yyy mask center', line_length, mask.sum())
# Column average
if subtract_column:
col = np.median(mk, axis=0)
mk -= col
image = mk
#image *= mask
# Image ~ standard deviation
nmad = utils.nmad(image[mask])
if use_canny:
edges = canny(image, sigma=1, low_threshold=lo*nmad, high_threshold=hi*nmad, mask=mask)
else:
edges = image-np.median(image)*0 > sn_thresh*nmad
small_edge=np.maximum(50//NK, 1)
morph.remove_small_objects(edges, min_size=small_edge*5,
connectivity=small_edge*5,
in_place=True)
# edges[0,:] = False
# edges[-1,:] = False
# edges[:,0] = False
# edges[:,-1] = False
lines = probabilistic_hough_line(edges, threshold=line_thresh,
line_length=line_length//NK,
line_gap=line_gap)
return image, edges, np.array(lines)
def segments_to_mask(lines, NK, NX, buf=None):
"""
"""
xl = np.array([-int(0.1*NX), int(1.1*NX)])
if buf is None:
buf = NK
line_shape = None
for l in lines:
c = np.polyfit(l[:,0], l[:,1], 1)
xx = (xl+1.5)*NK
yy = (np.polyval(c, xl)+1.5)*NK
line_i = LineString(coordinates=np.array([xx, yy]).T)
line_buf = line_i.buffer(buf, resolution=2)
if line_shape is None:
line_shape = line_buf
else:
line_shape = line_shape.union(line_buf)
return utils.shapely_polygon_to_region(line_shape)
def sat_trail_figure(image, edges, lines, label='Exposure'):
"""
Make a figure showing the detected features
TBD
Parameters
----------
image : `~numpy.ndarray`
The image from which the linear features were detected.
edges : `~numpy.ndarray`
The `skimage.feature.canny` edge image.
lines : (N,2,2)
`N` linear segments found on the edge image with
`skimage.transform.probabilistic_hough_line`.
label : str
Label drawn in the corner of the image panel.
Returns
-------
fig : `~matplotlib.figure.Figure`
Figure object.
"""
utils.set_warnings()
nmad = utils.nmad(image[image != 0])
# Generating figure 2
fig, axes = plt.subplots(1, 2, figsize=(4, 2), sharex=True, sharey=True)
ax = axes.ravel()
# fig = Figure(figsize=(4,2))
# ax = [fig.add_subplot(121+i) for i in range(2)]
ax[0].imshow(image, cmap=cm.gray, origin='lower',
vmin=-1*nmad, vmax=5*nmad)
ax[0].text(0.1, 0.95, label, ha='left', va='top',
transform=ax[0].transAxes, color='w', size=6)
ax[1].imshow(edges, cmap=cm.gray, origin='lower')
ax[1].text(0.1, 0.95, 'Canny edges', ha='left', va='top',
transform=ax[1].transAxes, color='w', size=6)
# ax[2].imshow(edges * 0, origin='lower')
for line in lines:
p0, p1 = line
ax[1].plot((p0[0], p1[0]), (p0[1], p1[1]), alpha=0.6)
ax[0].plot((p0[0], p1[0]), (p0[1], p1[1]), alpha=0.6)
# ax[2].set_xlim((0, image.shape[1]))
# ax[2].set_ylim((0, image.shape[0]))
# ax[2].text(0.1, 0.95, 'Hough lines', ha='left', va='top',
# transform=ax[2].transAxes, color='w', size=6)
for a in ax:
a.set_xticklabels([])
a.set_yticklabels([])
#a.set_axis_off()
fig.tight_layout(pad=0.1)
return fig
import os
import sys
import numpy as np
import warnings
def which_calwf3(executable='calwf3.e'):
"""
Return result of `which calwf3.e`
"""
import subprocess
env_bin = os.path.join(sys.exec_prefix, 'bin')
if (env_bin not in os.getenv('PATH')) & os.path.exists(env_bin):
_path = ':'.join([env_bin, os.getenv('PATH')])
else:
_path = os.getenv('PATH')
proc = subprocess.Popen(
['which', executable],
stderr=subprocess.STDOUT,
stdout=subprocess.PIPE,
env={'PATH':_path}
)
result = ''.join([line.decode('utf8').strip()
for line in proc.stdout])
return result
def run_calwf3(file, clean=True, log_func=None):
"""
Call calwf3.e executable with PATH fixes
Parameters
----------
file : str
Filename, e.g., `ixxxxxxq_raw.fits`
clean : bool
Clean existing `ima` and `flt` files if `file` has the 'raw' extension
log_func : func
Function to handle `stdout` from the `calwf3.e` executable
Returns
-------
return_code : int
Value returned by the `calwf3.e` process
"""
import subprocess
env_bin = os.path.join(sys.exec_prefix, 'bin')
if (env_bin not in os.getenv('PATH')) & os.path.exists(env_bin):
_path = ':'.join([env_bin, os.getenv('PATH')])
else:
_path = os.getenv('PATH')
if clean & ('_raw.fits' in file):
for ext in ['_ima.fits','_flt.fits']:
_file = file.replace('_raw.fits',ext)
if os.path.exists(_file):
print(f'$ rm {_file}')
os.remove(_file)
exec_file = which_calwf3()
print(f'$ {exec_file} {file}')
proc = subprocess.Popen(
['calwf3.e', file],
stderr=subprocess.STDOUT,
stdout=subprocess.PIPE,
env={'PATH':_path, 'iref':os.getenv('iref')}
)
if log_func is not None:
for line in proc.stdout:
log_func(line.decode('utf8'))
return_code = proc.wait()
return return_code
def set_warnings(numpy_level='ignore', astropy_level='ignore'):
"""
Set global numpy and astropy warnings
Parameters
----------
numpy_level : {'ignore', 'warn', 'raise', 'call', 'print', 'log'}
Numpy error level (see `~numpy.seterr`).
astropy_level : {'error', 'ignore', 'always', 'default', 'module', 'once'}
Astropy error level (see `~warnings.simplefilter`).
"""
from astropy.utils.exceptions import AstropyWarning
np.seterr(all=numpy_level)
warnings.simplefilter(astropy_level, category=AstropyWarning)
def nmad(data):
"""Normalized NMAD=1.48 * `~.astropy.stats.median_absolute_deviation`
"""
import astropy.stats
return 1.48*astropy.stats.median_absolute_deviation(data)
def shapely_polygon_to_region(shape, prefix=['image\n']):
"""
Convert a `~shapely` region to a DS9 region polygon
TBD
Parameters
----------
shape : `~shapely.geometry` object
Can handle `Polygon` and `MultiPolygon` shapes, or any shape with
a valid `boundary.xy` attribute.
prefix : list of strings
Strings to prepend to the polygon list itself.
Returns
-------
polystr : list of strings
Region strings
"""
if hasattr(shape, '__len__'):
multi = shape
else:
multi = [shape]
polystr = [p for p in prefix]
for p in multi:
try:
if hasattr(p.boundary, '__len__'):
for pb in p.boundary:
coords = np.array(pb.xy).T
p_i = coords_to_polygon_region(coords)
polystr.append(p_i)
else:
coords = np.array(p.boundary.xy).T
p_i = coords_to_polygon_region(coords)
polystr.append(p_i)
except Exception:
# Skip shapes without a usable `boundary.xy`
pass
return polystr
def boundary_coords(shape):
coo = []
def coords_to_polygon_region(coords):
"""
Coordinate list to `ds9` polygon string
Parameters
----------
coords : (N,2) `~numpy.ndarray`
Polygon vertices.
Returns
-------
polystr : str
DS9 polygon region string, 'polygon(x1,y1,x2,y2,...)'.
"""
polystr = 'polygon('+','.join(['{0}'.format(xi) for xi in list(coords.flatten())])+')\n'
return polystr
LINE_PARAM_IMAGING = {'NK': 3, 'lo':5, 'hi': 10, 'line_length': 140, 'line_thresh': 2, 'line_gap': 15, 'med_size':5}
LINE_PARAM_GRISM = {'NK': 16, 'hi': 7, 'lo': 4, 'line_length': 180, 'line_thresh': 2, 'line_gap': 15, 'med_size':5}
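Both `which_calwf3` and `run_calwf3` rely on the same PATH-repair pattern: prepend the active environment's `bin` directory when it is missing, so executables installed into the current environment are found first. A minimal, self-contained sketch of that logic (the `env_aware_path` helper name is mine, not part of the module):

```python
import os
import sys

def env_aware_path():
    """Return PATH with the active interpreter's bin directory prepended,
    if that directory exists and is not already an entry on PATH."""
    env_bin = os.path.join(sys.exec_prefix, 'bin')
    path = os.environ.get('PATH', '')
    if os.path.exists(env_bin) and env_bin not in path.split(os.pathsep):
        return os.pathsep.join([env_bin, path])
    return path

# The first entry wins when the shell resolves an executable name
print(env_aware_path().split(os.pathsep)[0])
```

Unlike the substring test in the original (`env_bin not in os.getenv('PATH')`), splitting on `os.pathsep` avoids false positives when `env_bin` happens to be a prefix of some other PATH entry.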
import paddle
import numpy as np
from .utils import np2torch, np2paddle, paddle2np, torch2np, check_print_diff
def check_data(data1: dict, data2: dict):
for k in data1:
assert k in data2, '{} in data1 but not found in data2: {}'.format(k, data2.keys())
for k in data2:
assert k in data1, '{} in data2 but not found in data1: {}'.format(k, data1.keys())
def compute_diff(data1: dict, data2: dict):
out_dict = {}
for k in data1:
assert k in data2
sub_data1, sub_data2 = data1[k], data2[k]
assert type(sub_data1) == type(sub_data2)
if isinstance(sub_data1, dict):
out = compute_diff(sub_data1, sub_data2)
out_dict[k] = out
elif isinstance(sub_data1, np.ndarray):
if sub_data1.shape != sub_data2.shape and sub_data1.transpose(
).shape == sub_data2.shape:
print('transpose sub_data1')
sub_data1 = sub_data1.transpose()
diff = np.abs(sub_data1 - sub_data2)
out_dict[k] = {
'mean': diff.mean(),
'max': diff.max(),
'min': diff.min()
}
else:
raise NotImplementedError
return out_dict
def compare_forward(torch_model,
paddle_model: paddle.nn.Layer,
input_dict: dict,
diff_threshold: float=1e-6,
diff_method: str='mean'):
torch_input = np2torch(input_dict)
paddle_input = np2paddle(input_dict)
torch_model.eval()
paddle_model.eval()
torch_out = torch_model(**torch_input)
paddle_out = paddle_model(**paddle_input)
diff_dict = compute_diff(torch2np(torch_out), paddle2np(paddle_out))
passed = check_print_diff(
diff_dict,
diff_method=diff_method,
diff_threshold=diff_threshold,
print_func=print)
if passed:
print('diff check passed')
else:
print('diff check failed')
def compare_loss_and_backward(torch_model,
paddle_model: paddle.nn.Layer,
torch_loss,
paddle_loss: paddle.nn.Layer,
input_dict: dict,
lr: float=1e-3,
steps: int=10,
diff_threshold: float=1e-6,
diff_method: str='mean'):
import torch
torch_input = np2torch(input_dict)
paddle_input = np2paddle(input_dict)
torch_model.eval()
paddle_model.eval()
torch_optim = torch.optim.SGD(params=torch_model.parameters(), lr=lr)
paddle_optim = paddle.optimizer.SGD(parameters=paddle_model.parameters(),
learning_rate=lr)
for i in range(steps):
# paddle
paddle_outputs = paddle_model(**paddle_input)
paddle_loss_value = paddle_loss(paddle_input, paddle_outputs)
paddle_loss_value['loss'].backward()
paddle_optim.step()
paddle_grad_dict = {'loss': paddle_loss_value['loss'].numpy()}
for name, parms in paddle_model.named_parameters():
if not parms.stop_gradient and parms.grad is not None:
paddle_grad_dict[name] = parms.grad.numpy()
paddle_optim.clear_grad()
# torch
torch_outputs = torch_model(**torch_input)
torch_loss_value = torch_loss(torch_input, torch_outputs)
torch_loss_value['loss'].backward()
torch_optim.step()
torch_grad_dict = {'loss': torch_loss_value['loss'].detach().numpy()}
for name, parms in torch_model.named_parameters():
if parms.requires_grad and parms.grad is not None:
torch_grad_dict[name] = parms.grad.numpy()
torch_optim.zero_grad()
# compare
diff_dict = compute_diff(paddle_grad_dict, torch_grad_dict)
passed = check_print_diff(
diff_dict,
diff_method=diff_method,
diff_threshold=diff_threshold,
print_func=print)
if passed:
print('diff check passed in iter {}'.format(i))
else:
print('diff check failed in iter {}'.format(i))
return
print('diff check passed')
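The recursive walk in `compute_diff` (descend into sub-dicts, compute absolute differences at the leaves) can be illustrated with a numpy-free sketch; `diff_stats` is a hypothetical name used only for this illustration:

```python
def diff_stats(d1, d2):
    """Recursively compare two nested dicts of floats, mirroring the
    traversal in ``compute_diff`` but without numpy arrays."""
    out = {}
    for k, v1 in d1.items():
        v2 = d2[k]  # compute_diff asserts the key exists on both sides
        if isinstance(v1, dict):
            out[k] = diff_stats(v1, v2)  # recurse into sub-dicts
        else:
            out[k] = abs(v1 - v2)  # leaf: absolute difference
    return out

result = diff_stats({'a': {'w': 1.0}, 'b': 2.0},
                    {'a': {'w': 1.5}, 'b': 2.0})
print(result)  # {'a': {'w': 0.5}, 'b': 0.0}
```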
from itertools import islice
from typing import Optional, Dict, Iterable
import argparse
from getpass import getpass
from datetime import date
from typeguard import typechecked
from pubfisher.core import Publication
from pubfisher.fishers.googlescholar import PublicationGSFisher
from reproduce_wem_taxonomy.share import PgConnector
class OriginKind:
# Publications defining the most important models.
PRIMARY = 'primary'
# Publications citing one of the primary publications.
CITES = 'cites'
# Publications needed to understand the other publications.
SUPPLEMENTARY = 'supplementary'
class PgInsertionSummary:
"""
Represents a result of the *insert_into_pg_...* methods below.
"""
@typechecked
def __init__(self, origin_id: int, pub_ids: Dict[Publication, int]):
self.origin_id = origin_id
self.pub_ids = pub_ids
def __str__(self):
return 'Origin ID: {}\nPublication IDs:\n{}' \
.format(self.origin_id, self.pub_ids)
class WEMTaxonomyFisher(PublicationGSFisher):
@typechecked
def __init__(self, pg: PgConnector, *args, **kwargs):
super(WEMTaxonomyFisher, self).__init__(*args, **kwargs)
self._pg = pg
@typechecked
def _insert_into_pg_with_existing_origin(self,
pubs: Iterable[Publication],
origin_id: int,
offset: int=0) \
-> PgInsertionSummary:
"""
Inserts *pubs* into our Postgres database schema.
:param pubs: the publications to insert
:param origin_id: postgres id of the origin of these publications
:param offset: optional offset within the origin specified by *origin_id*
:return: a summary of the insertion
"""
with self._pg.connect() as conn:
with conn.cursor() as cursor:
pub_ids = {}
nr = 1
for pub in pubs:
doc = pub.document
if not pub.url:
print('Ignored publication with no URL: {} by {}'
.format(doc.title, doc.authors))
continue
print('Storing publication no. {}: {}, in {} by {}, '
'{} citations'
.format(nr + offset, doc.title, pub.year, doc.authors,
doc.citation_count))
cursor.execute('SELECT * '
'FROM publications '
'WHERE lower(pub_title) = lower(%(title)s)',
{'title': doc.title})
pub_id = None
try:
pub_row = next(cursor)
print(' !! Publication exists already, '
'updating metadata only...')
col_names = [col.name for col in cursor.description]
existing_pub = dict(zip(col_names, pub_row))
pub_id = existing_pub['pub_id']
cursor.execute('UPDATE publications '
'SET pub_authors = %(authors)s, '
'pub_year = %(year)s, '
'pub_url = %(url)s, '
'pub_eprint = %(eprint)s, '
'pub_citation_count = %(citation_count)s '
'WHERE pub_id = %(pk)s;',
{'authors': doc.authors,
'year': pub.year,
'url': pub.url,
'eprint': pub.eprint,
'citation_count': doc.citation_count,
'pk': pub_id})
except StopIteration:
cursor.execute('INSERT INTO publications '
'(pub_id, pub_title, pub_authors, '
'pub_year, pub_abstract, pub_url, '
'pub_eprint, pub_relevant,'
' pub_citation_count) '
'VALUES (DEFAULT, %(title)s, %(authors)s, '
'%(year)s, %(abstract)s, %(url)s, '
'%(eprint)s, DEFAULT, %(citation_count)s) '
'RETURNING pub_id;',
{'title': doc.title,
'authors': doc.authors,
'year': pub.year,
'abstract': doc.abstract,
'url': pub.url,
'eprint': pub.eprint,
'citation_count': doc.citation_count})
pub_id = cursor.fetchone()[0]
finally:
assert pub_id is not None
cursor.execute('INSERT INTO publication_origins '
'(pub_id, pub_origin, pub_origin_position) '
'VALUES '
'(%(pk)s, %(origin_id)s, %(position)s);',
{'pk': pub_id,
'origin_id': origin_id,
'position': nr + offset})
pub_ids[pub] = pub_id
nr += 1
return PgInsertionSummary(origin_id, pub_ids)
@typechecked
def _insert_into_pg(self,
pubs: Iterable[Publication], query_url: str,
origin_kind: str, cites: Optional[int]=None) \
-> PgInsertionSummary:
"""
Inserts *pubs* into our Postgres database schema.
:param pubs: the publications to insert
:param query_url: the GS query url these publications where retrieved from
:param origin_kind: a string denoting the reason why these publications
are of interest
:param cites: an optional postgres id of a publication that all
the inserted publications cite
(should be provided if *origin_kind* is "cites")
:return: a summary of the insertion
"""
origin_id = None
with self._pg.connect() as conn:
with conn.cursor() as cursor:
cursor.execute('INSERT INTO origins '
'(origin_id, origin_url, origin_retrieval_date, '
'origin_cites, origin_kind) '
'VALUES (DEFAULT, %(url)s, %(retrieval_date)s, '
'%(cites)s, %(kind)s) '
'RETURNING origin_id;',
{'url': query_url,
'retrieval_date': date.today(),
'cites': cites,
'kind': origin_kind})
origin_id = cursor.fetchone()[0]
if origin_id:
return self._insert_into_pg_with_existing_origin(pubs, origin_id)
# Model papers
def look_for_collobert_paper(self):
self.look_for_key_words('Natural language processing (almost) '
'from scratch')
def look_for_w2v_paper(self):
self.look_for_key_words('Efficient estimation of word '
'representations in vector space')
def look_for_glove_paper(self):
self.look_for_key_words('Glove: Global vectors for word '
'representation')
def look_for_elmo_paper(self):
self.look_for_key_words('Deep contextualized word '
'representations')
def look_for_gen_pre_training_paper(self):
self.look_for_key_words('Improving Language Understanding by '
'Generative Pre-Training')
def look_for_bert_paper(self):
self.look_for_key_words('Bert: Pre-training of deep '
'bidirectional transformers for '
'language understanding')
def look_for_fast_text_paper(self):
self.look_for_key_words('Advances in pre-training '
'distributed word representations')
def look_for_c2w_paper(self):
self.look_for_key_words('Finding Function in Form: '
'Compositional Character Models '
'for Open Vocabulary Word Representation')
# Use case papers
def look_for_multilingual_corr_paper(self):
self.look_for_key_words('Improving Vector Space Word Representations '
'Using Multilingual Correlation')
def look_for_multilingual_proj_paper(self):
self.look_for_key_words('A representation learning framework '
'for multi-source transfer parsing')
def fish_for_taxonomy(self):
# Step 1: Store the papers of the 3 most important models.
self.look_for_w2v_paper()
w2v_pub = next(self.fish_all())
w2v_summary = self._insert_into_pg((w2v_pub,), self.query_url,
OriginKind.PRIMARY, cites=None)
self.look_for_glove_paper()
glove_pub = next(self.fish_all())
glove_summary = self._insert_into_pg((glove_pub,), self.query_url,
OriginKind.PRIMARY, cites=None)
self.look_for_elmo_paper()
elmo_pub = next(self.fish_all())
elmo_summary = self._insert_into_pg((elmo_pub,), self.query_url,
OriginKind.PRIMARY, cites=None)
# Step 2: Collect the first 400/250 citations of each of the primary papers.
def citation_count_or_zero(pub: Publication):
doc = pub.document
return doc.citation_count if doc.citation_count else 0
def first_n_sorted_by_citation_count(pubs: Iterable[Publication], n):
return sorted(islice(pubs, n), key=citation_count_or_zero, reverse=True)
self.look_for_citations_of(w2v_pub.document)
w2v_citations = first_n_sorted_by_citation_count(self.fish_all(), 400)
w2v_pg_id = w2v_summary.pub_ids[w2v_pub]
w2v_citations_summary = self._insert_into_pg(w2v_citations, self.query_url,
OriginKind.CITES,
cites=w2v_pg_id)
self.look_for_citations_of(glove_pub.document)
glove_citations = first_n_sorted_by_citation_count(self.fish_all(), 250)
glove_pg_id = glove_summary.pub_ids[glove_pub]
glove_citations_summary = self._insert_into_pg(glove_citations,
self.query_url,
OriginKind.CITES,
cites=glove_pg_id)
self.look_for_citations_of(elmo_pub.document)
elmo_citations = first_n_sorted_by_citation_count(self.fish_all(), 250)
elmo_pg_id = elmo_summary.pub_ids[elmo_pub]
elmo_citations_summary = self._insert_into_pg(elmo_citations,
self.query_url,
OriginKind.CITES,
cites=elmo_pg_id)
print('Added the following publications:')
print(w2v_citations_summary)
print(glove_citations_summary)
print(elmo_citations_summary)
# Step 3: Add some important citations missed out by Google Scholar
self.fish_missing_w2v_citations(w2v_pg_id)
def fish_missing_w2v_citations(self, w2v_pg_id: int):
self.look_for_fast_text_paper()
ft_pub = next(self.fish_all())
ft_summary = self._insert_into_pg((ft_pub,), self.query_url,
OriginKind.CITES,
cites=w2v_pg_id)
self.look_for_multilingual_corr_paper()
mc_pub = next(self.fish_all())
mc_summary = self._insert_into_pg((mc_pub,), self.query_url,
OriginKind.CITES,
cites=w2v_pg_id)
self.look_for_multilingual_proj_paper()
mp_pub = next(self.fish_all())
mp_summary = self._insert_into_pg((mp_pub,), self.query_url,
OriginKind.CITES,
cites=w2v_pg_id)
self.look_for_c2w_paper()
c2w_pub = next(self.fish_all())
c2w_summary = self._insert_into_pg((c2w_pub,), self.query_url,
OriginKind.SUPPLEMENTARY,
cites=w2v_pg_id)
print('Added the following missing citations of the w2v paper:')
print(ft_summary)
print(mc_summary)
print(mp_summary)
print(c2w_summary)
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Collect publications for the '
'WEM taxonomy.')
parser.add_argument('--host', type=str,
help='host of the Postgres database',
default='localhost')
parser.add_argument('--db', type=str,
help='name of the Postgres database',
required=True)
parser.add_argument('--user', type=str,
help='name of a user having write access to the '
'Postgres database (default: taxonomist)',
default='taxonomist')
args = parser.parse_args()
pw = getpass('Enter password for user {}:'.format(args.user))
pg = PgConnector(host=args.host, db_name=args.db, user=args.user,
password=pw)
fisher = WEMTaxonomyFisher(pg)
fisher.fish_for_taxonomy()
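`first_n_sorted_by_citation_count` combines `itertools.islice` with `sorted` so that only the first `n` results are ever pulled from the lazy fisher iterator before ranking, which keeps the number of scraping requests bounded. A self-contained sketch of the same pattern:

```python
from itertools import islice

def first_n_sorted(items, n, key):
    """Consume at most *n* items from an iterable, then sort those
    items by *key* in descending order."""
    return sorted(islice(items, n), key=key, reverse=True)

citations = iter([3, None, 7, 1, 9])  # the trailing 9 is never consumed
# Treat missing citation counts as zero, as citation_count_or_zero does
top = first_n_sorted(citations, 4, key=lambda c: c if c else 0)
print(top)  # [7, 3, 1, None]
```

Because `islice` stops after `n` items, nothing beyond the cutoff is ever requested from the underlying iterator.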
__all__ = ('pure', 'impure', 'raw')
__version__ = '1.0.6'
from _thread import get_ident as _thread_id
_in_progress = set()
def _name(obj):
return type(obj).__name__
def pure(self, /, *args, **kwargs):
"""Represent an object and its arguments as an unambiguous string.
Arguments:
self: An instance of a class - normally the ``self``
argument of the ``__repr__`` method in a class.
*args: Positional arguments that describe the instance.
**kwargs: Keyword arguments that describe the instance.
Returns:
str: The representation of ``self`` and its arguments.
"""
repr_id = (id(self), _thread_id())
if repr_id in _in_progress:
return '<...>'
try:
_in_progress.add(repr_id)
arguments = []
for argument in args:
arguments.append(repr(argument))
for name, argument in kwargs.items():
arguments.append('='.join((name, repr(argument))))
return ''.join((_name(self), '(', ', '.join(arguments), ')'))
finally:
_in_progress.discard(repr_id)
def impure(self, /, *args, **kwargs):
"""Represent an object and its state as an unambiguous string.
Arguments:
self: An instance of a class - normally the ``self``
argument of the ``__repr__`` method in a class.
*args: Unnamed state that describes the instance.
**kwargs: Named state that describes the instance.
Returns:
str: The representation of ``self`` and its state.
"""
repr_id = (id(self), _thread_id())
if repr_id in _in_progress:
return '<...>'
try:
_in_progress.add(repr_id)
parts = [_name(self)]
for argument in args:
parts.append(repr(argument))
for name, argument in kwargs.items():
parts.append('='.join((name, repr(argument))))
return ''.join(('<', ' '.join(parts), '>'))
finally:
_in_progress.discard(repr_id)
def raw(string):
"""Escape a string to make ``reprshed`` use it verbatim."""
class _Raw(object):
def __repr__(self):
return string
def __reduce__(self):
return (raw, (string,))
_Raw.__name__ = string
return _Raw()
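The `_in_progress` set keyed on `(id(obj), thread_id)` is what lets `pure` and `impure` emit `<...>` instead of recursing forever on self-referential objects. A minimal, self-contained illustration of the same guard (the `Node` class and `_guard` set are hypothetical, invented for this example):

```python
from _thread import get_ident

_guard = set()

class Node:
    """Toy container used only to demonstrate the repr recursion guard."""
    def __init__(self, child=None):
        self.child = child

    def __repr__(self):
        key = (id(self), get_ident())
        if key in _guard:
            return '<...>'  # already repr-ing this object: cut the cycle
        try:
            _guard.add(key)
            return 'Node({!r})'.format(self.child)
        finally:
            _guard.discard(key)

n = Node()
n.child = n  # create a reference cycle
print(repr(n))  # Node(<...>)
```

Keying on the thread id as well as the object id means two threads repr-ing the same object concurrently do not interfere with each other's guard state.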
from __future__ import (absolute_import, division, print_function,
unicode_literals)
import functools
import types
import itertools
import sys
PY2 = sys.version_info < (3,)
if PY2:
from itertools import imap as map
string_base = basestring
iteritems = dict.iteritems
else:
string_base = str
iteritems = dict.items
__all__ = ['identifier', 'standard_repr', 'GetattrRepr']
def identifier(cls):
"""Return the fully-specified identifier of the class cls.
@param cls: The class whose identifier is to be specified.
"""
return cls.__module__ + '.' + cls.__name__
def standard_repr(obj, args=None, kwargs=None):
"""Return a repr-style string for obj, echoing and repring each argument.
The output of this format follows the convention described in the
module docstring: namely, the result will be a valid constructor for
obj with a fully qualified path name, with the other positional and
keyword arguments acting as arguments to the constructor.
The first argument is a reference to the object of which the result
should be a representation. The second argument is a list of values
to appear as positional arguments in the repr. (These should be
actual values and not already converted to repr-ized strings). The
final argument should be a list of (kwarg, value) tuples which will
appear in the repr as positional arguments, in that order.
This unwieldy argument syntax is necessary to ensure that the
keyword arguments are ordered properly. (This used to accept *args
and **kwargs, but the ordering of keyword arguments is arbitrary in
that case.)
Note that repr (not str) will be called on each argument. This means
that strings will be enclosed in quotation marks and escaped in the
result (which is exactly what one would expect).
>>> class A(object):
... pass
>>> standard_repr(A(), ['mass\n', 45.3, 200, True], [('parent', None)])
__main__.A('mass\n', 45.3, 200, True, parent=None)
@param obj: The object to be repr'ed.
@param args: Positional arguments to appear in the repr.
@param kwargs: A list of (kwarg, value) tuples. The `kwarg` element
must be a string.
"""
if args is None:
args = []
if kwargs is None:
kwargs = []
if not all(isinstance(tpl[0], string_base) for tpl in kwargs):
raise TypeError('kwarg names must be strings')
items = itertools.chain(
map(repr, args),
('{0}={1!r}'.format(kwarg, value) for kwarg, value in kwargs),
)
return ''.join([identifier(obj.__class__), '(', ', '.join(items), ')'])
class GetattrRepr(object):
"""A descriptor to assign to a class's __repr__ attribute.
Consider some class C to which a GetattrRepr is assigned as the
class's `__repr__` attribute, and obj which is an instance of C. For
each argument arg passed to GetattrRepr's constructors, the value of
repr(getattr(obj, arg)) will apear as a positional argument in the
final output. For each keyword argument k with value v, the keyword
argument pair k=repr(getattr(obj, v) will appear in the final output
of the repr.
>>> class MyClass(object):
... def __init__(self, name, value, units=None):
... self.name = name
... self.value = value
... self.units = units
...
... __repr__ = GetattrRepr('name', 'value', units='units')
...
...
>>> my_object1 = MyClass('counts', 400)
>>> my_object2 = MyClass('mass', 45.3, 'kilograms')
>>> repr(my_object1)
__main__.MyClass('counts', 400, units=None)
>>> repr(my_object2)
__main__.MyClass('mass', 45.3, units='kilograms')
There is one further subtlety to make it more syntactically
convenient. Every positional argument must be a str/bytes object,
except the last, which may possibly be some other kind of iterable.
If it is not a str or bytes object, it will be iterated over. For
each item that is a tuple, it will be assumed that the first item
is the name of the keyword parameter and the second item is the
attribute name that should be called. For each item that is not a
tuple, it will be assumed to be both the name of the parameter and
the name of the attribute. This is done to allow for repeatable
ordering of keyword arguments. The handling of non-tuple arguments
is pure sugar to avoid repetitive typing. Any keyword arguments
that are also passed to the function will appear in the resulting
repr in arbitrary order, after any keyword arguments specified in
this manner.
The following are all equivalent (with the exception of the way
keywords are ordered in the first example, which depends on the
python implementation):
>>> GetattrRepr('pos1', 'pos2', kw1='kw1', kw2='kw2', kw3='kw3')
>>> GetattrRepr('pos1', 'pos2', ['kw1', 'kw2', 'kw3'])
>>> GetattrRepr('pos1', 'pos2', [('kw1', 'kw1'), ('kw2', 'kw2'),
... ('kw3', 'kw3')])
>>> GetattrRepr('pos1', 'pos2', ['kw1', ('kw2', 'kw2')], kw3='kw3')
"""
def __init__(self, *args, **kwargs):
"""Initialize the GetattrRepr descriptor.
@param args: The names of attributes to be included as
positional arguments in the repr, with the possible
exception of the last argument as described in the class
docstring.
@param kwargs: The keyword names to appear in the repr and the
associated names of attributes to appear as values.
"""
if args and not isinstance(args[-1], string_base):
self.args = args[:-1]
self.kwargs = [
item if not isinstance(item, string_base) else (item, item)
for item in args[-1]
]
self.kwargs.extend(iteritems(kwargs))
else:
self.args = args
self.kwargs = list(iteritems(kwargs))
def __get__(self, instance, owner):
"""Return this descriptor as a bound method.
The method will be linked to this object's __call__ method.
@param instance: The instance called by the object (or None if
the attribute is being accessed by the class-level)
@param owner: The class which defines the attribute.
"""
if instance is None:
return self
return types.MethodType(self.__call__, instance)
def __call__(self, instance):
"""Return a standard representation of the object."""
# my_getattr is a function with one argument `arg`.
# It will return instance.<arg>
my_getattr = functools.partial(getattr, instance)
return standard_repr(
instance, list(map(my_getattr, self.args)),
[(k, my_getattr(v)) for k, v in self.kwargs]
)
# Python 2 can't take unicode as __name__.
# In Python 3, str is unicode, and is okay for __name__
__call__.__name__ = str('__repr__')
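The constructor-style convention produced by `standard_repr` (class name, repr'ed positional arguments, then keyword pairs in a fixed order) is easy to sketch without the descriptor machinery. `ctor_repr` below is a simplified, hypothetical stand-in that drops the module prefix:

```python
def ctor_repr(obj, args=(), kwargs=()):
    """Constructor-style repr: ClassName(arg1, arg2, kw=value).

    ``kwargs`` is a list of (name, value) tuples rather than a dict so
    that keyword order is preserved, exactly as in ``standard_repr``.
    """
    parts = [repr(a) for a in args]
    parts += ['{0}={1!r}'.format(k, v) for k, v in kwargs]
    return '{0}({1})'.format(type(obj).__name__, ', '.join(parts))

class Sample:
    pass

print(ctor_repr(Sample(), ['mass', 45.3], [('units', None)]))
# Sample('mass', 45.3, units=None)
```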
import subprocess
from collections import defaultdict
from pathlib import Path
from ruamel.yaml import YAML
from halo import Halo
from reps.console import command_new
HOOKS = defaultdict(list)
def hook(name):
def wrapper(func):
HOOKS[name].append(func)
return func
return wrapper
def run_hooks(group, items):
for hook_fn in HOOKS[group]:
with Halo(f"running {hook_fn.__name__}") as spinner:
hook_fn(items)
spinner.succeed(f"{hook_fn.__name__}")
def run(cmd, **kwargs):
kwargs.setdefault("check", True)
kwargs.setdefault("stdout", subprocess.PIPE)
kwargs.setdefault("stderr", subprocess.STDOUT)
try:
subprocess.run(cmd, **kwargs)
except subprocess.CalledProcessError as e:
print("+ command failed: {' '.join(cmd)}")
print(e.output)
raise
@hook("pre-gen-py")
def base_init(items):
"""Generate the 'base' template first."""
if "_copy_without_render" in items:
del items["_copy_without_render"]
command_new(
items["__project_slug"],
template="base",
extra_context=items,
no_input=True,
output_dir=str(Path.cwd().parent),
accept_hooks=False,
overwrite_if_exists=True,
)
@hook("post-gen-py")
def merge_pre_commit(items):
"""Update the base pre-commit config with Python-specific tools."""
yaml = YAML()
with open(".pre-commit-config.yaml", "r") as fh:
pre_commit_config = yaml.load(fh)
pre_commit_config["repos"].extend(
[
{
"repo": "https://github.com/psf/black",
"rev": "23.3.0",
"hooks": [
{"id": "black"},
],
},
{
"repo": "https://github.com/charliermarsh/ruff-pre-commit",
"rev": "v0.0.272",
"hooks": [{"id": "ruff", "args": ["--fix", "--exit-non-zero-on-fix"]}],
},
]
)
yaml.explicit_start = True
yaml.indent(mapping=2, sequence=4, offset=2)
with open(".pre-commit-config.yaml", "w") as fh:
yaml.dump(pre_commit_config, fh)
@hook("post-gen-py")
def add_poetry_dependencies(items):
run(
[
"poetry",
"add",
"--group=test",
"coverage",
"pytest",
"pytest-mock",
"responses",
"tox",
]
)
run(
[
"poetry",
"add",
"--group=docs",
"sphinx<7",
"sphinx-autobuild",
"sphinx-book-theme",
]
)
@hook("post-gen-py")
@hook("post-gen-base")
def git_init(items):
run(["git", "init"])
run(["git", "checkout", "-b", "main"])
run(
[
"git",
"remote",
"add",
"origin",
f"git@github.com:{items['github_slug']}.git",
]
)
@hook("post-gen-py")
@hook("post-gen-base")
def pre_commit_autoupdate(items):
run(["pre-commit", "autoupdate"])
@hook("post-gen-py")
@hook("post-gen-base")
def lock_taskgraph_requirements(items):
run(["pip-compile", "requirements.in", "--generate-hashes"], cwd="taskcluster")
@hook("post-gen-base")
def taskgraph_init(items):
run(["taskgraph", "init"]) | /reps_new-0.3.5.tar.gz/reps_new-0.3.5/reps/hooks.py | 0.450601 | 0.187281 | hooks.py | pypi |
Reproducible Simulation Tools
=============================
This is a helper tool to quickly build large, embarrassingly parallel
simulation or processing jobs in a reproducible way.
* Parallelization is done using [ipyparallel](http://ipyparallel.readthedocs.io/en/latest/)
* Results are saved in human friendly JSON format, as soon as collected
* Provided the main project is versioned with git, the state of the repo
is checked prior to simulation. If the repo is dirty, simulation is aborted
* The results are tagged with the commit number
* A basic interface displays how many loops have been done, how much time has elapsed,
and approximately how long is left
* Options allow running a single loop or running in serial mode (without using ipyparallel)
for debugging
* All the arguments and parameters for the simulation are saved along with the results
Basics
------
The code to repeat is isolated in a function that takes a single argument, a list `args`.
This list `args` contains all the parameters that vary from loop to loop. The global
parameters, which stay the same across all loops of the simulation, are stored in
a Python dictionary called `parameters`.
Every script created with `rrtools` comes with a list of options that can be
accessed through the help command
    $ python examples/test_simulation.py --help
    usage: test_simulation.py [-h] [-d DIR] [-p PROFILE] [-t] [-s] [--dummy]
                              parameters

    Dummy test simulation

    positional arguments:
      parameters            JSON file containing simulation parameters

    optional arguments:
      -h, --help            show this help message and exit
      -d DIR, --dir DIR     directory to store sim results
      -p PROFILE, --profile PROFILE
                            ipython profile of cluster
      -t, --test            test mode, runs a single loop of the simulation
      -s, --serial          run in a serial loop, ipyparallel not called
      --dummy               tags the directory as dummy, can be used for running
                            small batches
If a cluster of `ipyparallel` engines is not available, it is possible
to run everything in a simple loop using the `-s` or `--serial` option.
For debugging, the `-t` or `--test` option runs only two loops in total.
Using the `--dummy` option will tag the results with a `dummy` tag, which
is useful to distinguish test runs from the real simulation
results.
Example
-------
A simple example is available in the `examples` folder. It can be run like this

    python examples/test_simulation.py examples/test_simulation.json

The Python file contains the function definitions for the different parts.
    import os
    import itertools

    import rrtools

    # find the absolute path to this file
    base_dir = os.path.abspath(os.path.split(__file__)[0])


    def init(parameters):
        '''
        This function takes as unique positional argument a Python
        dictionary of global parameters for the simulation.

        This lets the user add some parameters computed in software
        to the dictionary. The updated dictionary will be saved
        along the simulation output.

        This updated dictionary is later available in the global namespace of
        the parallel_loop and gen_args functions.

        Parameters
        ----------
        parameters: dict
            The global simulation parameters
        '''
        parameters['lower_bound'] = 0


    def parallel_loop(args):
        '''
        This is the heart of the parallel simulation. This function is what
        is repeated a large number of times.

        Parameters
        ----------
        args: list
            A list of arguments whose combination is unique to one loop of
            the simulation.
        '''
        global parameters

        import time

        # split arguments
        timeout = args[0]
        key = args[1]

        time.sleep(timeout)

        return dict(key=key, timeout=timeout, secret=parameters['secret'])


    def gen_args(parameters):
        '''
        This function is called once before the simulation to generate
        the list of argument combinations to try.

        For example, say that you have arguments x=1,2,3 and y=2,3 for your
        parallel loop and you want to try all combinations. Then this function
        can generate the list

            args = [[1,2], [1,3], [2,2], [2,3], [3,2], [3,3]]

        Parameters
        ----------
        parameters: dict
            The Python dictionary of global simulation parameters. This can
            typically contain the range of values for the arguments to sweep.
        '''
        timeouts = range(parameters['max_timeout'])
        keys = range(parameters['max_int'])

        return list(itertools.product(timeouts, keys))


    if __name__ == '__main__':
        rrtools.run(parallel_loop, gen_args, func_init=init,
                    base_dir=base_dir, results_dir='data/',
                    description='Dummy test simulation')
The JSON file contains the global simulation parameters.

    {
        "max_timeout": 10,
        "max_int": 2,
        "secret": "helloworld"
    }
Control the Number of Threads
-----------------------------
When parallelizing at the outer-loop level, it is important that the inner loop does not
use parallel processing itself. When numpy is used for the processing, multi-threading
should therefore be disabled in the underlying BLAS library. This can be achieved by setting
the number of threads to one using environment variables.

* OpenBLAS: `OPENBLAS_NUM_THREADS=1`
* MKL: `MKL_NUM_THREADS=1`, or directly in the code using the `mkl.set_num_threads(1)` function.

Otherwise, the outer processes might compete with the inner threads for resources,
and the overall simulation becomes very slow. Resource usage is most efficient
when sufficiently many outer loops can run in parallel.
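For example, the environment variables can also be set from within the launch script, as long as this happens before numpy (and therefore the BLAS library) is imported for the first time. A minimal sketch:

```python
import os

# Must be set before numpy (and hence the BLAS backend) is first imported.
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"

import numpy as np

A = np.random.rand(200, 200)
B = A @ A  # the matrix product now runs single-threaded in the BLAS backend
```

If numpy was already imported somewhere else earlier, setting the variables has no effect, so exporting them in the shell before launching the script is the safer option.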
Author
------
Robin Scheibler [contact](mailto://fakufaku@gmail.com)
License
-------
Copyright (c) 2018 Robin Scheibler
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# RepSys: Framework for Interactive Evaluation of Recommender Systems
[](https://badge.fury.io/py/repsys-framework)
The RepSys is a framework for developing and analyzing recommendation systems, and it allows you to:
- Add your own dataset and recommendation models
- Visually evaluate the models on various metrics
- Quickly create dataset embeddings to explore the data
- Preview recommendations using a web application
- Simulate user's behavior while receiving the recommendations

<p align="middle">
<img src="https://github.com/cowjen01/repsys/raw/master/images/demos/eval-space.png" width="48%" />
<img src="https://github.com/cowjen01/repsys/raw/master/images/demos/dataset-users.png" width="48%" />
</p>
## Online Demo
You can now try RepSys online on our [demo site](https://repsys.recombee.net) with the Movielens dataset.
Also, check out an [interactive blog post](https://www.recombee.com/blog/repsys-opensource-library-for-interactive-evaluation-of-recommendation-systems.html) we made using the RepSys widgets component.
## Publication
Our paper "[RepSys: Framework for Interactive Evaluation of Recommender Systems](https://dl.acm.org/doi/10.1145/3523227.3551469)" was accepted to the RecSys'22 conference.
## Installation
Install the package using [pip](https://pypi.org/project/repsys-framework/):
```
$ pip install repsys-framework
```
If you will be using PyMDE for data visualization, you need to install RepSys with the following extras:
```
$ pip install repsys-framework[pymde]
```
## Getting Started
If you want to skip this tutorial and try the framework, you can pull the content of the [demo](https://github.com/cowjen01/repsys/tree/master/demo) folder located at the repository.
As mentioned in the [next step](https://github.com/cowjen01/repsys#datasetpy), you still have to download the dataset before you begin.
Otherwise, please create an empty project folder that will contain the dataset and models implementation.
```
├── __init__.py
├── dataset.py
├── models.py
├── repsys.ini
└── .gitignore
```
### dataset.py
Firstly we need to import our dataset. For this tutorial we will use the [MovieLens 20M Dataset](https://grouplens.org/datasets/movielens/20m/),
which contains 20 million ratings applied to 27,000 movies by 138,000 users. Please download the `ml-20m.zip` file and unzip
the data into the current folder. Then add the following content to the `dataset.py` file:
```python
import pandas as pd

from repsys import Dataset
import repsys.dtypes as dtypes


class MovieLens(Dataset):
    def name(self):
        return "ml20m"

    def item_cols(self):
        return {
            "movieId": dtypes.ItemID(),
            "title": dtypes.Title(),
            "genres": dtypes.Tag(sep="|"),
            "year": dtypes.Number(data_type=int),
        }

    def interaction_cols(self):
        return {
            "movieId": dtypes.ItemID(),
            "userId": dtypes.UserID(),
            "rating": dtypes.Interaction(),
        }

    def load_items(self):
        df = pd.read_csv("./ml-20m/movies.csv")
        df["year"] = df["title"].str.extract(r"\((\d+)\)")
        return df

    def load_interactions(self):
        df = pd.read_csv("./ml-20m/ratings.csv")
        return df
```
This code defines a new dataset called `ml20m` and imports both the ratings
and the items data. You must always describe your data structure using the predefined data types.
Before returning the data, you can also preprocess it, for example by extracting the movie's release year from the title column.
### models.py
Now we define the first recommendation model, which will be a simple implementation of the user-based KNN.
```python
import numpy as np
import scipy.sparse as sp
from sklearn.neighbors import NearestNeighbors

from repsys import Model


class KNN(Model):
    def __init__(self):
        self.model = NearestNeighbors(n_neighbors=20, metric="cosine")

    def name(self):
        return "knn"

    def fit(self, training=False):
        X = self.dataset.get_train_data()
        self.model.fit(X)

    def predict(self, X, **kwargs):
        if X.count_nonzero() == 0:
            return np.random.uniform(size=X.shape)

        distances, indices = self.model.kneighbors(X)

        # drop the nearest neighbor (the user itself)
        distances = distances[:, 1:]
        indices = indices[:, 1:]

        # turn cosine distances into normalized similarity weights
        distances = 1 - distances
        sums = distances.sum(axis=1)
        distances = distances / sums[:, np.newaxis]

        def f(dist, idx):
            A = self.dataset.get_train_data()[idx]
            D = sp.diags(dist)
            return D.dot(A).sum(axis=0)

        vf = np.vectorize(f, signature="(n),(n)->(m)")
        predictions = vf(distances, indices)

        # never recommend items the user has already interacted with
        predictions[X.nonzero()] = 0

        return predictions
```
You must define the fit method to train your model using the training data, or to load a previously trained model from a file.
All models are fitted when the web application starts or when the evaluation process begins. If this is not a training phase, always
load your model from a checkpoint to speed up the process. For tutorial purposes, this step is omitted here.
You must also define the prediction method that receives a sparse matrix of the users' interactions on the input.
For each user (row of the matrix) and item (column of the matrix), the method should return a predicted score indicating
how much the user will enjoy the item.
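For illustration, a score matrix of this shape can be turned into ranked top-k recommendations with a simple argsort; the scores below are made-up values, not output of the KNN model above:

```python
import numpy as np

# Hypothetical score matrix for 2 users over 5 items, shaped like the
# output of a `predict` implementation (higher score = better match).
predictions = np.array([
    [0.1, 0.9, 0.0, 0.4, 0.3],
    [0.7, 0.2, 0.5, 0.0, 0.6],
])

k = 3
# Sort item indices per user by descending score and keep the first k.
top_k = np.argsort(-predictions, axis=1)[:, :k]
print(top_k)  # [[1 3 4]
              #  [0 4 2]]
```

This is essentially what the framework does internally when it ranks the scores to produce a recommendation list.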
Additionally, you can declare web application parameters that can be set when a recommender is created in the UI. Their values are then accessible in
the `**kwargs` argument of the prediction method. In the demo project, for example, a select input lists all unique genres, and movies that do not contain the selected genre are filtered out of the predictions.
### repsys.ini
The last file we should create is a configuration that allows you to control a data splitting process, server settings,
framework behavior, etc.
```ini
[general]
seed=1234
[dataset]
train_split_prop=0.85
test_holdout_prop=0.2
min_user_interacts=5
min_item_interacts=0
[evaluation]
precision_recall_k=20,50
ndcg_k=100
coverage_k=20
diversity_k=20
novelty_k=20
percentage_lt_k=20
coverage_lt_k=20
[visualization]
embed_method=pymde
pymde_neighbors=15
umap_neighbors=15
umap_min_dist=0.1
tsne_perplexity=30
[server]
port=3001
```
### Splitting the Data
Before we train our models, we need to split the data into train, validation, and test sets. Run the following command from the current directory.
```
$ repsys dataset split
```
This will hold out 85% of the users as training data, and the remaining 15% will be used as validation/test data, with 7.5% of the users each. For both the validation
and the test set, 20% of the interactions will also be held out for evaluation purposes. The split dataset will be stored in the default checkpoints folder.
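Concretely, the arithmetic behind this split can be sketched as follows (the user count is a hypothetical example, not a value from the framework):

```python
# Proportions from the [dataset] section of repsys.ini
train_split_prop = 0.85
test_holdout_prop = 0.2

n_users = 10_000  # hypothetical total number of users

n_train = round(n_users * train_split_prop)   # 8500 users used for training
n_val = n_test = (n_users - n_train) // 2     # 750 users each for validation/test

# For each validation/test user, 20% of their interactions are held out;
# e.g. a user with 100 interactions has this many held out for evaluation:
n_holdout = round(100 * test_holdout_prop)    # 20 interactions
```

The held-out interactions are what the evaluation metrics are computed against, so they are never shown to the model.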
### Training the Models
Now we can move to the training process. To do this, please call the following command.
```
$ repsys model train
```
This command will call the fit method of each model with the training flag set to true. You can always limit the models using the `-m` flag with the model's name as a parameter.
### Evaluating the Models
When the data is prepared and the models trained, we can evaluate the performance of the models on the unseen users' interactions. Run the following command to do so.
```
$ repsys model eval
```
Again, you can limit the models using the `-m` flag. The results will be stored in the checkpoints folder when the evaluation is done.
### Evaluating the Dataset
Before starting the web application, the final step is to evaluate the dataset's data. This procedure creates user and item embeddings of the training and validation data,
allowing you to explore the latent space. Run the following command from the project directory.
```
$ repsys dataset eval
```
You can choose from four embedding options:
1. [UMAP](https://umap-learn.readthedocs.io/en/latest/index.html) (Uniform Manifold Approximation and Projection for Dimension Reduction) is a dimensionality reduction technique similar to t-SNE. Use `--method umap` (this is the default option).
2. [PyMDE](https://pymde.org) (Minimum-Distortion Embedding) is a fast library designed to distort relationships between pairs of items minimally. Use `--method pymde`.
3. Combination of the PCA and TSNE algorithms (reduction of the dimensionality to 50 using PCA, then reduction to 2D space using TSNE). Use `--method tsne`.
4. Your own implementation of the algorithm. Use `--method custom` and add the following method to the model's class of your choice. In this case, you must also specify the model's name using `-m` parameter.
```python
from sklearn.decomposition import NMF


def compute_embeddings(self, X):
    nmf = NMF(n_components=2)
    W = nmf.fit_transform(X)
    H = nmf.components_
    return W, H.T
```
In the example, non-negative matrix factorization (NMF) is used. You have to return a pair of user and item embeddings, in this order. It is also essential to return the matrices in the shape of (n_users, n_dim) and (n_items, n_dim), respectively.
If the reduced dimension is higher than 2, the TSNE method is applied afterwards to project the embeddings to 2D.
### Running the Application
Finally, it is time to start the web application to see the results of the evaluations and preview live recommendations of your models.
```
$ repsys server
```
The application should be accessible on the default address [http://localhost:3001](http://localhost:3001). When you open the link, you will see the main screen where your recommendations appear once you finish the setup.
The first step is defining how the items' data columns should be mapped to the item view components.

Then we need to switch to the build mode and add two recommenders - one without filter and the second with only comedy movies included.

Now we switch back from the build mode and select a user from the validation set (never seen by a model before).

Finally, we see the user's interaction history on the right side and the recommendations made by the model on the left side.

## Contributing
To build the package from the source, you first need to install Node.js and npm library as documented [here](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm).
Then you can run the following script from the root directory to build the web application and install the package locally.
```
$ ./scripts/install-locally.sh
```
## Citation
If you employ RepSys in your research work, please do not forget to cite the related paper:
```
@inproceedings{10.1145/3523227.3551469,
author = {\v{S}afa\v{r}\'{\i}k, Jan and Van\v{c}ura, Vojt\v{e}ch and Kord\'{\i}k, Pavel},
title = {RepSys: Framework for Interactive Evaluation of Recommender Systems},
year = {2022},
isbn = {9781450392785},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3523227.3551469},
doi = {10.1145/3523227.3551469},
booktitle = {Proceedings of the 16th ACM Conference on Recommender Systems},
pages = {636–639},
numpages = {4},
keywords = {User simulation, Distribution analysis, Recommender systems},
location = {Seattle, WA, USA},
series = {RecSys '22}
}
```
## The Team
- Jan Šafařík (safarj10@fit.cvut.cz)
- Vojtěch Vančura (vancurv@fit.cvut.cz)
- Pavel Kordík (pavel.kordik@fit.cvut.cz)
- Petr Kasalický (kasalpe1@fit.cvut.cz)
## Sponsoring
The development of this framework is sponsored by the [Recombee](https://www.recombee.com) company.
<img src="https://github.com/cowjen01/repsys/raw/master/images/recombee-logo.png" width="50%" />
from typing import Tuple

import numpy as np
from numpy import ndarray
from scipy.sparse import csr_matrix
from sklearn.preprocessing import MinMaxScaler


def get_precision_recall(X_predict: ndarray, X_true: ndarray, sort_indices: ndarray, k: int) -> Tuple[ndarray, ndarray]:
    row_indices = np.arange(X_predict.shape[0])[:, np.newaxis]
    X_true_bin = X_true > 0
    X_true_nonzero = (X_true > 0).sum(axis=1)

    X_predict_bin = np.zeros_like(X_predict, dtype=bool)
    X_predict_bin[row_indices, sort_indices[:, :k]] = True

    hits = (np.logical_and(X_true_bin, X_predict_bin).sum(axis=1)).astype(np.float32)

    precision = hits / k
    recall = hits / np.minimum(k, X_true_nonzero)

    return precision, recall


def get_ndcg(
    X_predict: ndarray,
    X_true: ndarray,
    sort_indices: ndarray,
    true_sort_indices: ndarray,
    k: int,
) -> ndarray:
    row_indices = np.arange(X_predict.shape[0])[:, np.newaxis]
    X_true_bin = X_true > 0
    X_true_nonzero = X_true_bin.sum(axis=1)

    discount = 1.0 / np.log2(np.arange(2, k + 2))

    dcg = (X_true[row_indices, sort_indices[:, :k]] * discount).sum(axis=1)

    mask = np.transpose(np.arange(k)[:, np.newaxis] < np.minimum(k, X_true_nonzero))
    idcg = np.where(mask, (X_true[row_indices, true_sort_indices] * discount), 0).sum(axis=1)

    return dcg / idcg


def get_coverage(X_predict: ndarray, sort_indices: ndarray, k: int) -> float:
    n_covered_items = len(np.unique(np.concatenate(sort_indices[:, :k])))
    n_items = X_predict.shape[1]
    return n_covered_items / n_items


def get_diversity(embeddings: csr_matrix, sort_indices: ndarray, k: int) -> ndarray:
    def f(idx):
        pairs = np.array(np.meshgrid(idx, idx)).T.reshape(-1, 2)
        embs_0 = embeddings.T[pairs[:, 0]].toarray()  # np array of shape (k*k, emb_size)
        embs_1 = embeddings.T[pairs[:, 1]].toarray()  # np array of shape (k*k, emb_size)
        # Compute cosine distance between (embs_0[0], embs_1[0]), (embs_0[1], embs_1[1]), ...
        # as normalized embs_0 * normalized embs_1, summed over axis 1
        dist = 1 - np.sum(
            (embs_0 / np.linalg.norm(embs_0, axis=1)[:, np.newaxis])
            * (embs_1 / np.linalg.norm(embs_1, axis=1)[:, np.newaxis]),
            axis=1,
        )
        return dist.sum()

    vf = np.vectorize(f, signature="(n)->()")
    distances = vf(sort_indices[:, :k])

    return distances / (k * (k - 1))


def get_novelty(X_train: csr_matrix, sort_indices: ndarray, k: int) -> ndarray:
    popularity = np.asarray((X_train > 0).sum(axis=0)).squeeze() / X_train.shape[0]

    def f(idx):
        return np.sum(-np.log2(popularity[idx]))

    vf = np.vectorize(f, signature="(n)->()")
    novelty = vf(sort_indices[:, :k])

    max_novelty = -np.log2(1 / X_train.shape[0])
    return novelty / (k * max_novelty)


def get_error_metrics(X_predict: ndarray, X_true: ndarray) -> Tuple[float, float, float]:
    diff = X_true - X_predict
    mae = np.abs(diff).mean(axis=1)
    mse = np.square(diff).mean(axis=1)
    rmse = np.sqrt(mse)
    return mae, mse, rmse


def get_item_pop(X_predict: ndarray) -> ndarray:
    scaler = MinMaxScaler()
    popularity = X_predict.sum(axis=0).reshape(-1, 1)
    popularity = scaler.fit_transform(popularity)
    popularity = popularity.reshape(1, -1).squeeze()
    return popularity


def get_plt(sort_indices: ndarray, long_tail_items: ndarray, k: int) -> ndarray:
    def f(idx):
        return len(np.intersect1d(long_tail_items, idx, assume_unique=True))

    vf = np.vectorize(f, signature="(n)->()")
    plt = vf(sort_indices[:, :k])
    return plt / k


def get_clt(sort_indices: ndarray, long_tail_items: ndarray, k: int) -> float:
    covered_items = np.unique(np.concatenate(sort_indices[:, :k]))
    tail_covered = len(np.intersect1d(covered_items, long_tail_items, assume_unique=True))
    return tail_covered / long_tail_items.shape[0]
from typing import List, Type

from pandas import DataFrame

from repsys.dtypes import (
    DataType,
    ColumnDict,
    Number,
    String,
    UserID,
    ItemID,
    Category,
    Interaction,
    Tag,
    Title,
    find_column_by_type,
)
from repsys.errors import InvalidDatasetError


def _check_df_columns(df: DataFrame, cols: ColumnDict):
    for col in cols.keys():
        if col not in df.columns:
            raise InvalidDatasetError(f"Column '{col}' not found in the data.")


def _check_valid_dtypes(cols: ColumnDict, valid_dtypes: List[Type[DataType]]):
    for col, dt in cols.items():
        if type(dt) not in valid_dtypes:
            # `dt` is a DataType instance, so the class name must be taken from its type
            raise InvalidDatasetError(f"Type '{type(dt).__name__}' of column '{col}' is forbidden.")


def _check_required_dtypes(cols: ColumnDict, req_dtypes: List[Type[DataType]]):
    dtypes = [type(dt) for dt in cols.values()]
    for dt in req_dtypes:
        if dt not in dtypes:
            raise InvalidDatasetError(f"Type '{dt.__name__}' is required.")


def validate_item_cols(cols: ColumnDict) -> None:
    valid_dtypes = [ItemID, Tag, String, Title, Number, Category]
    required_dtypes = [ItemID, Title]
    _check_valid_dtypes(cols, valid_dtypes)
    _check_required_dtypes(cols, required_dtypes)


def validate_item_data(items: DataFrame, cols: ColumnDict) -> None:
    _check_df_columns(items, cols)

    item_col = find_column_by_type(cols, ItemID)
    if items.duplicated(subset=[item_col]).sum() > 0:
        raise InvalidDatasetError(f"Index '{item_col}' contains non-unique values.")


def validate_interact_cols(cols: ColumnDict) -> None:
    valid_dtypes = [ItemID, UserID, Interaction]
    required_dtypes = [ItemID, UserID]
    _check_valid_dtypes(cols, valid_dtypes)
    _check_required_dtypes(cols, required_dtypes)


def validate_interact_data(
    interacts: DataFrame,
    items: DataFrame,
    interact_cols: ColumnDict,
    item_cols: ColumnDict,
) -> None:
    _check_df_columns(interacts, interact_cols)

    interacts_item_id_col = find_column_by_type(interact_cols, ItemID)
    items_id_col = find_column_by_type(item_cols, ItemID)

    s1 = set(interacts[interacts_item_id_col])
    s2 = set(items[items_id_col])
    diff = s1.difference(s2)

    if len(diff) > 0:
        raise InvalidDatasetError(
            "Some of the items are included in the interactions data "
            f"but not in the items data: {list(diff)}."
        )


def validate_dataset(
    items: DataFrame,
    item_cols: ColumnDict,
    interacts: DataFrame,
    interact_cols: ColumnDict,
):
    validate_item_cols(item_cols)
    validate_item_data(items, item_cols)
    validate_interact_cols(interact_cols)
    validate_interact_data(interacts, items, interact_cols, item_cols)
import configparser
import os
from typing import List

import repsys.constants as const
from repsys.errors import InvalidConfigError


class DatasetConfig:
    def __init__(
        self,
        test_holdout_prop: float,
        train_split_prop: float,
        min_user_interacts: int,
        min_item_interacts: int,
    ):
        self.test_holdout_prop = test_holdout_prop
        self.train_split_prop = train_split_prop
        self.min_user_interacts = min_user_interacts
        self.min_item_interacts = min_item_interacts


class EvaluationConfig:
    def __init__(
        self,
        precision_recall_k: List[int],
        ndcg_k: List[int],
        coverage_k: List[int],
        diversity_k: List[int],
        novelty_k: List[int],
        percentage_lt_k: List[int],
        coverage_lt_k: List[int],
    ):
        self.precision_recall_k = precision_recall_k
        self.ndcg_k = ndcg_k
        self.coverage_k = coverage_k
        self.diversity_k = diversity_k
        self.novelty_k = novelty_k
        self.percentage_lt_k = percentage_lt_k
        self.coverage_lt_k = coverage_lt_k


class VisualizationConfig:
    def __init__(
        self, embed_method: str, pymde_neighbors: int, umap_neighbors: int, umap_min_dist: float, tsne_perplexity: int
    ):
        self.embed_method = embed_method
        self.pymde_neighbors = pymde_neighbors
        self.umap_neighbors = umap_neighbors
        self.umap_min_dist = umap_min_dist
        self.tsne_perplexity = tsne_perplexity


class Config:
    def __init__(
        self,
        checkpoints_dir: str,
        seed: int,
        debug: bool,
        server_port: int,
        dataset_config: DatasetConfig,
        eval_config: EvaluationConfig,
        visual_config: VisualizationConfig,
    ):
        self.dataset = dataset_config
        self.eval = eval_config
        self.checkpoints_dir = checkpoints_dir
        self.debug = debug
        self.seed = seed
        self.server_port = server_port
        self.visual = visual_config


def validate_dataset_config(config: DatasetConfig):
    if config.train_split_prop <= 0 or config.train_split_prop >= 1:
        raise InvalidConfigError("The train split proportion must be between 0 and 1")

    if config.test_holdout_prop <= 0 or config.test_holdout_prop >= 1:
        raise InvalidConfigError("The test holdout proportion must be between 0 and 1")

    if config.min_user_interacts < 0:
        raise InvalidConfigError("Minimum user interactions cannot be negative")

    if config.min_item_interacts < 0:
        raise InvalidConfigError("Minimum item interactions cannot be negative")


def validate_visual_config(config: VisualizationConfig):
    if config.embed_method not in ["umap", "pymde", "tsne", "custom"]:
        raise InvalidConfigError("Invalid embedding method (none of: umap, pymde, tsne or custom)")


def parse_list(arg: str, sep: str = ","):
    if isinstance(arg, str):
        return [int(x.strip()) for x in arg.split(sep)]
    return arg


def read_config(config_path: str = None):
    config = configparser.ConfigParser()

    if config_path and os.path.isfile(config_path):
        with open(config_path, "r") as f:
            config.read_file(f)

    dataset_config = DatasetConfig(
        config.getfloat("dataset", "test_holdout_prop", fallback=const.DEFAULT_TEST_HOLDOUT_PROP),
        config.getfloat("dataset", "train_split_prop", fallback=const.DEFAULT_TRAIN_SPLIT_PROP),
        config.getint("dataset", "min_user_interacts", fallback=const.DEFAULT_MIN_USER_INTERACTS),
        config.getint("dataset", "min_item_interacts", fallback=const.DEFAULT_MIN_ITEM_INTERACTS),
    )

    validate_dataset_config(dataset_config)

    evaluator_config = EvaluationConfig(
        parse_list(
            config.get(
                "evaluation",
                "precision_recall_k",
                fallback=const.DEFAULT_PRECISION_RECALL_K,
            )
        ),
        parse_list(config.get("evaluation", "ndcg_k", fallback=const.DEFAULT_NDCG_K)),
        parse_list(config.get("evaluation", "coverage_k", fallback=const.DEFAULT_COVERAGE_K)),
        parse_list(config.get("evaluation", "diversity_k", fallback=const.DEFAULT_DIVERSITY_K)),
        parse_list(config.get("evaluation", "novelty_k", fallback=const.DEFAULT_NOVELTY_K)),
        parse_list(config.get("evaluation", "percentage_lt_k", fallback=const.DEFAULT_PERCENTAGE_LT_K)),
        parse_list(config.get("evaluation", "coverage_lt_k", fallback=const.DEFAULT_COVERAGE_LT_K)),
    )

    visual_config = VisualizationConfig(
        config.get("visualization", "embed_method", fallback=const.DEFAULT_EMBED_METHOD),
        config.getint("visualization", "pymde_neighbors", fallback=const.DEFAULT_PYMDE_NEIGHBORS),
        config.getint("visualization", "umap_neighbors", fallback=const.DEFAULT_UMAP_NEIGHBORS),
        config.getfloat("visualization", "umap_min_dist", fallback=const.DEFAULT_UMAP_MIN_DIST),
        config.getint("visualization", "tsne_perplexity", fallback=const.DEFAULT_TSNE_PERPLEXITY),
    )

    validate_visual_config(visual_config)

    return Config(
        config.get("general", "checkpoints_dir", fallback=const.DEFAULT_CHECKPOINTS_DIR),
        config.getint("general", "seed", fallback=const.DEFAULT_SEED),
        config.getboolean("general", "debug", fallback=False),
        # use getint so the port is always an int, matching the integer fallback
        config.getint("server", "port", fallback=const.DEFAULT_SERVER_PORT),
        dataset_config,
        evaluator_config,
        visual_config,
    )
import logging
from typing import Dict

from repsys.config import Config
from repsys.dataset import Dataset
from repsys.evaluators import DatasetEvaluator, ModelEvaluator
from repsys.model import Model
from repsys.server import run_server

logger = logging.getLogger(__name__)


def split_dataset(config: Config, dataset: Dataset):
    logger.info("Creating train/validation/test split")

    dataset.fit(
        config.dataset.train_split_prop,
        config.dataset.test_holdout_prop,
        config.dataset.min_user_interacts,
        config.dataset.min_item_interacts,
        config.seed,
    )

    logger.info(f"Saving splits into '{config.checkpoints_dir}'")
    dataset.save(config.checkpoints_dir)

    logger.info("Splitting successfully finished")
    logger.warning("DON'T FORGET TO RETRAIN YOUR MODELS!")


def fit_models(models: Dict[str, Model], dataset: Dataset, config: Config, training: bool = False):
    for model in models.values():
        if training:
            logger.info(f"Training '{model.name()}' model")
        else:
            logger.info(f"Fitting '{model.name()}' model")

        model.config = config
        model.dataset = dataset
        model.fit(training=training)


def start_server(config: Config, models: Dict[str, Model], dataset: Dataset):
    logger.info("Starting web application server")

    dataset.load(config.checkpoints_dir)
    fit_models(models, dataset, config)

    logger.info("Loading dataset evaluation")
    dataset_eval = DatasetEvaluator(dataset)
    dataset_eval.load(config.checkpoints_dir)

    logger.info("Loading models evaluation")
    model_eval = ModelEvaluator(dataset)
    model_eval.load(config.checkpoints_dir, list(models.keys()), load_prev=True)

    run_server(config, models, dataset, dataset_eval, model_eval)


def train_models(config: Config, models: Dict[str, Model], dataset: Dataset, model_name: str = None):
    logger.info("Training implemented models")

    dataset.load(config.checkpoints_dir)

    if model_name is not None:
        models = {model_name: models.get(model_name)}

    fit_models(models, dataset, config, training=True)


def evaluate_dataset(
    config: Config,
    models: Dict[str, Model],
    dataset: Dataset,
    method: str,
    model_name: str,
):
    if not method:
        method = config.visual.embed_method

    logger.info(f"Evaluating implemented dataset using '{method}' method")

    dataset.load(config.checkpoints_dir)

    model = None
    if method == "custom" and model_name is not None:
        model = models.get(model_name)
        fit_models({model_name: model}, dataset, config)

    logger.info("Computing embeddings")

    evaluator = DatasetEvaluator(
        dataset,
        pymde_neighbors=config.visual.pymde_neighbors,
        umap_neighbors=config.visual.umap_neighbors,
        umap_min_dist=config.visual.umap_min_dist,
        tsne_perplexity=config.visual.tsne_perplexity,
    )
    evaluator.compute_user_embeddings("train", method, model, max_samples=10000)
    evaluator.compute_user_embeddings("validation", method, model)
    evaluator.compute_item_embeddings(method, model)
    evaluator.save(config.checkpoints_dir)


def evaluate_models(
    config: Config,
    models: Dict[str, Model],
    dataset: Dataset,
    split_type: str,
    model_name: str,
):
    logger.info("Evaluating implemented models")

    dataset.load(config.checkpoints_dir)

    if model_name is not None:
        models = {model_name: models.get(model_name)}

    fit_models(models, dataset, config)

    evaluator = ModelEvaluator(
        dataset,
        precision_recall_k=config.eval.precision_recall_k,
        ndcg_k=config.eval.ndcg_k,
        coverage_k=config.eval.coverage_k,
        diversity_k=config.eval.diversity_k,
        novelty_k=config.eval.novelty_k,
        coverage_lt_k=config.eval.coverage_lt_k,
        percentage_lt_k=config.eval.percentage_lt_k,
    )

    for model in models.values():
        logger.info(f"Evaluating '{model.name()}' model")
        evaluator.evaluate(model, split_type)
        evaluator.print()
        evaluator.save(config.checkpoints_dir)
import functools
import glob
import os
import random
import shutil
import time
from typing import List
import numpy as np
from repsys.constants import CURRENT_VERSION
def remove_dir(path: str) -> None:
shutil.rmtree(path)
def create_dir(path: str) -> None:
if not os.path.exists(path):
os.makedirs(path)
def current_dir_path() -> str:
return os.getcwd()
def default_config_path() -> str:
return os.path.join(current_dir_path(), "repsys.ini")
def tmp_dir_path() -> str:
return os.path.join(current_dir_path(), "tmp")
def checkpoints_dir_path() -> str:
return os.path.join(current_dir_path(), ".repsys_checkpoints")
def create_checkpoints_dir() -> None:
create_dir(checkpoints_dir_path())
def create_tmp_dir() -> None:
create_dir(tmp_dir_path())
def remove_tmp_dir() -> None:
remove_dir(tmp_dir_path())
def unzip_dir(zip_path: str, dir_path: str) -> None:
shutil.unpack_archive(zip_path, dir_path)
def zip_dir(zip_path: str, dir_path: str) -> None:
path_chunks = zip_path.split(".")
if path_chunks[-1] == "zip":
zip_path = ".".join(path_chunks[:-1])
shutil.make_archive(zip_path, "zip", dir_path)
def get_subclasses(cls) -> List[str]:
return cls.__subclasses__() + [g for s in cls.__subclasses__() for g in get_subclasses(s)]
def current_ts() -> int:
return int(time.time())
def find_checkpoints(dir_path: str, pattern: str, history: int = 1) -> List[str]:
path = os.path.join(dir_path, pattern)
files = glob.glob(path)
if not files:
return []
files.sort(reverse=True)
return files[:history]
def set_seed(seed: int) -> None:
np.random.seed(seed)
random.seed(seed)
def enforce_updated(func):
@functools.wraps(func)
def _wrapper(self, *args, **kwargs):
if not getattr(self, "_updated", False):
raise Exception("The instance must be updated (call appropriate update method).")
return func(self, *args, **kwargs)
return _wrapper
def tmpdir_provider(func):
@functools.wraps(func)
def _wrapper(*args, **kwargs):
create_tmp_dir()
try:
func(*args, **kwargs)
finally:
remove_tmp_dir()
return _wrapper
def write_version(version: str, dir_name: str):
with open(os.path.join(dir_name, "version.txt"), "w") as f:
f.write(version)
def read_version(dir_name: str):
path = os.path.join(dir_name, "version.txt")
if not os.path.isfile(path):
return CURRENT_VERSION
with open(path, "r") as f:
return f.readline().strip()
import logging
from typing import Dict
import click
import coloredlogs
from click import Context
from repsys.config import read_config
from repsys.core import (
train_models,
evaluate_dataset,
start_server,
split_dataset,
evaluate_models,
)
from repsys.dataset import Dataset
from repsys.helpers import *
from repsys.loaders import load_packages
from repsys.model import Model
logger = logging.getLogger(__name__)
def setup_logging(level):
coloredlogs.install(
level=level,
use_chroot=False,
fmt="%(asctime)s %(levelname)-8s %(name)s - %(message)s",
)
def config_callback(ctx, param, value):
return read_config(value)
def models_callback(ctx, param, value):
return load_packages(value, Model)
def dataset_callback(ctx, param, value):
instances = load_packages(value, Dataset)
default_name = list(instances.keys())[0]
if len(instances) > 1:
dataset_name = click.prompt(
"Multiple datasets detected, please specify",
type=click.Choice(list(instances.keys())),
default=default_name,
)
return instances.get(dataset_name)
return instances.get(default_name)
def dataset_pkg_option(func):
@click.option(
"--dataset-pkg",
"dataset",
callback=dataset_callback,
default="dataset",
show_default=True,
help="Dataset package.",
)
@functools.wraps(func)
def wrapper(*args, **kwargs):
return func(*args, **kwargs)
return wrapper
def models_pkg_option(func):
@click.option(
"--models-pkg",
"models",
callback=models_callback,
default="models",
show_default=True,
help="Models package.",
)
@functools.wraps(func)
def wrapper(*args, **kwargs):
return func(*args, **kwargs)
return wrapper
@click.group()
@click.option("--debug/--no-debug", default=False, show_default=True, help="Enable debug mode.")
@click.option(
"-c",
"--config",
callback=config_callback,
default="repsys.ini",
show_default=True,
type=click.Path(exists=True),
help="Configuration file path.",
)
@click.pass_context
def repsys_group(ctx, debug, config):
"""Command-line utility for the Repsys framework."""
ctx.ensure_object(dict)
ctx.obj["CONFIG"] = config
if debug or config.debug:
setup_logging(logging.DEBUG)
else:
setup_logging(logging.INFO)
create_checkpoints_dir()
@repsys_group.command(name="server")
@models_pkg_option
@dataset_pkg_option
@click.pass_context
def server_start_cmd(ctx: Context, models: Dict[str, Model], dataset: Dataset):
"""Start web application server."""
start_server(ctx.obj["CONFIG"], models, dataset)
@click.group(name="model")
def models_group():
"""Models training and evaluation."""
pass
@click.group(name="dataset")
def dataset_group():
"""Dataset splitting and evaluation."""
pass
repsys_group.add_command(dataset_group)
repsys_group.add_command(models_group)
# MODELS GROUP
@models_group.command(name="eval")
@models_pkg_option
@dataset_pkg_option
@click.pass_context
@click.option(
"-s",
"--split",
default="validation",
type=click.Choice(["test", "validation"]),
show_default=True,
help="Evaluation split.",
)
@click.option("-m", "--model-name", help="Model to evaluate.")
def models_eval_cmd(
ctx: Context,
models: Dict[str, Model],
dataset: Dataset,
split: str,
model_name: str,
):
"""Evaluate models using validation/test split."""
evaluate_models(ctx.obj["CONFIG"], models, dataset, split, model_name)
@models_group.command(name="train")
@dataset_pkg_option
@models_pkg_option
@click.pass_context
@click.option("-m", "--model-name", help="Model to train.")
def models_train_cmd(ctx: Context, models: Dict[str, Model], dataset: Dataset, model_name: str):
"""Train models using train split."""
train_models(ctx.obj["CONFIG"], models, dataset, model_name)
# DATASET GROUP
@dataset_group.command(name="split")
@dataset_pkg_option
@click.pass_context
def dataset_split_cmd(ctx: Context, dataset: Dataset):
"""Create train/validation/test split."""
split_dataset(ctx.obj["CONFIG"], dataset)
@dataset_group.command(name="eval")
@dataset_pkg_option
@models_pkg_option
@click.pass_context
@click.option(
"--method",
type=click.Choice(["umap", "pymde", "tsne", "custom"]),
help="Embeddings method.",
)
@click.option("-m", "--model-name", help="Embeddings model.")
def dataset_eval_cmd(
ctx: Context,
dataset: Dataset,
models: Dict[str, Model],
method: str,
model_name: str,
):
"""Compute dataset embeddings."""
evaluate_dataset(ctx.obj["CONFIG"], models, dataset, method, model_name) | /repsys-framework-0.4.1.tar.gz/repsys-framework-0.4.1/repsys/cli.py | 0.798423 | 0.173183 | cli.py | pypi |
from ..Layer.Layer import Layer, EmptyLayer
from json import loads, dumps
from ..constants import VERSION
from ..helpers import check_version
class Constructor:
# Activation function type
act_func_type: str = None
# Input layer
input_layer: Layer = None
# Output layer
output_layer: Layer = None
# Network layers
layers: list = None
# Number of layers
count_layers: int = 0
def __init__(self, act_func_type='sigmoid'):
self.act_func_type = act_func_type
self.layers = []
def __repr__(self):
line = f'Input layer\n{repr(self.input_layer)}\n'
for index, layer in enumerate(self.layers):
line += f'Layer #{index}\n{repr(layer)}\n'
return line
def input(self, size: int):
if len(self.layers):
raise Exception('Unexpected behaviour. Network already has hidden layers.')
self.input_layer = EmptyLayer(size)
return self
def layer(self, size: int, *args, **kwargs):
layers = self.layers
prev_layer = None
if not self.input_layer:
raise Exception('No input layer')
if not len(self.layers):
prev_layer = self.input_layer
else:
prev_layer = layers[-1]
layer = Layer(
size,
*args,
**({
'prev_layer': prev_layer,
'act_func_type': self.act_func_type,
**kwargs,
})
)
layers.append(layer)
self.output_layer = layer
self.count_layers += 1
return self
def reset(self):
for layer in self.layers:
layer.reset()
def dumps(self, file_path: str = None):
if not self.input_layer:
raise Exception('No input layer')
if len(self.layers) < 2:
raise Exception('No enough layers present')
layers_snapshot = [
{
'type': 'input_layer',
'snapshot': self.input_layer.save()
}
]
for layer in self.layers:
layers_snapshot.append({
'type': 'layer',
'snapshot': layer.save()
})
json = dumps({
'version': VERSION,
'params': {
'act_func_type': self.act_func_type
},
'layers': layers_snapshot
})
if file_path:
with open(file_path, 'w+') as file:
file.write(json)
else:
return json
def loads(self, json: str = None, file_path: str = None):
cur_json = json
if file_path:
with open(file_path, 'r') as file:
cur_json = file.read()
if not cur_json:
raise Exception('No json present')
snapshot = loads(cur_json)
version = snapshot['version']
if not check_version(version):
raise Exception('Invalid snapshot version')
params = snapshot['params']
self.act_func_type = params['act_func_type']
for layer in snapshot['layers']:
if layer['type'] == 'input_layer':
self.input(**layer['snapshot'])
elif layer['type'] == 'layer':
self.layer(**{
'act_func_type': self.act_func_type,
**layer['snapshot']
}) | /reptile-network-0.1.4.tar.gz/reptile-network-0.1.4/reptile/Constructor/Constructor.py | 0.546496 | 0.22891 | Constructor.py | pypi |
from ..Core.Core import core
from ..ActFunctions.ActFunctions import get_act
class EmptyLayer:
# Number of neurons
size: int = None
# Input values
input_values: core.ndarray = None
# Output values
output_values: core.ndarray = None
# Reference to the next layer
next_layer = None
# Reference to the previous layer
prev_layer = None
def __init__(self, size: int):
self.size = size
def __repr__(self):
return \
f'Size: {self.size}\n\
\rInput values: \n{core.shape(self.input_values)}\n\
\rOutput values: \n{core.shape(self.output_values)}\n'
def tie_next_layer(self, next_layer):
self.next_layer = next_layer
def feed(self, data: core.ndarray):
self.input_values = data
next_layer = self.next_layer
if not next_layer:
self.output_values = data
return data
else:
return next_layer.feed(data)
def fit(self, *args, **kwargs):
prev_layer = self.prev_layer
if not prev_layer:
return
prev_layer.fit(*args, **kwargs)
def save(self):
return {
'size': self.size
}
class Layer(EmptyLayer):
# Weight matrix
weight_matrix: core.ndarray = None
# Bias vector
biases: core.ndarray = None
# Activation function type
act_func_type: str = None
# Activation function
act_func = None
# Derivative of the activation function
act_func_der = None
# Getter for the current neuron values
get_values = None
def __init__(
self,
size: int,
act_func_type: str,
*args,
prev_layer = None,
weight_matrix: core.ndarray = None,
biases: core.ndarray = None,
**kwargs
):
super().__init__(size, *args, **kwargs)
self.prev_layer = prev_layer
prev_layer.tie_next_layer(self)
Act = get_act(act_func_type)
self.act_func_type = act_func_type
self.act_func = Act.act_func
self.act_func_der = Act.act_func_der
self.get_values = Act.get_values
if not (weight_matrix is None and biases is None):
# Fill the matrices with the provided data
self.weight_matrix = core.array(weight_matrix)
self.biases = core.array(biases)
else:
# Initialize random weight and bias matrices
self.reset()
def __repr__(self):
line = super().__repr__()
if self.weight_matrix is not None:
line += f'\rWeight matrix: \n{core.shape(self.weight_matrix)}\n'
if self.biases is not None:
line += f'\rBiases: \n{core.shape(self.biases)}\n'
return line
def reset(self):
# Initialize random weight and bias matrices
self.weight_matrix = core.random.randn(self.prev_layer.size, self.size)
self.biases = core.random.randn(self.size)
def feed(self, data: core.ndarray):
summ = core.dot(data, self.weight_matrix) + self.biases
self.output_values = self.act_func(summ)
return super().feed(self.output_values)
def fit(self, cost_der: core.ndarray, learning_const: float):
delta = cost_der * self.act_func_der()
previous_layer_values = None
prev_layer = self.prev_layer
if not isinstance(prev_layer, EmptyLayer):
previous_layer_values = prev_layer.get_values()
else:
previous_layer_values = prev_layer.input_values
weight_grad = core.dot(previous_layer_values.T, delta)
biases_grad = core.sum(delta, axis=0)
prev_layer_const_der = core.dot(delta, self.weight_matrix.T)
self.weight_matrix -= learning_const * weight_grad
self.biases -= learning_const * biases_grad
super().fit(prev_layer_const_der, learning_const)
def save(self):
return {
**super().save(),
'act_func_type': self.act_func_type,
'weight_matrix': self.weight_matrix.tolist(),
'biases': self.biases.tolist()
} | /reptile-network-0.1.4.tar.gz/reptile-network-0.1.4/reptile/Layer/Layer.py | 0.47317 | 0.438244 | Layer.py | pypi |

*The Reptile image is created by [Chill Desk](https://dribbble.com/shots/2274614-Mortal-Kombat-Reptile-Illustration).*
# Reptile
Reptile is a command-line interface for Python. Specifically, Reptile helps with producing interactive [REPL](https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop)-like software. With Reptile, you can easily create different prompts depending on the specific type of question you want the user to respond to (input, list, checkbox, etc.).
The design of Reptile is heavily based on [PyInquirer](https://github.com/CITGuru/PyInquirer), which in turn is heavily based on [Inquirer.js](https://github.com/SBoudrias/Inquirer.js/). Compared to PyInquirer, Reptile features two major improvements:
- It's based on the latest version of [prompt_toolkit](https://python-prompt-toolkit.readthedocs.io/en/master/) (3.0+), rather than 1.0.
- The code is well formatted, commented and documented.
To create prompts with Reptile is as simple as:
```python
import reptile
# Single-question prompt.
question = {
"Type": "List",
"Name": "Movie",
"Message": "What's your favourite movie?",
"Choices": ["Into the Wild", "Fight Club", "Casablanca"]
}
answers = reptile.prompt(question)
# {"Movie": "Casablanca"}
```
```python
import reptile
# Multiple-questions prompt.
questions = [
{
"Type": "Checkbox",
"Name": "Books",
"Message": "What books have you read?",
"Choices": ["Infinite Jest", "The Little Prince", "The Hobbit"]
},
{
"Type": "Confirm",
"Name": "Confirmed",
"Message": "Do you agree Parasite was a masterpiece?",
},
{
"Type": "Input",
"Name": "Name",
"Message": "What's your name?",
},
{
"Type": "List",
"Name": "Movie",
"Message": "What's your favourite movie?",
"Choices": ["Into the Wild", "Fight Club", "Casablanca"]
},
]
answers = reptile.prompt(questions)
# {
# "Books": ["Infinite Jest", "The Little Prince", "The Hobbit"],
# "Confirmed": False,
# "Name": "Alessandro",
# "Movie": "Casablanca",
# }
```
## The Prompts
### Checkbox

A **checkbox** is a prompt that allows the user to select zero, one or more options. It offers two shortcuts: \<a> to select all options and \<i> to invert the selections.
Options:
- Name: str → The name of the question. It's then used as key in the output dictionary (`answer = reptile.prompt(question)`).
- Message: str → The message to display to the user (the question itself).
- Choices: list → The options available to be selected.
- Values: list → A list of the same length as Choices. If available, the corresponding value(s) in Values are returned instead of the choice(s) selected by the user.
- Default: Any → The value to return if the output is empty.
- Validate: function → A function that takes the output of the prompt as input and returns either True (if validated) or a string (if not validated; the string is used as error message).
- Transform: function → A function that takes the output of the prompt and replaces it with something else.
- When: function → Used to create conditional flows of questions. It's a function that takes the whole answers dictionary and returns either True (if the question has to be asked) or False (if it's to be skipped).
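As a sketch of how the `Validate` and `Transform` options might be wired up (the question layout follows the examples above; the `at_least_one` validator and the `len` transform are hypothetical illustrations, not part of Reptile's API):

```python
# Hypothetical validator: returns True when the selection is valid,
# or an error-message string otherwise.
def at_least_one(selection):
    return True if selection else "Please select at least one book."

question = {
    "Type": "Checkbox",
    "Name": "Books",
    "Message": "What books have you read?",
    "Choices": ["Infinite Jest", "The Little Prince", "The Hobbit"],
    "Validate": at_least_one,
    # Hypothetical transform: replace the list of titles with their count.
    "Transform": len,
}

# answers = reptile.prompt(question)  # interactive; would return {"Books": <count>}
```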
### Confirm

**Confirm** is a prompt that allows the user to respond to a Yes or No question. It only accepts y/Y and n/N. The output value is True for Yes and False for No.
Options:
- Name: str → The name of the question. It's then used as key in the output dictionary (`answer = reptile.prompt(question)`).
- Message: str → The message to display to the user (the question itself).
- Transform: function → A function that takes the output of the prompt and replaces it with something else.
- When: function → Used to create conditional flows of questions. It's a function that takes the whole answers dictionary and returns either True (if the question has to be asked) or False (if it's to be skipped).
### Input

An **input** is a prompt that allows the user to input their own value as text.
Options:
- Name: str → The name of the question. It's then used as key in the output dictionary (`answer = reptile.prompt(question)`).
- Message: str → The message to display to the user (the question itself).
- Default: Any → The value to return if the output is empty.
- Validate: function → A function that takes the output of the prompt as input and returns either True (if validated) or a string (if not validated; the string is used as error message).
- Transform: function → A function that takes the output of the prompt and replaces it with something else.
- When: function → Used to create conditional flows of questions. It's a function that takes the whole answers dictionary and returns either True (if the question has to be asked) or False (if it's to be skipped).
### List

A **list** is a prompt that allows the user to select one out of many options (but one and only one).
Options:
- Name: str → The name of the question. It's then used as key in the output dictionary (`answer = reptile.prompt(question)`).
- Message: str → The message to display to the user (the question itself).
- Choices: list → The options available to be selected.
- Values: list → A list of the same length as Choices. If available, the corresponding value(s) in Values are returned instead of the choice(s) selected by the user.
- Transform: function → A function that takes the output of the prompt and replaces it with something else.
- When: function → Used to create conditional flows of questions. It's a function that takes the whole answers dictionary and returns either True (if the question has to be asked) or False (if it's to be skipped).
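The `When` option can chain questions conditionally. A minimal sketch, assuming the predicate receives the answers collected so far (the `has_movie` function below is a hypothetical example, not part of Reptile's API):

```python
# Hypothetical predicate: only ask the follow-up if a movie was chosen.
def has_movie(answers):
    return bool(answers.get("Movie"))

questions = [
    {
        "Type": "List",
        "Name": "Movie",
        "Message": "What's your favourite movie?",
        "Choices": ["Into the Wild", "Fight Club", "Casablanca"],
    },
    {
        "Type": "Input",
        "Name": "Scene",
        "Message": "What's your favourite scene in it?",
        "When": has_movie,  # skipped when no movie was selected
    },
]

# answers = reptile.prompt(questions)  # interactive
```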
## Development
### Tests
You can run the unit tests via PyTest. After you've activated the virtual environment (`pipenv shell`), just execute:
```bash
python -m pytest
```
Reptile also comes with a handy manual test module, so that the interactions with the shell can be tested as well. To execute the module, activate the virtual environment and install Reptile from the local files themselves (rather than downloading the build from PyPI). An explanation of why this step is necessary can be found [here](https://stackoverflow.com/a/50194143/2154440). Once you've activated the virtual environment (again, `pipenv shell`), to install Reptile locally simply navigate to the folder containing the `setup.py` file and execute:
```bash
pipenv install -e .
```
The dot (`.`) means *here* and the `-e` flag signifies the package should be installed in *editable mode*, meaning all changes to the raw files will be reflected in the installed package live (this is only necessary if you intended to make changes to Reptile).
Once Reptile has been installed locally, you can execute:
```bash
python tests/manual/manual.py
```
Just follow the instructions as they appear on-screen to complete the test.
| /reptile-1.0.tar.gz/reptile-1.0/README.md | 0.640523 | 0.929824 | README.md | pypi |
from django.utils.html import strip_tags
from reptor.lib.importers.BaseImporter import BaseImporter
try:
from gql import Client, gql
from gql.transport.aiohttp import AIOHTTPTransport
from gql.transport.exceptions import TransportServerError
except ImportError:
gql = None
class GhostWriter(BaseImporter):
"""
Imports findings from GhostWriter
Connects to the GraphQL API of a GhostWriter instance and imports its
finding templates to SysReptor via API.
"""
meta = {
"author": "Richard Schwabe",
"name": "GhostWriter",
"version": "1.0",
"license": "MIT",
"summary": "Imports GhostWriter finding templates",
}
mapping = {
"title": "title",
"cvss_vector": "cvss",
"description": "summary",
"findingGuidance": "description",
"replication_steps": "description",
"hostDetectionTechniques": "description",
"networkDetectionTechniques": "description",
"impact": "impact",
"mitigation": "recommendation",
"references": "references",
}
ghostwriter_url: str
apikey: str
def __init__(self, **kwargs) -> None:
super().__init__(**kwargs)
if gql is None:
raise ImportError(
f"Error importing gql. Install with 'pip install reptor{self.log.escape('[ghostwriter]')}'."
)
self.ghostwriter_url = kwargs.get("url", "")
if not self.ghostwriter_url:
try:
self.ghostwriter_url = self.url
except AttributeError:
raise ValueError("Ghostwriter URL is required.")
if not hasattr(self, "apikey"):
raise ValueError(
"Ghostwriter API Key is required. Add to your user config."
)
self.insecure = kwargs.get("insecure", False)
@classmethod
def add_arguments(cls, parser, plugin_filepath=None):
super().add_arguments(parser, plugin_filepath)
action_group = parser.add_argument_group()
action_group.add_argument(
"-url",
"--url",
metavar="URL",
action="store",
const="",
nargs="?",
help="API Url",
)
def convert_references(self, value):
value = strip_tags(value)
return [l for l in value.splitlines() if l.strip()]
def convert_hostDetectionTechniques(self, value):
if strip_tags(value):
return f"\n\n**Host Detection Techniques**\n\n{value}"
return value
def convert_networkDetectionTechniques(self, value):
if strip_tags(value):
return f"\n\n**Network Detection Techniques**\n\n{value}"
return value
def convert_findingGuidance(self, value):
if strip_tags(value):
return f"TODO: {value}\n\n"
return value
def _get_ghostwriter_findings(self):
query = gql(
"""
query MyQuery {
finding {
findingGuidance
networkDetectionTechniques
hostDetectionTechniques
replication_steps
cvss_vector
description
impact
mitigation
references
title
}
}
"""
)
# Either x-hasura-admin-secret or Authorization Bearer can be used
headers = {
"Content-Type": "application/json",
}
if (
self.apikey.startswith("ey")
and "." in self.apikey
and len(self.apikey) > 40
):
# Probably a JWT token
headers["Authorization"] = f"Bearer {self.apikey}"
else:
# Probably hasura admin secret
headers["x-hasura-admin-secret"] = self.apikey
transport = AIOHTTPTransport(
url=f"{self.ghostwriter_url}/v1/graphql", headers=headers
)
client = Client(transport=transport, fetch_schema_from_transport=False)
try:
result = client.execute(query)
except TransportServerError as e:
if e.code == 404:
raise TransportServerError(
"404, Not found. Wrong URL? Make sure to specify the graphql port (default: 8080)."
) from e
raise e
return result["finding"]
def next_findings_batch(self):
self.debug("Running batch findings")
findings = self._get_ghostwriter_findings()
for finding_data in findings:
yield {
"language": "en-US",
"status": "in-progress",
"data": finding_data,
}
loader = GhostWriter | /plugins/importers/GhostWriter/GhostWriter.py | 0.545528 | 0.186891 | GhostWriter.py | pypi |
import re
from typing import Union
from requests.exceptions import HTTPError
from reptor.models.Finding import Finding
from reptor.models.Section import Section
from reptor.models.Base import ProjectFieldTypes
from reptor.lib.plugins.Base import Base
try:
import deepl
except ImportError:
deepl = None
class Translate(Base):
""" """
meta = {
"name": "Translate",
"summary": "Translate Projects to other languages via Deepl",
}
PREDEFINED_SKIP_FIELDS = [
"affected_components",
"references",
]
TRANSLATE_DATA_TYPES = [
ProjectFieldTypes.string.value,
ProjectFieldTypes.markdown.value,
]
DEEPL_FROM = [
"BG",
"CS",
"DA",
"DE",
"EL",
"EN",
"ES",
"ET",
"FI",
"FR",
"HU",
"ID",
"IT",
"JA",
"KO",
"LT",
"LV",
"NB",
"NL",
"PL",
"PT",
"RO",
"RU",
"SK",
"SL",
"SV",
"TR",
"UK",
"ZH",
]
DEEPL_TO = [
"BG",
"CS",
"DA",
"DE",
"EL",
"EN",
"EN-GB",
"EN-US",
"ES",
"ET",
"FI",
"FR",
"HU",
"ID",
"IT",
"JA",
"KO",
"LT",
"LV",
"NB",
"NL",
"PL",
"PT",
"PT-BR",
"PT-PT",
"RO",
"RU",
"SK",
"SL",
"SV",
"TR",
"UK",
"ZH",
]
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.from_lang = kwargs.get("from")
self.to_lang = kwargs["to"]
self.dry_run = kwargs.get("dry_run")
self.chars_count_to_translate = 0
try:
self.skip_fields = kwargs.get("skip_fields", "").split(",") or getattr(
self, "skip_fields"
)
except AttributeError:
self.skip_fields = list()
try:
self.skip_fields.extend(self.PREDEFINED_SKIP_FIELDS)
except TypeError:
raise TypeError("skip_fields should be a list.")
if not hasattr(self, "deepl_api_token"):
self.deepl_api_token = ""
try:
if not self.deepl_api_token:
# TODO error msg might propose a conf command for interactive configuration
raise AttributeError("No Deepl API token found.")
if not deepl:
raise ModuleNotFoundError(
'deepl library not found. Install plugin requirements with "pip3 install reptor[translate]'
)
self.deepl_translator = deepl.Translator(self.deepl_api_token)
except (AttributeError, ModuleNotFoundError) as e:
if not self.dry_run:
raise e
@classmethod
def add_arguments(cls, parser, plugin_filepath=None):
super().add_arguments(parser, plugin_filepath)
parser.add_argument(
"-from",
"--from",
metavar="LANGUAGE_CODE",
help="Language code of source language",
choices=cls.DEEPL_FROM,
action="store",
default=None,
)
parser.add_argument(
"-to",
"--to",
metavar="LANGUAGE_CODE",
help="Language code of dest language",
choices=cls.DEEPL_TO,
action="store",
default=None,
)
parser.add_argument(
"-skip-fields",
"--skip-fields",
metavar="FIELDS",
help="Report and Finding fields, comma-separated",
action="store",
default="",
)
# Currently supported: Deepl
# parser.add_argument(
# "-translator",
# "--translator",
# help="Translator service to use",
# choices=["deepl"],
# default="deepl",
# )
parser.add_argument(
"-dry-run",
"--dry-run",
help="Do not translate, count characters to be translated and checks Deepl quota",
action="store_true",
)
def _translate(self, text: str) -> str:
if not re.search("[a-zA-Z]", text):
return text
self.chars_count_to_translate += len(text)
result = self.deepl_translator.translate_text(
text,
source_lang=self.from_lang, # Can be None
target_lang=self.to_lang,
preserve_formatting=True,
)
return result.text
def _translate_section(
self, section: Union[Finding, Section]
) -> Union[Finding, Section]:
for field in section.data:
if field.type not in self.TRANSLATE_DATA_TYPES:
continue
if field.name in self.skip_fields:
continue
field.value = self._translate(field.value)
return section
def _dry_run_translate(self, text: str) -> str:
self.chars_count_to_translate += len(text)
return text
def _duplicate_and_update_project(self, project_title: str) -> None:
self.display(f"Duplicating project{' (dry run)' if self.dry_run else ''}.")
if not self.dry_run:
to_project_id = self.reptor.api.projects.duplicate().id
self.display(
f"Updating project metadata{' (dry run)' if self.dry_run else ''}."
)
# Switch project to update duplicated project instead of original
self.reptor.api.switch_project(to_project_id)
try:
data = {"name": self._translate(project_title)}
if sysreptor_language_code := self._get_sysreptor_language_code(
self.to_lang
):
data["language"] = sysreptor_language_code
self.reptor.api.projects.update_project(data)
except HTTPError as e:
self.warning(f"Error updating project: {e.response.text}")
else:
self._translate(project_title) # To count characters
def _translate_project(self):
if self.dry_run:
self._translate = self._dry_run_translate
project = self.reptor.api.projects.get_project()
self._duplicate_and_update_project(project_title=project.name)
self.display(f"Translating findings{' (dry run)' if self.dry_run else ''}.")
sections = [
Finding(f, self.reptor.api.project_designs.project_design)
for f in self.reptor.api.projects.get_findings()
] + self.reptor.api.projects.get_sections()
for section in sections:
translated_section = self._translate_section(section)
translated_section_data = translated_section.data.to_dict()
if not self.dry_run:
if translated_section.__class__ == Finding:
self.reptor.api.projects.update_finding(
translated_section.id, {"data": translated_section_data}
)
elif translated_section.__class__ == Section:
self.reptor.api.projects.update_section(
translated_section.id, {"data": translated_section_data}
)
self.display(
f"Translated {self.chars_count_to_translate} characters{' (dry run)' if self.dry_run else ''}."
)
self._log_deepl_usage()
self.success(f"Project translated{' (dry run)' if self.dry_run else ''}.")
self.display(
"We recommend to check quality of the translation, or to add a note that the report was "
"translated by a machine."
)
def _log_deepl_usage(self):
try:
usage = self.deepl_translator.get_usage()
if usage.any_limit_reached:
self.warning("Deepl transaction limit reached.")
if usage.character.valid:
self.display(
f"Deepl usage: {usage.character.count} of {usage.character.limit} characters"
)
except AttributeError:
pass
def _get_sysreptor_language_code(self, language_code) -> str:
enabled_language_codes = self.reptor.api.projects.get_enabled_language_codes()
matched_lcs = [
enabled_lc
for enabled_lc in enabled_language_codes
if enabled_lc.lower().startswith(language_code[:2].lower())
]
if not matched_lcs:
return ""
elif len(matched_lcs) == 1:
return matched_lcs[0]
else:
for matched_lc in matched_lcs:
if matched_lc.lower() == language_code.lower():
return matched_lc
else:
return matched_lcs[0]
def run(self):
self._translate_project()
loader = Translate | /plugins/projects/Translate/Translate.py | 0.454593 | 0.215619 | Translate.py | pypi |
import pathlib
import shutil
import reptor.settings as settings
import reptor.subcommands as subcommands
from reptor.lib.plugins.Base import Base
from reptor.utils.table import make_table
class Plugins(Base):
"""
Use this plugin to list your plugins.
Search for plugins based on tags, name or author
Create a new plugin from a template for the community
or yourself.
# Arguments
* --search Allows you to search for a specific plugin by name and tag
* --copy PLUGINNAME Copies a plugin to your local folder for development
* --new PLUGINNAME Creates a new plugin based off a template and your input
* --verbose Provides more information about a plugin
# Developer Notes
You can modify the `_list` method to change the output for search as
well as the default output.
"""
meta = {
"name": "Plugins",
"summary": "Allows plugin management & development",
}
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.search = kwargs.get("search")
self.new_plugin_name = kwargs.get("new_plugin_name")
self.copy_plugin_name = kwargs.get("copy_plugin_name")
self.verbose = kwargs.get("verbose")
@classmethod
def add_arguments(cls, parser, plugin_filepath=None):
super().add_arguments(parser, plugin_filepath)
action_group = parser.add_mutually_exclusive_group()
action_group.title = "action_group"
action_group.add_argument(
"-search",
"--search",
metavar="SEARCHTERM",
action="store",
dest="search",
const="",
nargs="?",
help="Search for term",
)
action_group.add_argument(
"-new",
"--new",
metavar="PLUGINNAME",
nargs="?",
dest="new_plugin_name",
const="",
help="Create a new plugin",
)
action_group.add_argument(
"-copy",
"--copy",
metavar="PLUGINNAME",
action="store",
dest="copy_plugin_name",
help="Copy plugin to home directory",
)
def _list(self, plugins):
if self.verbose:
table = make_table(
[
"Name",
"Short Help",
"Space",
"Overwrites",
"Author",
"Version",
"Tags",
]
)
else:
table = make_table(
[
"Name",
"Short Help",
"Tags",
]
)
for tool in plugins:
color = "blue"
tool_name = f"[{color}]{tool.name}[/{color}]"
overwritten_plugin = tool.get_overwritten_plugin()
overwrites_name = ""
if overwritten_plugin:
overwrites_name = f"[red]{overwritten_plugin.name}({overwritten_plugin.category})[/red]"
tool_name = f"[red]{tool.name}({tool.category})\nOverwrites: {overwritten_plugin.category}[/red]"
color = "red"
if self.verbose:
table.add_row(
f"[{color}]{tool.name}[/{color}]",
f"[{color}]{tool.summary}[/{color}]",
f"[{color}]{tool.category}[/{color}]",
overwrites_name,
f"[{color}]{tool.author}[/{color}]",
f"[{color}]{tool.version}[/{color}]",
f"[{color}]{','.join(tool.tags)}[/{color}]",
)
else:
table.add_row(
f"{tool_name}",
f"[{color}]{tool.summary}[/{color}]",
f"[{color}]{','.join(tool.tags)}[/{color}]",
)
self.console.print(table)
def _search(self):
"""Searches plugins"""
plugins = list()
for _, group_plugins in subcommands.SUBCOMMANDS_GROUPS.items():
plugins.extend(group_plugins[1])
if self.search:
self.console.print(f"\nSearching for: [red]{self.search}[/red]\n")
results = list()
for plugin in plugins:
if self.search in plugin.tags:
results.append(plugin)
continue
if self.search in plugin.name:
results.append(plugin)
continue
else:
results = plugins
self._list(results)
def _create_new_plugin(self):
"""Goes through a few questions to generate a new plugin.
The user must at least answer the plugin Name.
It should always be a directory based plugin, because of templates.
The user can then go on and develop their own plugin.
"""
# Todo: Finalise this to make it as easy as possible for new plugins to be created by anyone
# think Django's manage.py startapp or npm init
introduction = """
Please answer the following questions.
Based on the answers we will create a raw plugin for you to work on.
You will find the new plugin in your ~/.sysreptor/plugins folder.
Let's get started...
"""
self.reptor.console.print(introduction)
if self.new_plugin_name:
plugin_name = self.new_plugin_name.strip().split(" ")[0]
print(f"plugin Name: {plugin_name}")
else:
plugin_name = (
(input("plugin Name (No spaces, try to use the tool name): "))
.strip()
.split(" ")[0]
)
author = input("Author Name: ")[:25] or "Unknown"
tags = input("Tags (max. 5), e.g. owasp,web,scanner: ").split(",")[:5]
# Create the folder
new_plugin_folder = pathlib.Path(settings.PLUGIN_DIRS_USER / plugin_name)
try:
new_plugin_folder.mkdir(parents=True)
except FileExistsError:
self.highlight(
"A plugin with this name already exists in your home directory."
)
overwrite = input("Do you want to continue? [N/y]: ") or "n"
if overwrite[0].lower() != "y":
raise AssertionError("Aborting...")
shutil.copytree(
settings.PLUGIN_TOOLBASE_TEMPLATE_FOLDER,
new_plugin_folder / "",
dirs_exist_ok=True,
)
# Now rename some stuff and replace some placeholders
new_plugin_file = new_plugin_folder / f"{plugin_name.capitalize()}.py"
pathlib.Path(new_plugin_folder / "Toolbase.py").rename(new_plugin_file)
with open(new_plugin_file, "r+") as f:
contents = f.read()
contents = contents.replace("MYMODULENAME", plugin_name.capitalize())
contents = contents.replace("AUTHOR_NAME", author)
contents = contents.replace("TAGS_LIST", ",".join(tags))
f.seek(0)
f.write(contents)
f.truncate()
self.log.success(f"New plugin created. Happy coding! ({new_plugin_folder})")
def _copy_plugin(self, dest=settings.PLUGIN_DIRS_USER):
# Check if plugin exists and get its path
try:
plugin = pathlib.Path(
self.reptor.plugin_manager.LOADED_PLUGINS[
self.copy_plugin_name
].__file__
)
except KeyError:
raise ValueError(f"Plugin '{self.copy_plugin_name}' does not exist.")
# Copy plugin
dest = dest / plugin.parent.name
self.log.display(f'Trying to copy "{plugin.parent}" to "{dest}"')
shutil.copytree(plugin.parent, dest)
self.log.success(f"Copied successfully. ({dest})")
def run(self):
if self.new_plugin_name is not None:
self._create_new_plugin()
elif self.copy_plugin_name is not None:
self._copy_plugin()
else:
self._search()
loader = Plugins
from reptor.lib.plugins.ToolBase import ToolBase
from reptor.plugins.tools.Nmap.models import Service
class Nmap(ToolBase):
"""
Author: Syslifters
Version: 1.0
Website: https://github.com/Syslifters/reptor
License: MIT
Tags: network,scanning,infrastructure
Short Help:
Formats nmap output
Description:
target=127.0.0.1
sudo nmap -Pn -n -sV -oG - -p 1-65535 $target
sudo -v # Elevate privileges for non-interactive
sudo -n nmap -Pn -n -sV -oG - -p 80,440 $target | tee nmap_result.txt | reptor nmap
# Format and upload
cat nmap_result.txt | reptor nmap -c upload
"""
meta = {
"name": "Nmap",
"summary": "format nmap output",
}
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.notename = kwargs.get("notename", "Nmap Scan")
self.note_icon = "👁️🗨️"
if self.input_format == "raw":
self.input_format = "xml"
@classmethod
def add_arguments(cls, parser, plugin_filepath=None):
super().add_arguments(parser, plugin_filepath)
input_format_group = cls.get_input_format_group(parser)
input_format_group.add_argument(
"-oX",
help="nmap XML output format, same as --xml (recommended)",
action="store_const",
dest="format",
const="xml",
)
input_format_group.add_argument(
"-oG",
"--grepable",
help="nmap Grepable output format",
action="store_const",
dest="format",
const="grepable",
)
parser.add_argument(
"-multi-notes",
"--multi-notes",
help="Uploads multiple notes (one per IP) instead of one note with all IPs",
action="store_true",
)
def parse_grepable(self):
self.parsed_input = list()
for line in self.raw_input.splitlines():
if line.startswith("#") or "Ports:" not in line:
continue
ip, ports = line.split("Ports:")
ip = ip.split(" ")[1]
ports = ports.split(",")
for port in ports:
port, status, protocol, _, service, _, version, _ = port.strip().split(
"/"
)
if status == "open":
s = Service()
s.parse(
{
"ip": ip,
"port": port,
"protocol": protocol,
"service": service.replace("|", "/"),
"version": version.replace("|", "/"),
}
)
self.parsed_input.append(s)
def parse_xml(self):
super().parse_xml()
nmap_data = self.parsed_input
self.parsed_input = list()
hosts = nmap_data.get("nmaprun", {}).get("host", [])
if not isinstance(hosts, list):
hosts = [hosts]
for host in hosts:
ip = host.get("address", {}).get("@addr")
ports = host.get("ports", {}).get("port", [])
if not isinstance(ports, list):
ports = [ports]
for port in ports:
if port.get("state", {}).get("@state") == "open":
s = Service()
s.parse(
{
"ip": ip,
"hostname": (host.get("hostnames") or {})
.get("hostname", {})
.get("@name"),
"port": port.get("@portid"),
"protocol": port.get("@protocol"),
"service": port.get("service", {}).get("@name"),
"version": port.get("service", {}).get("@product"),
}
)
self.parsed_input.append(s)
def parse(self):
super().parse()
if self.input_format == "grepable":
self.parse_grepable()
def preprocess_for_template(self):
data = dict()
if not self.multi_notes:
data["data"] = self.parsed_input
data["show_hostname"] = any([s.hostname for s in self.parsed_input])
else:
# Group data per IP
for d in self.parsed_input:
# Key of the dict (d.ip) will be title of the note
data.setdefault(d.ip, list()).append(d)
# Show hostname or not
for ip, port_data in data.items():
data[ip] = {
"data": port_data,
"show_hostname": any([s.hostname for s in port_data]),
}
return data
loader = Nmap
import typing
from reptor.lib.plugins.ModelBase import ModelBase
class Instance(ModelBase):
uri: str
method: str
param: str
attack: str
evidence: str
otherinfo: str
requestheader: str
requestbody: str
responseheader: str
# responsebody: str # Careful with this!
def parse(self, data):
self.uri = data.find("uri").text
self.method = data.find("method").text
self.param = data.find("param").text
self.attack = data.find("attack").text
self.evidence = data.find("evidence").text
self.otherinfo = data.find("otherinfo").text
# ElementTree elements are falsy when they have no children,
# so compare against None explicitly
if data.find("requestheader") is not None:
self.requestheader = data.find("requestheader").text
if data.find("requestbody") is not None:
self.requestbody = data.find("requestbody").text
if data.find("responseheader") is not None:
self.responseheader = data.find("responseheader").text
class Alert(ModelBase):
pluginid: str
alertRef: str
name: str
riskcode: int
confidence: int
riskdesc: str
confidencedesc: str
desc: str
count: int
solution: str
otherinfo: str
reference: str
cweid: int
wascid: int
sourceid: int
instances: typing.List[Instance] = list()
def parse(self, data):
self.pluginid = data.find("pluginid").text
self.alertRef = data.find("alertRef").text
self.name = data.find("name").text
self.riskcode = data.find("riskcode").text
self.confidence = data.find("confidence").text
self.riskdesc = data.find("riskdesc").text
self.confidencedesc = data.find("confidencedesc").text
self.desc = data.find("desc").text
self.count = data.find("count").text
self.solution = data.find("solution").text
self.reference = data.find("reference").text
self.cweid = data.find("cweid").text
self.wascid = data.find("wascid").text
self.sourceid = data.find("sourceid").text
@property
def references_as_list_items(self):
return self.reference.splitlines()
class Site(ModelBase):
name: str
host: str
port: str
ssl: bool
alerts: typing.List[Alert] = list()
def parse(self, data):
self.name = data.get("name")
self.host = data.get("host")
self.port = data.get("port")
self.ssl = data.get("ssl")
import typing
from reptor.lib.plugins.ToolBase import ToolBase
class Sslyze(ToolBase):
"""
target="app1.example.com:443{127.0.0.1} app2.example.com:443{127.0.0.2}"
sslyze --sslv2 --sslv3 --tlsv1 --tlsv1_1 --tlsv1_2 --tlsv1_3 --certinfo --reneg --http_get --hide_rejected_ciphers --compression --heartbleed --openssl_ccs --fallback --robot "$target" --json_out=- | tee sslyze.txt | reptor sslyze
# Format and upload
cat sslyze_result.txt | reptor sslyze -c upload
"""
meta = {
"author": "Syslifters",
"name": "Sslyze",
"version": "1.0",
"license": "MIT",
"tags": ["web", "ssl"],
"summary": "format sslyze JSON output",
}
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.note_icon = "🔒"
self.input_format = "json"
weak_ciphers = [
"DHE-DSS-DES-CBC3-SHA",
"DHE-DSS-AES128-SHA",
"DHE-DSS-AES128-SHA256",
"DHE-DSS-AES256-SHA",
"DHE-DSS-AES256-SHA256",
"DHE-DSS-CAMELLIA128-SHA",
"DHE-DSS-CAMELLIA128-SHA256",
"DHE-DSS-CAMELLIA256-SHA",
"DHE-DSS-CAMELLIA256-SHA256",
"DHE-DSS-SEED-SHA",
"DHE-PSK-3DES-EDE-CBC-SHA",
"DHE-PSK-AES128-CBC-SHA",
"DHE-PSK-AES128-CBC-SHA256",
"DHE-PSK-AES256-CBC-SHA",
"DHE-PSK-AES256-CBC-SHA384",
"DHE-PSK-CAMELLIA128-SHA256",
"DHE-PSK-CAMELLIA256-SHA384",
"DHE-RSA-DES-CBC3-SHA",
"DHE-RSA-AES128-SHA",
"DHE-RSA-AES128-SHA256",
"DHE-RSA-AES256-SHA",
"DHE-RSA-AES256-SHA256",
"DHE-RSA-CAMELLIA128-SHA",
"DHE-RSA-CAMELLIA128-SHA256",
"DHE-RSA-CAMELLIA256-SHA",
"DHE-RSA-CAMELLIA256-SHA256",
"DHE-RSA-SEED-SHA",
"ECDHE-ECDSA-DES-CBC3-SHA",
"ECDHE-ECDSA-AES128-SHA",
"ECDHE-ECDSA-AES128-SHA256",
"ECDHE-ECDSA-AES256-SHA",
"ECDHE-ECDSA-AES256-SHA384",
"ECDHE-ECDSA-CAMELLIA128-SHA256",
"ECDHE-ECDSA-CAMELLIA256-SHA384",
"ECDHE-PSK-3DES-EDE-CBC-SHA",
"ECDHE-PSK-AES128-CBC-SHA",
"ECDHE-PSK-AES128-CBC-SHA256",
"ECDHE-PSK-AES256-CBC-SHA",
"ECDHE-PSK-AES256-CBC-SHA384",
"ECDHE-PSK-CAMELLIA128-SHA256",
"ECDHE-PSK-CAMELLIA256-SHA384",
"ECDHE-RSA-DES-CBC3-SHA",
"ECDHE-RSA-AES128-SHA",
"ECDHE-RSA-AES128-SHA256",
"ECDHE-RSA-AES256-SHA",
"ECDHE-RSA-AES256-SHA384",
"ECDHE-RSA-CAMELLIA128-SHA256",
"ECDHE-RSA-CAMELLIA256-SHA384",
"PSK-3DES-EDE-CBC-SHA",
"PSK-AES128-CBC-SHA",
"PSK-AES128-CBC-SHA256",
"PSK-AES128-CCM",
"PSK-AES128-CCM8",
"PSK-AES128-GCM-SHA256",
"PSK-AES256-CBC-SHA",
"PSK-AES256-CBC-SHA384",
"PSK-AES256-CCM",
"PSK-AES256-CCM8",
"PSK-AES256-GCM-SHA384",
"PSK-CAMELLIA128-SHA256",
"PSK-CAMELLIA256-SHA384",
"PSK-CHACHA20-POLY1305",
"RSA-PSK-3DES-EDE-CBC-SHA",
"RSA-PSK-AES128-CBC-SHA",
"RSA-PSK-AES128-CBC-SHA256",
"RSA-PSK-AES128-GCM-SHA256",
"RSA-PSK-AES256-CBC-SHA",
"RSA-PSK-AES256-CBC-SHA384",
"RSA-PSK-AES256-GCM-SHA384",
"RSA-PSK-CAMELLIA128-SHA256",
"RSA-PSK-CAMELLIA256-SHA384",
"RSA-PSK-CHACHA20-POLY1305",
"DES-CBC3-SHA",
"AES128-SHA",
"AES128-SHA256",
"AES128-CCM",
"AES128-CCM8",
"AES128-GCM-SHA256",
"AES256-SHA",
"AES256-SHA256",
"AES256-CCM",
"AES256-CCM8",
"AES256-GCM-SHA384",
"CAMELLIA128-SHA",
"CAMELLIA128-SHA256",
"CAMELLIA256-SHA",
"CAMELLIA256-SHA256",
"IDEA-CBC-SHA",
"SEED-SHA",
"SRP-DSS-3DES-EDE-CBC-SHA",
"SRP-DSS-AES-128-CBC-SHA",
"SRP-DSS-AES-256-CBC-SHA",
"SRP-RSA-3DES-EDE-CBC-SHA",
"SRP-RSA-AES-128-CBC-SHA",
"SRP-RSA-AES-256-CBC-SHA",
"SRP-3DES-EDE-CBC-SHA",
"SRP-AES-128-CBC-SHA",
"SRP-AES-256-CBC-SHA",
]
insecure_ciphers = [
"ADH-DES-CBC3-SHA",
"ADH-AES128-SHA",
"ADH-AES128-SHA256",
"ADH-AES128-GCM-SHA256",
"ADH-AES256-SHA",
"ADH-AES256-SHA256",
"ADH-AES256-GCM-SHA384",
"ADH-CAMELLIA128-SHA",
"ADH-CAMELLIA128-SHA256",
"ADH-CAMELLIA256-SHA",
"ADH-CAMELLIA256-SHA256",
"ADH-SEED-SHA",
"DHE-PSK-NULL-SHA",
"DHE-PSK-NULL-SHA256",
"DHE-PSK-NULL-SHA384",
"AECDH-DES-CBC3-SHA",
"AECDH-AES128-SHA",
"AECDH-AES256-SHA",
"AECDH-NULL-SHA",
"ECDHE-ECDSA-NULL-SHA",
"ECDHE-PSK-NULL-SHA",
"ECDHE-PSK-NULL-SHA256",
"ECDHE-PSK-NULL-SHA384",
"ECDHE-RSA-NULL-SHA",
"PSK-NULL-SHA",
"PSK-NULL-SHA256",
"PSK-NULL-SHA384",
"RSA-PSK-NULL-SHA",
"RSA-PSK-NULL-SHA256",
"RSA-PSK-NULL-SHA384",
"NULL-MD5",
"NULL-SHA",
"NULL-SHA256",
]
def get_weak_ciphers(self, target):
result_protocols = dict()
for protocol, protocol_data in target["commands_results"].items():
if len(protocol_data.get("accepted_cipher_list", list())) > 0:
result_protocols[protocol] = dict()
weak_ciphers = list(
filter(
None,
[
c["openssl_name"]
if c["openssl_name"] in self.weak_ciphers
else None
for c in protocol_data["accepted_cipher_list"]
],
)
)
if weak_ciphers:
result_protocols[protocol]["weak_ciphers"] = weak_ciphers
insecure_ciphers = list(
filter(
None,
[
c["openssl_name"]
if c["openssl_name"] in self.insecure_ciphers
else None
for c in protocol_data["accepted_cipher_list"]
],
)
)
if insecure_ciphers:
result_protocols[protocol]["insecure_ciphers"] = insecure_ciphers
return result_protocols
def get_certinfo(self, target):
result_certinfo = dict()
certinfo = target.get("commands_results", dict()).get("certinfo")
if certinfo is None:
return result_certinfo
result_certinfo["certificate_matches_hostname"] = certinfo.get(
"certificate_matches_hostname"
)
result_certinfo["has_sha1_in_certificate_chain"] = certinfo.get(
"has_sha1_in_certificate_chain"
)
result_certinfo["certificate_untrusted"] = list(
filter(
None,
[
store["trust_store"]["name"]
if not store["is_certificate_trusted"]
else None
for store in certinfo.get("path_validation_result_list") or list()
],
)
)
return result_certinfo
def get_vulnerabilities(self, target):
result_vulnerabilities = dict()
commands_results = target.get("commands_results")
if commands_results is None:
return result_vulnerabilities
result_vulnerabilities["heartbleed"] = commands_results.get(
"heartbleed", dict()
).get("is_vulnerable_to_heartbleed")
result_vulnerabilities["openssl_ccs"] = commands_results.get(
"openssl_ccs", dict()
).get("is_vulnerable_to_ccs_injection")
robot = commands_results.get("robot", dict()).get("robot_result_enum")
# robot is None if the robot check was not run; guard before the "in" test
result_vulnerabilities["robot"] = (
robot if robot and "NOT_VULNERABLE" not in robot else False
)
return result_vulnerabilities
def get_misconfigurations(self, target):
result_misconfigs = dict()
commands_results = target.get("commands_results")
if commands_results is None:
return result_misconfigs
result_misconfigs["compression"] = (
commands_results.get("compression", dict()).get("compression_name")
is not None
)
result_misconfigs["downgrade"] = (
commands_results.get("fallback", dict()).get("supports_fallback_scsv", True)
is not True
)
result_misconfigs["client_renegotiation"] = commands_results.get(
"reneg", dict()
).get("accepts_client_renegotiation")
result_misconfigs["no_secure_renegotiation"] = (
commands_results.get("reneg", dict()).get(
"supports_secure_renegotiation", True
)
is not True
)
return result_misconfigs
def get_server_info(self, target):
result_server_info = dict()
server_info = target.get("server_info")
result_server_info["hostname"] = server_info["hostname"]
result_server_info["port"] = str(server_info["port"])
result_server_info["ip_address"] = server_info["ip_address"]
return result_server_info
def preprocess_for_template(self):
data = list()
if not isinstance(self.parsed_input, dict):
return None
for target in self.parsed_input.get("accepted_targets", list()):
target_data = self.get_server_info(target)
target_data["protocols"] = self.get_weak_ciphers(target)
target_data["has_weak_ciphers"] = False
if any(
[
v.get("weak_ciphers", list()) + v.get("insecure_ciphers", list())
for k, v in target_data["protocols"].items()
]
):
target_data["has_weak_ciphers"] = True
target_data["certinfo"] = self.get_certinfo(target)
target_data["vulnerabilities"] = self.get_vulnerabilities(target)
target_data["has_vulnerabilities"] = False
if any([v for k, v in target_data["vulnerabilities"].items()]):
target_data["has_vulnerabilities"] = True
target_data["misconfigurations"] = self.get_misconfigurations(target)
data.append(target_data)
if self.template in [
"protocols",
"certinfo",
"vulnerabilities",
"misconfigurations",
"weak_ciphers",
]:
if len(data) > 1:
self.log.warning(
"sslyze output contains more than one target. Taking the first one."
)
return {"target": data[0]}
return {"data": data}
def finding_weak_ciphers(self):
finding_context = self.preprocess_for_template()
if finding_context is None:
return None
if any(
[
target.get("has_weak_ciphers")
for target in finding_context.get("data", [])
]
):
# Add affected components
finding_context["affected_components"] = [
f'{t["hostname"]}:{t["port"]} ({t["ip_address"]})'
for t in finding_context["data"]
]
return finding_context
return None
loader = Sslyze
import datetime
import typing
from reptor.lib.plugins.ModelBase import ModelBase
class Item(ModelBase):
method: str = "GET"
description: str
uri: str
namelink: str
iplink: str
references: str
endpoint: str = "/"
def parse(self, data):
description = data.find("description").text
self.description = description
if ": " in description:
# split only on the first ": " so later colons stay in the description
description_data = description.split(": ", 1)
self.endpoint = description_data[0]
self.description = description_data[1]
self.uri = data.find("uri").text
self.namelink = data.find("namelink").text
self.iplink = data.find("iplink").text
self.references = data.find("references").text
self.method = data.get("method")
class ScanDetails(ModelBase):
targetip: str
targethostname: str
targetport: str
targetbanner: str
sitename: str
siteip: str
hostheader: str
errors: str
checks: str
items: typing.List[Item] = list()
def parse(self, data):
self.targethostname = data.get("targethostname")
self.targetport = data.get("targetport")
self.targetbanner = data.get("targetbanner")
self.sitename = data.get("sitename")
self.hostheader = data.get("hostheader")
self.siteip = data.get("siteip")
self.targetip = data.get("targetip")
self.starttime = data.get("starttime")
self.errors = data.get("errors")
def append_item(self, item: Item):
if not self.items:
self.items = list()
self.items.append(item)
def set_items(self, items: typing.List[Item]):
self.items = items
class Statistics(ModelBase):
elapsed: int
errors: int
checks: int
itemsfound: int
itemstested: int
starttime: datetime.datetime
endtime: datetime.datetime
def parse(self, data):
self.elapsed = data.get("elapsed")
self.itemsfound = data.get("itemsfound")
self.endtime = data.get("endtime")
self.itemstested = data.get("itemstested")
self.checks = data.get("checks")
self.errors = data.get("errors")
class NiktoScan(ModelBase):
options: str
version: str
scandetails: ScanDetails
statistics: Statistics
def parse(self, data):
self.options = data.get("options")
self.version = data.get("version")
RepubMQTT
=========
RepubMQTT re-publishes MQTT messages to MQTT, RESTful or logfiles, allowing
filtering and selection of fields via regular expressions.
A Python dict structure is used to describe all actions. For example, the
following definition
```
myrule = {
'name': 'outside_temp',
'rules': [['filter','selector', 'publish']],
'filter': {'topic': '(loftha.*)', 'event_data.entity_id': '(sensor.dark_sky_temperature)'},
'selector': { 'temperature:event_data.new_state.state': '(.*)'},
'publish': {'protocol': 'log', 'data': "%(ts)s: temperature is %(temperature)s" }
}
RULES = [myrule]
```
will produce the output
```
2016-12-22 14:16:31: temperature is 2.2
```
from this (very complex) MQTT msg:
```
2016-12-22 14:16:31: loftha -> {
"event_data": {
"entity_id": "sensor.dark_sky_temperature",
"new_state": {
"attributes": {
"attribution": "Powered by Dark Sky",
"friendly_name": "Outside",
"icon": "mdi:thermometer",
"unit_of_measurement": "\u00b0C"
},
"entity_id": "sensor.dark_sky_temperature",
"last_changed": "2016-12-22T19:16:31.073761+00:00",
"last_updated": "2016-12-22T19:16:31.073761+00:00",
"state": "2.2"
},
"old_state": {
"attributes": {
"attribution": "Powered by Dark Sky",
"friendly_name": "Outside",
"icon": "mdi:thermometer",
"unit_of_measurement": "\u00b0C"
},
"entity_id": "sensor.dark_sky_temperature",
"last_changed": "2016-12-22T19:04:00.969787+00:00",
"last_updated": "2016-12-22T19:04:00.969787+00:00",
"state": "2.3"
}
},
"event_type": "state_changed"
}
```
Configuration
-------------
Each definition dictionary has to specify the 'name' of the entry
and a 'rules' field that contains a list of 3 items, specifying the
'filter', 'selector' and 'publish' rules:
```
'name': 'outside_temp',
'rules': [['filter','selector', 'publish']],
```
The names 'filter', 'selector', and 'publish' can be freely chosen, and you can have multiple sets of rules, as in
```
'name': 'aew_desk',
'rules': [['qfilter','qselector', 'qpublish'],
['infilter','inselector', 'inpublish']],
```
Each of the names specified must exist as a key in the same definition dict.
'filter' and 'selector'
----------------------
The 'filter' and 'selector' rules are themselves dicts and they share the
same syntax:
```
'filter|selector': { '[<new_field_name>]:<fieldname>': <matchdef> ... }
```
The ```<fieldname>``` selects the field with the same name in the incoming
MQTT message. The optional ```<new_field_name>``` can be used to set a new
name in the output record, otherwise the last component of ```<fieldname>```
will be the new name in the output record. The value ```<matchdef>```
specifies how to match the field content. See matchdefs below.
If the matchdef succeeds for the field then the value of the last 're' match
for the field is selected. For 'filter' rules the message passes through the filter
if and only if all the fields are selected successfully. For 'selector' rules, all
selected fields are made available as a dictionary for publishing.
Two additional "virtual" fields are available: 'ts' is a human readable time
stamp and 'topic' is the topic of the incoming MQTT message.
A 'filter' rule must be present. If the 'selector' name is given as '' then
the full input message is made available for further processing, with the
additional fields 'ts' and 'topic'.
The 'publish' entry specifies what actions to take.
For example, the filter:
```'filter': {'topic': 'loftha.*', 'entity_id': 'sensor.dark_sky_temperature'}```
would match an incoming MQTT message with the topic ```loftha/#``` and a
json payload field ```entity_id``` with the content ```sensor.dark_sky_temperature```.
matchdefs
---------
A matchdef is a list of tuples. Each tuple specifies a comparison operation
('==','<','<=','>','>=', '!=', 're') and the value to compare to. In case of
're', the comparison value is a regular expression. A simple string as matchdef is
shorthand notation for [['re',string]]. The list of tuples constitutes an AND condition:
all comparisons in the list must evaluate successfully in order for the filter/selector
to continue.
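As an illustration, the semantics above could be sketched in Python like this. This is only a sketch of the described behaviour, not repubmqtt's actual implementation; ```eval_matchdef``` and ```OPS``` are invented names:

```python
import re

# Map each comparison operation named above to a predicate.
OPS = {
    '==': lambda value, cmp: value == cmp,
    '!=': lambda value, cmp: value != cmp,
    '<':  lambda value, cmp: value < cmp,
    '<=': lambda value, cmp: value <= cmp,
    '>':  lambda value, cmp: value > cmp,
    '>=': lambda value, cmp: value >= cmp,
    're': lambda value, cmp: re.match(cmp, str(value)) is not None,
}

def eval_matchdef(matchdef, value):
    # A plain string is shorthand for [['re', string]].
    if isinstance(matchdef, str):
        matchdef = [['re', matchdef]]
    # All tuples must succeed (AND condition).
    return all(OPS[op](value, cmp) for op, cmp in matchdef)

print(eval_matchdef([['>', 0], ['<', 100]], 42))                            # True
print(eval_matchdef('sensor\\.dark_sky.*', 'sensor.dark_sky_temperature'))  # True
```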
'publish'
--------
The 'publish' entry is either a dict or a list of dicts. Two keys are
required, 'protocol' and 'data'. Valid values for 'protocol' are 'log',
'mqtt', or 'restful'. Custom protocols can be added, see below.
'data' holds the template for the data to be published. It can either be a
string with pythonesque parenthesised mapping keys ```%(name)s```, which will
be replaced with actual data, or it can be a dict, in which case the data will
be published as json. If 'data' is absent, then all the fields that were
selected by the 'selector' rule are used to construct a dictionary.
See 'copy_fields' to copy (and optionally transform) selected fields to the data
dict from the input record.
'protocol' mqtt
---------------
The 'topic' field will hold the MQTT topic to publish to. Also, the optional
field 'retain' can be set to ```True``` in order to cause the MQTT broker to
retain the msg. The optional field 'once' can be set to ```True``` if this
message is only ever to be published once during the runtime of the program.
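Putting the mqtt fields together, a hypothetical 'publish' entry might look like this (topic and data values are made up for illustration):

```python
# Hypothetical mqtt 'publish' entry using the fields described above.
mqtt_publish = {
    'protocol': 'mqtt',
    'topic': 'home/outside/temperature',
    'retain': True,            # ask the broker to retain the message
    'once': False,             # publish every time, not just once per run
    'data': '%(temperature)s',
}
```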
'protocol' restful
------------------
The field 'url' specifies the URL to post the msg to; the optional 'header'
can be used to specify additional HTTP protocol header fields, in the form of
a dictionary.
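A hypothetical restful 'publish' entry could look as follows (URL and header values are illustrative only):

```python
# Hypothetical restful 'publish' entry; the dict in 'data' is sent as JSON.
rest_publish = {
    'protocol': 'restful',
    'url': 'https://example.com/api/states/sensor.outside_temp',
    'header': {'Content-Type': 'application/json'},
    'data': {'state': '%(temperature)s'},
}
```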
'protocol' log
--------------
The 'log' publisher can specify a 'logfile' field to which to write the
output. Default is stdout.
custom protocols
----------------
The program using the repubmqtt.Republish class can register additional protocols
by calling ```register_publish_protocol('<prot_name>', <proto_function>)```.
The prot_name is the protocol name and can override the built-in protocols.
proto_function should have 3 arguments: a publish dict, an output string and
a dict containing the record built by the selector rules.
'copy\_fields'
-----------
The optional
'copy_fields' field holds a list of copy definitions, consisting of
destination field, source field and conversion:
``` 'copy_fields': [['value', 'lvl', 'int']] ```
In this example, the content of the field 'lvl' (either from a 'selector'
entry or from the input message) will be copied to the output field 'value'
after conversion to integer.
copy\_fields conversions
-----------------------
When copying fields, you can specify a data conversion. Built-in conversions
are ```int``` and ```float```. In addition, you can specify a key to
the ```xlate``` dict. In this case, a lookup is used to select the data
value for the destination field. An example of the ```xlate``` dict
might look like this:
```
xlate = {
'motion': { 'off': False, 'on': True },
'door': { 'off': 'Closed', 'on': 'Open' },
}
```
Here, using the conversion ```door``` would translate an input field value
of 'off' to 'Closed'.
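A sketch of how such a conversion might be applied (illustrative only, not repubmqtt's actual code; ```apply_copy_field``` is an invented helper):

```python
xlate = {
    'motion': {'off': False, 'on': True},
    'door': {'off': 'Closed', 'on': 'Open'},
}

def apply_copy_field(record, dest, src, conv):
    value = record[src]
    if conv == 'int':
        value = int(value)
    elif conv == 'float':
        value = float(value)
    elif conv in xlate:
        # the conversion key refers to a lookup table in the xlate dict
        value = xlate[conv][value]
    record[dest] = value
    return record

print(apply_copy_field({'state': 'off'}, 'door_state', 'state', 'door'))
# {'state': 'off', 'door_state': 'Closed'}
```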
Installation
------------
Use the standard pip install:
```python setup.py install```
The setup installs a sample script 'repubmqtt'; it takes a config file
like repub.conf-sample as its argument.
.. _ts2img:
======
ts2img
======
Introduction
============
Conversion of time series data to images (ts2img) is a general problem
that we have to deal with often for all kinds of datasets. Although most
of the external datasets we use come in image format most of them are
converted into a time series format for analysis. This is fairly
straightforward for image stacks but gets more complicated for orbit
data.
Orbit data mostly comes as several data values accompanied by latitude,
longitude information and must often be resampled to fit on a grid that
lends itself for time series analysis. A more or less general solution
for this already exists in the img2ts module.
Possible steps involved in the conversion
=========================================
The steps that a ts2img program might have to perform are:
1. Read time series in a geographic region - constrained by memory
2. Aggregate time series in time
- methods for doing this aggregation can vary for each dataset so
the method is best specified by the user
- resolution in time has to be chosen and is probably also best
specified by the user
- after aggregation every point in time and in space must have a
value which can of course be NaN
- time series might have to be split into separate images during
conversion, e.g. ASCAT time series are routinely split into images
for ascending and descending satellite overpasses. **This means
that we can not assume that the output dataset has the same number
or names of variables as the input dataset.**
3. Put the now uniform time series of equal length into a 2D array per
variable
4. A resampling step could be performed here but since only a part of
the dataset is available edge cases would not be resolved correctly.
A better solution would be to develop a good resampling tool which
might already exist in pyresample and pytesmo functions that use it.
5. write this data into a file
- this can be a netCDF file with dimensions of the grid into which
the data is written
- this could be any other file format, the interface to this format
just has to make sure that in the end a consistent image dataset
is built out of the parts that are written.
Solution
========
The chosen first solution uses netCDF as an output format. The output
will be a stack of images in netCDF format. This format can easily be
converted into substacks or single images if that is needed for a
certain user or project.
The chosen solution will **not** do resampling since this is better and
easier done using the whole converted dataset. This also means that if
the input dataset is e.g. a dataset defined over land only then the
resulting "image" will also not contain land points. I think it is best
to let this be decided by the input dataset.
The output of the resulting netCDF can have one of two possible
"shapes":
- 2D variables with time on one axis and gpi on the other. This is kind
of how SWI time series are stored already.
- 3D variables with latitude, longitude and time as the three
dimensions.
The decision of which it will be is dependent on the grid on which the
input data is stored. If the grid has a 2D shape then the 3D solution
will be chosen. If the input grid has only a 1D shape then only the 2D
solution is possible.
Time Series aggregation
-----------------------
The chosen solution will use a custom function for each dataset to
perform the aggregation if necessary. A simple example of a function
that gets a time time series and aggregates it to a monthly time series
could look like *agg\_tsmonthly*
Simple example of a aggregation function
.. code:: python
def agg_tsmonthly(ts, **kwargs):
"""
Parameters
----------
ts : pandas.DataFrame
time series of a point
kwargs : dict
any additional keyword arguments that are given to the ts2img object
during initialization
Returns
-------
ts_agg : pandas.DataFrame
aggregated time series, they all must have the same length
otherwise it can not work
each column of this DataFrame will be a layer in the image
"""
# very simple example
# aggregate to monthly timestamp
# should also make sure that the output has a certain length
return ts.asfreq("M")
Time series iteration
---------------------
The aggregation function will be called for every time series in
the input dataset. The input dataset must have an ``iter_ts``
iterator that iterates over the grid points in a sensible order.
Interface to the netCDF writer
------------------------------
The netCDF writer will be initialized outside the *ts2img* class with a
filename and other attributes it needs. So the *ts2img* class only gets
a writer object. This writer object already knows about the start and
end date of the time series as well as the target grid and has
initialized the correct dimensions in the netCDF file. This object must
have a method ``write_ts`` which takes a array of gpi's and a 2D array
containing the time series for these gpis. This should be enough to
write the gpi's into the correct position of the netCDF file.
This approach should also work if another output format is supposed to
be used.
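A minimal in-memory writer satisfying this interface might look as follows. This is an illustrative stand-in only, assuming nothing beyond the ``write_ts`` contract described above; a real implementation would write netCDF:

.. code:: python

    class DictImgWriter(object):
        """Toy writer with the ``write_ts`` interface described above:
        it takes an array of gpis and a dict mapping each variable name
        to a 2D array of shape (len(gpis), n_timestamps)."""

        def __init__(self):
            self.data = {}

        def write_ts(self, gpis, ts_bulk):
            for var, rows in ts_bulk.items():
                store = self.data.setdefault(var, {})
                for gpi, row in zip(gpis, rows):
                    # each gpi keeps its aggregated time series row
                    store[gpi] = list(row)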
Implementation of the main ts2img class
---------------------------------------
The Ts2Img class will automatically fall back to the default
``agg_tsmonthly`` function if no custom ``agg_func`` is provided. If
the tsreader implements a method called ``agg_ts2img``, this method
will be used instead of the default.
.. code:: python
class Ts2Img(object):
"""
Takes a time series dataset and converts it
into an image dataset.
A custom aggregate function should be given otherwise
a daily mean will be used
Parameters
----------
tsreader: object
object that implements a iter_ts method which iterates over
pandas time series and has a grid attribute that is a pytesmo
BasicGrid or CellGrid
imgwriter: object
writer object that implements a write_ts method that takes
a list of grid point indices and a 2D array containing the time series data
agg_func: function
function that takes a pandas DataFrame and returns
an aggregated pandas DataFrame
ts_buffer: int
how many time series to read before writing to disk,
constrained by the working memory the process should use.
"""
def __init__(self, tsreader, imgwriter,
agg_func=None,
ts_buffer=1000):
self.agg_func = agg_func
if self.agg_func is None:
try:
self.agg_func = tsreader.agg_ts2img
except AttributeError:
self.agg_func = agg_tsmonthly
self.tsreader = tsreader
self.imgwriter = imgwriter
self.ts_buffer = ts_buffer
def calc(self, **tsaggkw):
"""
does the conversion from time series to images
"""
for gpis, ts in self.tsbulk(**tsaggkw):
self.imgwriter.write_ts(gpis, ts)
def tsbulk(self, gpis=None, **tsaggkw):
"""
iterator over gpi and time series arrays of size self.ts_buffer
Parameters
----------
gpis: iterable, optional
if given these gpis will be used, can be practical
if the gpis are managed by an external class e.g. for parallel
processing
tsaggkw: dict
Keywords to give to the time series aggregation function
Yields
------
gpi_array: numpy.array
numpy array of gpis in this batch
ts_bulk: dict of numpy arrays
for each variable one numpy array of shape
(len(gpi_array), len(ts_aggregated))
"""
# have to use the grid iteration as long as iter_ts only returns
# data frame and no time series object including relevant metadata
# of the time series
i = 0
gpi_bulk = []
ts_bulk = {}
ts_index = None
if gpis is None:
gpis, _, _, _ = self.tsreader.grid.grid_points()
for gpi in gpis:
gpi_bulk.append(gpi)
ts = self.tsreader.read_ts(gpi)
ts_agg = self.agg_func(ts, **tsaggkw)
for column in ts_agg.columns:
try:
ts_bulk[column].append(ts_agg[column].values)
except KeyError:
ts_bulk[column] = []
ts_bulk[column].append(ts_agg[column].values)
if ts_index is None:
ts_index = ts_agg.index
i += 1
if i >= self.ts_buffer:
for key in ts_bulk:
ts_bulk[key] = np.vstack(ts_bulk[key])
gpi_array = np.hstack(gpi_bulk)
yield gpi_array, ts_bulk
ts_bulk = {}
gpi_bulk = []
i = 0
if i > 0:
for key in ts_bulk:
ts_bulk[key] = np.vstack(ts_bulk[key])
gpi_array = np.hstack(gpi_bulk)
yield gpi_array, ts_bulk
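The default aggregation function ``agg_tsmonthly`` is referenced but not shown above; judging by its name it computes a monthly mean. A minimal sketch of such a function (the exact signature and keyword handling are assumptions) could look like:

```python
import numpy as np
import pandas as pd

def agg_tsmonthly(ts, **kwargs):
    """Aggregate a time series DataFrame to monthly means.

    Sketch of the assumed default aggregation; a real implementation
    may accept additional keywords via **kwargs.
    """
    return ts.groupby(ts.index.to_period("M")).mean()

index = pd.date_range("2020-01-01", periods=60, freq="D")
ts = pd.DataFrame({"sm": np.arange(60.0)}, index=index)
monthly = agg_tsmonthly(ts)
print(monthly["sm"].tolist())  # → [15.0, 45.0]
```

The returned frame keeps one row per month, so ``ts_agg.index`` and ``ts_agg.columns`` behave exactly as ``tsbulk`` above expects.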
import torch.nn as nn
import torch.utils.checkpoint as checkpoint
from .se_block import SEBlock
import torch
import numpy as np
def conv_bn_relu(in_channels, out_channels, kernel_size, stride, padding, groups=1):
result = nn.Sequential()
result.add_module('conv', nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
kernel_size=kernel_size, stride=stride, padding=padding, groups=groups, bias=False))
result.add_module('bn', nn.BatchNorm2d(num_features=out_channels))
result.add_module('relu', nn.ReLU())
return result
def conv_bn(in_channels, out_channels, kernel_size, stride, padding, groups=1):
result = nn.Sequential()
result.add_module('conv', nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
kernel_size=kernel_size, stride=stride, padding=padding, groups=groups, bias=False))
result.add_module('bn', nn.BatchNorm2d(num_features=out_channels))
return result
class RepVGGplusBlock(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size,
stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros',
deploy=False,
use_post_se=False):
super(RepVGGplusBlock, self).__init__()
self.deploy = deploy
self.groups = groups
self.in_channels = in_channels
assert kernel_size == 3
assert padding == 1
self.nonlinearity = nn.ReLU()
if use_post_se:
self.post_se = SEBlock(out_channels, internal_neurons=out_channels // 4)
else:
self.post_se = nn.Identity()
if deploy:
self.rbr_reparam = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride,
padding=padding, dilation=dilation, groups=groups, bias=True, padding_mode=padding_mode)
else:
if out_channels == in_channels and stride == 1:
self.rbr_identity = nn.BatchNorm2d(num_features=out_channels)
else:
self.rbr_identity = None
self.rbr_dense = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride, padding=padding, groups=groups)
padding_11 = padding - kernel_size // 2
self.rbr_1x1 = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=stride, padding=padding_11, groups=groups)
def forward(self, x):
if self.deploy:
return self.post_se(self.nonlinearity(self.rbr_reparam(x)))
if self.rbr_identity is None:
id_out = 0
else:
id_out = self.rbr_identity(x)
out = self.rbr_dense(x) + self.rbr_1x1(x) + id_out
out = self.post_se(self.nonlinearity(out))
return out
# This func derives the equivalent kernel and bias in a DIFFERENTIABLE way.
# You can get the equivalent kernel and bias at any time and do whatever you want,
# for example, apply some penalties or constraints during training, just like you do to the other models.
# May be useful for quantization or pruning.
def get_equivalent_kernel_bias(self):
kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense)
kernel1x1, bias1x1 = self._fuse_bn_tensor(self.rbr_1x1)
kernelid, biasid = self._fuse_bn_tensor(self.rbr_identity)
return kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid, bias3x3 + bias1x1 + biasid
def _pad_1x1_to_3x3_tensor(self, kernel1x1):
if kernel1x1 is None:
return 0
else:
return torch.nn.functional.pad(kernel1x1, [1, 1, 1, 1])
def _fuse_bn_tensor(self, branch):
if branch is None:
return 0, 0
if isinstance(branch, nn.Sequential):
# For the 1x1 or 3x3 branch
kernel, running_mean, running_var, gamma, beta, eps = branch.conv.weight, branch.bn.running_mean, branch.bn.running_var, branch.bn.weight, branch.bn.bias, branch.bn.eps
else:
# For the identity branch
assert isinstance(branch, nn.BatchNorm2d)
if not hasattr(self, 'id_tensor'):
# Construct and store the identity kernel in case it is used multiple times
input_dim = self.in_channels // self.groups
kernel_value = np.zeros((self.in_channels, input_dim, 3, 3), dtype=np.float32)
for i in range(self.in_channels):
kernel_value[i, i % input_dim, 1, 1] = 1
self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device)
kernel, running_mean, running_var, gamma, beta, eps = self.id_tensor, branch.running_mean, branch.running_var, branch.weight, branch.bias, branch.eps
std = (running_var + eps).sqrt()
t = (gamma / std).reshape(-1, 1, 1, 1)
return kernel * t, beta - running_mean * gamma / std
def switch_to_deploy(self):
if hasattr(self, 'rbr_reparam'):
return
kernel, bias = self.get_equivalent_kernel_bias()
self.rbr_reparam = nn.Conv2d(in_channels=self.rbr_dense.conv.in_channels,
out_channels=self.rbr_dense.conv.out_channels,
kernel_size=self.rbr_dense.conv.kernel_size, stride=self.rbr_dense.conv.stride,
padding=self.rbr_dense.conv.padding, dilation=self.rbr_dense.conv.dilation,
groups=self.rbr_dense.conv.groups, bias=True)
self.rbr_reparam.weight.data = kernel
self.rbr_reparam.bias.data = bias
self.__delattr__('rbr_dense')
self.__delattr__('rbr_1x1')
if hasattr(self, 'rbr_identity'):
self.__delattr__('rbr_identity')
if hasattr(self, 'id_tensor'):
self.__delattr__('id_tensor')
self.deploy = True
class RepVGGplusStage(nn.Module):
def __init__(self, in_planes, planes, num_blocks, stride, use_checkpoint, use_post_se=False, deploy=False):
super().__init__()
strides = [stride] + [1] * (num_blocks - 1)
blocks = []
self.in_planes = in_planes
for stride in strides:
cur_groups = 1
blocks.append(RepVGGplusBlock(in_channels=self.in_planes, out_channels=planes, kernel_size=3,
stride=stride, padding=1, groups=cur_groups, deploy=deploy, use_post_se=use_post_se))
self.in_planes = planes
self.blocks = nn.ModuleList(blocks)
self.use_checkpoint = use_checkpoint
def forward(self, x):
for block in self.blocks:
if self.use_checkpoint:
x = checkpoint.checkpoint(block, x)
else:
x = block(x)
return x
class RepVGGplus(nn.Module):
"""RepVGGplus
An official improved version of RepVGG (`RepVGG: Making VGG-style ConvNets Great Again <https://openaccess.thecvf.com/content/CVPR2021/papers/Ding_RepVGG_Making_VGG-Style_ConvNets_Great_Again_CVPR_2021_paper.pdf>`_).
Args:
num_blocks (tuple[int]): Depths of each stage.
num_classes (int): Number of classes.
width_multiplier (tuple[float]): The width of the four stages
will be (64 * width_multiplier[0], 128 * width_multiplier[1], 256 * width_multiplier[2], 512 * width_multiplier[3]).
deploy (bool, optional): If True, the model will have the inference-time structure.
Default: False.
use_post_se (bool, optional): If True, the model will have Squeeze-and-Excitation blocks following the conv-ReLU units.
Default: False.
use_checkpoint (bool, optional): If True, the model will use torch.utils.checkpoint to save the GPU memory during training with acceptable slowdown.
Do not use it if you have sufficient GPU memory.
Default: False.
"""
def __init__(self,
num_blocks,
num_classes,
width_multiplier,
deploy=False,
use_post_se=False,
use_checkpoint=False):
super().__init__()
self.deploy = deploy
self.num_classes = num_classes
in_channels = min(64, int(64 * width_multiplier[0]))
stage_channels = [int(64 * width_multiplier[0]), int(128 * width_multiplier[1]), int(256 * width_multiplier[2]), int(512 * width_multiplier[3])]
self.stage0 = RepVGGplusBlock(in_channels=3, out_channels=in_channels, kernel_size=3, stride=2, padding=1, deploy=self.deploy, use_post_se=use_post_se)
self.stage1 = RepVGGplusStage(in_channels, stage_channels[0], num_blocks[0], stride=2, use_checkpoint=use_checkpoint, use_post_se=use_post_se, deploy=deploy)
self.stage2 = RepVGGplusStage(stage_channels[0], stage_channels[1], num_blocks[1], stride=2, use_checkpoint=use_checkpoint, use_post_se=use_post_se, deploy=deploy)
# split stage3 so that we can insert an auxiliary classifier
self.stage3_first = RepVGGplusStage(stage_channels[1], stage_channels[2], num_blocks[2] // 2, stride=2, use_checkpoint=use_checkpoint, use_post_se=use_post_se, deploy=deploy)
self.stage3_second = RepVGGplusStage(stage_channels[2], stage_channels[2], num_blocks[2] - num_blocks[2] // 2, stride=1, use_checkpoint=use_checkpoint, use_post_se=use_post_se, deploy=deploy)
self.stage4 = RepVGGplusStage(stage_channels[2], stage_channels[3], num_blocks[3], stride=2, use_checkpoint=use_checkpoint, use_post_se=use_post_se, deploy=deploy)
self.gap = nn.AdaptiveAvgPool2d(output_size=1)
self.flatten = nn.Flatten()
self.linear = nn.Linear(int(512 * width_multiplier[3]), num_classes)
# aux classifiers
if not self.deploy:
self.stage1_aux = self._build_aux_for_stage(self.stage1)
self.stage2_aux = self._build_aux_for_stage(self.stage2)
self.stage3_first_aux = self._build_aux_for_stage(self.stage3_first)
def _build_aux_for_stage(self, stage):
stage_out_channels = list(stage.blocks.children())[-1].rbr_dense.conv.out_channels
downsample = conv_bn_relu(in_channels=stage_out_channels, out_channels=stage_out_channels, kernel_size=3, stride=2, padding=1)
fc = nn.Linear(stage_out_channels, self.num_classes, bias=True)
return nn.Sequential(downsample, nn.AdaptiveAvgPool2d(1), nn.Flatten(), fc)
def forward(self, x):
out = self.stage0(x)
out = self.stage1(out)
stage1_aux = self.stage1_aux(out)
out = self.stage2(out)
stage2_aux = self.stage2_aux(out)
out = self.stage3_first(out)
stage3_first_aux = self.stage3_first_aux(out)
out = self.stage3_second(out)
out = self.stage4(out)
y = self.gap(out)
y = self.flatten(y)
y = self.linear(y)
return {
'main': y,
'stage1_aux': stage1_aux,
'stage2_aux': stage2_aux,
'stage3_first_aux': stage3_first_aux,
}
def switch_repvggplus_to_deploy(self):
for m in self.modules():
if hasattr(m, 'switch_to_deploy'):
m.switch_to_deploy()
if hasattr(self, 'stage1_aux'):
self.__delattr__('stage1_aux')
if hasattr(self, 'stage2_aux'):
self.__delattr__('stage2_aux')
if hasattr(self, 'stage3_first_aux'):
self.__delattr__('stage3_first_aux')
self.deploy = True
# torch.utils.checkpoint can reduce the memory consumption during training with a minor slowdown. Don't use it if you have sufficient GPU memory.
# Not sure whether it slows down inference
# pse for "post SE", which means using SE block after ReLU
def create_RepVGGplus_L2pse(deploy=False, use_checkpoint=False):
return RepVGGplus(num_blocks=[8, 14, 24, 1], num_classes=1000,
width_multiplier=[2.5, 2.5, 2.5, 5], deploy=deploy, use_post_se=True,
use_checkpoint=use_checkpoint)
class RepVGGplusBackbone(nn.Module):
"""RepVGGplus
An official improved version of RepVGG (`RepVGG: Making VGG-style ConvNets Great Again <https://openaccess.thecvf.com/content/CVPR2021/papers/Ding_RepVGG_Making_VGG-Style_ConvNets_Great_Again_CVPR_2021_paper.pdf>`_).
Args:
num_blocks (tuple[int]): Depths of each stage.
width_multiplier (tuple[float]): The width of the four stages
will be (64 * width_multiplier[0], 128 * width_multiplier[1], 256 * width_multiplier[2], 512 * width_multiplier[3]).
deploy (bool, optional): If True, the model will have the inference-time structure.
Default: False.
use_post_se (bool, optional): If True, the model will have Squeeze-and-Excitation blocks following the conv-ReLU units.
Default: False.
use_checkpoint (bool, optional): If True, the model will use torch.utils.checkpoint to save the GPU memory during training with acceptable slowdown.
Do not use it if you have sufficient GPU memory.
Default: False.
"""
def __init__(self,
num_blocks,
width_multiplier,
deploy=False,
use_post_se=False,
use_checkpoint=False):
super().__init__()
self.deploy = deploy
in_channels = min(64, int(64 * width_multiplier[0]))
stage_channels = [int(64 * width_multiplier[0]), int(128 * width_multiplier[1]), int(256 * width_multiplier[2]), int(512 * width_multiplier[3])]
self.stage0 = RepVGGplusBlock(in_channels=3, out_channels=in_channels, kernel_size=3, stride=2, padding=1, deploy=self.deploy, use_post_se=use_post_se)
self.stage1 = RepVGGplusStage(in_channels, stage_channels[0], num_blocks[0], stride=2, use_checkpoint=use_checkpoint, use_post_se=use_post_se, deploy=deploy)
self.stage2 = RepVGGplusStage(stage_channels[0], stage_channels[1], num_blocks[1], stride=2, use_checkpoint=use_checkpoint, use_post_se=use_post_se, deploy=deploy)
#self.stage3 = RepVGGplusStage(stage_channels[1], stage_channels[2], num_blocks[2], stride=1, use_checkpoint=use_checkpoint, use_post_se=use_post_se, deploy=deploy)
def forward(self, x):
out = self.stage0(x)
out = self.stage1(out)
out = self.stage2(out)
#out = self.stage3(out)
return out
def switch_repvggplus_to_deploy(self):
for m in self.modules():
if hasattr(m, 'switch_to_deploy'):
m.switch_to_deploy()
if hasattr(self, 'stage1_aux'):
self.__delattr__('stage1_aux')
if hasattr(self, 'stage2_aux'):
self.__delattr__('stage2_aux')
if hasattr(self, 'stage3_first_aux'):
self.__delattr__('stage3_first_aux')
self.deploy = True
def create_RepVGGplus_backbone(deploy=False, use_checkpoint=False):
return RepVGGplusBackbone(num_blocks=[8, 14, 24, 1],
width_multiplier=[2.5, 2.5, 2.5, 5], deploy=deploy, use_post_se=True,
use_checkpoint=use_checkpoint)
# Will release more
repvggplus_func_dict = {
'RepVGGplus-L2pse': create_RepVGGplus_L2pse,
'RepVGGplus-backbone': create_RepVGGplus_backbone
}
def create_RepVGGplus_by_name(name, deploy=False, use_checkpoint=False):
if 'plus' in name:
return repvggplus_func_dict[name](deploy=deploy, use_checkpoint=use_checkpoint)
else:
print('=================== Building the vanilla RepVGG ===================')
from .repvgg import get_RepVGG_func_by_name
return get_RepVGG_func_by_name(name)(deploy=deploy, use_checkpoint=use_checkpoint)
def get_RepVGGplus_func_by_name(name):
return repvggplus_func_dict[name]
# Use this for converting a RepVGG model or a bigger model with RepVGG as its component
# Use like this
# model = create_RepVGG_A0(deploy=False)
# train model or load weights
# repvgg_model_convert(model, save_path='repvgg_deploy.pth')
# If you want to preserve the original model, call with do_copy=True
# ====================== for using RepVGG as the backbone of a bigger model, e.g., PSPNet, the pseudo code will be like
# train_backbone = create_RepVGG_B2(deploy=False)
# train_backbone.load_state_dict(torch.load('RepVGG-B2-train.pth'))
# train_pspnet = build_pspnet(backbone=train_backbone)
# segmentation_train(train_pspnet)
# deploy_pspnet = repvgg_model_convert(train_pspnet)
# segmentation_test(deploy_pspnet)
# ===================== example_pspnet.py shows an example
def repvgg_model_convert(model:torch.nn.Module, save_path=None, do_copy=True):
import copy
if do_copy:
model = copy.deepcopy(model)
for module in model.modules():
if hasattr(module, 'switch_to_deploy'):
module.switch_to_deploy()
if save_path is not None:
torch.save(model.state_dict(), save_path)
return model
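The ``_fuse_bn_tensor`` method above folds a BatchNorm layer into the preceding bias-free convolution using the scaled weight ``w' = w * gamma / std`` and the bias ``b' = beta - mean * gamma / std``. The identity is easy to check with plain NumPy by viewing a per-channel linear map (a 1×1 convolution) as a matrix product:

```python
import numpy as np

rng = np.random.default_rng(0)
out_ch, in_ch = 4, 3
W = rng.standard_normal((out_ch, in_ch))  # bias-free "conv" weights
x = rng.standard_normal(in_ch)

# BatchNorm running statistics and affine parameters (per output channel)
mean = rng.standard_normal(out_ch)
var = rng.random(out_ch) + 0.1
gamma = rng.standard_normal(out_ch)
beta = rng.standard_normal(out_ch)
std = np.sqrt(var + 1e-5)

# Unfused: conv followed by batch norm
y_bn = gamma * (W @ x - mean) / std + beta

# Fused: rescaled weights plus a bias, as in _fuse_bn_tensor
W_fused = (gamma / std)[:, None] * W
b_fused = beta - mean * gamma / std
y_fused = W_fused @ x + b_fused

assert np.allclose(y_bn, y_fused)
```

The same algebra holds per output channel of a 3×3 convolution, which is why the method reshapes ``gamma / std`` to ``(-1, 1, 1, 1)`` before multiplying the kernel.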
import torch.nn as nn
import numpy as np
import torch
import copy
from .se_block import SEBlock
import torch.utils.checkpoint as checkpoint
def conv_bn(in_channels, out_channels, kernel_size, stride, padding, groups=1):
result = nn.Sequential()
result.add_module('conv', nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
kernel_size=kernel_size, stride=stride, padding=padding, groups=groups, bias=False))
result.add_module('bn', nn.BatchNorm2d(num_features=out_channels))
return result
class RepVGGBlock(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size,
stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', deploy=False, use_se=False):
super(RepVGGBlock, self).__init__()
self.deploy = deploy
self.groups = groups
self.in_channels = in_channels
assert kernel_size == 3
assert padding == 1
padding_11 = padding - kernel_size // 2
self.nonlinearity = nn.ReLU()
if use_se:
# Note that RepVGG-D2se uses SE before the nonlinearity, while RepVGGplus models use SE after the nonlinearity.
self.se = SEBlock(out_channels, internal_neurons=out_channels // 16)
else:
self.se = nn.Identity()
if deploy:
self.rbr_reparam = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride,
padding=padding, dilation=dilation, groups=groups, bias=True, padding_mode=padding_mode)
else:
self.rbr_identity = nn.BatchNorm2d(num_features=in_channels) if out_channels == in_channels and stride == 1 else None
self.rbr_dense = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride, padding=padding, groups=groups)
self.rbr_1x1 = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=stride, padding=padding_11, groups=groups)
print('RepVGG Block, identity = ', self.rbr_identity)
def forward(self, inputs):
if hasattr(self, 'rbr_reparam'):
return self.nonlinearity(self.se(self.rbr_reparam(inputs)))
if self.rbr_identity is None:
id_out = 0
else:
id_out = self.rbr_identity(inputs)
return self.nonlinearity(self.se(self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out))
# Optional. This may improve the accuracy and facilitates quantization in some cases.
# 1. Cancel the original weight decay on rbr_dense.conv.weight and rbr_1x1.conv.weight.
# 2. Use like this.
# loss = criterion(....)
# for every RepVGGBlock blk:
# loss += weight_decay_coefficient * 0.5 * blk.get_custom_L2()
# optimizer.zero_grad()
# loss.backward()
def get_custom_L2(self):
K3 = self.rbr_dense.conv.weight
K1 = self.rbr_1x1.conv.weight
t3 = (self.rbr_dense.bn.weight / ((self.rbr_dense.bn.running_var + self.rbr_dense.bn.eps).sqrt())).reshape(-1, 1, 1, 1).detach()
t1 = (self.rbr_1x1.bn.weight / ((self.rbr_1x1.bn.running_var + self.rbr_1x1.bn.eps).sqrt())).reshape(-1, 1, 1, 1).detach()
l2_loss_circle = (K3 ** 2).sum() - (K3[:, :, 1:2, 1:2] ** 2).sum() # The L2 loss of the "circle" of weights in 3x3 kernel. Use regular L2 on them.
eq_kernel = K3[:, :, 1:2, 1:2] * t3 + K1 * t1 # The equivalent resultant central point of 3x3 kernel.
l2_loss_eq_kernel = (eq_kernel ** 2 / (t3 ** 2 + t1 ** 2)).sum() # Normalize for an L2 coefficient comparable to regular L2.
return l2_loss_eq_kernel + l2_loss_circle
# This func derives the equivalent kernel and bias in a DIFFERENTIABLE way.
# You can get the equivalent kernel and bias at any time and do whatever you want,
# for example, apply some penalties or constraints during training, just like you do to the other models.
# May be useful for quantization or pruning.
def get_equivalent_kernel_bias(self):
kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense)
kernel1x1, bias1x1 = self._fuse_bn_tensor(self.rbr_1x1)
kernelid, biasid = self._fuse_bn_tensor(self.rbr_identity)
return kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid, bias3x3 + bias1x1 + biasid
def _pad_1x1_to_3x3_tensor(self, kernel1x1):
if kernel1x1 is None:
return 0
else:
return torch.nn.functional.pad(kernel1x1, [1,1,1,1])
def _fuse_bn_tensor(self, branch):
if branch is None:
return 0, 0
if isinstance(branch, nn.Sequential):
kernel = branch.conv.weight
running_mean = branch.bn.running_mean
running_var = branch.bn.running_var
gamma = branch.bn.weight
beta = branch.bn.bias
eps = branch.bn.eps
else:
assert isinstance(branch, nn.BatchNorm2d)
if not hasattr(self, 'id_tensor'):
input_dim = self.in_channels // self.groups
kernel_value = np.zeros((self.in_channels, input_dim, 3, 3), dtype=np.float32)
for i in range(self.in_channels):
kernel_value[i, i % input_dim, 1, 1] = 1
self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device)
kernel = self.id_tensor
running_mean = branch.running_mean
running_var = branch.running_var
gamma = branch.weight
beta = branch.bias
eps = branch.eps
std = (running_var + eps).sqrt()
t = (gamma / std).reshape(-1, 1, 1, 1)
return kernel * t, beta - running_mean * gamma / std
def switch_to_deploy(self):
if hasattr(self, 'rbr_reparam'):
return
kernel, bias = self.get_equivalent_kernel_bias()
self.rbr_reparam = nn.Conv2d(in_channels=self.rbr_dense.conv.in_channels, out_channels=self.rbr_dense.conv.out_channels,
kernel_size=self.rbr_dense.conv.kernel_size, stride=self.rbr_dense.conv.stride,
padding=self.rbr_dense.conv.padding, dilation=self.rbr_dense.conv.dilation, groups=self.rbr_dense.conv.groups, bias=True)
self.rbr_reparam.weight.data = kernel
self.rbr_reparam.bias.data = bias
self.__delattr__('rbr_dense')
self.__delattr__('rbr_1x1')
if hasattr(self, 'rbr_identity'):
self.__delattr__('rbr_identity')
if hasattr(self, 'id_tensor'):
self.__delattr__('id_tensor')
self.deploy = True
class RepVGG(nn.Module):
def __init__(self, num_blocks, num_classes=1000, width_multiplier=None, override_groups_map=None, deploy=False, use_se=False, use_checkpoint=False):
super(RepVGG, self).__init__()
assert len(width_multiplier) == 4
self.deploy = deploy
self.override_groups_map = override_groups_map or dict()
assert 0 not in self.override_groups_map
self.use_se = use_se
self.use_checkpoint = use_checkpoint
self.in_planes = min(64, int(64 * width_multiplier[0]))
self.stage0 = RepVGGBlock(in_channels=3, out_channels=self.in_planes, kernel_size=3, stride=2, padding=1, deploy=self.deploy, use_se=self.use_se)
self.cur_layer_idx = 1
self.stage1 = self._make_stage(int(64 * width_multiplier[0]), num_blocks[0], stride=2)
self.stage2 = self._make_stage(int(128 * width_multiplier[1]), num_blocks[1], stride=2)
self.stage3 = self._make_stage(int(256 * width_multiplier[2]), num_blocks[2], stride=2)
self.stage4 = self._make_stage(int(512 * width_multiplier[3]), num_blocks[3], stride=2)
self.gap = nn.AdaptiveAvgPool2d(output_size=1)
self.linear = nn.Linear(int(512 * width_multiplier[3]), num_classes)
def _make_stage(self, planes, num_blocks, stride):
strides = [stride] + [1]*(num_blocks-1)
blocks = []
for stride in strides:
cur_groups = self.override_groups_map.get(self.cur_layer_idx, 1)
blocks.append(RepVGGBlock(in_channels=self.in_planes, out_channels=planes, kernel_size=3,
stride=stride, padding=1, groups=cur_groups, deploy=self.deploy, use_se=self.use_se))
self.in_planes = planes
self.cur_layer_idx += 1
return nn.ModuleList(blocks)
def forward(self, x):
out = self.stage0(x)
for stage in (self.stage1, self.stage2, self.stage3, self.stage4):
for block in stage:
if self.use_checkpoint:
out = checkpoint.checkpoint(block, out)
else:
out = block(out)
out = self.gap(out)
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
class RepVGGBackbone(nn.Module):
def __init__(self, num_blocks, width_multiplier=None, override_groups_map=None, deploy=False, use_se=False, use_checkpoint=False):
super(RepVGGBackbone, self).__init__()
assert len(width_multiplier) == 4
self.deploy = deploy
self.use_se = use_se
self.use_checkpoint = use_checkpoint
self.in_planes = min(64, int(64 * width_multiplier[0]))
self.stage0 = RepVGGBlock(in_channels=3, out_channels=self.in_planes, kernel_size=3, stride=2, padding=1, deploy=self.deploy, use_se=self.use_se)
self.cur_layer_idx = 1
self.stage1 = self._make_stage(int(64 * width_multiplier[0]), num_blocks[0], stride=2)
self.stage2 = self._make_stage(int(128 * width_multiplier[1]), num_blocks[1], stride=2)
self.stage3 = self._make_stage(int(256 * width_multiplier[2]), num_blocks[2], stride=1)
self.stage4 = self._make_stage(int(512 * width_multiplier[3]), num_blocks[3], stride=1)
def _make_stage(self, planes, num_blocks, stride):
strides = [stride] + [1]*(num_blocks-1)
blocks = []
for stride in strides:
cur_groups = 1
blocks.append(RepVGGBlock(in_channels=self.in_planes, out_channels=planes, kernel_size=3,
stride=stride, padding=1, groups=cur_groups, deploy=self.deploy, use_se=self.use_se))
self.in_planes = planes
self.cur_layer_idx += 1
return nn.ModuleList(blocks)
def forward(self, x):
out = self.stage0(x)
for stage in (self.stage1, self.stage2, self.stage3, self.stage4):
for block in stage:
out = block(out)
return out
optional_groupwise_layers = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26]
g2_map = {l: 2 for l in optional_groupwise_layers}
g4_map = {l: 4 for l in optional_groupwise_layers}
def create_RepVGG_A0(deploy=False, use_checkpoint=False):
return RepVGG(num_blocks=[2, 4, 14, 1], num_classes=1000,
width_multiplier=[0.75, 0.75, 0.75, 2.5], override_groups_map=None, deploy=deploy, use_checkpoint=use_checkpoint)
def create_RepVGG_A0_backbone(deploy=False, use_checkpoint=False):
return RepVGGBackbone(num_blocks=[2, 4, 14, 1],width_multiplier=[0.75, 0.75, 0.75, 1], override_groups_map=None, deploy=deploy, use_checkpoint=use_checkpoint)
def create_RepVGG_A1(deploy=False, use_checkpoint=False):
return RepVGG(num_blocks=[2, 4, 14, 1], num_classes=1000,
width_multiplier=[1, 1, 1, 2.5], override_groups_map=None, deploy=deploy, use_checkpoint=use_checkpoint)
def create_RepVGG_A2(deploy=False, use_checkpoint=False):
return RepVGG(num_blocks=[2, 4, 14, 1], num_classes=1000,
width_multiplier=[1.5, 1.5, 1.5, 2.75], override_groups_map=None, deploy=deploy, use_checkpoint=use_checkpoint)
def create_RepVGG_B0(deploy=False, use_checkpoint=False):
return RepVGG(num_blocks=[4, 6, 16, 1], num_classes=1000,
width_multiplier=[1, 1, 1, 2.5], override_groups_map=None, deploy=deploy, use_checkpoint=use_checkpoint)
def create_RepVGG_B1(deploy=False, use_checkpoint=False):
return RepVGG(num_blocks=[4, 6, 16, 1], num_classes=1000,
width_multiplier=[2, 2, 2, 4], override_groups_map=None, deploy=deploy, use_checkpoint=use_checkpoint)
def create_RepVGG_B1g2(deploy=False, use_checkpoint=False):
return RepVGG(num_blocks=[4, 6, 16, 1], num_classes=1000,
width_multiplier=[2, 2, 2, 4], override_groups_map=g2_map, deploy=deploy, use_checkpoint=use_checkpoint)
def create_RepVGG_B1g4(deploy=False, use_checkpoint=False):
return RepVGG(num_blocks=[4, 6, 16, 1], num_classes=1000,
width_multiplier=[2, 2, 2, 4], override_groups_map=g4_map, deploy=deploy, use_checkpoint=use_checkpoint)
def create_RepVGG_B2(deploy=False, use_checkpoint=False):
return RepVGG(num_blocks=[4, 6, 16, 1], num_classes=1000,
width_multiplier=[2.5, 2.5, 2.5, 5], override_groups_map=None, deploy=deploy, use_checkpoint=use_checkpoint)
def create_RepVGG_B2g2(deploy=False, use_checkpoint=False):
return RepVGG(num_blocks=[4, 6, 16, 1], num_classes=1000,
width_multiplier=[2.5, 2.5, 2.5, 5], override_groups_map=g2_map, deploy=deploy, use_checkpoint=use_checkpoint)
def create_RepVGG_B2g4(deploy=False, use_checkpoint=False):
return RepVGG(num_blocks=[4, 6, 16, 1], num_classes=1000,
width_multiplier=[2.5, 2.5, 2.5, 5], override_groups_map=g4_map, deploy=deploy, use_checkpoint=use_checkpoint)
def create_RepVGG_B3(deploy=False, use_checkpoint=False):
return RepVGG(num_blocks=[4, 6, 16, 1], num_classes=1000,
width_multiplier=[3, 3, 3, 5], override_groups_map=None, deploy=deploy, use_checkpoint=use_checkpoint)
def create_RepVGG_B3g2(deploy=False, use_checkpoint=False):
return RepVGG(num_blocks=[4, 6, 16, 1], num_classes=1000,
width_multiplier=[3, 3, 3, 5], override_groups_map=g2_map, deploy=deploy, use_checkpoint=use_checkpoint)
def create_RepVGG_B3g4(deploy=False, use_checkpoint=False):
return RepVGG(num_blocks=[4, 6, 16, 1], num_classes=1000,
width_multiplier=[3, 3, 3, 5], override_groups_map=g4_map, deploy=deploy, use_checkpoint=use_checkpoint)
def create_RepVGG_D2se(deploy=False, use_checkpoint=False):
return RepVGG(num_blocks=[8, 14, 24, 1], num_classes=1000,
width_multiplier=[2.5, 2.5, 2.5, 5], override_groups_map=None, deploy=deploy, use_se=True, use_checkpoint=use_checkpoint)
func_dict = {
'RepVGG-A0': create_RepVGG_A0,
'RepVGG-A0-backbone': create_RepVGG_A0_backbone,
'RepVGG-A1': create_RepVGG_A1,
'RepVGG-A2': create_RepVGG_A2,
'RepVGG-B0': create_RepVGG_B0,
'RepVGG-B1': create_RepVGG_B1,
'RepVGG-B1g2': create_RepVGG_B1g2,
'RepVGG-B1g4': create_RepVGG_B1g4,
'RepVGG-B2': create_RepVGG_B2,
'RepVGG-B2g2': create_RepVGG_B2g2,
'RepVGG-B2g4': create_RepVGG_B2g4,
'RepVGG-B3': create_RepVGG_B3,
'RepVGG-B3g2': create_RepVGG_B3g2,
'RepVGG-B3g4': create_RepVGG_B3g4,
'RepVGG-D2se': create_RepVGG_D2se, # Updated at April 25, 2021. This is not reported in the CVPR paper.
}
def get_RepVGG_func_by_name(name):
return func_dict[name]
# Use this for converting a RepVGG model or a bigger model with RepVGG as its component
# Use like this
# model = create_RepVGG_A0(deploy=False)
# train model or load weights
# repvgg_model_convert(model, save_path='repvgg_deploy.pth')
# If you want to preserve the original model, call with do_copy=True
# ====================== for using RepVGG as the backbone of a bigger model, e.g., PSPNet, the pseudo code will be like
# train_backbone = create_RepVGG_B2(deploy=False)
# train_backbone.load_state_dict(torch.load('RepVGG-B2-train.pth'))
# train_pspnet = build_pspnet(backbone=train_backbone)
# segmentation_train(train_pspnet)
# deploy_pspnet = repvgg_model_convert(train_pspnet)
# segmentation_test(deploy_pspnet)
# ===================== example_pspnet.py shows an example
def repvgg_model_convert(model:torch.nn.Module, save_path=None, do_copy=True):
if do_copy:
model = copy.deepcopy(model)
for module in model.modules():
if hasattr(module, 'switch_to_deploy'):
module.switch_to_deploy()
if save_path is not None:
torch.save(model.state_dict(), save_path)
return model
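The merge performed by ``get_equivalent_kernel_bias`` and ``_pad_1x1_to_3x3_tensor`` above relies on the fact that, with stride 1 and padding 1, a 1×1 branch can be absorbed into the centre of a 3×3 kernel. A small single-channel NumPy check of this (no BatchNorm, naive cross-correlation) is:

```python
import numpy as np

def conv2d(x, k, pad):
    # Naive single-channel cross-correlation with zero padding.
    x = np.pad(x, pad)
    kh, kw = k.shape
    H = x.shape[0] - kh + 1
    W = x.shape[1] - kw + 1
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
k3 = rng.standard_normal((3, 3))
k1 = rng.standard_normal((1, 1))

# Two parallel branches: 3x3 conv (padding 1) + 1x1 conv (padding 0)
two_branch = conv2d(x, k3, 1) + conv2d(x, k1, 0)

# Merged kernel: add the 1x1 weight into the centre of the 3x3 kernel
merged = k3.copy()
merged[1, 1] += k1[0, 0]

assert np.allclose(conv2d(x, merged, 1), two_branch)
```

Convolution is linear in the kernel, so padding the 1×1 kernel to 3×3 and summing the kernels gives exactly the sum of the two branch outputs — which is what ``torch.nn.functional.pad(kernel1x1, [1, 1, 1, 1])`` implements per channel.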
# <img alt="repytah" src="branding/repytah_logo.png" height="100">
A Python package that builds aligned hierarchies for sequential data streams.
[](https://pypi.python.org/pypi/repytah)
[](https://anaconda.org/conda-forge/repytah)
[](https://github.com/smith-tinkerlab/repytah/blob/main/LICENSE.md)
[](https://github.com/smith-tinkerlab/repytah/actions/workflows/check_repytah.yml)
[](https://codecov.io/gh/tinkerlab/repytah)
[](https://zenodo.org/badge/latestdoi/198304490)
## Documentation
See our [website](https://repytah.readthedocs.io/en/latest/index.html) for a complete reference manual and introductory tutorials.
This [example](https://repytah.readthedocs.io/en/latest/example_vignette.html) tutorial will show you a usage of the package from start to finish.
## Summary
We introduce `repytah`, a Python package that constructs the aligned hierarchies representation that contains all possible structure-based hierarchical decompositions for a finite length piece of sequential data aligned on a common time axis. In particular, this representation--introduced by Kinnaird [@Kinnaird_ah] with music-based data (like musical recordings or scores) as the primary motivation--is intended for sequential data where repetitions have particular meaning (such as a verse, chorus, motif, or theme). Although the original motivation for the aligned hierarchies representation was finding structure for music-based data streams, there is nothing inherent in the construction of these representations that limits `repytah` to only being used on sequential data that is music-based.
The `repytah` package builds these aligned hierarchies by first extracting repeated structures (of all meaningful lengths) from the self-dissimilarity matrix (SDM) for a piece of sequential data. `repytah` intentionally uses the SDM as the starting point for constructing the aligned hierarchies, as an SDM cannot be reverse-engineered back to the original signal, which allows researchers to collaborate on signals that are protected either by copyright or by privacy considerations. This package is a Python translation of the original MATLAB code by Kinnaird [-@Kinnaird_code] with additional documentation, and the code has been updated to leverage efficiencies in Python.
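To illustrate what the starting object looks like (this is plain NumPy, not `repytah`'s API): the SDM for a sequence of feature vectors is the symmetric matrix of pairwise distances between time steps, with zeros on the diagonal.

```python
import numpy as np

# Toy feature matrix: 4 features per time step, 6 time steps (columns).
rng = np.random.default_rng(1)
features = rng.standard_normal((4, 6))

# Self-dissimilarity matrix: pairwise Euclidean distances between time steps.
diff = features[:, :, None] - features[:, None, :]
sdm = np.sqrt((diff ** 2).sum(axis=0))
```

Repeated sections of the sequence show up as low-dissimilarity diagonals in this matrix, which is what the package mines to build the aligned hierarchies.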
### Problems Addressed
Sequential data streams often have repeated elements that build on each other, creating hierarchies. Therefore, the goal of `repytah` is to extract these repetitions and their relationships to each other in order to form aligned hierarchies.
To learn more about aligned hierarchies, see this [paper](https://s18798.pcdn.co/ismir2016/wp-content/uploads/sites/2294/2016/07/020_Paper.pdf) by Kinnaird (ISMIR 2016) which introduces aligned hierarchies in the context of music-based data streams.
### Audience
People working with sequential data where repetitions have meaning will find `repytah` useful, including computational scientists, advanced undergraduate students, early-career industry practitioners, and many others.
An example application of `repytah` is in Music Information Retrieval (MIR), i.e., in the intersection of music and computer science.
## Installation
The latest stable release is available on PyPI, and you can install it by running:
```bash
pip install repytah
```
If you use Anaconda, you can install the package using `conda-forge`:
```bash
conda install -c conda-forge repytah
```
To build repytah from source, run `python setup.py build`.
Then, to install repytah, run `python setup.py install`.
Alternatively, you can download or clone the repository and use `pip` to handle dependencies:
```bash
unzip repytah.zip
pip install -e repytah-main
```
or
```bash
git clone https://github.com/smith-tinkerlab/repytah.git
pip install -e repytah
```
By calling `pip list` you should see `repytah` now as an installed package:
```bash
repytah (0.x.x, /path/to/repytah)
```
## Current and Future Work - Elements of the Package
* Aligned Hierarchies - This is the fundamental output of the package, of which derivatives can be built. The aligned hierarchies for a given sequential data stream are the collection of all possible **hierarchical** structure decompositions, **aligned** on a common time axis. To this end, we offer all possible structure decompositions in one cohesive object.
  * Includes the walkthrough file `example.py` using the supplied `input.csv`
* _Forthcoming_ Aligned sub-Hierarchies (AsH) - These are derivatives of the aligned hierarchies and are described in [Aligned sub-Hierarchies: a structure-based approach to the cover song task](http://ismir2018.ircam.fr/doc/pdfs/81_Paper.pdf)
* _Forthcoming_ Start-End and S_NL diagrams
* _Forthcoming_ SuPP and MaPP representations
### MATLAB code
The original code to this project was written in MATLAB by Katherine M. Kinnaird. It can be found [here](https://github.com/kmkinnaird/ThesisCode).
### Acknowledgements
This code was developed as part of Smith College's Summer Undergraduate Research Fellowship (SURF) from 2019 to 2022 and has been partially funded by Smith College's CFCD funding mechanism. Additionally, as Kinnaird is the Clare Boothe Luce Assistant Professor of Computer Science and Statistical & Data Sciences at Smith College, this work has also been partially supported by Henry Luce Foundation's Clare Boothe Luce Program.
Additionally, we would like to acknowledge and give thanks to Brian McFee and the [librosa](https://github.com/librosa) team. We significantly referenced the Python package [librosa](https://github.com/librosa/librosa) in our development process.
### Citing
Please cite `repytah` using the following:
C. Jia et al., repytah: A Python package that builds aligned hierarchies for sequential data streams. Python package version 0.1.2, 2023. [Online]. Available: [https://github.com/smith-tinkerlab/repytah](https://github.com/smith-tinkerlab/repytah).
Utilities
=========
The module ``utilities.py`` provides helper functions that ``search.py``, ``transform.py``,
and ``assemble.py`` in the ``repytah`` package rely on.
This module contains the following functions:
.. function:: create_sdm(fv_mat, num_fv_per_shingle)
Creates self-dissimilarity matrix; this matrix is found by creating audio
shingles from feature vectors, and finding the cosine distance between
shingles.
:parameters:
fv_mat : np.ndarray
Matrix of feature vectors where each column is a time step and each
row includes feature information, e.g. an array of 144 columns (beats)
and 12 rows corresponding to chroma values.
num_fv_per_shingle : int
Number of feature vectors per audio shingle.
:returns:
self_dissim_mat : np.ndarray
Self-dissimilarity matrix with paired cosine distances between shingles.
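The idea behind ``create_sdm`` can be illustrated with a small pure-Python sketch. This is our own stand-in, not the package's implementation, and the exact stacking order of shingled feature vectors inside ``repytah`` may differ:

```python
import math

def create_sdm_sketch(fv_mat, num_fv_per_shingle):
    """Stack consecutive feature vectors into shingles, then return the
    matrix of pairwise cosine distances between shingles."""
    n_rows = len(fv_mat)       # feature dimensions (e.g. 12 chroma values)
    n_cols = len(fv_mat[0])    # time steps
    n_shingles = n_cols - num_fv_per_shingle + 1
    shingles = [
        [fv_mat[r][s + k] for k in range(num_fv_per_shingle) for r in range(n_rows)]
        for s in range(n_shingles)
    ]

    def cosine_distance(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return 1.0 - dot / (norm_a * norm_b)

    return [[cosine_distance(a, b) for b in shingles] for a in shingles]

# A signal that repeats every two time steps yields zero distance between
# shingles two steps apart.
sdm = create_sdm_sketch([[1, 0, 1, 0], [0, 1, 0, 1]], 2)
print(round(sdm[0][2], 6))  # 0.0
```

Zero entries in the sketch's output mark identical shingles, which is exactly the raw material ``find_initial_repeats`` works from.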
.. function:: find_initial_repeats(thresh_mat, bandwidth_vec, thresh_bw)
Looks for the largest repeated structures in thresh_mat. Finds all
repeated structures, represented as diagonals present in thresh_mat,
and then stores them with their start/end indices and lengths in a
list. As each diagonal is found, they are removed to avoid identifying
repeated sub-structures.
:parameters:
thresh_mat : np.ndarray[int]
Thresholded matrix that we extract diagonals from.
bandwidth_vec : np.ndarray[1D,int]
Array of lengths of diagonals to be found. Should be
1, 2, 3, ..., n where n is the number of timesteps.
thresh_bw : int
Smallest allowed diagonal length.
:returns:
all_lst : np.ndarray[int]
List of pairs of repeats that correspond to diagonals in
thresh_mat.
.. function:: stretch_diags(thresh_diags, band_width)
Creates a binary matrix with full length diagonals from a binary matrix of
diagonal starts and length of diagonals.
:parameters:
thresh_diags : np.ndarray
Binary matrix where an entry equal to 1 signals the existence
of a diagonal.
band_width : int
Length of encoded diagonals.
:returns:
stretch_diag_mat : np.ndarray[bool]
Logical matrix with diagonals of length band_width starting
at each entry prescribed in thresh_diags.
.. function:: add_annotations(input_mat, song_length)
Adds annotations to the pairs of repeats in input_mat.
:parameters:
input_mat : np.ndarray
List of pairs of repeats. The first two columns refer to
the first repeat of the pair. The third and fourth columns
refer to the second repeat of the pair. The fifth column
refers to the repeat lengths. The sixth column contains any
previous annotations, which will be removed.
song_length : int
Number of audio shingles in the song.
:returns:
anno_list : np.ndarray
List of pairs of repeats with annotations marked.
.. function:: reconstruct_full_block(pattern_mat, pattern_key)
Creates a record of when pairs of repeated structures occur, from the
first beat in the song to the end. This record is a binary matrix with a
block of 1's for each repeat encoded in pattern_mat whose length is
encoded in pattern_key.
:parameters:
pattern_mat : np.ndarray
Binary matrix with 1's where repeats begin and 0's otherwise.
pattern_key : np.ndarray
Vector containing the lengths of the repeats encoded in
each row of pattern_mat.
:returns:
pattern_block : np.ndarray
Binary matrix representation for pattern_mat with blocks
of 1's equal to the lengths prescribed in pattern_key.
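The expansion ``reconstruct_full_block`` performs can be sketched in a few lines. This is an illustrative stand-in rather than the package's implementation; here, repeats that would run past the end of the song are simply truncated:

```python
def reconstruct_full_block_sketch(pattern_mat, pattern_key):
    """Expand each 1 marking a repeat start into a run of 1's of that row's length."""
    n_rows = len(pattern_mat)
    n_cols = len(pattern_mat[0])
    block = [[0] * n_cols for _ in range(n_rows)]
    for r in range(n_rows):
        length = pattern_key[r]
        for c in range(n_cols):
            if pattern_mat[r][c] == 1:
                for k in range(length):
                    if c + k < n_cols:
                        block[r][c + k] = 1
    return block

# Two repeat starts of length 2 become two blocks of 1's.
print(reconstruct_full_block_sketch([[1, 0, 0, 1, 0, 0]], [2]))
# [[1, 1, 0, 1, 1, 0]]
```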
.. function:: get_annotation_lst(key_lst)
Creates one annotation marker vector, given vector of lengths key_lst.
:parameters:
key_lst : np.ndarray[int]
Array of lengths in ascending order.
:returns:
anno_lst_out : np.ndarray[int]
Array of one possible set of annotation markers for key_lst.
.. function:: get_y_labels(width_vec, anno_vec)
Generates the labels for visualization with width_vec and anno_vec.
:parameters:
width_vec : np.ndarray[int]
Vector of widths for a visualization.
anno_vec : np.ndarray[int]
Array of annotations for a visualization.
:returns:
y_labels : np.ndarray[str]
Labels for the y-axis of a visualization.
.. function:: reformat(pattern_mat, pattern_key)
Transforms a binary array with 1's where repeats start and 0's
otherwise into a list of repeated structures. This list consists of
information about the repeats including length, when they occur and when
they end.
Every row corresponds to a pair of repeats of one repeated structure. The
first two columns are the time steps at which the first repeat of the pair
starts and ends. Similarly, the third and fourth columns are the time steps
at which the second repeat of the pair starts and ends. The fifth column is
the length of the repeated structure.
Reformat is not used in the main process for creating the
aligned-hierarchies. It is helpful when writing example inputs for
the tests.
:parameters:
pattern_mat : np.ndarray
Binary array with 1's where repeats start and 0's otherwise.
pattern_key : np.ndarray
Array with the lengths of each repeated structure in pattern_mat.
:returns:
info_mat : np.ndarray
Array with the time steps of when the pairs of repeated structures
start and end organized.
README for Req-Compile Python Requirements Compiler
===================================================
.. image:: https://img.shields.io/pypi/v/req-compile.svg
:alt: PyPI package version
:target: https://pypi.python.org/pypi/req-compile
.. image:: https://github.com/sputt/req-compile/actions/workflows/build.yml/badge.svg
:alt: Github build status
:target: https://github.com/sputt/req-compile
========================================
Req-Compile Python Requirements Compiler
========================================
Req-Compile is a Python requirements compiler geared toward large Python projects. It allows you to:
* Produce an output file consisting of fully constrained exact versions of your requirements
* Identify sources of constraints on your requirements
* Constrain your output requirements using requirements that will not be included in the output
* Save distributions that are downloaded while compiling in a configurable location
* Use a current solution as a source of requirements. In other words, you can easily compile a subset from an existing solution.
Why use it?
-----------
**pip** and **pip-tools** are missing features and lack usability for some important workflows:
* Using a previous solution as an input file to avoid hitting the network
* pip-compile can't consider constraints that are not included in the final output. While pip accepts a constraints file, there is no way to stop at the "solving" phase, which would be used to push a fully solved solution to your repo
* Track down where conflicting constraints originate
* Treating source directories recursively as sources of requirements, like with --find-links
* Configuring a storage location for downloaded distributions. Finding a fresh solution to a set of input requirements always requires downloading distributions
A common workflow that is difficult to achieve with other tools:
You have a project with requirements ``requirements.txt`` and test requirements ``test-requirements.txt``. You want
to produce a fully constrained output of ``requirements.txt`` to use to deploy your application. Easy, right? Just
compile ``requirements.txt``. However, if your test requirements will in any way constrain packages you need,
even those needed transitively, it means you will have tested with different versions than you'll ship.
For this reason, you can use Req-Compile to compile ``requirements.txt`` using ``test-requirements.txt`` as constraints.
The Basics
----------
Install and run
~~~~~~~~~~~~~~~
Req-Compile can be simply installed by running::
pip install req-compile
Two entrypoint scripts are provided::
req-compile <input reqfile1> ... <input_reqfileN> [--constraints constraint_file] [repositories, such as --index-url https://...]
req-candidates [requirement] [repositories, such as --index-url https://...]
Producing output requirements
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To produce a fully constrained set of requirements for a given number of input requirements files, pass requirements
files to req-compile::
> cat requirements.txt
astroid >= 2.0.0
isort >= 4.2.5
mccabe
> req-compile requirements.txt
astroid==2.9.0 # requirements.txt (>=2.0.0)
isort==5.10.1 # requirements.txt (>=4.2.5)
lazy-object-proxy==1.7.1 # astroid (>=1.4.0)
mccabe==0.6.1 # requirements.txt
setuptools==60.0.1 # astroid (>=20.0)
typed-ast==1.5.1 # astroid (<2.0,>=1.4.0)
typing_extensions==4.0.1 # astroid (>=3.10)
wrapt==1.13.3 # astroid (<1.14,>=1.11)
Output is always emitted to stdout. Possible inputs include::
> req-compile
> req-compile .
# Compiles the current directory (looks for a setup.py or pyproject.toml)
> req-compile subdir/project
# Compiles the project in the subdir/project directory
> req-candidates --paths-only | req-compile
# Search for candidates and compile them piped in via stdin
> echo flask | req-compile
# Compile the requirement 'flask' using the default remote index (PyPI)
> req-compile . --extra test
# Compiles the current directory with the extra "test"
Specifying source of distributions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Req-Compile supports obtaining python distributions from multiple sources, each of which can be specified more than once. The following sources
can be specified, resolved in the same order (e.g. source takes precedence over index-url):
* ``--solution``
Load a previous solution and use it as a source of distributions. This will allow a full
recompilation of a working solution without requiring any other source. If the
solution file can't be found, a warning will be emitted but not cause a failure
* ``--source``
Use a local filesystem with source python packages to compile from. This will search the entire
tree specified at the source directory, until an __init__.py is reached. ``--remove-source`` can
be supplied to remove results that were obtained from source directories. You may want to do
this if compiling for a project and only third party requirements compilation results need to be saved.
* ``--find-links``
Read a directory to load distributions from. The directory can contain anything
a remote index would, wheels, zips, and source tarballs. This matches pip's command line.
* ``--index-url``
URL of a remote index to search for packages in. When compiling, it's necessary to download
a package to determine its requirements. ``--wheel-dir`` can be supplied to specify where to save
these distributions. Otherwise they will be deleted after compilation is complete. When specified,
replaces the default index that is located in pip.conf/pip.ini on your system.
* ``--extra-index-url``
Extra remote index to search. Same semantics as index-url, but searched afterward. Additionally,
does not replace the default index URL so it can be used as a supplemental source of requirements
without knowing (or recording in the solution) the default index URL.
All options can be repeated multiple times, with the resolution order within types matching what
was passed on the commandline. However, overall resolution order will always match the order
of the list above.
By default, PyPI (https://pypi.org/) is added as a default source. It can be removed by passing
``--no-index`` on the commandline or passing a different index via ``--index-url``.
Identifying source of constraints
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Why did I just get version 1.11.0 of ``six``? Find out by examining the output::
six==1.11.0 # astroid, pathlib2, pymodbus (==1.11.0), pytest (>=1.10.0), more_itertools (<2.0.0,>=1.0.0)
In the above output, the (==1.11.0) indicates that pymodbus, the requirement name listed before the
parenthesis, specifically requested version 1.11.0 of six.
Constraining output
~~~~~~~~~~~~~~~~~~~
Constrain production outputs with test requirements using the ``--constraints`` flag. More than one file can be
passed::
> cat requirements.txt
astroid
> cat test-requirements.txt
pylint<1.6
> req-compile requirements.txt --constraints test-requirements.txt
astroid==1.4.9 # pylint (<1.5.0,>=1.4.5), requirements.txt
lazy-object-proxy==1.7.1 # astroid
six==1.16.0 # astroid, pylint
wrapt==1.13.3 # astroid
Note that astroid is constrained by ``pylint``, even though ``pylint`` is not included in the output.
If a passed constraints file is fully pinned, Req-Compile will not attempt to find a solution for
the requirements passed in the constraints files. This behavior only occurs if ALL of the requirements
listed in the constraints files are pinned. This is because pinning a single requirement may
still bring in transitive requirements that would affect the final solution. The heuristic of
checking that all requirements are pinned assumes that you are providing a full solution.
Advanced Features
-----------------
Compiling a constrained subset
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Input can be supplied via stdin as well as through files. For example, to supply a full
solution through a second compilation in order to obtain a subset of requirements, the
following command line might be used::
> req-compile requirements.txt --constraints compiled-requirements.txt
or, for example to consider two projects together::
> req-compile /some/other/project /myproject | req-compile /myproject --solution -
which is equivalent to::
> req-compile /myproject --constraints /some/other/project
Resolving constraint conflicts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Conflicts will automatically print the source of each conflicting requirement::
> cat projectreqs.txt
astroid<1.6
pylint>=1.5
> req-compile projectreqs.txt
No version of astroid could possibly satisfy the following requirements (astroid<1.6,<3,>=2.3.0):
projectreqs.txt -> astroid<1.6
projectreqs.txt -> pylint 2.4.1 -> astroid<3,>=2.3.0
Saving distributions
~~~~~~~~~~~~~~~~~~~~
Files downloaded during the compile process can be saved for later installation. This can reduce
build times when a separate compile step is required::
> req-compile projectreqs.txt --wheel-dir .wheeldir > compiledreqs.txt
> pip install -r compiledreqs.txt --find-links .wheeldir --no-index
Cookbook
--------
Some useful patterns for projects are outlined below.
Compile, then install
~~~~~~~~~~~~~~~~~~~~~
After requirements are compiled, the usual next step is to install them
into a virtualenv.
A script for test might run::
> req-compile --extra test --solution compiled-requirements.txt --wheel-dir .wheeldir > compiled-requirements.txt
    > pip-sync compiled-requirements.txt --find-links .wheeldir --no-index
or
> pip install -r compiled-requirements.txt --find-links .wheeldir --no-index
This would produce an environment containing all of the requirements and test requirements for the project
in the current directory (as defined by a setup.py). This is a *stable* set, in that only changes to
the requirements and constraints would produce a new output. To produce a totally fresh compilation,
don't pass in a previous solution.
The find-links parameter to the sync or pip install will *reuse* the wheels already downloaded by Req-Compile during
the compilation phase. This will make the installation step entirely offline.
When taking this environment to deploy, trim down the set to the install requirements::
> req-compile --solution compiled-requirements.txt --no-index > install-requirements.txt
install-requirements.txt will contain the pinned requirements that should be installed in your
target environment. The reason for this extra step is that you don't want to distribute
your test requirements, and you also want your installed requirements to be the same
versions that you've tested with. In order to get all of your explicitly declared
requirements and all of the transitive dependencies, you can use the prior solution to
extract a subset. Passing the ``--no-index`` makes it clear that this command will not
hit the remote index at all (though this would naturally be the case as solution files
take precedence over remote indexes in repository search order).
Compile for a group of projects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Req-Compile can discover requirements that are grouped together on the filesystem. The
``req-candidates`` command will print discovered projects and, with the ``--paths-only`` option,
will dump their paths to stdout. This allows recursive discovery of projects that you
may want to compile together.
For example, consider a filesystem with this layout::
solution
\_ utilities
| \_ network_helper
|_ integrations
| \_ github
\_ frameworks
|_ neural_net
\_ cluster
In each of the leaf nodes, there is a setup.py and full python project. To compile these
together and ensure that their requirements will all install into the same environment::
> cd solution
> req-candidates --paths-only
/home/user/projects/solution/utilities/network_helper
/home/user/projects/solution/integrations/github
/home/user/projects/solution/frameworks/neural_net
/home/user/projects/solution/frameworks/cluster
> req-candidates --paths-only | req-compile --extra test --solution compiled-requirements.txt --wheel-dir .wheeldir > compiled-requirements.txt
.. all reqs and all test reqs compiled together...
import packaging.version
import pkg_resources
from req_compile.utils import parse_version
PART_MAX = "999999999"
def _offset_minor_version(
version: packaging.version.Version, offset: int, pos: int = 2
) -> packaging.version.Version:
parts = str(version).split(".")
for idx, part in enumerate(parts):
for char_idx, char in enumerate(part):
if not char.isdigit():
# If the entire thing starts with a character, drop it
# because it's a suffix
if char_idx == 0:
parts = parts[:idx]
break
parts[idx] = str(int(part[:char_idx]))
break
while len(parts) < 3:
parts += ["0"]
cur_version = int(parts[pos])
if cur_version == 0 and offset < 0:
if pos == 0:
raise ValueError("Cannot create a version less than 0")
parts[pos] = PART_MAX
return _offset_minor_version(parse_version(".".join(parts)), -1, pos=pos - 1)
parts[pos] = str(int(parts[pos]) + offset)
return parse_version(".".join(parts))
def is_possible(
req: pkg_resources.Requirement,
) -> bool: # pylint: disable=too-many-branches
"""Determine whether the requirement with its given specifiers is even possible.
Args:
req: Requirement to check.
Returns:
Whether the constraint can be satisfied.
"""
lower_bound = pkg_resources.parse_version("0.0.0")
exact = None
not_equal = []
upper_bound = pkg_resources.parse_version("{max}.{max}.{max}".format(max=PART_MAX))
if len(req.specifier) == 1: # type: ignore[attr-defined]
return True
for spec in req.specifier: # type: ignore[attr-defined]
version = parse_version(spec.version)
if spec.operator == "==":
if exact is None:
exact = version
if exact != version:
return False
if spec.operator == "!=":
not_equal.append(version)
elif spec.operator == ">":
if version > lower_bound:
lower_bound = _offset_minor_version(version, 1)
elif spec.operator == ">=":
if version >= lower_bound:
lower_bound = version
elif spec.operator == "<":
if version < upper_bound:
upper_bound = _offset_minor_version(version, -1)
elif spec.operator == "<=":
if version <= upper_bound:
upper_bound = version
# Some kind of parsing error occurred
if upper_bound is None or lower_bound is None:
return True
if upper_bound < lower_bound:
return False
if exact is not None:
if exact > upper_bound or exact < lower_bound:
return False
for check in not_equal:
if check == exact:
return False
return True
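The interval bookkeeping in `is_possible` can be illustrated with a standalone sketch. It uses plain integer tuples instead of `packaging` versions and only a reduced operator set; the function below is ours for illustration, not part of req-compile's API:

```python
def is_possible_sketch(specifiers):
    """Decide whether (operator, version-tuple) specifiers can all hold at once.

    Mirrors the bookkeeping in is_possible: track an inclusive lower and
    upper bound plus any exact pin, and report contradictions.
    """
    lower = (0, 0, 0)         # inclusive lower bound
    upper = (999, 999, 999)   # inclusive upper bound (stand-in for PART_MAX)
    exact = None
    for op, version in specifiers:
        if op == "==":
            if exact is not None and exact != version:
                return False  # two different exact pins can never both hold
            exact = version
        elif op == ">=":
            lower = max(lower, version)
        elif op == "<=":
            upper = min(upper, version)
    if lower > upper:
        return False          # e.g. >=2.0.0 combined with <=1.0.0
    if exact is not None and not (lower <= exact <= upper):
        return False
    return True

print(is_possible_sketch([(">=", (2, 0, 0)), ("<=", (1, 0, 0))]))  # False
```

The real implementation additionally handles `!=`, `>` and `<` (via `_offset_minor_version`), and pre-release suffixes, but the core contradiction checks are the same.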
from __future__ import print_function
import collections.abc
import itertools
import logging
import sys
from typing import (
Any,
Callable,
Dict,
Iterable,
Iterator,
List,
Optional,
Set,
Tuple,
Union,
)
import packaging.requirements
import packaging.version
import pkg_resources
from req_compile.containers import RequirementContainer
from req_compile.repos import Repository
from req_compile.utils import (
NormName,
merge_requirements,
normalize_project_name,
parse_requirement,
)
class DependencyNode:
"""
Class representing a node in the dependency graph of a resolution. Contains information
about whether or not this node has a solution yet -- meaning, is it resolved to a
concrete requirement resolved from a Repository
"""
def __init__(self, key: NormName, metadata: Optional[RequirementContainer]) -> None:
self.key = key
self.metadata = metadata
self.dependencies = (
{}
) # type: Dict[DependencyNode, Optional[pkg_resources.Requirement]]
self.reverse_deps = set() # type: Set[DependencyNode]
self.repo = None # type: Optional[Repository]
self.complete = (
False  # Whether this node and all of its dependencies are completely solved
)
def __repr__(self):
# type: () -> str
return self.key
def __hash__(self):
# type: () -> int
return hash(self.key)
def __str__(self):
# type: () -> str
if self.metadata is None:
return self.key + " [UNSOLVED]"
if self.metadata.meta:
return self.metadata.name
return "==".join(str(x) for x in self.metadata.to_definition(self.extras))
def __lt__(self, other):
# type: (Any) -> bool
return self.key < other.key
@property
def extras(self):
# type: () -> Set[str]
extras = set()
for rdep in self.reverse_deps:
assert (
rdep.metadata is not None
), "Reverse dependency should already have a solution"
reason = rdep.dependencies[self]
if reason is not None:
extras |= set(reason.extras)
return extras
def add_reason(self, node, reason):
# type: (DependencyNode, Optional[pkg_resources.Requirement]) -> None
self.dependencies[node] = reason
def build_constraints(self):
# type: () -> pkg_resources.Requirement
result = None
for rdep_node in self.reverse_deps:
assert (
rdep_node.metadata is not None
), "Reverse dependency should already have a solution"
all_reqs = set(rdep_node.metadata.requires())
for extra in rdep_node.extras:
all_reqs |= set(rdep_node.metadata.requires(extra=extra))
for req in all_reqs:
if normalize_project_name(req.project_name) == self.key:
result = merge_requirements(result, req)
if result is None:
if self.metadata is None:
result = parse_requirement(self.key)
else:
result = parse_requirement(self.metadata.name)
assert result is not None
if self.extras:
result.extras = self.extras
# Reparse to create a correct hash
result = parse_requirement(str(result))
assert result is not None
return result
def build_constraints(root_node):
# type: (DependencyNode) -> Iterable[str]
constraints = [] # type: List[str]
for node in root_node.reverse_deps:
assert (
node.metadata is not None
), "Reverse dependency should already have a solution"
all_reqs = set(node.metadata.requires())
for extra in node.extras:
all_reqs |= set(node.metadata.requires(extra=extra))
for req in all_reqs:
if normalize_project_name(req.project_name) == root_node.key:
_process_constraint_req(req, node, constraints)
return constraints
def _process_constraint_req(req, node, constraints):
# type: (pkg_resources.Requirement, DependencyNode, List[str]) -> None
assert node.metadata is not None, "Node {} must be solved".format(node)
extra = None
if req.marker:
for marker in req.marker._markers: # pylint: disable=protected-access
if (
isinstance(marker, tuple)
and marker[0].value == "extra"
and marker[1].value == "=="
):
extra = marker[2].value
source = node.metadata.name + (("[" + extra + "]") if extra else "")
specifics = " (" + str(req.specifier) + ")" if req.specifier else "" # type: ignore[attr-defined]
constraints.extend([source + specifics])
class DistributionCollection:
"""A collection of dependencies and their distributions. This is the main representation
of the graph of dependencies when putting together a resolution. As distributions are
added to the collection and provide a concrete RequirementContainer (like a DistInfo from
a wheel), the corresponding node in this collection will be marked solved."""
def __init__(self):
# type: () -> None
self.nodes: Dict[NormName, DependencyNode] = {}
self.logger = logging.getLogger("req_compile.dists")
@staticmethod
def _build_key(name: str) -> NormName:
return normalize_project_name(name)
def add_dist(
self,
name_or_metadata: Union[str, RequirementContainer],
source: Optional[DependencyNode],
reason: Optional[pkg_resources.Requirement],
) -> Set[DependencyNode]:
"""Add a distribution as a placeholder or as a solution.
Args:
name_or_metadata: Distribution info to add, or if it is unknown, the
name of the distribution, so it can be added as a placeholder.
source: The source of the distribution. This is used to build the graph.
reason: The requirement that caused this distribution to be added to the
graph. This is used to constrain which solutions will be allowed.
"""
self.logger.debug("Adding dist: %s %s %s", name_or_metadata, source, reason)
if isinstance(name_or_metadata, str):
req_name = name_or_metadata
metadata_to_apply = None # type: Optional[RequirementContainer]
else:
assert isinstance(name_or_metadata, RequirementContainer)
metadata_to_apply = name_or_metadata
req_name = metadata_to_apply.name
key = DistributionCollection._build_key(req_name)
if key in self.nodes:
node = self.nodes[key]
else:
node = DependencyNode(key, metadata_to_apply)
self.nodes[key] = node
# If a new extra is being supplied, update the metadata
if (
reason
and node.metadata
and reason.extras
and set(reason.extras) - node.extras
):
metadata_to_apply = node.metadata
node.complete = False
if source is not None and source.key in self.nodes:
node.reverse_deps.add(source)
source.add_reason(node, reason)
nodes = set()
if metadata_to_apply is not None:
nodes |= self._update_dists(node, metadata_to_apply)
self._discard_metadata_if_necessary(node, reason)
if node.key not in self.nodes:
raise ValueError("The node {} is gone, while adding".format(node.key))
return nodes
def _discard_metadata_if_necessary(
self, node: DependencyNode, reason: Optional[pkg_resources.Requirement]
) -> None:
if node.metadata is not None and not node.metadata.meta and reason is not None:
if node.metadata.version is not None and not reason.specifier.contains(
node.metadata.version, prereleases=True
):
self.logger.debug(
"Existing solution (%s) invalidated by %s", node.metadata, reason
)
# Discard the metadata
self.remove_dists(node, remove_upstream=False)
def _update_dists(
self, node: DependencyNode, metadata: RequirementContainer
) -> Set[DependencyNode]:
node.metadata = metadata
add_nodes = {node}
for extra in {None} | node.extras:
for req in metadata.requires(extra):
# This adds a placeholder entry
add_nodes |= self.add_dist(req.name, node, req)
return add_nodes
def remove_dists(
self,
node: Union[DependencyNode, Iterable[DependencyNode]],
remove_upstream: bool = True,
) -> None:
if isinstance(node, collections.abc.Iterable):
for single_node in node:
self.remove_dists(single_node, remove_upstream=remove_upstream)
return
self.logger.info("Removing dist(s): %s (upstream = %s)", node, remove_upstream)
if node.key not in self.nodes:
self.logger.debug("Node %s was already removed", node.key)
return
if remove_upstream:
del self.nodes[node.key]
for reverse_dep in node.reverse_deps:
del reverse_dep.dependencies[node]
for dep in node.dependencies:
if remove_upstream or dep.key != node.key:
dep.reverse_deps.remove(node)
if not dep.reverse_deps:
self.remove_dists(dep)
if not remove_upstream:
node.dependencies = {}
node.metadata = None
node.complete = False
def build(
self, roots: Iterable[DependencyNode]
) -> Iterable[pkg_resources.Requirement]:
results = self.generate_lines(roots)
return [
parse_requirement("==".join([result[0][0], str(result[0][1])]))
for result in results
]
def visit_nodes(
self,
roots: Iterable[DependencyNode],
max_depth: int = sys.maxsize,
reverse: bool = False,
_visited: Optional[Set[DependencyNode]] = None,
_cur_depth: int = 0,
) -> Iterable[DependencyNode]:
if _visited is None:
_visited = set()
if _cur_depth == max_depth:
return _visited
if reverse:
next_nodes: Iterable[DependencyNode] = itertools.chain(
*[root.reverse_deps for root in roots]
)
else:
next_nodes = set()
for root in roots:
next_nodes |= set(root.dependencies.keys())
for node in next_nodes:
if node in _visited:
continue
_visited.add(node)
self.visit_nodes(
[node],
reverse=reverse,
max_depth=max_depth,
_visited=_visited,
_cur_depth=_cur_depth + 1,
)
return _visited
def generate_lines(
self,
roots: Iterable[DependencyNode],
req_filter: Optional[Callable[[DependencyNode], bool]] = None,
strip_extras: bool = False,
) -> Iterable[
Tuple[Tuple[str, Optional[packaging.version.Version], Optional[str]], str]
]:
"""
Generate the lines of a results file from this collection
Args:
roots (iterable[DependencyNode]): List of roots to generate lines from
req_filter (Callable): Filter to apply to each element of the collection.
Return True to keep a node, False to exclude it
Returns:
(list[str]) List of rendered node entries in the form of
reqname==version # reasons
"""
req_filter = req_filter or (lambda _: True)
results: List[
Tuple[Tuple[str, Optional[packaging.version.Version], Optional[str]], str]
] = []
for node in self.visit_nodes(roots):
if node.metadata is None:
continue
if not node.metadata.meta and req_filter(node):
constraints = build_constraints(node)
name, version = node.metadata.to_definition(node.extras)
if strip_extras:
name = name.split("[", 1)[0]
constraint_text = ", ".join(sorted(constraints))
results.append(((name, version, node.metadata.hash), constraint_text))
return results
def __contains__(self, project_name: str) -> bool:
req_name = project_name.split("[")[0]
return normalize_project_name(req_name) in self.nodes
def __iter__(self) -> Iterator[DependencyNode]:
return iter(self.nodes.values())
def __getitem__(self, project_name: str) -> DependencyNode:
req_name = project_name.split("[")[0]
        return self.nodes[normalize_project_name(req_name)]
# ---- end of req_compile/dists.py ----
import typing
from collections import defaultdict
from functools import lru_cache
from typing import DefaultDict, Dict, Iterable, Optional, Tuple
import packaging.version
import pkg_resources
def reduce_requirements(
raw_reqs: Iterable[pkg_resources.Requirement],
) -> Iterable[pkg_resources.Requirement]:
"""Reduce a list of requirements to a minimal list.
Combine requirements with the same key.
"""
reqs: DefaultDict[str, Optional[pkg_resources.Requirement]] = defaultdict(
lambda: None
)
for req in raw_reqs:
reqs[req.project_name] = merge_requirements(reqs[req.project_name], req)
return list(req for req in reqs.values() if req is not None)
class CommentError(ValueError):
    def __str__(self) -> str:
return "Text given is a comment"
@lru_cache(maxsize=None)
def parse_requirement(req_text: str) -> pkg_resources.Requirement:
"""Parse a string into a Requirement object.
Args:
req_text (str): The pkg_resources style requirement string,
e.g. flask==1.1 ; python_version >= '3.0'
Returns:
(pkg_resources.Requirement) The parsed requirement.
Raises:
A flavor of a ValueError if the requirement string is invalid.
"""
req_text = req_text.strip()
if not req_text:
raise ValueError("No requirement given")
if req_text[0] == "#":
raise CommentError
return pkg_resources.Requirement.parse(req_text)
@lru_cache(maxsize=None)
def parse_version(version: str) -> packaging.version.Version:
"""Parse a string into a packaging version.
Args:
version: Version to parse
"""
return pkg_resources.parse_version(version) # type: ignore
def parse_requirements(reqs):
# type: (Iterable[str]) -> Iterable[pkg_resources.Requirement]
"""Parse a list of strings into Requirements."""
for req in reqs:
req = req.strip().rstrip("\\")
if "\n" in req:
for inner_req in parse_requirements(req.split("\n")):
yield inner_req
else:
if not req:
continue
if req[0] == "#" or req.startswith("--"):
continue
result = parse_requirement(req)
if result is not None:
yield result
def merge_extras(
    extras1: Optional[Iterable[str]], extras2: Optional[Iterable[str]]
) -> Iterable[str]:
    """Merge two optional extras iterables. Case-sensitive.

    When both are non-empty, the result is a sorted tuple of their union;
    otherwise the non-empty side (or an empty tuple) is returned.
    """
    if extras1 and extras2:
        return tuple(sorted(set(extras1) | set(extras2)))
    if not extras1 and extras2:
        return extras2
    if not extras2 and extras1:
        return extras1
    return ()
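A quick self-contained restatement of the `merge_extras` rule above (normalized here to always return a tuple, unlike the original, which returns the non-empty side unchanged):

```python
def merge_extras_sketch(extras1, extras2):
    # Sorted union only when both sides are non-empty; otherwise
    # whichever side has content, else an empty tuple.
    if extras1 and extras2:
        return tuple(sorted(set(extras1) | set(extras2)))
    return tuple(extras2 or extras1 or ())
```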
def merge_requirements(
req1: Optional[pkg_resources.Requirement],
req2: Optional[pkg_resources.Requirement],
) -> pkg_resources.Requirement:
"""Merge two requirements into a single requirement that would satisfy both."""
if req1 is not None and req2 is None:
return req1
if req2 is not None and req1 is None:
return req2
assert req1 is not None
assert req2 is not None
req1_name_norm = normalize_project_name(req1.name)
if req1_name_norm != normalize_project_name(req2.name):
raise ValueError("Reqs don't match: {} != {}".format(req1, req2))
all_specs = set(req1.specifier) | set(req2.specifier)
# Handle markers
if req1.marker and req2.marker:
if str(req1.marker) != str(req2.marker):
if str(req1.marker) in str(req2.marker):
new_marker = ";" + str(req1.marker)
elif str(req2.marker) in str(req1.marker):
new_marker = ";" + str(req2.marker)
else:
new_marker = ""
else:
new_marker = ";" + str(req1.marker)
else:
new_marker = ""
extras = merge_extras(req1.extras, req2.extras)
extras_str = ""
if extras:
extras_str = "[" + ",".join(extras) + "]"
req_str = (
req1_name_norm
+ extras_str
+ ",".join(str(part) for part in all_specs)
+ new_marker
)
return parse_requirement(req_str)
NormName = typing.NewType("NormName", str)
NAME_CACHE = {} # type: Dict[str, NormName]
def normalize_project_name(project_name: str) -> NormName:
"""Normalize a project name."""
if project_name in NAME_CACHE:
return NAME_CACHE[project_name]
value = NormName(
project_name.lower().replace("-", "_").replace(".", "_").replace(" ", "_")
)
NAME_CACHE[project_name] = value
return value
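For illustration, the normalization rule used by `normalize_project_name` (without the cache), applied to hypothetical names:

```python
def normalize_name_sketch(project_name):
    # Lowercase, then map "-", ".", and spaces to underscores.
    return (
        project_name.lower().replace("-", "_").replace(".", "_").replace(" ", "_")
    )
```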
def is_pinned_requirement(req: pkg_resources.Requirement) -> bool:
"""Returns whether an InstallRequirement is a "pinned" requirement.
An InstallRequirement is considered pinned if:
- Is not editable
- It has exactly one specifier
- That specifier is "=="
- The version does not contain a wildcard
Examples:
django==1.8 # pinned
django>1.8 # NOT pinned
django~=1.8 # NOT pinned
django==1.* # NOT pinned
"""
return any(
(spec.operator == "==" or spec.operator == "===")
and not spec.version.endswith(".*")
for spec in req.specifier
)
def has_prerelease(req: pkg_resources.Requirement) -> bool:
"""Returns whether an InstallRequirement has a prerelease specifier."""
return any(parse_version(spec.version).is_prerelease for spec in req.specifier)
@lru_cache(maxsize=None)
def get_glibc_version():
# type: () -> Optional[Tuple[int, int]]
"""Based on PEP 513/600."""
import ctypes # pylint: disable=bad-option-value,import-outside-toplevel
try:
process_namespace = ctypes.CDLL(None)
gnu_get_libc_version = process_namespace.gnu_get_libc_version
except (AttributeError, TypeError):
# Symbol doesn't exist -> therefore, we are not linked to
# glibc.
return None
# Call gnu_get_libc_version, which returns a string like "2.5".
gnu_get_libc_version.restype = ctypes.c_char_p
version_str = gnu_get_libc_version()
# py2 / py3 compatibility:
if not isinstance(version_str, str):
version_str = version_str.decode("ascii")
# Parse string and check against requested version.
version = [int(piece) for piece in version_str.split(".")]
assert len(version) == 2
    return version[0], version[1]
# ---- end of req_compile/utils.py ----
from __future__ import print_function
import os
import sys
from typing import Any, Iterable, Optional, Sequence, Tuple
import pkg_resources
from overrides import overrides
import req_compile.containers
import req_compile.dists
import req_compile.utils
from req_compile.containers import RequirementContainer
from req_compile.dists import DependencyNode, DistributionCollection
from req_compile.errors import NoCandidateException
from req_compile.repos import RepositoryInitializationError
from req_compile.repos.repository import Candidate, DistributionType, Repository
from req_compile.repos.source import ReferenceSourceRepository
def _candidate_from_node(node: DependencyNode) -> Candidate:
assert node.metadata is not None
if node.metadata.version is None:
raise ValueError(f"No version given for {node.key}")
candidate = Candidate(
node.key,
None,
node.metadata.version,
None,
None,
"any",
None,
DistributionType.SOURCE,
)
candidate.preparsed = node.metadata
return candidate
class SolutionRepository(Repository):
"""A repository that provides distributions from a previous solution."""
    def __init__(
        self, filename: str, excluded_packages: Optional[Iterable[str]] = None
    ) -> None:
"""Constructor."""
super(SolutionRepository, self).__init__("solution", allow_prerelease=True)
self.filename = os.path.abspath(filename) if filename != "-" else "-"
self.excluded_packages = excluded_packages or []
if excluded_packages:
self.excluded_packages = [
req_compile.utils.normalize_project_name(pkg)
for pkg in excluded_packages
]
# Partial line when parsing requirements files with multiline
# hashes
self._partial_line = ""
if os.path.exists(filename) or self.filename == "-":
self.load_from_file(self.filename)
else:
self.solution = DistributionCollection()
def __repr__(self) -> str:
return "--solution {}".format(self.filename)
def __eq__(self, other: Any) -> bool:
return (
isinstance(other, SolutionRepository)
and super(SolutionRepository, self).__eq__(other)
and self.filename == other.filename
)
def __hash__(self) -> int:
return hash("solution") ^ hash(self.filename)
@overrides
def get_candidates(
self, req: Optional[pkg_resources.Requirement]
) -> Sequence[Candidate]:
if req is None:
return [_candidate_from_node(node) for node in self.solution]
if (
req_compile.utils.normalize_project_name(req.project_name)
in self.excluded_packages
):
return []
try:
node = self.solution[req.project_name]
candidate = _candidate_from_node(node)
return [candidate]
except KeyError:
return []
@overrides
def resolve_candidate(
self, candidate: Candidate
) -> Tuple[RequirementContainer, bool]:
if candidate.preparsed is None:
raise NoCandidateException(
req_compile.utils.parse_requirement(candidate.name)
)
return candidate.preparsed, True
@overrides
def close(self) -> None:
pass
def load_from_file(self, filename: str) -> None:
self.solution = req_compile.dists.DistributionCollection()
if filename == "-":
reqfile = sys.stdin
else:
reqfile = open(filename, encoding="utf-8")
try:
self._load_from_lines(reqfile.readlines(), meta_file=filename)
finally:
if reqfile is not sys.stdin:
reqfile.close()
self._remove_nodes()
    def _load_from_lines(
        self, lines: Iterable[str], meta_file: Optional[str] = None
    ) -> None:
for line in lines:
# Skip directives we don't process in solutions (like --index-url)
if line.strip().startswith("--") and not self._partial_line:
continue
self._parse_line(line, meta_file)
if self._partial_line:
self._parse_multi_line("", meta_file)
def _remove_nodes(self) -> None:
nodes_to_remove = []
missing_ver = req_compile.utils.parse_version("0+missing")
for node in self.solution:
if node.metadata is None or node.metadata.version == missing_ver:
nodes_to_remove.append(node)
for node in nodes_to_remove:
try:
del self.solution.nodes[node.key]
except KeyError:
pass
    def _parse_line(self, line: str, meta_file: Optional[str] = None) -> None:
if self._partial_line:
self._parse_multi_line(line, meta_file)
return
req_part, has_comment, _ = line.partition("#")
req_part = req_part.strip()
if not req_part:
return
        # If there is no comment yet, or the requirement part ends with a
        # line-continuation backslash, treat this as a multi-line entry.
if not has_comment or req_part[-1] == "\\":
self._parse_multi_line(line, meta_file)
return
self._parse_single_line(line)
    def _parse_single_line(self, line: str, meta_file: Optional[str] = None) -> None:
req_hash_part, _, source_part = line.partition("#")
req_hash_part = req_hash_part.strip()
if not req_hash_part:
return
hashes = req_hash_part.split("--hash=")
req_part = hashes[0]
req = req_compile.utils.parse_requirement(req_part)
if (
not source_part.strip()
or "#" in source_part
or source_part.startswith(" via")
):
parts = source_part.strip().split("#")
in_sources = False
sources = []
for part in parts:
part = part.strip()
if in_sources:
sources.append(part)
continue
if part.startswith("via"):
if part != "via":
sources.append(part[4:])
in_sources = True
if not sources:
raise RepositoryInitializationError(
SolutionRepository,
"Solution file {} is not fully annotated and cannot be used. Consider"
" compiling the solution against a remote index to add annotations.".format(
meta_file
),
)
else:
            # Strip off the repository index if --annotate was used.
source_part = source_part.strip()
if source_part[0] == "[":
_, _, source_part = source_part.partition("] ")
sources = source_part.split(", ")
dist_hash: Optional[str] = None
if len(hashes) > 1:
dist_hash = hashes[1]
if len(hashes) > 2:
self.logger.debug("Discarding %d hashes, using first", len(hashes) - 2)
try:
self._add_sources(req, sources, dist_hash=dist_hash)
        except Exception as ex:
            raise ValueError(f"Failed to parse line: {line}") from ex
    def _parse_multi_line(self, line: str, meta_file: Optional[str] = None) -> None:
stripped_line = line.strip()
stripped_line = stripped_line.rstrip("\\")
# Is this the start of a new requirement, or the end of the document?
if self._partial_line and (
not stripped_line or not stripped_line.startswith(("#", "--"))
):
self._parse_single_line(self._partial_line, meta_file=meta_file)
self._partial_line = ""
self._partial_line += stripped_line
def _add_sources(
self,
req: pkg_resources.Requirement,
sources: Iterable[str],
        dist_hash: Optional[str] = None,
) -> None:
pkg_names = map(lambda x: x.split(" ")[0], sources)
constraints = map(
lambda x: x.split(" ")[1].replace("(", "").replace(")", "")
if "(" in x
else None,
sources,
)
version = req_compile.utils.parse_version(list(req.specs)[0][1])
metadata = None
if req.project_name in self.solution:
metadata = self.solution[req.project_name].metadata
if metadata is None:
metadata = req_compile.containers.DistInfo(req.name, version, [])
metadata.hash = dist_hash
metadata.version = version
metadata.origin = self
self.solution.add_dist(metadata, None, req)
for name, constraint in zip(pkg_names, constraints):
if name and not (
name.endswith(".txt")
or name.endswith(".out")
or "\\" in name
or "/" in name
):
constraint_req = None
try:
constraint_req = req_compile.utils.parse_requirement(name)
# Use .name instead of .project_name to avoid normalization
proj_name = constraint_req.name
except ValueError:
proj_name = name
self.solution.add_dist(proj_name, None, constraint_req)
reverse_dep = self.solution[name]
if reverse_dep.metadata is None:
inner_meta = req_compile.containers.DistInfo(
proj_name, req_compile.utils.parse_version("0+missing"), [],
)
inner_meta.origin = ReferenceSourceRepository(inner_meta)
reverse_dep.metadata = inner_meta
else:
reverse_dep = None
reason = _create_metadata_req(req, metadata, name, constraint)
if reverse_dep is not None:
assert reverse_dep.metadata is not None
reverse_dep.metadata.reqs.append(reason)
self.solution.add_dist(metadata.name, reverse_dep, reason)
def _create_metadata_req(
req: pkg_resources.Requirement,
metadata: RequirementContainer,
name: str,
constraints: Optional[str],
) -> pkg_resources.Requirement:
marker = ""
if "[" in name:
extra = next(iter(req_compile.utils.parse_requirement(name).extras))
marker = ' ; extra == "{}"'.format(extra)
return req_compile.utils.parse_requirement(
"{}{}{}{}".format(
metadata.name,
("[" + ",".join(req.extras) + "]") if req.extras else "",
constraints if constraints else "",
marker,
)
    )
# ---- end of req_compile/repos/solution.py ----
import enum
import logging
import os
import re
import sys
import time
import urllib
import urllib.parse
import warnings
from functools import lru_cache
from hashlib import sha256
from html.parser import HTMLParser
from typing import Any, List, Optional, Sequence, Tuple
import pkg_resources
import requests
from overrides import overrides
from req_compile.containers import RequirementContainer
from req_compile.errors import MetadataError
from req_compile.metadata import extract_metadata
from req_compile.repos.repository import Candidate, Repository, filename_to_candidate
LOG = logging.getLogger("req_compile.repository.pypi")
SYS_PY_VERSION = pkg_resources.parse_version(
sys.version.split(" ", 1)[0].replace("+", "")
)
SYS_PY_MAJOR = pkg_resources.parse_version("{}".format(sys.version_info.major))
SYS_PY_MAJOR_MINOR = pkg_resources.parse_version(
"{}.{}".format(sys.version_info.major, sys.version_info.minor)
)
OPS = {
"<": lambda x, y: x < y,
">": lambda x, y: x > y,
"==": lambda x, y: x == y,
"!=": lambda x, y: x != y,
">=": lambda x, y: x >= y,
"<=": lambda x, y: x <= y,
}
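`OPS` lets a parsed constraint be applied as a simple function call. A standalone demonstration of the same table, using plain version tuples in place of `packaging` versions:

```python
OPS_SKETCH = {
    "<": lambda x, y: x < y,
    ">": lambda x, y: x > y,
    "==": lambda x, y: x == y,
    "!=": lambda x, y: x != y,
    ">=": lambda x, y: x >= y,
    "<=": lambda x, y: x <= y,
}


def satisfies(version, op, bound):
    # version/bound as tuples, e.g. (3, 9) >= (3, 6)
    return OPS_SKETCH[op](version, bound)
```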
def check_python_compatibility(requires_python: Optional[str]) -> bool:
    """Check whether the running interpreter satisfies a Requires-Python expression."""
    if requires_python is None:
        return True
try:
return all(
_check_py_constraint(part)
for part in requires_python.split(",")
if part.strip()
)
    except ValueError as ex:
        raise ValueError(
            "Unable to parse requires python expression: {}".format(requires_python)
        ) from ex
def _check_py_constraint(version_constraint: str) -> bool:
ref_version = SYS_PY_VERSION
version_part = re.split("[!=<>~]", version_constraint)[-1].strip()
operator = version_constraint.replace(version_part, "").strip()
if version_part and not operator:
operator = "=="
dotted_parts = len(version_part.split("."))
if version_part.endswith(".*"):
version_part = version_part.replace(".*", "")
if dotted_parts == 3:
ref_version = SYS_PY_MAJOR_MINOR
elif dotted_parts == 2:
ref_version = SYS_PY_MAJOR
else:
if dotted_parts == 2:
ref_version = SYS_PY_MAJOR_MINOR
elif dotted_parts == 1:
ref_version = SYS_PY_MAJOR_MINOR
version_part += ".0"
version = pkg_resources.parse_version(version_part)
if operator == "~=":
# Convert ~= to the >=, < equivalent check
# See: https://packaging.python.org/guides/distributing-packages-using-setuptools/#python-requires
major_num = int(str(version_part).split(".", maxsplit=1)[0])
equivalent_check = ">={},<{}".format(version_part, major_num + 1)
return check_python_compatibility(equivalent_check)
try:
return OPS[operator](ref_version, version)
    except KeyError as ex:
        raise ValueError(
            "Unable to parse constraint {}".format(version_constraint)
        ) from ex
class LinksHTMLParser(HTMLParser):
def __init__(self, url: str) -> None:
super().__init__()
self.url = url
self.dists: List[Candidate] = []
self.active_link: Optional[Tuple[str, Optional[str]]] = None
self.active_skip = False
warnings.filterwarnings(
"ignore", category=pkg_resources.PkgResourcesDeprecationWarning # type: ignore[attr-defined]
)
def handle_starttag(self, tag: str, attrs: List[Tuple[str, Optional[str]]]) -> None:
self.active_link = None
if tag == "a":
self.active_skip = False
requires_python = None
for attr in attrs:
if attr[0] == "href":
self.active_link = self.url, attr[1]
elif (
attr[0] == "metadata-requires-python"
or attr[0] == "data-requires-python"
):
requires_python = attr[1]
if requires_python:
try:
self.active_skip = not check_python_compatibility(requires_python)
except ValueError:
LOG.error(
'Failed to parse requires expression "%s" for requirement %s',
requires_python,
self.active_link,
)
def handle_data(self, data: str) -> None:
if self.active_link is None or self.active_skip:
return
candidate = filename_to_candidate(self.active_link, data)
if candidate is not None:
self.dists.append(candidate)
def error(self, message: str) -> None:
raise RuntimeError(message)
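`LinksHTMLParser` follows the standard `html.parser.HTMLParser` start-tag/data pattern. A pared-down version of the same pattern, fed a hypothetical simple-index fragment:

```python
from html.parser import HTMLParser


class SimpleIndexParser(HTMLParser):
    """Collects (href, link text) pairs from <a> tags, like LinksHTMLParser."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        if self._href is not None and data.strip():
            self.links.append((self._href, data.strip()))
            self._href = None


parser = SimpleIndexParser()
parser.feed('<a href="/p/pkg-1.0.tar.gz#sha256=abc">pkg-1.0.tar.gz</a>')
```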
def normalize(name: str) -> str:
"""Normalize per PEP-0503."""
return re.sub(r"(\s|[-_.])+", "-", name).lower()
@lru_cache(maxsize=None)
def _scan_page_links(
index_url: str, project_name: str, session: requests.Session, retries: int
) -> Sequence[Candidate]:
"""Scan a Python index's HTML page for links for a given project.
Args:
index_url: Base index URL to request from.
        project_name: Project to fetch candidates for.
session: Open requests session.
        retries: Number of times to retry.
Returns:
Candidates on this index's page.
"""
url = "{index_url}/{project_name}".format(
index_url=index_url, project_name=normalize(project_name)
)
LOG.info("Fetching versions for %s from %s", project_name, url)
if session is None:
session = requests
response = session.get(url + "/")
if retries and 500 <= response.status_code < 600:
time.sleep(0.1)
return _scan_page_links(index_url, project_name, session, retries - 1)
# Raise for any error status that's not 404
if response.status_code != 404:
response.raise_for_status()
parser = LinksHTMLParser(response.url)
parser.feed(response.content.decode("utf-8"))
return parser.dists
def _do_download(
logger: logging.Logger,
filename: str,
link: Tuple[str, str],
session: requests.Session,
wheeldir: str,
) -> Tuple[str, bool]:
url, resource = link
split_link = resource.split("#sha256=")
if len(split_link) > 1:
sha = split_link[1]
else:
sha = None
output_file = os.path.join(wheeldir, filename)
if sha is not None and os.path.exists(output_file):
hasher = sha256()
with open(output_file, "rb") as handle:
while True:
block = handle.read(4096)
if not block:
break
hasher.update(block)
if hasher.hexdigest() == sha:
logger.info("Reusing %s", output_file)
return output_file, True
logger.debug("No hash match for downloaded file, removing")
os.remove(output_file)
else:
logger.debug("No file in wheel-dir")
full_link = urllib.parse.urljoin(url, resource)
logger.info("Downloading %s -> %s", full_link, output_file)
if session is None:
session = requests
response = session.get(full_link, stream=True)
with open(output_file, "wb") as handle:
for block in response.iter_content(4 * 1024):
handle.write(block)
return output_file, False
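The reuse check in `_do_download` streams a cached file through `sha256` before trusting it. The same check in isolation, exercised against a throwaway temp file with hypothetical contents:

```python
import hashlib
import tempfile


def file_matches_sha256(path, expected_hex):
    """Stream the file in blocks and compare its sha256, as _do_download does."""
    hasher = hashlib.sha256()
    with open(path, "rb") as handle:
        for block in iter(lambda: handle.read(4096), b""):
            hasher.update(block)
    return hasher.hexdigest() == expected_hex


# Demonstration with a temporary file standing in for a downloaded wheel.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"wheel-bytes")
digest = hashlib.sha256(b"wheel-bytes").hexdigest()
```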
class IndexType(enum.Enum):
DEFAULT = 0
INDEX_URL = 1
EXTRA_INDEX_URL = 2
class PyPIRepository(Repository):
"""A repository that conforms to the PEP standard for webpage index of python distributions."""
def __init__(
self,
index_url: str,
wheeldir: str,
allow_prerelease: bool = False,
retries: int = 3,
index_type: IndexType = IndexType.INDEX_URL,
) -> None:
"""Constructor.
Args:
index_url (str): URL of the base index
wheeldir (str): Directory to download wheels and source dists to, if required
allow_prerelease (bool, optional): Whether to consider prereleases
retries (int): Number of times to retry. A value of 0 will never retry
index_type: Type of PyPI repository this is, e.g. --extra-index-url.
"""
super().__init__("pypi", allow_prerelease)
if index_url.endswith("/"):
index_url = index_url[:-1]
self.index_url = index_url
if wheeldir is not None:
self.wheeldir = os.path.abspath(wheeldir)
else:
self.wheeldir = None
self.allow_prerelease = allow_prerelease
self.retries = retries
self.index_type = index_type
self.session = requests.Session()
def __repr__(self) -> str:
if self.index_type == IndexType.DEFAULT:
return f"<default index> {self.index_url}"
elif self.index_type == IndexType.INDEX_URL:
return f"--index-url {self.index_url}"
elif self.index_type == IndexType.EXTRA_INDEX_URL:
return f"--extra-index-url {self.index_url}"
raise ValueError("Unknown url type")
def __eq__(self, other: Any) -> bool:
return (
isinstance(other, PyPIRepository)
and super(PyPIRepository, self).__eq__(other)
and self.index_url == other.index_url
)
def __hash__(self) -> int:
return hash("pypi") ^ hash(self.index_url)
@overrides
def get_candidates(
self, req: Optional[pkg_resources.Requirement]
) -> Sequence[Candidate]:
if req is None:
return []
return _scan_page_links(
self.index_url, req.project_name, self.session, self.retries
)
@overrides
def resolve_candidate(
self, candidate: Candidate
) -> Tuple[RequirementContainer, bool]:
filename, cached = None, True
try:
# In this repository type, filename is always provided.
if candidate.filename is None:
raise ValueError("Could not find the local filename to download to.")
filename, cached = _do_download(
self.logger,
candidate.filename,
candidate.link,
self.session,
self.wheeldir,
)
dist_info = extract_metadata(filename, origin=self)
_, resource = candidate.link
if "#" in resource:
_, _, hash_pair = resource.partition("#")
dist_info.hash = hash_pair.replace("=", ":")
return dist_info, cached
except MetadataError:
if not cached and filename is not None:
try:
os.remove(filename)
except EnvironmentError:
pass
raise
@overrides
def close(self) -> None:
        self.session.close()
# ---- end of req_compile/repos/pypi.py ----
import logging
import os
import re
import zipfile
from contextlib import closing
from typing import Iterable, Optional
from req_compile import utils
from req_compile.containers import DistInfo
from req_compile.errors import MetadataError
LOG = logging.getLogger("req_compile.metadata.dist_info")
def _find_dist_info_metadata(project_name, namelist):
# type: (str, Iterable[str]) -> Optional[str]
"""
In a list of zip path entries, find the one that matches the dist-info for this project
Args:
project_name (str): Project name to match
namelist (list[str]): List of zip paths
Returns:
(str) The best zip path that matches this project
"""
for best_match in (
r"^(.+/)?{}-.+\.dist-info/METADATA$".format(project_name),
r"^.*\.dist-info/METADATA",
):
for info in namelist:
if re.match(best_match, info):
LOG.debug(
"Found dist-info in the zip: %s (with regex %s)", info, best_match
)
return info
return None
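The two regexes in `_find_dist_info_metadata` above prefer a `.dist-info` directory embedding the project name, then fall back to any `.dist-info`. A standalone sketch of that two-pass search (with `re.escape` and an anchored fallback added for safety):

```python
import re


def find_metadata_sketch(project_name, namelist):
    # Exact-project match first, any dist-info METADATA second.
    for pattern in (
        r"^(.+/)?{}-.+\.dist-info/METADATA$".format(re.escape(project_name)),
        r"^.*\.dist-info/METADATA$",
    ):
        for name in namelist:
            if re.match(pattern, name):
                return name
    return None


NAMES = ["other-2.0.dist-info/METADATA", "demo-1.0.dist-info/METADATA"]
```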
def _fetch_from_wheel(wheel):
# type: (str) -> Optional[DistInfo]
"""
Fetch metadata from a wheel file
Args:
wheel (str): Wheel filename
Returns:
(DistInfo, None) The metadata for this zip, or None if it could not be found or parsed
"""
project_name = os.path.basename(wheel).split("-")[0]
result = None
zfile = None
try:
zfile = zipfile.ZipFile(wheel, "r")
with closing(zfile):
# Reverse since metadata details are supposed to be written at the end of the zip
infos = list(reversed(zfile.namelist()))
result = _find_dist_info_metadata(project_name, infos)
if result is not None:
return _parse_flat_metadata(
zfile.read(result).decode("utf-8", "ignore")
)
LOG.warning("Could not find .dist-info/METADATA in the zip archive")
except zipfile.BadZipfile as ex:
LOG.warning("Bad zip file: %s", ex)
return None
def _parse_flat_metadata(contents: str) -> DistInfo:
name = None
version = None
raw_reqs = []
for line in contents.split("\n"):
lower_line = line.lower()
if name is None and lower_line.startswith("name:"):
name = line.split(":")[1].strip()
elif version is None and lower_line.startswith("version:"):
version = utils.parse_version(line.split(":")[1].strip())
elif lower_line.startswith("requires-dist:"):
raw_reqs.append(line.partition(":")[2].strip())
if name is None:
raise MetadataError(
"unknown", version, ValueError("Missing name metadata for package")
)
    return DistInfo(name, version, list(utils.parse_requirements(raw_reqs)))
# ---- end of req_compile/metadata/dist_info.py ----
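The flat-`METADATA` parsing in `_parse_flat_metadata` above amounts to scanning RFC 822-style `Key: value` lines. A self-contained sketch of that scan (splitting on the first `:` only):

```python
def parse_metadata_sketch(contents):
    """Pull Name, Version, and Requires-Dist out of METADATA-style text."""
    name = version = None
    requires = []
    for line in contents.splitlines():
        lower = line.lower()
        if name is None and lower.startswith("name:"):
            name = line.split(":", 1)[1].strip()
        elif version is None and lower.startswith("version:"):
            version = line.split(":", 1)[1].strip()
        elif lower.startswith("requires-dist:"):
            requires.append(line.partition(":")[2].strip())
    return name, version, requires


META = "Metadata-Version: 2.1\nName: demo\nVersion: 1.0\nRequires-Dist: flask (>=1.0)\n"
```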
import functools
import logging
import os
import zipfile
from typing import List, Optional
import pkg_resources
from req_compile.containers import RequirementContainer
from req_compile.errors import MetadataError
from req_compile.repos.repository import Repository
from ..utils import parse_version
from .dist_info import _fetch_from_wheel
from .extractor import NonExtractor, TarExtractor, ZipExtractor
from .pyproject import fetch_from_pyproject
from .source import _fetch_from_source
LOG = logging.getLogger("req_compile.metadata")
def extract_metadata(
    filename: str, allow_run_setup_py: bool = True, origin: Optional[Repository] = None
) -> RequirementContainer:
"""Extract a DistInfo from a file or directory
Args:
filename: File or path to extract metadata from
allow_run_setup_py: Whether this call is permitted to run setup.py files
origin: Origin of the metadata
Returns:
(RequirementContainer) the result of the metadata extraction
"""
LOG.info("Extracting metadata for %s", filename)
basename, ext = os.path.splitext(filename)
result: Optional[RequirementContainer] = None
ext = ext.lower()
# Gather setup requires from setup.py and pyproject.toml.
setup_requires: List[pkg_resources.Requirement] = []
if ext == ".whl":
LOG.debug("Extracting from wheel")
try:
result = _fetch_from_wheel(filename)
except zipfile.BadZipfile as ex:
raise MetadataError(
os.path.basename(filename).replace(".whl", ""), parse_version("0.0"), ex
)
elif ext == ".zip":
LOG.debug("Extracting from a zipped source package")
result = _fetch_from_source(
filename, ZipExtractor, run_setup_py=allow_run_setup_py
)
elif ext in (".gz", ".bz2", ".tgz"):
LOG.debug("Extracting from a tar package")
if ext == ".tgz":
ext = "gz"
result = _fetch_from_source(
os.path.abspath(filename),
functools.partial(TarExtractor, ext.replace(".", "")),
run_setup_py=allow_run_setup_py,
)
elif ext in (".egg",):
LOG.debug("Attempted to resolve an unsupported format")
raise MetadataError(basename, None, ValueError(".egg files are not supported"))
elif os.path.exists(os.path.join(filename, "pyproject.toml")):
LOG.debug("Extracting from a pyproject.toml")
result, setup_requires = fetch_from_pyproject(filename)
if result is None:
LOG.debug("Extracting directly from a source directory")
result = _fetch_from_source(
os.path.abspath(filename), NonExtractor, run_setup_py=allow_run_setup_py
)
if result is not None:
result.origin = origin
if result is None:
raise MetadataError(basename, None, ValueError("Could not extract metadata"))
result.setup_reqs.extend(setup_requires)
    return result
# ---- end of req_compile/metadata/metadata.py ----
from __future__ import annotations
import abc
import codecs
import io
import logging
import os
import shutil
import tarfile
import zipfile
from io import BytesIO
from types import TracebackType
from typing import (
IO,
Any,
Dict,
Iterable,
Iterator,
List,
Optional,
Type,
Union,
cast,
)
from typing_extensions import Literal
LOG = logging.getLogger("req_compile.extractor")
class Extractor(metaclass=abc.ABCMeta):
"""Abstract base class for file extractors. These classes operate on archive files
or directories in order to expose files to metadata analysis and executing setup.pys.
"""
def __init__(self, extractor_type: str, file_or_path: str) -> None:
self.logger = LOG.getChild(extractor_type)
self.fake_root: str = os.path.abspath(os.sep + os.path.basename(file_or_path))
self.io_open = io.open
self.renames: Dict[Union[str, int], Union[str, int]] = {}
def contains_path(self, path: str) -> bool:
"""Whether or not the archive contains the given path, based on the fake root.
Returns:
(bool)
"""
return os.path.abspath(path).startswith(self.fake_root)
def add_rename(self, name: str, new_name: str) -> None:
"""Add a rename entry for a file in the archive"""
self.renames[self.to_relative(new_name)] = self.to_relative(name)
def open(
self, file: str, mode: str = "r", encoding: str = None, **_kwargs: Any
) -> IO[str]:
"""Open a real file or a file within the archive"""
relative_filename = self.to_relative(file)
if (
isinstance(relative_filename, int)
or file == os.devnull
or os.path.isabs(relative_filename)
):
return self.io_open(file, mode=mode, encoding=encoding)
handle = self._open_handle(relative_filename)
if "b" in mode:
return handle
return cast(IO[str], WithDecoding(handle, encoding or "ascii"))
@abc.abstractmethod
def names(self) -> Iterable[str]:
"""Fetch all names within the archive
Returns:
(generator[str]): Filenames, in the context of the archive
"""
raise NotImplementedError
@abc.abstractmethod
def _open_handle(self, filename: str) -> Any:
raise NotImplementedError
@abc.abstractmethod
def _check_exists(self, filename: str) -> bool:
raise NotImplementedError
def exists(self, filename: Union[str, int]) -> bool:
"""Check whether a file or directory exists within the archive.
Will not check non-archive files"""
relative_fd = self.to_relative(filename)
if isinstance(relative_fd, int):
return os.path.exists(filename)
return self._check_exists(relative_fd)
def close(self) -> None:
raise NotImplementedError
def to_relative(self, filename: Union[str, int]) -> Union[str, int]:
"""Convert a path to an archive relative path if possible. If the target file is not
within the archive, the path will be returned as is
Returns:
(str) The path to use to open the file or check existence
"""
if isinstance(filename, int):
return filename
if filename.replace("\\", "/").startswith("./"):
filename = filename[2:]
result = filename
if os.path.isabs(filename):
if self.contains_path(filename):
result = filename.replace(self.fake_root, ".", 1)
else:
cur = os.getcwd()
if cur != self.fake_root:
result = os.path.relpath(cur, self.fake_root) + "/" + filename
result = result.replace("\\", "/")
if result.startswith("./"):
result = result[2:]
mapped_result: Union[str, int] = result
if result in self.renames:
mapped_result = self.renames[result]
return mapped_result
def contents(self, name: str) -> str:
"""Read the full contents of a file opened with Extractor.open
Returns:
(str) The full file contents
"""
with self.open(name, encoding="utf-8") as handle:
return handle.read()
@abc.abstractmethod
def extract(self, target_dir: str) -> None:
raise NotImplementedError
class NonExtractor(Extractor):
"""An extractor that operates on the filesystem directory instead of an archive"""
def __init__(self, path: str) -> None:
super(NonExtractor, self).__init__("fs", path)
self.path = path
self.os_path_exists = os.path.exists
def names(self) -> Iterable[str]:
for root, _, files in os.walk(self.path):
rel_root = root.replace(self.path, ".").replace("\\", "/")
if rel_root != ".":
rel_root += "/"
else:
rel_root = ""
for filename in files:
yield rel_root + filename
def extract(self, target_dir: str) -> None:
# Copy the entire file tree to the target directory
for filename in os.listdir(self.path):
path = os.path.join(self.path, filename)
if os.path.isdir(path):
shutil.copytree(path, os.path.join(target_dir, filename))
else:
shutil.copy2(path, target_dir)
def _check_exists(self, filename: str) -> bool:
return self.os_path_exists(self.path + "/" + filename)
def _open_handle(self, filename: str) -> IO[bytes]:
return self.io_open(os.path.join(self.path, filename), "rb")
def close(self) -> None:
pass
class TarExtractor(Extractor):
"""An extractor for tar files. Accepts an additional first parameter for the decoding codec"""
def __init__(self, ext: Literal["gz"], filename: str):
super(TarExtractor, self).__init__("tar", filename)
self.tar = tarfile.open(filename, "r:" + ext)
self.io_open = io.open
def names(self) -> Iterable[str]:
return (info.name for info in self.tar.getmembers() if info.type != b"5")
def _check_exists(self, filename: str) -> bool:
try:
self.tar.getmember(filename)
return True
except KeyError:
return False
def extract(self, target_dir: str) -> None:
self.tar.extractall(path=target_dir)
def _open_handle(self, filename: str) -> Any:
try:
return self.tar.extractfile(filename)
except KeyError:
raise IOError("Could not find {}".format(filename))
def close(self) -> None:
self.tar.close()
class ZipExtractor(Extractor):
"""An extractor for zip files"""
def __init__(self, filename: str) -> None:
super(ZipExtractor, self).__init__("gz", filename)
self.zfile = zipfile.ZipFile(os.path.abspath(filename), "r")
self.io_open = io.open
def names(self) -> Iterable[str]:
return (name for name in self.zfile.namelist() if name[-1] != "/")
def _check_exists(self, filename: str) -> bool:
try:
self.zfile.getinfo(filename)
return True
except KeyError:
return any(name.startswith(filename + "/") for name in self.names())
def extract(self, target_dir: str) -> None:
self.zfile.extractall(path=target_dir)
def _open_handle(self, filename: str) -> Any:
try:
return BytesIO(self.zfile.read(filename))
except KeyError:
raise IOError("Could not find {}".format(filename))
def close(self) -> None:
self.zfile.close()
class WithDecoding:
"""Wrap a file object and handle decoding."""
def __init__(self, wrap: IO[bytes], encoding: str) -> None:
super().__init__()
if wrap is None:
raise FileNotFoundError
self.wrap = wrap
self.reader = codecs.getreader(encoding)(wrap)
def read(self, __n: int = 1024 * 1024) -> str:
return self.reader.read(__n)
def readline(self, __limit: int = None) -> str:
return self.reader.readline(__limit)
def readlines(self, __hint: int = None) -> List[str]:
return self.reader.readlines(__hint)
def write(self, data: Any) -> int:
del data
return 0
def __getattr__(self, item: str) -> Any:
return getattr(self.wrap, item)
def __iter__(self) -> Iterator[str]:
return iter(self.reader)
def __next__(self) -> str:
return next(self.reader)
def __enter__(self) -> WithDecoding:
return self
def __exit__(
self,
exc_type: Optional[Type[BaseException]],
exc: Optional[BaseException],
traceback: Optional[TracebackType],
) -> None:
pass
def close(self) -> None:
        self.wrap.close()
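The `to_relative` logic above normalizes paths before looking them up in the archive. A minimal standalone sketch of just that normalization step (simplified: it ignores the rename map and the fake-root handling):

```python
def normalize_archive_path(filename: str) -> str:
    """Simplified sketch of Extractor.to_relative path cleanup:
    forward slashes only, no leading './' prefix."""
    # Unify separators first so the './' check also works on Windows-style paths
    result = filename.replace("\\", "/")
    if result.startswith("./"):
        result = result[2:]
    return result

print(normalize_archive_path(".\\pkg\\setup.py"))  # pkg/setup.py
```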
from __future__ import annotations
import os
from pathlib import Path
import subprocess
from typing import Optional, Union
BRANCH_NAME = 'dep-update'
COMMIT_MESSAGE = 'Update {package} package to {version}'
SubprocessOutput = Union[
subprocess.CalledProcessError,
subprocess.CompletedProcess,
]
class Updater:
def __init__(self) -> None:
self.util = Util()
def check_applicable(self) -> bool:
"""
        Return whether this updater is applicable to the current repository
"""
return False
def update_dependencies(self) -> bool:
"""
Update dependencies
        Return whether updates were made
"""
return False
class Util:
def __init__(self) -> None:
self.push = False
self.verbose = False
self.dry_run = True
self.branch_exists = False
def check_repository_cleanliness(self) -> None:
"""
Check that the repository is ready for updating dependencies.
Non-clean repositories will raise a RuntimeError
"""
# Make sure there are no uncommitted files
if self.dry_run:
return
command = ['git', 'status', '--porcelain']
try:
result = self.execute_shell(command, True)
except subprocess.CalledProcessError as error:
raise RuntimeError('Must run within a git repository') from error
lines = result.stdout.split('\n')
# Do not count untracked files when checking for repository cleanliness
lines = [line for line in lines if line and line[:2] != '??']
if lines:
raise RuntimeError('Repository not clean')
def commit_git(self, commit_message: str) -> None:
"""Create a git commit of all changed files"""
self.log(commit_message)
command = ['git', 'commit', '-am', commit_message]
self.execute_shell(command, False)
def commit_dependency_update(self, dependency: str, version: str) -> None:
"""Create a commit with a dependency update"""
commit_message = COMMIT_MESSAGE.format(
package=dependency,
version=version,
)
self.commit_git(commit_message)
def create_branch(self) -> None:
"""Create a new branch for committing dependency updates"""
# Make sure branch does not already exist
command = ['git', 'branch', '--list', BRANCH_NAME]
result = self.execute_shell(command, True)
output = result.stdout
if output.strip() != '':
command = ['git', 'checkout', BRANCH_NAME]
self.branch_exists = True
else:
command = ['git', 'checkout', '-b', BRANCH_NAME]
self.execute_shell(command, False)
def rollback_branch(self) -> None:
"""Delete the dependency update branch"""
if self.branch_exists:
return
command = ['git', 'checkout', '-']
self.execute_shell(command, False)
command = ['git', 'branch', '-d', BRANCH_NAME]
self.execute_shell(command, False)
def reset_changes(self) -> None:
"""Reset any noncommitted changes to the branch"""
command = ['git', 'checkout', '.']
self.execute_shell(command, False)
def push_dependency_update(self) -> None:
"""Git push any commits to remote"""
if not self.push:
return
self.log('Pushing commit to git remote')
command = ['git', 'push', '-u', 'origin']
self.execute_shell(command, False)
def check_major_version_update(
self, dependency: str, old_version: str, new_version: str
) -> Optional[bool]:
"""
Try to parse versions as semver and compare major version numbers.
Log a warning if the major version numbers are different.
Returns True if there is a major version bump
Returns False if there is not a major version bump
Returns None if versions are not semver
"""
old_version_parsed = old_version.lstrip('^~ ').split('.')
new_version_parsed = new_version.lstrip('^~ ').split('.')
if len(old_version_parsed) != 3 or len(new_version_parsed) != 3:
return None
try:
old_version_major = int(old_version_parsed[0])
except ValueError:
return None
try:
new_version_major = int(new_version_parsed[0])
except ValueError:
return None
if old_version_major == new_version_major:
return False
self.warn(
'Warning: Major version change on %s: %s updated to %s'
% (dependency, old_version, new_version)
)
return True
def execute_shell(
self,
command: list[str],
readonly: bool,
cwd: Optional[Path] = None,
suppress_output: bool = False,
ignore_exit_code: bool = False,
) -> SubprocessOutput:
"""Helper method to execute commands in a shell and return output"""
if self.verbose:
self.log(' '.join(command))
if self.dry_run and not readonly:
return subprocess.CompletedProcess(
command, 0, stdout='', stderr=''
)
try:
result = subprocess.run(
command,
cwd=cwd,
capture_output=True,
check=True,
encoding='utf-8',
)
except subprocess.CalledProcessError as error:
if ignore_exit_code:
return error
if not suppress_output:
self.log(error.stdout)
self.warn(error.stderr)
raise
return result
@staticmethod
def is_no_color() -> bool:
"""Return if output should have no color: https://no-color.org/"""
return 'NO_COLOR' in os.environ
def warn(self, data: str) -> None:
"""Helper method for warn-level logs"""
if not Util.is_no_color():
data = f'\033[93m{data}\033[0m'
return self.log(data)
def log(self, data: str) -> None:
"""Helper method for taking care of logging statements"""
        print(data)
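The major-version check in `check_major_version_update` can be sketched standalone. This mirrors the parsing above: strip `^`/`~` prefixes, require three dot-separated numeric-leading parts, then compare the first component:

```python
from typing import Optional

def is_major_bump(old_version: str, new_version: str) -> Optional[bool]:
    """Return True on a major version bump, False otherwise,
    and None when either version is not plain semver."""
    def major(version: str) -> Optional[int]:
        parts = version.lstrip('^~ ').split('.')
        if len(parts) != 3:
            return None
        try:
            return int(parts[0])
        except ValueError:
            return None
    old_major, new_major = major(old_version), major(new_version)
    if old_major is None or new_major is None:
        return None
    return old_major != new_major

print(is_major_bump('^1.2.3', '2.0.0'))  # True
```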
# reqman (3.X)
Reqman is a postman killer ;-)
**Reqman** is a [postman](https://www.getpostman.com/) killer. It shares the same goal, but without a GUI ... the GUI is simply your favorite text editor, because requests/tests are only simple yaml files. **Reqman** is a command line tool, available on all platforms.
[Video (1minute)](https://www.youtube.com/embed/ToK-5VwxhP4?autoplay=1&loop=1&playlist=ToK-5VwxhP4&cc_load_policy=1)
All configuration is done via simple yaml files, editable with any text editor. Results are displayed in the console and in an html output. It's scriptable, and can be used as a daemon/cron task to automate your tests.
[Documentation](https://reqman-docs.glitch.me/)
[Changelog](https://github.com/manatlan/reqman/blob/master/changelog) !
[Online tool to convert swagger/openapi3, OR postman collections](https://reqman-tools.glitch.me/) to reqman's tests
[DEMO](https://test-reqman.glitch.me)
**Features**
* Light (simple py3 module, 3000 lines of code, and x3 lines for unittests, in TDD mind ... cov:97%)
* Powerful (at least as postman free version)
* tests are simple (no code !)
* Variable pool
* can create(save)/re-use variables per request
* "procedures" (declarations & re-use/call), local or global
* Environment aware (switch easily)
* https/ssl ok (bypass)
* http 1.0, 1.1, 2.0
* support HTTP(S)_PROXY environment variables
* headers inherits
* ~~tests inherits~~
* ~~timed requests + average times~~
* html tests renderer (with request/response contents)
* encoding aware
* cookie handling
* color output in console (when [colorama](https://pypi.org/project/colorama/) is present)
* variables can be computed/transformed (in a chain way)
* tests files extension : .yml or .rml (ReqManLanguage)
* generate conf/rml (with 'new' command)
* can parallelize tests (option `--p`)
* versionning
* NEW 2.0 :
    * rewrite from scratch, a lot stronger & faster !
* advanced save mechanisms
* new switches system
* a lot of new options (auto open browser, set html filename, compare, ...)
* ability to save the state, to compare with newer ones
* ability to replay given tests (rmr file)
* dual mode : compare switches vs others switches (-env1 +env2) in html output
* shebang mode
* better html output
* fully compatible with reqman1 conf/ymls
* xml/xpath support tests
* used as a lib/module, you can add easily your own features (see below)
* NEW 3.0 :
    * full rewrite of resolving mechanism (more robust, more maintainable)
* "ignorable vars" (avoid ResolveException, with `<<var?>>`)
## Getting started : installation
If you are on an *nix platform, you can start with pip :
$ pip3 install reqman
it will install the _reqman_ script in your path (perhaps you'll need to add `~/.local/bin` to the _PATH_ environment variable).
If you are on microsoft windows, just download [reqman.exe (v2)](https://github.com/manatlan/reqman/blob/master/dist/reqman.exe). [The old v1 reqman.exe, is still there](https://github.com/manatlan/reqman/blob/reqman1.4.4.0/dist/reqman.exe), and add it in your path.
## Getting started : let's go
Imagine that you want to test the [json api from pypi.org](https://wiki.python.org/moin/PyPIJSON), to verify that [it finds me](https://pypi.org/pypi/reqman/json) ;-)
(if you are on windows, just replace `reqman` with `reqman.exe`)
You can start a new project in your folder, like that:
$ reqman new https://pypi.org/pypi/reqman/json
It's the first start ; it will create a conf file _reqman.conf_ and a (basic) test file _0010_test.rml_. These files are [YAML](https://en.wikipedia.org/wiki/YAML), so ensure that your editor understands them !
(Subsequent 'new' commands will just create another fresh rml file if a _reqman.conf_ already exists)
Now, you can run/test it :
$ reqman .
It will scan your folder "." and run all test files (`*.rml` or `*.yml`) against the _reqman.conf_ ;-)
It will show you what happened in your console, and generate a `reqman.html` with more details (open it to get an idea)!
If you edit the `reqman.conf`, you will see :
```yaml
root: https://pypi.org
headers:
User-Agent: reqman (https://github.com/manatlan/reqman)
```
the **root** is a `special var` which will be prepended to all relative urls in your requests tests.
the **headers** (which is a `special var` too) is a set of `http headers` which will be added to all your requests.
Change it to, and save it :
```yaml
root: https://pypi.org
headers:
User-Agent: reqman (https://github.com/manatlan/reqman)
switches:
test:
root: https://test.pypi.org
```
Now, you have created your first _switch_. And try to run your tests like this:
    $ reqman . -test
It will run your tests against the _root_ defined in the _test_ section ; and the test is KO, because _reqman_ doesn't exist on test.pypi.org !
In fact, everything declared under _test_ replaces what's declared at the top ! So you can declare multiple environments, with multiple switches !
But you can declare what you want, now edit _reqman.conf_ like this :
```yaml
root: https://pypi.org
headers:
User-Agent: reqman (https://github.com/manatlan/reqman)
package: reqman
switches:
test:
root: https://test.pypi.org
```
You have declared a _var_ **package** ! let's edit the test file _0010_test.rml_ like this :
```yaml
- GET: /pypi/<<package>>/json
tests:
- status: 200
```
Now, your test will use the **package** var which was declared in _reqman.conf_ ! So, you can create a _switch_ to change the package thru the command line, simply edit your _reqman.conf_ like that :
```yaml
root: https://pypi.org
headers:
User-Agent: reqman (https://github.com/manatlan/reqman)
package: reqman
switches:
test:
root: https://test.pypi.org
colorama:
package: colorama
```
Now, you can check that 'colorama' exists on pypi.org, like that :
$ reqman . -colorama
And you can check that 'colorama' exists on test.pypi.org, like that :
$ reqman . -colorama -test
As you can imagine, it's possible to make a lot of fun things easily. (see a more complex [reqman.conf](https://github.com/manatlan/reqman/blob/master/examples/reqman.conf))
Now, you can edit your rml file, and try the things available in this [tuto](https://github.com/manatlan/reqman/blob/reqman1.4.4.0/examples/tuto.yml).
Organize your tests as you want : you can make many requests in a rml file, you can make many files with many requests, you can make folders which contain many rml files. _Reqman_ will not scan sub-folders starting with "_" or ".".
_reqman_ will return an `exit code` which contains the number of KO tests : 0 if everything is OK, or -1 if there is a trouble (tests can't be run) : so it's easily scriptable in your automated workflows !
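Because the exit code encodes the number of failing tests, a wrapper script only has to inspect the return code. A hedged sketch of that interpretation (the mapping below follows the convention described above; feed it `subprocess.run(["reqman", "."]).returncode` in a real CI script):

```python
def summarize_exit(code: int) -> str:
    """Interpret reqman's exit code: 0 = all OK, -1 (seen as 255
    from a shell) = tests could not run, N > 0 = number of KO tests."""
    if code == 0:
        return "all tests OK"
    if code in (-1, 255):
        return "trouble: tests could not be run"
    return "%d test(s) KO" % code

print(summarize_exit(0))  # all tests OK
```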
# Ability to override reqman's features for your own needs (reqman>=2.8.1)
Now, it's super easy to override reqman with your own features, using 'reqman' as a lib/module in your Python code.
You can declare your own methods, to fulfill your special needs (hide special mechanisms, use external libs, ...):
(These real Python methods just need to respect the same signature as Python methods declared in confs.)
```python
import sys

import reqman
reqman.__usage__ = "USAGE: reqmanBis ..." # override usage or not ;-)
@reqman.expose
def SpecialMethod(x,ENV):
"""
Do what you want ...
Args:
'x' : the input var (any) or None.
'ENV' : the current env (dict)
Returns:
any
"""
...
return "What you need"
if __name__ == "__main__":
sys.exit(reqman.main())
```
Now, you can use `SpecialMethod` in your scripts, ex: `<<value|SpecialMethod>>` or `<<SpecialMethod>>`!
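The `<<value|SpecialMethod>>` syntax chains transformations. A toy resolver sketch, under the assumption that each registered method takes `(x, ENV)` as in the signature above (this is an illustration of the chaining idea, not reqman's actual resolver):

```python
def resolve(expr: str, methods: dict, env: dict):
    """Toy version of reqman's '<<var|method|method>>' chaining:
    look the first token up in env, then pipe it through each method."""
    first, *chain = expr.split("|")
    value = env.get(first, first)
    for name in chain:
        value = methods[name](value, env)
    return value

methods = {"upper": lambda x, ENV: x.upper()}
print(resolve("package|upper", methods, {"package": "reqman"}))  # REQMAN
```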
Use and abuse !
[](https://huntr.dev)
from textwrap import TextWrapper
from collections import OrderedDict
def twClosure(replace_whitespace=False,
break_long_words=False,
maxWidth=120,
maxLength=-1,
maxDepth=-1,
initial_indent=''):
"""
Deals with indentation of dictionaries with very long key, value pairs.
replace_whitespace: Replace each whitespace character with a single space.
break_long_words: If True words longer than width will be broken.
    maxWidth: The maximum length of wrapped lines.
    maxLength: The maximum number of items shown per container (-1 for no limit).
    maxDepth: The maximum nesting depth shown (-1 for no limit).
initial_indent: String that will be prepended to the first line of the output
Wraps all strings for both keys and values to 120 chars.
Uses 4 spaces indentation for both keys and values.
Nested dictionaries and lists go to next line.
"""
twr = TextWrapper(replace_whitespace=replace_whitespace,
break_long_words=break_long_words,
width=maxWidth,
initial_indent=initial_indent)
def twEnclosed(obj, ind='', depthReached=0, reCall=False):
"""
The inner function of the closure
ind: Initial indentation for the single output string
reCall: Flag to indicate a recursive call (should not be used outside)
"""
output = ''
if isinstance(obj, dict):
obj = OrderedDict(sorted(list(obj.items()),
key=lambda t: t[0],
reverse=False))
if reCall:
output += '\n'
ind += ' '
depthReached += 1
lengthReached = 0
for key, value in list(obj.items()):
lengthReached += 1
if lengthReached > maxLength and maxLength >= 0:
output += "%s...\n" % ind
break
if depthReached <= maxDepth or maxDepth < 0:
output += "%s%s: %s" % (ind,
''.join(twr.wrap(key)),
twEnclosed(value, ind, depthReached=depthReached, reCall=True))
elif isinstance(obj, (list, set)):
if reCall:
output += '\n'
ind += ' '
lengthReached = 0
for value in obj:
lengthReached += 1
if lengthReached > maxLength and maxLength >= 0:
output += "%s...\n" % ind
break
if depthReached <= maxDepth or maxDepth < 0:
output += "%s%s" % (ind, twEnclosed(value, ind, depthReached=depthReached, reCall=True))
else:
output += "%s\n" % str(obj) # join(twr.wrap(str(obj)))
return output
return twEnclosed
def twPrint(obj, maxWidth=120, maxLength=-1, maxDepth=-1):
"""
A simple caller of twClosure (see docstring for twClosure)
"""
twPrinter = twClosure(maxWidth=maxWidth,
maxLength=maxLength,
maxDepth=maxDepth)
print(twPrinter(obj))
def twFormat(obj, maxWidth=120, maxLength=-1, maxDepth=-1):
"""
A simple caller of twClosure (see docstring for twClosure)
"""
twFormatter = twClosure(maxWidth=maxWidth,
maxLength=maxLength,
maxDepth=maxDepth)
    return twFormatter(obj)
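A stripped-down cousin of the indentation scheme above (sorted keys, four-space indent per nesting level, nested containers on the next line) can be written without the `TextWrapper` machinery, to make the layout rules easy to see:

```python
def simple_format(obj, ind=""):
    """Minimal sketch of twFormat's layout: no wrapping,
    no depth/length limits, just sorted keys and indentation."""
    out = ""
    if isinstance(obj, dict):
        for key in sorted(obj):
            out += "%s%s: " % (ind, key)
            value = obj[key]
            if isinstance(value, (dict, list)):
                # Nested containers go to the next line, one level deeper
                out += "\n" + simple_format(value, ind + "    ")
            else:
                out += "%s\n" % value
    elif isinstance(obj, list):
        for value in obj:
            out += simple_format(value, ind)
    else:
        out += "%s%s\n" % (ind, obj)
    return out

print(simple_format({"b": 1, "a": {"c": 2}}))
```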
import io
import os
import stat
import subprocess
import time
import zlib
from Utils.Utilities import decodeBytesToUnicode
def calculateChecksums(filename):
"""
_calculateChecksums_
Get the adler32 and crc32 checksums of a file. Return None on error
Process line by line and adjust for known signed vs. unsigned issues
http://docs.python.org/library/zlib.html
The cksum UNIX command line tool implements a CRC32 checksum that is
different than any of the python algorithms, therefore open cksum
in a subprocess and feed it the same chunks of data that are used
to calculate the adler32 checksum.
"""
adler32Checksum = 1 # adler32 of an empty string
cksumProcess = subprocess.Popen("cksum", stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# the lambda basically creates an iterator function with zero
# arguments that steps through the file in 4096 byte chunks
with open(filename, 'rb') as f:
for chunk in iter((lambda: f.read(4096)), b''):
adler32Checksum = zlib.adler32(chunk, adler32Checksum)
cksumProcess.stdin.write(chunk)
cksumProcess.stdin.close()
cksumProcess.wait()
cksumStdout = cksumProcess.stdout.read().split()
cksumProcess.stdout.close()
# consistency check on the cksum output
filesize = os.stat(filename)[stat.ST_SIZE]
if len(cksumStdout) != 2 or int(cksumStdout[1]) != filesize:
raise RuntimeError("Something went wrong with the cksum calculation !")
cksumStdout[0] = decodeBytesToUnicode(cksumStdout[0])
return (format(adler32Checksum & 0xffffffff, '08x'), cksumStdout[0])
def tail(filename, nLines=20):
"""
_tail_
A version of tail
Adapted from code on http://stackoverflow.com/questions/136168/get-last-n-lines-of-a-file-with-python-similar-to-tail
"""
assert nLines >= 0
pos, lines = nLines + 1, []
# make sure only valid utf8 encoded chars will be passed along
with io.open(filename, 'r', encoding='utf8', errors='ignore') as f:
while len(lines) <= nLines:
try:
f.seek(-pos, 2)
except IOError:
f.seek(0)
break
finally:
lines = list(f)
pos *= 2
text = "".join(lines[-nLines:])
return text
def getFileInfo(filename):
"""
_getFileInfo_
Return file info in a friendly format
"""
filestats = os.stat(filename)
fileInfo = {'Name': filename,
'Size': filestats[stat.ST_SIZE],
'LastModification': time.strftime("%m/%d/%Y %I:%M:%S %p", time.localtime(filestats[stat.ST_MTIME])),
'LastAccess': time.strftime("%m/%d/%Y %I:%M:%S %p", time.localtime(filestats[stat.ST_ATIME]))}
return fileInfo
def findMagicStr(filename, matchString):
"""
_findMagicStr_
Parse a log file looking for a pattern string
"""
with io.open(filename, 'r', encoding='utf8', errors='ignore') as logfile:
# TODO: can we avoid reading the whole file
for line in logfile:
if matchString in line:
yield line
def getFullPath(name, envPath="PATH"):
"""
:param name: file name
:param envPath: any environment variable specified for path (PATH, PYTHONPATH, etc)
:return: full path if it is under PATH env
"""
for path in os.getenv(envPath).split(os.path.pathsep):
fullPath = os.path.join(path, name)
if os.path.exists(fullPath):
return fullPath
    return None
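The chunked adler32 loop in `calculateChecksums` can be verified against a one-shot call: the running checksum is threaded back into `zlib.adler32` exactly as above, starting from 1 (the adler32 of an empty string). A self-contained sketch, minus the `cksum` subprocess:

```python
import io
import zlib

def chunked_adler32(stream, chunk_size=4096):
    """Same chunked loop as calculateChecksums, without the cksum part."""
    checksum = 1  # adler32 of an empty string
    for chunk in iter(lambda: stream.read(chunk_size), b""):
        checksum = zlib.adler32(chunk, checksum)
    return format(checksum & 0xffffffff, "08x")

data = b"x" * 10000
# Chunked result matches computing the checksum in one shot
print(chunked_adler32(io.BytesIO(data)) == format(zlib.adler32(data) & 0xffffffff, "08x"))
```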
from builtins import str, bytes
def portForward(port):
"""
Decorator wrapper function for port forwarding of the REST calls of any
function to a given port.
Currently there are three constraints for applying this decorator.
1. The function to be decorated must be defined within a class and not being a static method.
The reason for that is because we need to be sure the function's signature will
always include the class instance as its first argument.
2. The url argument must be present as the second one in the positional argument list
of the decorated function (right after the class instance argument).
3. The url must follow the syntax specifications in RFC 1808:
https://tools.ietf.org/html/rfc1808.html
    If all of the above constraints are fulfilled and the url starts with
    'https://cmsweb', then the url is parsed and the port is substituted with the
    one provided as an argument to the decorator's wrapper function.
param port: The port to which the REST call should be forwarded.
"""
def portForwardDecorator(callFunc):
"""
The actual decorator
"""
def portMangle(callObj, url, *args, **kwargs):
"""
Function used to check if the url coming with the current argument list
is to be forwarded and if so change the port to the one provided as an
argument to the decorator wrapper.
:param classObj: This is the class object (slef from within the class)
which is always to be present in the signature of a
public method. We will never use this argument, but
we need it there for not breaking the positional
argument order
:param url: This is the actual url to be (eventually) forwarded
:param *args: The positional argument list coming from the original function
:param *kwargs: The keywords argument list coming from the original function
"""
forwarded = False
try:
if isinstance(url, str):
urlToMangle = 'https://cmsweb'
if url.startswith(urlToMangle):
newUrl = url.replace('.cern.ch/', '.cern.ch:%d/' % port, 1)
forwarded = True
elif isinstance(url, bytes):
urlToMangle = b'https://cmsweb'
if url.startswith(urlToMangle):
newUrl = url.replace(b'.cern.ch/', b'.cern.ch:%d/' % port, 1)
forwarded = True
except Exception:
pass
if forwarded:
return callFunc(callObj, newUrl, *args, **kwargs)
else:
return callFunc(callObj, url, *args, **kwargs)
return portMangle
return portForwardDecorator
class PortForward():
"""
A class with a call method implementing a simple way to use the functionality
provided by the protForward decorator as a pure functional call:
EXAMPLE:
from Utils.PortForward import PortForward
portForwarder = PortForward(8443)
url = 'https://cmsweb-testbed.cern.ch/couchdb'
url = portForwarder(url)
"""
def __init__(self, port):
"""
The init method for the PortForward call class. This one is supposed
to simply provide an initial class instance with a logger.
"""
self.port = port
def __call__(self, url):
"""
The call method for the PortForward class
"""
def dummyCall(self, url):
return url
        return portForward(self.port)(dummyCall)(self, url)
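The port substitution itself is plain string surgery. A standalone sketch of what the decorator does to a matching URL — only the first `.cern.ch/` occurrence is replaced, mirroring the `replace(..., 1)` calls above:

```python
def forward_port(url: str, port: int) -> str:
    """Insert the port into a cmsweb URL, as the decorator does;
    non-matching URLs are returned unchanged."""
    if url.startswith("https://cmsweb"):
        return url.replace(".cern.ch/", ".cern.ch:%d/" % port, 1)
    return url

print(forward_port("https://cmsweb-testbed.cern.ch/couchdb", 8443))
# https://cmsweb-testbed.cern.ch:8443/couchdb
```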
from builtins import object
from functools import reduce
class Functor(object):
"""
A simple functor class used to construct a function call which later to be
applied on an (any type) object.
NOTE:
It expects a function in the constructor and an (any type) object
passed to the run or __call__ methods, which methods once called they
construct and return the following function:
func(obj, *args, **kwargs)
NOTE:
All the additional arguments which the function may take must be set in
the __init__ method. If any of them are passed during run time an error
will be raised.
:func:
The function to which the rest of the constructor arguments are about
to be attached and then the newly created function will be returned.
- The function needs to take at least one parameter since the object
passed to the run/__call__ methods will always be put as a first
argument to the function.
:Example:
def adder(a, b, *args, **kwargs):
if args:
print("adder args: %s" % args)
if kwargs:
print("adder kwargs: %s" % kwargs)
res = a + b
return res
>>> x=Functor(adder, 8, 'foo', bar=True)
>>> x(2)
adder args: foo
adder kwargs: {'bar': True}
adder res: 10
10
>>> x
<Pipeline.Functor instance at 0x7f319bbaeea8>
"""
def __init__(self, func, *args, **kwargs):
"""
The init method for class Functor
"""
self.func = func
self.args = args
self.kwargs = kwargs
def __call__(self, obj):
"""
The call method for class Functor
"""
return self.run(obj)
def run(self, obj):
return self.func(obj, *self.args, **self.kwargs)
class Pipeline(object):
"""
A simple Functional Pipeline Class: applies a set of functions to an object,
where the output of every previous function is an input to the next one.
"""
# NOTE:
# Similar and inspiring approaches but yet some different implementations
# are discussed in the following two links [1] & [2]. With a quite good
# explanation in [1], which helped a lot. All in all at the bottom always
# sits the reduce function.
# [1]
# https://softwarejourneyman.com/python-function-pipelines.html
# [2]
# https://gitlab.com/mc706/functional-pipeline
def __init__(self, funcLine=None, name=None):
"""
:funcLine: A list of functions or Functors of function + arguments (see
the Class definition above) that are to be applied sequentially
to the object.
- If any of the elements of 'funcLine' is a function, a direct
function call with the object as an argument is performed.
- If any of the elements of 'funcLine' is a Functor, then the
first argument of the Functor constructor is the function to
be evaluated and the object is passed as a first argument to
the function with all the rest of the arguments passed right
after it eg. the following Functor in the funcLine:
Functor(func, 'foo', bar=True)
will result in the following function call later when the
pipeline is executed:
func(obj, 'foo', bar=True)
:Example:
(using the adder function from above and an object of type int)
>>> pipe = Pipeline([Functor(adder, 5),
Functor(adder, 6),
Functor(adder, 7, "extraArg"),
Functor(adder, 8, update=True)])
>>> pipe.run(1)
adder res: 6
adder res: 12
adder args: extraArg
adder res: 19
adder kwargs: {'update': True}
adder res: 27
"""
self.funcLine = funcLine or []
self.name = name
def getPipelineName(self):
"""
__getPipelineName__
"""
name = self.name or "Unnamed Pipeline"
return name
def run(self, obj):
        return reduce(lambda obj, functor: functor(obj), self.funcLine, obj)
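At its core, `Pipeline.run` is just `functools.reduce` over the functor list, which is also why plain functions work in `funcLine` alongside `Functor` instances. The equivalence, demonstrated with bare callables:

```python
from functools import reduce

# Each callable receives the previous callable's output
funcline = [lambda x: x + 5, lambda x: x * 2]
result = reduce(lambda obj, func: func(obj), funcline, 1)
print(result)  # 12  -> (1 + 5) * 2
```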
# system modules
import os
import ssl
import time
import logging
import traceback
# third part library
try:
import jwt
except ImportError:
traceback.print_exc()
jwt = None
from Utils.Utilities import encodeUnicodeToBytes
# prevent "SSL: CERTIFICATE_VERIFY_FAILED" error
# this will cause pylint warning W0212, therefore we ignore it above
ssl._create_default_https_context = ssl._create_unverified_context
def readToken(name=None):
"""
Read IAM token either from environment or file name
:param name: ether file name containing token or environment name which hold the token value.
If not provided it will be assumed to read token from IAM_TOKEN environment.
:return: token or None
"""
if name and os.path.exists(name):
token = None
with open(name, 'r', encoding='utf-8') as istream:
token = istream.read()
return token
if name:
return os.environ.get(name)
return os.environ.get("IAM_TOKEN")
def tokenData(token, url="https://cms-auth.web.cern.ch/jwk", audUrl="https://wlcg.cern.ch/jwt/v1/any"):
"""
inspect and extract token data
:param token: token string
:param url: IAM provider URL
:param audUrl: audience string
"""
if not token or not jwt:
return {}
if isinstance(token, str):
token = encodeUnicodeToBytes(token)
jwksClient = jwt.PyJWKClient(url)
signingKey = jwksClient.get_signing_key_from_jwt(token)
key = signingKey.key
headers = jwt.get_unverified_header(token)
alg = headers.get('alg', 'RS256')
data = jwt.decode(
token,
key,
algorithms=[alg],
audience=audUrl,
options={"verify_exp": True},
)
return data
def isValidToken(token):
"""
check if given token is valid or not
:param token: token string
:return: true or false
"""
tokenDict = {}
tokenDict = tokenData(token)
exp = tokenDict.get('exp', 0) # expire, seconds since epoch
if not exp or exp < time.time():
return False
return True
class TokenManager():
"""
TokenManager class handles IAM tokens
"""
def __init__(self,
name=None,
url="https://cms-auth.web.cern.ch/jwk",
audUrl="https://wlcg.cern.ch/jwt/v1/any",
logger=None):
"""
Token manager reads IAM tokens either from file or env.
It caches token along with expiration timestamp.
By default the env variable to use is IAM_TOKEN.
:param name: string representing either file or env where we should read token from
:param url: IAM provider URL
:param audUrl: audience string
:param logger: logger object or none to use default one
"""
self.name = name
self.url = url
self.audUrl = audUrl
self.expire = 0
self.token = None
self.logger = logger if logger else logging.getLogger()
try:
self.token = self.getToken()
except Exception as exc:
self.logger.exception("Failed to get token. Details: %s", str(exc))
def getToken(self):
"""
Return valid token and sets its expire timestamp
"""
if not self.token or not isValidToken(self.token):
self.token = readToken(self.name)
tokenDict = {}
try:
tokenDict = tokenData(self.token, url=self.url, audUrl=self.audUrl)
self.logger.debug(tokenDict)
except Exception as exc:
self.logger.exception(str(exc))
raise
self.expire = tokenDict.get('exp', 0)
return self.token
def getLifetime(self):
"""
Return the remaining lifetime of the existing token
"""
return self.expire - int(time.time())

# source: src/python/Utils/TokenManager.py (reqmgr2-2.2.4rc1)
from copy import copy
from builtins import object
from time import time
class MemoryCacheException(Exception):
def __init__(self, message):
super(MemoryCacheException, self).__init__(message)
class MemoryCache():
__slots__ = ["lastUpdate", "expiration", "_cache"]
def __init__(self, expiration, initialData=None):
"""
Initializes cache object
:param expiration: expiration time in seconds
:param initialData: initial value for the cache
"""
self.lastUpdate = int(time())
self.expiration = expiration
self._cache = initialData
def __contains__(self, item):
"""
Check whether item is in the current cache
:param item: a simple object (string, integer, etc)
:return: True if the object can be found in the cache, False otherwise
"""
return item in self._cache
def __getitem__(self, keyName):
"""
If the cache is a dictionary, return that item from the cache. Else, raise an exception.
:param keyName: the key name from the dictionary
"""
if isinstance(self._cache, dict):
return copy(self._cache.get(keyName))
else:
raise MemoryCacheException("Cannot retrieve an item from a non-dict MemoryCache object: {}".format(self._cache))
def reset(self):
"""
Resets the cache to its current data type
"""
if isinstance(self._cache, (dict, set)):
self._cache.clear()
elif isinstance(self._cache, list):
del self._cache[:]
else:
raise MemoryCacheException("The cache needs to be reset manually, data type unknown")
def isCacheExpired(self):
"""
Evaluate whether the cache has already expired, returning
True if it did, otherwise it returns False
"""
return self.lastUpdate + self.expiration < int(time())
def getCache(self):
"""
Raises an exception if the cache has expired, otherwise returns
its data
"""
if self.isCacheExpired():
expiredSince = int(time()) - (self.lastUpdate + self.expiration)
raise MemoryCacheException("Memory cache expired for %d seconds" % expiredSince)
return self._cache
def setCache(self, inputData):
"""
Refresh the cache with the content provided (refresh its expiration as well)
This method prevents the user from changing the cache data type
:param inputData: data to store in the cache
"""
if not isinstance(self._cache, type(inputData)):
raise TypeError("Current cache data type: %s, while new value is: %s" %
(type(self._cache), type(inputData)))
self.reset()
self.lastUpdate = int(time())
self._cache = inputData
def addItemToCache(self, inputItem):
"""
Adds new item(s) to the cache, without resetting its expiration.
It, of course, only works for data caches of type: list, set or dict.
:param inputItem: additional item to be added to the current cached data
"""
if isinstance(self._cache, set) and isinstance(inputItem, (list, set)):
# extend another list or set into a set
self._cache.update(inputItem)
elif isinstance(self._cache, set) and isinstance(inputItem, (int, float, str)):
# add a simple object (integer, string, etc) to a set
self._cache.add(inputItem)
elif isinstance(self._cache, list) and isinstance(inputItem, (list, set)):
# extend another list or set into a list
self._cache.extend(inputItem)
elif isinstance(self._cache, list) and isinstance(inputItem, (int, float, str)):
# add a simple object (integer, string, etc) to a list
self._cache.append(inputItem)
elif isinstance(self._cache, dict) and isinstance(inputItem, dict):
self._cache.update(inputItem)
else:
msg = "Input item type: %s cannot be added to a cache type: %s" % (type(inputItem), type(self._cache))
raise TypeError("Cache and input item data type mismatch. %s" % msg)

# source: src/python/Utils/MemoryCache.py (reqmgr2-2.2.4rc1)
from builtins import object
import logging
import time
import calendar
from datetime import tzinfo, timedelta
def gmtimeSeconds():
"""
Return GMT time in seconds
"""
return int(time.mktime(time.gmtime()))
def encodeTimestamp(secs):
"""
Encode seconds since epoch to a string GMT timezone representation
:param secs: input timestamp value (either int or float) in seconds since epoch
:return: time string in GMT timezone representation
"""
if not isinstance(secs, (int, float)):
raise Exception("Wrong input, should be seconds since epoch as either an int or float value")
return time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(int(secs)))
def decodeTimestamp(timeString):
"""
Decode a timestamp string into seconds since epoch
:param timeString: timestamp string representation in GMT timezone, see encodeTimestamp
:return: seconds since epoch in GMT timezone
"""
if not isinstance(timeString, str):
raise Exception("Wrong input, should be time string in GMT timezone representation")
return calendar.timegm(time.strptime(timeString, "%Y-%m-%dT%H:%M:%SZ"))
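A quick round trip through the same `strftime`/`timegm` calls the two helpers above use (the timestamp value is arbitrary):

```python
import calendar
import time

secs = 1609459200  # 2021-01-01T00:00:00Z, seconds since epoch

# encodeTimestamp: epoch seconds -> GMT string
stamp = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(secs))

# decodeTimestamp: GMT string -> epoch seconds
back = calendar.timegm(time.strptime(stamp, "%Y-%m-%dT%H:%M:%SZ"))
```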
def timeFunction(func):
"""
source: https://www.andreas-jung.com/contents/a-python-decorator-for-measuring-the-execution-time-of-methods
Decorator function to measure how long a method/function takes to run
It returns a tuple with:
* wall clock time spent
* returned result of the function
* the function name
"""
def wrapper(*arg, **kw):
t1 = time.time()
res = func(*arg, **kw)
t2 = time.time()
return round((t2 - t1), 4), res, func.__name__
return wrapper
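The decorator can be exercised like this; `timeFunction` is restated so the snippet stands alone:

```python
import time

def timeFunction(func):
    # restated from above: wrap a call and return (elapsed, result, name)
    def wrapper(*arg, **kw):
        t1 = time.time()
        res = func(*arg, **kw)
        t2 = time.time()
        return round((t2 - t1), 4), res, func.__name__
    return wrapper

@timeFunction
def add(a, b):
    return a + b

elapsed, result, name = add(2, 3)
```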
class CodeTimer(object):
"""
A context manager for timing function calls.
Adapted from https://www.blog.pythonlibrary.org/2016/05/24/python-101-an-intro-to-benchmarking-your-code/
Use like
with CodeTimer(label='Doing something'):
do_something()
"""
def __init__(self, label='The function', logger=None):
self.start = time.time()
self.label = label
self.logger = logger or logging.getLogger()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
end = time.time()
runtime = round((end - self.start), 3)
self.logger.info(f"{self.label} took {runtime} seconds to complete")
class LocalTimezone(tzinfo):
"""
A required python 2 class to determine current timezone for formatting rfc3339 timestamps
Required for sending alerts to the MONIT AlertManager
Can be removed once WMCore starts using python3
Details of class can be found at: https://docs.python.org/2/library/datetime.html#tzinfo-objects
"""
def __init__(self):
super(LocalTimezone, self).__init__()
self.ZERO = timedelta(0)
self.STDOFFSET = timedelta(seconds=-time.timezone)
if time.daylight:
self.DSTOFFSET = timedelta(seconds=-time.altzone)
else:
self.DSTOFFSET = self.STDOFFSET
self.DSTDIFF = self.DSTOFFSET - self.STDOFFSET
def utcoffset(self, dt):
if self._isdst(dt):
return self.DSTOFFSET
else:
return self.STDOFFSET
def dst(self, dt):
if self._isdst(dt):
return self.DSTDIFF
else:
return self.ZERO
def tzname(self, dt):
return time.tzname[self._isdst(dt)]
def _isdst(self, dt):
tt = (dt.year, dt.month, dt.day,
dt.hour, dt.minute, dt.second,
dt.weekday(), 0, 0)
stamp = time.mktime(tt)
tt = time.localtime(stamp)
return tt.tm_isdst > 0 | /reqmgr2-2.2.4rc1.tar.gz/reqmgr2-2.2.4rc1/src/python/Utils/Timers.py | 0.817028 | 0.254266 | Timers.py | pypi |
import copy
import unittest
class ExtendedUnitTestCase(unittest.TestCase):
"""
Extension of unittest.TestCase that adds an order-insensitive
deep comparison of nested objects.
"""
def assertContentsEqual(self, expected_obj, actual_obj, msg=None):
"""
A nested object comparison without regard for the ordering of contents. It asserts that
expected_obj and actual_obj contain the same elements and that their sub-elements are the same.
However, all sequences are allowed to contain the same elements, but in different orders.
"""
def traverse_dict(dictionary):
for key, value in list(dictionary.items()):
if isinstance(value, dict):
traverse_dict(value)
elif isinstance(value, list):
traverse_list(value)
return
def get_dict_sortkey(x):
if isinstance(x, dict):
return list(x.keys())
else:
return x
def traverse_list(theList):
for value in theList:
if isinstance(value, dict):
traverse_dict(value)
elif isinstance(value, list):
traverse_list(value)
theList.sort(key=get_dict_sortkey)
return
if not isinstance(expected_obj, type(actual_obj)):
self.fail(msg="The two objects are of different types and cannot be compared: %s and %s" % (
type(expected_obj), type(actual_obj)))
expected = copy.deepcopy(expected_obj)
actual = copy.deepcopy(actual_obj)
if isinstance(expected, dict):
traverse_dict(expected)
traverse_dict(actual)
elif isinstance(expected, list):
traverse_list(expected)
traverse_list(actual)
else:
self.fail(msg="The two objects are of an unsupported type (%s) and cannot be compared." % type(expected_obj))
return self.assertEqual(expected, actual)

# source: src/python/Utils/ExtendedUnitTestCase.py (reqmgr2-2.2.4rc1)
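The same order-insensitive idea can be sketched with a hypothetical `normalize` helper — a simplification that rebuilds the structure and sorts lists by `repr`, rather than traversing and sorting in place as the class above does:

```python
def normalize(obj):
    # hypothetical helper: recursively sort every list so ordering is ignored
    if isinstance(obj, dict):
        return {k: normalize(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return sorted((normalize(v) for v in obj), key=repr)
    return obj

a = {"x": [3, 1, 2], "y": [{"b": 2}, {"a": 1}]}
b = {"x": [1, 2, 3], "y": [{"a": 1}, {"b": 2}]}
same = normalize(a) == normalize(b)
```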
from builtins import str, bytes
import subprocess
import os
import re
import zlib
import base64
import sys
from types import ModuleType, FunctionType
from gc import get_referents
def lowerCmsHeaders(headers):
"""
Lower CMS headers in provided header's dict. The WMCore Authentication
code checks only cms headers in lower case, e.g. cms-xxx-yyy.
"""
lheaders = {}
for hkey, hval in list(headers.items()): # perform lower-case
# lower header keys since we check lower-case in headers
if hkey.startswith('Cms-') or hkey.startswith('CMS-'):
lheaders[hkey.lower()] = hval
else:
lheaders[hkey] = hval
return lheaders
def makeList(stringList):
"""
_makeList_
Make a python list out of a comma separated list of strings,
throws a ValueError if the input is not well formed.
If the stringList is already of type list, then return it untouched.
"""
if isinstance(stringList, list):
return stringList
if isinstance(stringList, str):
toks = stringList.lstrip(' [').rstrip(' ]').split(',')
if toks == ['']:
return []
return [str(tok.strip(' \'"')) for tok in toks]
raise ValueError("Can't convert to list %s" % stringList)
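Restating `makeList` so the snippet stands alone, its tolerant parsing of bracketed, quoted, comma-separated strings looks like this in practice:

```python
def makeList(stringList):
    # restated from above: comma-separated string -> list of stripped tokens
    if isinstance(stringList, list):
        return stringList
    if isinstance(stringList, str):
        toks = stringList.lstrip(' [').rstrip(' ]').split(',')
        if toks == ['']:
            return []
        return [str(tok.strip(' \'"')) for tok in toks]
    raise ValueError("Can't convert to list %s" % stringList)

parsed = makeList("['Tier0', 'Tier1', 'Tier2']")
```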
def makeNonEmptyList(stringList):
"""
_makeNonEmptyList_
Given a string or a list of strings, return a non empty list of strings.
Throws an exception in case the final list is empty or input data is not
a string or a python list
"""
finalList = makeList(stringList)
if not finalList:
raise ValueError("Input data cannot be an empty list %s" % stringList)
return finalList
def strToBool(string):
"""
Try to convert different variations of True or False (including a string
type object) to a boolean value.
In short:
* True gets mapped from: True, "True", "true", "TRUE".
* False gets mapped from: False, "False", "false", "FALSE"
* anything else will fail
:param string: expects a boolean or a string, but it could be anything else
:return: a boolean value, or raise an exception if value passed in is not supported
"""
if string is False or string is True:
return string
elif string in ["True", "true", "TRUE"]:
return True
elif string in ["False", "false", "FALSE"]:
return False
raise ValueError("Can't convert to bool: %s" % string)
def safeStr(string):
"""
_safeStr_
Cast simple data (int, float, basestring) to string.
"""
if not isinstance(string, (tuple, list, set, dict)):
return str(string)
raise ValueError("We're not supposed to convert %s to string." % string)
def diskUse():
"""
This returns the % use of each disk partition
"""
diskPercent = []
df = subprocess.Popen(["df", "-klP"], stdout=subprocess.PIPE)
output = df.communicate()[0]
output = decodeBytesToUnicode(output).split("\n")
for x in output:
split = x.split()
if split != [] and split[0] != 'Filesystem':
diskPercent.append({'mounted': split[5], 'percent': split[4]})
return diskPercent
def numberCouchProcess():
"""
This returns the number of couch process
"""
ps = subprocess.Popen(["ps", "-ef"], stdout=subprocess.PIPE)
process = ps.communicate()[0]
process = decodeBytesToUnicode(process).count('couchjs')
return process
def rootUrlJoin(base, extend):
"""
Adds a path element to the path within a ROOT url
"""
if base:
match = re.match("^root://([^/]+)/(.+)", base)
if match:
host = match.group(1)
path = match.group(2)
newpath = os.path.join(path, extend)
newurl = "root://%s/%s" % (host, newpath)
return newurl
return None
def zipEncodeStr(message, maxLen=5120, compressLevel=9, steps=100, truncateIndicator=" (...)"):
"""
_zipEncodeStr_
Utility to zip a string and encode it.
If zipped encoded length is greater than maxLen,
truncate message until zip/encoded version
is within the limits allowed.
"""
message = encodeUnicodeToBytes(message)
encodedStr = zlib.compress(message, compressLevel)
encodedStr = base64.b64encode(encodedStr)
if len(encodedStr) < maxLen or maxLen == -1:
return encodedStr
compressRate = 1. * len(encodedStr) / len(base64.b64encode(message))
# Estimate new length for message zip/encoded version
# to be less than maxLen.
# Also, append truncate indicator to message.
truncateIndicator = encodeUnicodeToBytes(truncateIndicator)
strLen = int((maxLen - len(truncateIndicator)) / compressRate)
message = message[:strLen] + truncateIndicator
encodedStr = zipEncodeStr(message, maxLen=-1)
# If new length is not short enough, truncate
# recursively by steps
while len(encodedStr) > maxLen:
message = message[:-steps - len(truncateIndicator)] + truncateIndicator
encodedStr = zipEncodeStr(message, maxLen=-1)
return encodedStr
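The compress-then-encode pipeline above is reversible; a small round-trip sketch on repetitive input (which compresses well):

```python
import base64
import zlib

message = b"stage-out error log line\n" * 200
encoded = base64.b64encode(zlib.compress(message, 9))

# the receiver reverses the two steps
decoded = zlib.decompress(base64.b64decode(encoded))
```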
def getSize(obj):
"""
_getSize_
Function to traverse an object and calculate its total size in bytes
:param obj: a python object
:return: an integer representing the total size of the object
Code extracted from Stack Overflow:
https://stackoverflow.com/questions/449560/how-do-i-determine-the-size-of-an-object-in-python
"""
# Custom objects know their class.
# Function objects seem to know way too much, including modules.
# Exclude modules as well.
BLACKLIST = type, ModuleType, FunctionType
if isinstance(obj, BLACKLIST):
raise TypeError('getSize() does not take argument of type: '+ str(type(obj)))
seen_ids = set()
size = 0
objects = [obj]
while objects:
need_referents = []
for obj in objects:
if not isinstance(obj, BLACKLIST) and id(obj) not in seen_ids:
seen_ids.add(id(obj))
size += sys.getsizeof(obj)
need_referents.append(obj)
objects = get_referents(*need_referents)
return size
def decodeBytesToUnicode(value, errors="strict"):
"""
Accepts an input "value" of generic type.
If "value" is a string of type sequence of bytes (i.e. in py2 `str` or
`future.types.newbytes.newbytes`, in py3 `bytes`), then it is converted to
a sequence of unicode codepoints.
This function is useful for cleaning input data when using the
"unicode sandwich" approach, which involves converting bytes (i.e. strings
of type sequence of bytes) to unicode (i.e. strings of type sequence of
unicode codepoints, in py2 `unicode` or `future.types.newstr.newstr`,
in py3 `str`) as soon as possible when receiving input data, and
converting unicode back to bytes as late as possible.
achtung!:
- converting unicode back to bytes is not covered by this function
- converting unicode back to bytes is not always necessary. when in doubt,
do not do it.
Reference: https://nedbatchelder.com/text/unipain.html
py2:
- "errors" can be: "strict", "ignore", "replace",
- ref: https://docs.python.org/2/howto/unicode.html#the-unicode-type
py3:
- "errors" can be: "strict", "ignore", "replace", "backslashreplace"
- ref: https://docs.python.org/3/howto/unicode.html#the-string-type
"""
if isinstance(value, bytes):
return value.decode("utf-8", errors)
return value
def decodeBytesToUnicodeConditional(value, errors="ignore", condition=True):
"""
if *condition*, then call decodeBytesToUnicode(*value*, *errors*),
else return *value*
This may be useful when we want to conditionally apply decodeBytesToUnicode,
maintaining brevity.
Parameters
----------
value : any
passed to decodeBytesToUnicode
errors: str
passed to decodeBytesToUnicode
condition: boolean or object with attribute __bool__()
if True, then we run decodeBytesToUnicode. Usually PY2/PY3
"""
if condition:
return decodeBytesToUnicode(value, errors)
return value
def encodeUnicodeToBytes(value, errors="strict"):
"""
Accepts an input "value" of generic type.
If "value" is a string of type sequence of unicode (i.e. in py2 `unicode` or
`future.types.newstr.newstr`, in py3 `str`), then it is converted to
a sequence of bytes.
This function is useful for encoding output data when using the
"unicode sandwich" approach, which involves converting unicode (i.e. strings
of type sequence of unicode codepoints) to bytes (i.e. strings of type
sequence of bytes, in py2 `str` or `future.types.newbytes.newbytes`,
in py3 `bytes`) as late as possible when passing a string to a third-party
function that only accepts bytes as input (pycurl's curl.setop is an
example).
py2:
- "errors" can be: "strict", "ignore", "replace", "xmlcharrefreplace"
- ref: https://docs.python.org/2/howto/unicode.html#the-unicode-type
py3:
- "errors" can be: "strict", "ignore", "replace", "backslashreplace",
"xmlcharrefreplace", "namereplace"
- ref: https://docs.python.org/3/howto/unicode.html#the-string-type
"""
if isinstance(value, str):
return value.encode("utf-8", errors)
return value
def encodeUnicodeToBytesConditional(value, errors="ignore", condition=True):
"""
if *condition*, then call encodeUnicodeToBytes(*value*, *errors*),
else return *value*
This may be useful when we want to conditionally apply encodeUnicodeToBytes,
maintaining brevity.
Parameters
----------
value : any
passed to encodeUnicodeToBytes
errors: str
passed to encodeUnicodeToBytes
condition: boolean or object with attribute __bool__()
if True, then we run encodeUnicodeToBytes. Usually PY2/PY3
"""
if condition:
return encodeUnicodeToBytes(value, errors)
return value | /reqmgr2-2.2.4rc1.tar.gz/reqmgr2-2.2.4rc1/src/python/Utils/Utilities.py | 0.53777 | 0.283586 | Utilities.py | pypi |
from __future__ import print_function
from subprocess import Popen, PIPE
from Utils.PythonVersion import PY3
from WMCore.Storage.StageOutError import StageOutError
def runCommand(command):
"""
_runCommand_
Run the command without deadlocking stdout and stderr,
Returns the exitCode
"""
# capture stdout and stderr from command
if PY3:
# python2 pylint complains about `encoding` argument
child = Popen(command, shell=True, bufsize=1, stdin=PIPE, close_fds=True, encoding='utf8')
else:
child = Popen(command, shell=True, bufsize=1, stdin=PIPE, close_fds=True)
child.communicate()
retCode = child.returncode
return retCode
def runCommandWithOutput(command):
"""
_runCommandWithOutput_
Run the command without deadlocking stdout and stderr,
capturing both streams rather than echoing them
Returns the exitCode and a string containing stdout & stderr
"""
# capture stdout and stderr from command
if PY3:
# python2 pylint complains about `encoding` argument
child = Popen(command, shell=True, bufsize=1, stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True, encoding='utf8')
else:
child = Popen(command, shell=True, bufsize=1, stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True)
sout, serr = child.communicate()
retCode = child.returncode
# If child is terminated by signal, err will be negative value. (Unix only)
sigStr = "Terminated by signal %s\n" % -retCode if retCode < 0 else ""
output = "%sstdout: %s\nstderr: %s" % (sigStr, sout, serr)
return retCode, output
def execute(command):
"""
_execute_
Execute the command provided, throw a StageOutError if it exits
non zero
"""
try:
exitCode, output = runCommandWithOutput(command)
msg = "Command exited with status: %s, Output: (%s)" % (exitCode, output)
print(msg)
except Exception as ex:
msg = "Command threw exception: %s" % str(ex)
print("ERROR: Exception During Stage Out:\n%s" % msg)
raise StageOutError(msg, Command=command, ExitCode=60311)
if exitCode:
msg = "Command exited non-zero: ExitCode:%s \nOutput (%s)" % (exitCode, output)
print("ERROR: Exception During Stage Out:\n%s" % msg)
raise StageOutError(msg, Command=command, ExitCode=exitCode)
return

# source: src/python/WMCore/Storage/Execute.py (reqmgr2-2.2.4rc1)
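A minimal sketch of the same capture pattern, using an argv list and `sys.executable` instead of the module's `shell=True` form (an intentional simplification for the demo):

```python
import sys
from subprocess import Popen, PIPE

# argv-list variant of the capture pattern above (no shell involved)
child = Popen([sys.executable, "-c", "print('stage-out ok')"],
              stdout=PIPE, stderr=PIPE, encoding="utf8")
sout, serr = child.communicate()
retCode = child.returncode
```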
from builtins import next, str, range
from future.utils import viewitems
from future import standard_library
standard_library.install_aliases()
import os
import re
from urllib.parse import urlsplit
from xml.dom.minidom import Document
from WMCore.Algorithms.ParseXMLFile import xmlFileToNode
_TFCArgSplit = re.compile(r"\?protocol=")
class TrivialFileCatalog(dict):
"""
_TrivialFileCatalog_
Object that can map LFNs to PFNs based on contents of a Trivial
File Catalog
"""
def __init__(self):
dict.__init__(self)
self['lfn-to-pfn'] = []
self['pfn-to-lfn'] = []
self.preferredProtocol = None # attribute for preferred protocol
def addMapping(self, protocol, match, result,
chain=None, mapping_type='lfn-to-pfn'):
"""
_addMapping_
Add an lfn to pfn mapping to this instance
"""
entry = {}
entry.setdefault("protocol", protocol)
entry.setdefault("path-match-expr", re.compile(match))
entry.setdefault("path-match", match)
entry.setdefault("result", result)
entry.setdefault("chain", chain)
self[mapping_type].append(entry)
def _doMatch(self, protocol, path, style, caller):
"""
Generalised way of building up the mappings.
caller is the method from there this method was called, it's used
for resolving chained rules
Return None if no match
"""
for mapping in self[style]:
if mapping['protocol'] != protocol:
continue
if mapping['path-match-expr'].match(path) or mapping["chain"] != None:
if mapping["chain"] != None:
oldpath = path
path = caller(mapping["chain"], path)
if not path:
continue
splitList = []
if len(mapping['path-match-expr'].split(path, 1)) > 1:
for split in range(len(mapping['path-match-expr'].split(path, 1))):
s = mapping['path-match-expr'].split(path, 1)[split]
if s:
splitList.append(s)
else:
path = oldpath
continue
result = mapping['result']
for split in range(len(splitList)):
result = result.replace("$" + str(split + 1), splitList[split])
return result
return None
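At the heart of `_doMatch` is a regex split followed by `$N` substitution. A standalone sketch with a made-up lfn-to-pfn rule (the host name is hypothetical):

```python
import re

# hypothetical rule, in the spirit of a TFC entry
pathMatch = re.compile("/+store/(.*)")
resultTemplate = "root://cmsxrootd.example.org//store/$1"

lfn = "/store/data/Run2023/file.root"
# re.split on a grouped pattern keeps the captured groups in the result
splits = [s for s in pathMatch.split(lfn, 1) if s]
pfn = resultTemplate
for i, piece in enumerate(splits):
    pfn = pfn.replace("$" + str(i + 1), piece)
```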
def matchLFN(self, protocol, lfn):
"""
_matchLFN_
Return the result for the LFN provided if the LFN
matches the path-match for that protocol
Return None if no match
"""
result = self._doMatch(protocol, lfn, "lfn-to-pfn", self.matchLFN)
return result
def matchPFN(self, protocol, pfn):
"""
_matchPFN_
Return the result for the PFN provided if the PFN
matches the path-match for that protocol
Return None if no match
"""
result = self._doMatch(protocol, pfn, "pfn-to-lfn", self.matchPFN)
return result
def getXML(self):
"""
Converts TFC implementation (dict) into a XML string representation.
The method reflects this class implementation - dictionary containing
list of mappings while each mapping (i.e. entry, see addMapping
method) is a dictionary of key, value pairs.
"""
def _getElementForMappingEntry(entry, mappingStyle):
xmlDocTmp = Document()
element = xmlDocTmp.createElement(mappingStyle)
for k, v in viewitems(entry):
# ignore empty, None or compiled regexp items into output
if not v or (k == "path-match-expr"):
continue
element.setAttribute(k, str(v))
return element
xmlDoc = Document()
root = xmlDoc.createElement("storage-mapping") # root element name
for mappingStyle, mappings in viewitems(self):
for mapping in mappings:
mapElem = _getElementForMappingEntry(mapping, mappingStyle)
root.appendChild(mapElem)
return root.toprettyxml()
def __str__(self):
result = ""
for mapping in ['lfn-to-pfn', 'pfn-to-lfn']:
for item in self[mapping]:
result += "\t%s: protocol=%s path-match-re=%s result=%s" % (
mapping,
item['protocol'],
item['path-match-expr'].pattern,
item['result'])
if item['chain'] != None:
result += " chain=%s" % item['chain']
result += "\n"
return result
def tfcProtocol(contactString):
"""
_tfcProtocol_
Given a Trivial File Catalog contact string, extract the
protocol from it.
"""
args = urlsplit(contactString)[3]
value = args.replace("protocol=", '')
return value
def tfcFilename(contactString):
"""
_tfcFilename_
Extract the filename from a TFC contact string.
"""
value = contactString.replace("trivialcatalog_file:", "")
value = _TFCArgSplit.split(value)[0]
path = os.path.normpath(value)
return path
def readTFC(filename):
"""
_readTFC_
Read the file provided and return a TrivialFileCatalog
instance containing the details found in it
"""
if not os.path.exists(filename):
msg = "TrivialFileCatalog not found: %s" % filename
raise RuntimeError(msg)
try:
node = xmlFileToNode(filename)
except Exception as ex:
msg = "Error reading TrivialFileCatalog: %s\n" % filename
msg += str(ex)
raise RuntimeError(msg)
parsedResult = nodeReader(node)
tfcInstance = TrivialFileCatalog()
for mapping in ['lfn-to-pfn', 'pfn-to-lfn']:
for entry in parsedResult[mapping]:
protocol = entry.get("protocol", None)
match = entry.get("path-match", None)
result = entry.get("result", None)
chain = entry.get("chain", None)
if protocol is None or match is None:
continue
tfcInstance.addMapping(str(protocol), str(match), str(result), chain, mapping)
return tfcInstance
def loadTFC(contactString):
"""
_loadTFC_
Given the contact string for the tfc, parse out the file location
and the protocol and create a TFC instance
"""
protocol = tfcProtocol(contactString)
catalog = tfcFilename(contactString)
instance = readTFC(catalog)
instance.preferredProtocol = protocol
return instance
def coroutine(func):
"""
_coroutine_
Decorator method used to prime coroutines
"""
def start(*args, **kwargs):
cr = func(*args, **kwargs)
next(cr)
return cr
return start
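The priming behavior can be demonstrated with a small sink coroutine; `coroutine` is restated so the snippet stands alone:

```python
def coroutine(func):
    # restated from above: advance the generator once so it can accept send()
    def start(*args, **kwargs):
        cr = func(*args, **kwargs)
        next(cr)
        return cr
    return start

collected = []

@coroutine
def collector():
    while True:
        item = (yield)
        collected.append(item)

sink = collector()
sink.send("lfn-to-pfn")
sink.send("pfn-to-lfn")
```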
def nodeReader(node):
"""
_nodeReader_
Given a node, see if we can find what we're looking for
"""
processLfnPfn = {
'path-match': processPathMatch(),
'protocol': processProtocol(),
'result': processResult(),
'chain': processChain()
}
report = {'lfn-to-pfn': [], 'pfn-to-lfn': []}
processSMT = processSMType(processLfnPfn)
processor = expandPhEDExNode(processStorageMapping(processSMT))
processor.send((report, node))
return report
@coroutine
def expandPhEDExNode(target):
"""
_expandPhEDExNode_
If pulling a TFC from the PhEDEx DS, it's wrapped in a top level <phedex> node,
this routine handles that extra node if it exists
"""
while True:
report, node = (yield)
sentPhedex = False
for subnode in node.children:
if subnode.name == "phedex":
target.send((report, subnode))
sentPhedex = True
if not sentPhedex:
target.send((report, node))
@coroutine
def processStorageMapping(target):
"""
Process everything
"""
while True:
report, node = (yield)
for subnode in node.children:
if subnode.name == 'storage-mapping':
target.send((report, subnode))
@coroutine
def processSMType(targets):
"""
Process the type of storage-mapping
"""
while True:
report, node = (yield)
for subnode in node.children:
if subnode.name in ['lfn-to-pfn', 'pfn-to-lfn']:
tmpReport = {'path-match-expr': subnode.name}
targets['protocol'].send((tmpReport, subnode.attrs.get('protocol', None)))
targets['path-match'].send((tmpReport, subnode.attrs.get('path-match', None)))
targets['result'].send((tmpReport, subnode.attrs.get('result', None)))
targets['chain'].send((tmpReport, subnode.attrs.get('chain', None)))
report[subnode.name].append(tmpReport)
@coroutine
def processPathMatch():
"""
Process path-match
"""
while True:
report, value = (yield)
report['path-match'] = value
@coroutine
def processProtocol():
"""
Process protocol
"""
while True:
report, value = (yield)
report['protocol'] = value
@coroutine
def processResult():
"""
Process result
"""
while True:
report, value = (yield)
report['result'] = value
@coroutine
def processChain():
"""
Process chain
"""
while True:
report, value = (yield)
if value == "":
report['chain'] = None
else:
report['chain'] = value

# source: src/python/WMCore/Storage/TrivialFileCatalog.py (reqmgr2-2.2.4rc1)
import json
import urllib
from urllib.parse import urlparse, parse_qs, quote_plus
from collections import defaultdict
from Utils.CertTools import cert, ckey
from dbs.apis.dbsClient import aggFileLumis, aggFileParents
from WMCore.Services.pycurl_manager import getdata as multi_getdata
from Utils.PortForward import PortForward
def dbsListFileParents(dbsUrl, blocks):
"""
Concurrent counter part of DBS listFileParents API
:param dbsUrl: DBS URL
:param blocks: list of blocks
:return: list of file parents
"""
urls = ['%s/fileparents?block_name=%s' % (dbsUrl, quote_plus(b)) for b in blocks]
func = aggFileParents
uKey = 'block_name'
return getUrls(urls, func, uKey)
def dbsListFileLumis(dbsUrl, blocks):
"""
Concurrent counter part of DBS listFileLumis API
:param dbsUrl: DBS URL
:param blocks: list of blocks
:return: list of file lumis
"""
urls = ['%s/filelumis?block_name=%s' % (dbsUrl, quote_plus(b)) for b in blocks]
func = aggFileLumis
uKey = 'block_name'
return getUrls(urls, func, uKey)
def dbsBlockOrigin(dbsUrl, blocks):
"""
Concurrent counter part of DBS files API
:param dbsUrl: DBS URL
:param blocks: list of blocks
:return: list of block origins for the given blocks
"""
urls = ['%s/blockorigin?block_name=%s' % (dbsUrl, quote_plus(b)) for b in blocks]
func = None
uKey = 'block_name'
return getUrls(urls, func, uKey)
def dbsParentFilesGivenParentDataset(dbsUrl, parentDataset, fInfo):
"""
Obtain parent files for given fileInfo object
:param dbsUrl: DBS URL
:param parentDataset: parent dataset name
:param fInfo: file info object
:return: list of parent files for given file info object
"""
portForwarder = PortForward(8443)
urls = []
for fileInfo in fInfo:
run = fileInfo['run_num']
lumis = urllib.parse.quote_plus(str(fileInfo['lumi_section_num']))
url = f'{dbsUrl}/files?dataset={parentDataset}&run_num={run}&lumi_list={lumis}'
urls.append(portForwarder(url))
func = None
uKey = None
rdict = getUrls(urls, func, uKey)
parentFiles = defaultdict(set)
for fileInfo in fInfo:
run = fileInfo['run_num']
lumis = urllib.parse.quote_plus(str(fileInfo['lumi_section_num']))
url = f'{dbsUrl}/files?dataset={parentDataset}&run_num={run}&lumi_list={lumis}'
url = portForwarder(url)
if url in rdict:
pFileList = rdict[url]
pFiles = {x['logical_file_name'] for x in pFileList}
parentFiles[fileInfo['logical_file_name']] = \
parentFiles[fileInfo['logical_file_name']].union(pFiles)
return parentFiles
def getUrls(urls, aggFunc, uKey=None):
"""
Perform parallel DBS calls for given set of urls and apply given aggregation
function to the results.
:param urls: list of DBS urls to call
:param aggFunc: aggregation function
:param uKey: url parameter to use for final dictionary
:return: dictionary of results where keys are URLs and values are the obtained results
"""
data = multi_getdata(urls, ckey(), cert())
rdict = {}
for row in data:
url = row['url']
code = int(row.get('code', 200))
error = row.get('error')
if code != 200:
msg = f"Fail to query {url}. Error: {code} {error}"
raise RuntimeError(msg)
if uKey:
key = urlParams(url).get(uKey)
else:
key = url
data = row.get('data', [])
res = json.loads(data)
if aggFunc:
rdict[key] = aggFunc(res)
else:
rdict[key] = res
return rdict
def urlParams(url):
"""
Return dictionary of URL parameters
:param url: URL link
:return: dictionary of URL parameters
"""
parsedUrl = urlparse(url)
rdict = parse_qs(parsedUrl.query)
for key, vals in rdict.items():
if len(vals) == 1:
rdict[key] = vals[0]
return rdict

# source: src/python/WMCore/Services/DBS/DBSUtils.py (reqmgr2-2.2.4rc1)
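Restating `urlParams` so the snippet stands alone (the host name is hypothetical): single-valued query parameters are flattened to plain strings while repeated ones stay lists:

```python
from urllib.parse import parse_qs, urlparse

def urlParams(url):
    # restated from above: flatten single-valued query parameters
    rdict = parse_qs(urlparse(url).query)
    for key, vals in rdict.items():
        if len(vals) == 1:
            rdict[key] = vals[0]
    return rdict

params = urlParams("https://dbs.example.org/files?dataset=/A/B/C&run_num=1&run_num=2")
```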
from __future__ import (division, print_function)
from builtins import str, bytes
from Utils.Utilities import encodeUnicodeToBytes
from io import BytesIO
import re
import xml.etree.ElementTree as ET  # cElementTree was removed in Python 3.9
int_number_pattern = re.compile(r'(^[0-9-]$|^[0-9-][0-9]*$)')
float_number_pattern = re.compile(r'(^[-]?\d+\.\d*$|^\d*\.{1,1}\d+$)')
def adjust_value(value):
"""
Change null value to None.
"""
pat_float = float_number_pattern
pat_integer = int_number_pattern
if isinstance(value, str):
if value == 'null' or value == '(null)':
return None
elif pat_float.match(value):
return float(value)
elif pat_integer.match(value):
return int(value)
else:
return value
else:
return value
def xml_parser(data, prim_key):
"""
Generic XML parser
:param data: can be of type "file object", unicode string or bytes string
"""
if isinstance(data, (str, bytes)):
stream = BytesIO()
data = encodeUnicodeToBytes(data, "ignore")
stream.write(data)
stream.seek(0)
else:
stream = data
context = ET.iterparse(stream)
for event, elem in context:
row = {}
key = elem.tag
if key != prim_key:
continue
row[key] = elem.attrib
get_children(elem, event, row, key)
elem.clear()
yield row
def get_children(elem, event, row, key):
    """
    xml_parser helper function. It recursively collects information about
    the children of the given element tag and stores it into the provided
    row under the given key.
    """
    for child in list(elem):  # Element.getchildren() was removed in Python 3.9
child_key = child.tag
child_data = child.attrib
if not child_data:
child_dict = adjust_value(child.text)
else:
child_dict = child_data
        if len(child):  # we got grand-children
if child_dict:
row[key][child_key] = child_dict
else:
row[key][child_key] = {}
if isinstance(child_dict, dict):
newdict = {child_key: child_dict}
else:
newdict = {child_key: {}}
get_children(child, event, newdict, child_key)
row[key][child_key] = newdict[child_key]
else:
if not isinstance(row[key], dict):
row[key] = {}
row[key].setdefault(child_key, [])
row[key][child_key].append(child_dict)
child.clear() | /reqmgr2-2.2.4rc1.tar.gz/reqmgr2-2.2.4rc1/src/python/WMCore/Services/TagCollector/XMLUtils.py | 0.567697 | 0.201794 | XMLUtils.py | pypi |
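A minimal, self-contained sketch of the same iterparse pattern (flat attributes only, without the child recursion above) shows what xml_parser yields for a matching primary key:

```python
import xml.etree.ElementTree as ET
from io import BytesIO

def simple_xml_parser(data, prim_key):
    """Yield one dict per element whose tag matches prim_key,
    keyed by the tag with its attributes as the value."""
    if isinstance(data, str):
        data = BytesIO(data.encode('utf-8'))
    for _event, elem in ET.iterparse(data):
        if elem.tag == prim_key:
            yield {elem.tag: dict(elem.attrib)}
            elem.clear()

xml = '<root><file name="a.root" size="10"/><file name="b.root" size="20"/></root>'
rows = list(simple_xml_parser(xml, 'file'))
```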
from __future__ import division
from builtins import object
from datetime import timedelta, datetime
import socket
import json
import logging
from WMCore.Services.pycurl_manager import RequestHandler
from Utils.Timers import LocalTimezone
class AlertManagerAPI(object):
"""
A class used to send alerts via the MONIT AlertManager API
"""
def __init__(self, alertManagerUrl, logger=None):
self.alertManagerUrl = alertManagerUrl
# sender's hostname is added as an annotation
self.hostname = socket.gethostname()
self.mgr = RequestHandler()
self.ltz = LocalTimezone()
self.headers = {"Content-Type": "application/json"}
self.validSeverity = ["high", "medium", "low"]
self.logger = logger if logger else logging.getLogger()
def sendAlert(self, alertName, severity, summary, description, service, tag="wmcore", endSecs=600, generatorURL=""):
"""
:param alertName: a unique name for the alert
:param severity: low, medium, high
:param summary: a short description of the alert
:param description: a longer informational message with details about the alert
:param service: the name of the service firing an alert
:param tag: a unique tag used to help route the alert
        :param endSecs: how many seconds until the alert expires
:param generatorURL: this URL will be sent to AlertManager and configured as a clickable "Source" link in the web interface
AlertManager JSON format reference: https://www.prometheus.io/docs/alerting/latest/clients/
[
{
"labels": {
"alertname": "<requiredAlertName>",
"<labelname>": "<labelvalue>",
...
},
"annotations": {
"<labelname>": "<labelvalue>",
...
},
"startsAt": "<rfc3339>", # optional, will be current time if not present
"endsAt": "<rfc3339>",
"generatorURL": "<generator_url>" # optional
},
]
"""
if not self._isValidSeverity(severity):
return False
request = []
alert = {}
labels = {}
annotations = {}
# add labels
labels["alertname"] = alertName
labels["severity"] = severity
labels["tag"] = tag
labels["service"] = service
alert["labels"] = labels
# add annotations
annotations["hostname"] = self.hostname
annotations["summary"] = summary
annotations["description"] = description
alert["annotations"] = annotations
# In python3 we won't need the LocalTimezone class
# Will change to d = datetime.now().astimezone() + timedelta(seconds=endSecs)
d = datetime.now(self.ltz) + timedelta(seconds=endSecs)
alert["endsAt"] = d.isoformat("T")
alert["generatorURL"] = generatorURL
request.append(alert)
# need to do this because pycurl_manager only accepts dict and encoded strings type
params = json.dumps(request)
res = self.mgr.getdata(self.alertManagerUrl, params=params, headers=self.headers, verb='POST')
return res
def _isValidSeverity(self, severity):
"""
Used to check if the severity of the alert matches the valid levels: low, medium, high
:param severity: severity of the alert
:return: True or False
"""
if severity not in self.validSeverity:
logging.critical("Alert submitted to AlertManagerAPI with invalid severity: %s", severity)
return False
return True | /reqmgr2-2.2.4rc1.tar.gz/reqmgr2-2.2.4rc1/src/python/WMCore/Services/AlertManager/AlertManagerAPI.py | 0.810554 | 0.161849 | AlertManagerAPI.py | pypi |
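The JSON body that sendAlert posts can be assembled without the class; this standalone sketch mirrors the labels/annotations layout documented above (the alert names and values are made-up examples):

```python
import json
from datetime import datetime, timedelta, timezone

def build_alert(alert_name, severity, summary, description,
                service, tag="wmcore", end_secs=600):
    """Build the single-alert JSON list that the AlertManager API expects."""
    ends_at = (datetime.now(timezone.utc) + timedelta(seconds=end_secs)).isoformat("T")
    alert = {"labels": {"alertname": alert_name, "severity": severity,
                        "tag": tag, "service": service},
             "annotations": {"summary": summary, "description": description},
             "endsAt": ends_at}
    return json.dumps([alert])

payload = build_alert("agent-drain", "high", "Agent draining",
                      "Agent vocms0123 entered drain mode", "wmagent")
```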
from __future__ import division
from builtins import object
import os.path
from WMCore.DataStructs.File import File
from WMCore.FwkJobReport import Report
from WMCore.Services.UUIDLib import makeUUID
class ReportEmu(object):
"""
_ReportEmu_
Job Report Emulator that creates a Report given a WMTask/WMStep and a Job instance.
"""
def __init__(self, **options):
"""
___init___
Options contain the settings for producing the report instance from the provided step
"""
self.step = options.get("WMStep", None)
self.job = options.get("Job", None)
return
def addInputFilesToReport(self, report):
"""
_addInputFilesToReport_
Pull all of the input files out of the job and add them to the report.
"""
report.addInputSource("PoolSource")
for inputFile in self.job["input_files"]:
inputFileSection = report.addInputFile("PoolSource", lfn=inputFile["lfn"],
size=inputFile["size"],
events=inputFile["events"])
Report.addRunInfoToFile(inputFileSection, inputFile["runs"])
return
def determineOutputSize(self):
"""
_determineOutputSize_
Determine the total size of and number of events in the input files and
use the job mask to scale that to something that would reasonably
approximate the size of and number of events in the output.
"""
totalSize = 0
totalEvents = 0
for inputFile in self.job["input_files"]:
totalSize += inputFile["size"]
totalEvents += inputFile["events"]
if self.job["mask"]["FirstEvent"] is not None and \
self.job["mask"]["LastEvent"] is not None:
outputTotalEvents = self.job["mask"]["LastEvent"] - self.job["mask"]["FirstEvent"] + 1
else:
outputTotalEvents = totalEvents
        outputSize = int(totalSize * outputTotalEvents / totalEvents)
return (outputSize, outputTotalEvents)
def addOutputFilesToReport(self, report):
"""
_addOutputFilesToReport_
Add output files to every output module in the step. Scale the size
and number of events in the output files appropriately.
"""
(outputSize, outputEvents) = self.determineOutputSize()
if not os.path.exists('ReportEmuTestFile.txt'):
with open('ReportEmuTestFile.txt', 'w') as f:
f.write('A Shubbery')
for outputModuleName in self.step.listOutputModules():
outputModuleSection = self.step.getOutputModule(outputModuleName)
outputModuleSection.fixedLFN = False
outputModuleSection.disableGUID = False
outputLFN = "%s/%s.root" % (outputModuleSection.lfnBase,
str(makeUUID()))
outputFile = File(lfn=outputLFN, size=outputSize, events=outputEvents,
merged=False)
outputFile.setLocation(self.job["location"])
outputFile['pfn'] = "ReportEmuTestFile.txt"
outputFile['guid'] = "ThisIsGUID"
outputFile["checksums"] = {"adler32": "1234", "cksum": "5678"}
outputFile["dataset"] = {"primaryDataset": outputModuleSection.primaryDataset,
"processedDataset": outputModuleSection.processedDataset,
"dataTier": outputModuleSection.dataTier,
"applicationName": "cmsRun",
"applicationVersion": self.step.getCMSSWVersion()}
outputFile["module_label"] = outputModuleName
outputFileSection = report.addOutputFile(outputModuleName, outputFile)
for inputFile in self.job["input_files"]:
Report.addRunInfoToFile(outputFileSection, inputFile["runs"])
return
def __call__(self):
report = Report.Report(self.step.name())
report.id = self.job["id"]
report.task = self.job["task"]
report.workload = None
self.addInputFilesToReport(report)
self.addOutputFilesToReport(report)
return report | /reqmgr2-2.2.4rc1.tar.gz/reqmgr2-2.2.4rc1/src/python/WMCore/FwkJobReport/ReportEmu.py | 0.624294 | 0.164483 | ReportEmu.py | pypi |
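The scaling arithmetic in determineOutputSize is easy to check in isolation; this sketch reimplements just that computation:

```python
def scaled_output(total_size, total_events, first_event=None, last_event=None):
    """Scale the total input size down to the event range selected by the
    job mask, as determineOutputSize above does."""
    if first_event is not None and last_event is not None:
        out_events = last_event - first_event + 1
    else:
        out_events = total_events
    return int(total_size * out_events / total_events), out_events
```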
from builtins import str
from WMCore.Database.DBFormatter import DBFormatter
from WMCore.WMException import WMException
from WMCore.WMExceptions import WMEXCEPTION
class DBCreator(DBFormatter):
"""
_DBCreator_
Generic class for creating database tables.
"""
def __init__(self, logger, dbinterface):
"""
_init_
Call the constructor of the parent class and create empty dictionaries
to hold table create statements, constraint statements and insert
statements.
"""
DBFormatter.__init__(self, logger, dbinterface)
self.create = {}
self.constraints = {}
self.inserts = {}
self.indexes = {}
    def execute(self, conn=None, transaction=False):
        """
        _execute_
        Generic method to create tables and constraints by executing the
        SQL statements in the create and constraints dictionaries.
        Before execution the keys of the self.create dictionary are sorted,
        to offer the possibility of executing table creation in a certain
        order.
        """
        # execute in order: tables (sorted keys), indexes, constraints and
        # finally the permanent data inserts
        for sqlDict, ordered in ((self.create, True), (self.indexes, False),
                                 (self.constraints, False), (self.inserts, False)):
            keys = sorted(sqlDict.keys()) if ordered else sqlDict
            for i in keys:
                try:
                    self.dbi.processData(sqlDict[i],
                                         conn=conn,
                                         transaction=transaction)
                except Exception as e:
                    msg = WMEXCEPTION['WMCORE-2'] + '\n\n' + \
                          str(sqlDict[i]) + '\n\n' + str(e)
                    self.logger.debug(msg)
                    raise WMException(msg, 'WMCORE-2')
        return True
def __str__(self):
"""
_str_
Return a well formatted text representation of the schema held in the
self.create, self.constraints, self.inserts, self.indexes dictionaries.
"""
string = ''
for i in self.create, self.constraints, self.inserts, self.indexes:
for j in i:
string = string + i[j].lstrip() + '\n'
return string | /reqmgr2-2.2.4rc1.tar.gz/reqmgr2-2.2.4rc1/src/python/WMCore/Database/DBCreator.py | 0.526586 | 0.233969 | DBCreator.py | pypi |
from __future__ import division, print_function
from builtins import str, object
try:
import mongomock
except ImportError:
# this library should only be required by unit tests
mongomock = None
from pymongo import MongoClient, errors, IndexModel
from pymongo.errors import ConnectionFailure
class MongoDB(object):
"""
A simple wrapper class for creating a connection to a MongoDB instance
"""
def __init__(self, database=None, server=None,
create=False, collections=None, testIndexes=False,
logger=None, mockMongoDB=False, **kwargs):
"""
        :database: The database name to connect to
:server: The server url or a list of (server:port) pairs (see https://docs.mongodb.com/manual/reference/connection-string/)
:create: A flag to trigger a database creation (if missing) during
object construction, together with collections if present.
:collections: A list of tuples describing collections with indexes -
the first element is considered the collection name, all
the rest elements are considered as indexes
:testIndexes: A flag to trigger index test and eventually to create them
if missing (TODO)
:mockMongoDB: A flag to trigger a database simulation instead of trying
to connect to a real database server.
:logger: Logger
        Here follows a short list of useful optional parameters accepted by the
        MongoClient, which may be passed as keyword arguments to the current module:
:replicaSet: The name of the replica set to connect to. The driver will verify
that all servers it connects to match this name. Implies that the
hosts specified are a seed list and the driver should attempt to
find all members of the set. Defaults to None.
:port: The port number on which to connect. It is overwritten by the ports
defined in the Url string or from the tuples listed in the server list
:connect: If True, immediately begin connecting to MongoDB in the background.
Otherwise connect on the first operation.
:directConnection: If True, forces the client to connect directly to the specified MongoDB
host as a standalone. If False, the client connects to the entire
replica set of which the given MongoDB host(s) is a part.
If this is True and a mongodb+srv:// URI or a URI containing multiple
seeds is provided, an exception will be raised.
:username: A string
:password: A string
Although username and password must be percent-escaped in a MongoDB URI,
they must not be percent-escaped when passed as parameters. In this example,
both the space and slash special characters are passed as-is:
MongoClient(username="user name", password="pass/word")
"""
self.server = server
self.logger = logger
self.mockMongoDB = mockMongoDB
if mockMongoDB and mongomock is None:
msg = "You are trying to mock MongoDB, but you do not have mongomock in the python path."
self.logger.critical(msg)
raise ImportError(msg)
        # NOTE: We need to explicitly check for server availability.
# From pymongo Documentation: https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html
# """
# ...
# Starting with version 3.0 the :class:`MongoClient`
# constructor no longer blocks while connecting to the server or
# servers, and it no longer raises
# :class:`~pymongo.errors.ConnectionFailure` if they are
# unavailable, nor :class:`~pymongo.errors.ConfigurationError`
# if the user's credentials are wrong. Instead, the constructor
# returns immediately and launches the connection process on
# background threads.
# ...
# """
try:
if mockMongoDB:
self.client = mongomock.MongoClient()
self.logger.info("NOTICE: MongoDB is set to use mongomock, instead of real database.")
else:
self.client = MongoClient(host=self.server, **kwargs)
self.client.server_info()
self.client.admin.command('ping')
except ConnectionFailure as ex:
msg = "Could not connect to MongoDB server: %s. Server not available. \n"
msg += "Giving up Now."
self.logger.error(msg, self.server)
raise ex from None
except Exception as ex:
msg = "Could not connect to MongoDB server: %s. Due to unknown reason: %s\n"
msg += "Giving up Now."
self.logger.error(msg, self.server, str(ex))
raise ex from None
self.create = create
self.testIndexes = testIndexes
self.dbName = database
self.collections = collections or []
self._dbConnect(database)
if self.create and self.collections:
for collection in self.collections:
self._collCreate(collection, database)
if self.testIndexes and self.collections:
for collection in self.collections:
self._indexTest(collection[0], collection[1])
def _indexTest(self, collection, index):
pass
def _collTest(self, coll, db):
# self[db].list_collection_names()
pass
def collCreate(self, coll):
"""
A public method for _collCreate
"""
self._collCreate(coll, self.database)
def _collCreate(self, coll, db):
"""
        A function used to explicitly create a collection with the relevant
        indexes - used to avoid MongoDB's lazy creation and the issues that
        may follow if we end up with an unindexed collection, especially one
        missing a `unique` index.
:coll: A tuple describing one collection with indexes -
The first element is considered to be the collection name, and all
the rest of the elements are considered to be indexes.
The indexes must be of type IndexModel. See pymongo documentation:
https://api.mongodb.com/python/current/api/pymongo/collection.html#pymongo.collection.Collection.create_index
:db: The database name for the collection
"""
collName = coll[0]
collIndexes = list(coll[1:])
try:
self.client[db].create_collection(collName)
except errors.CollectionInvalid:
# this error is thrown in case of an already existing collection
msg = "Collection '{}' Already exists in database '{}'".format(coll, db)
self.logger.warning(msg)
if collIndexes:
for index in collIndexes:
if not isinstance(index, IndexModel):
                    msg = "ERR: Bad Index type for collection %s" % collName
                    self.logger.error(msg)
                    raise errors.InvalidName(msg)
try:
self.client[db][collName].create_indexes(collIndexes)
except Exception as ex:
msg = "Failed to create indexes on collection: %s\n%s" % (collName, str(ex))
self.logger.error(msg)
raise ex
def _dbTest(self, db):
"""
Tests database connection.
"""
# Test connection (from mongoDB documentation):
# https://api.mongodb.com/python/3.4.0/api/pymongo/mongo_client.html
try:
# The 'ismaster' command is cheap and does not require auth.
self.client.admin.command('ismaster')
except errors.ConnectionFailure as ex:
msg = "Server not available: %s" % str(ex)
self.logger.error(msg)
raise ex
# Test for database existence
if db not in self.client.list_database_names():
            msg = "Missing MongoDB database: %s" % db
self.logger.error(msg)
raise errors.InvalidName
def _dbCreate(self, db):
# creating an empty collection in order to create the database
_initColl = self.client[db].create_collection('_initCollection')
_initColl.insert_one({})
# NOTE: never delete the _initCollection if you want the database to persist
# self.client[db].drop_collection('_initCollection')
def dbConnect(self):
"""
A public method for _dbConnect
"""
self._dbConnect(self.database)
def _dbConnect(self, db):
"""
The function to be used for the initial database connection creation and testing
"""
try:
setattr(self, db, self.client[db])
if not self.mockMongoDB:
self._dbTest(db)
except errors.ConnectionFailure as ex:
msg = "Could not connect to MongoDB server for database: %s\n%s\n" % (db, str(ex))
msg += "Giving up Now."
self.logger.error(msg)
raise ex
except errors.InvalidName as ex:
            msg = "Could not connect to a missing MongoDB database: %s\n%s" % (db, str(ex))
self.logger.error(msg)
if self.create:
msg = "Trying to create: %s" % db
self.logger.error(msg)
try:
# self._dbCreate(getattr(self, db))
self._dbCreate(db)
except Exception as exc:
                    msg = "Could not create MongoDB database: %s\n%s\n" % (db, str(exc))
msg += "Giving up Now."
self.logger.error(msg)
raise exc
try:
self._dbTest(db)
except Exception as exc:
msg = "Second failure while testing %s\n%s\n" % (db, str(exc))
msg += "Giving up Now."
self.logger.error(msg)
raise exc
                msg = "Database %s successfully created" % db
                self.logger.info(msg)
except Exception as ex:
msg = "General Exception while trying to connect to : %s\n%s" % (db, str(ex))
self.logger.error(msg)
raise ex | /reqmgr2-2.2.4rc1.tar.gz/reqmgr2-2.2.4rc1/src/python/WMCore/Database/MongoDB.py | 0.660829 | 0.271949 | MongoDB.py | pypi |
import logging
import time
from WMCore.DataStructs.WMObject import WMObject
from WMCore.WMException import WMException
from WMCore.WMExceptions import WMEXCEPTION
class Transaction(WMObject):
dbi = None
def __init__(self, dbinterface = None):
"""
Get the connection from the DBInterface and open a new transaction on it
"""
self.dbi = dbinterface
self.conn = None
self.transaction = None
    def begin(self):
        if self.conn is None or self.conn.closed:
            self.conn = self.dbi.connection()
        if self.transaction is None:
            self.transaction = self.conn.begin()
        return
def processData(self, sql, binds={}):
"""
Propagates the request to the proper dbcore backend,
and performs checks for lost (or closed) connection.
"""
result = self.dbi.processData(sql, binds, conn = self.conn,
transaction = True)
return result
    def commit(self):
        """
        Commit the transaction and return the connection to the pool
        """
        if self.transaction is not None:
            self.transaction.commit()
        if self.conn is not None:
            self.conn.close()
        self.conn = None
        self.transaction = None
def rollback(self):
"""
To be called if there is an exception and you want to roll back the
transaction and return the connection to the pool
"""
if self.transaction:
self.transaction.rollback()
if self.conn:
self.conn.close()
self.conn = None
self.transaction = None
return
def rollbackForError(self):
"""
This is called when handling a major exception. This is because sometimes
you can end up in a situation where the transaction appears open, but is not. In
this case, calling a rollback on the transaction will cause an exception, which
then destroys all logging and shutdown of the actual code.
Use only in components.
"""
try:
self.rollback()
        except Exception:
            # deliberately swallowed: we are already handling another error
            pass
return | /reqmgr2-2.2.4rc1.tar.gz/reqmgr2-2.2.4rc1/src/python/WMCore/Database/Transaction.py | 0.487063 | 0.150809 | Transaction.py | pypi |
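The intended calling pattern (begin, do the work, commit, roll back on error) can be sketched against a stand-in object with the same surface; FakeTransaction here is purely illustrative, not part of WMCore:

```python
class FakeTransaction:
    """Stand-in with the same begin/commit/rollback surface as the
    Transaction class above, used only to show the calling pattern."""
    def __init__(self):
        self.state = "new"
    def begin(self):
        self.state = "open"
    def commit(self):
        self.state = "committed"
    def rollback(self):
        self.state = "rolled back"

def run_in_transaction(trans, work):
    """Run a callable inside a transaction, rolling back on failure."""
    trans.begin()
    try:
        result = work()
        trans.commit()
        return result
    except Exception:
        trans.rollback()
        raise
```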
from copy import copy
from Utils.IteratorTools import grouper
import WMCore.WMLogging
from WMCore.DataStructs.WMObject import WMObject
from WMCore.Database.ResultSet import ResultSet
class DBInterface(WMObject):
"""
    Base class for doing SQL operations using a SQLAlchemy engine, or a
    pre-existing connection.
processData will take a (list of) sql statements and a (list of)
bind variable dictionaries and run the statements on the DB. If
necessary it will substitute binds into the sql (MySQL).
TODO:
Add in some suitable exceptions in one or two places
Test the hell out of it
Support executemany()
"""
logger = None
engine = None
def __init__(self, logger, engine):
self.logger = logger
        self.logger.info("Instantiating base WM DBInterface")
self.engine = engine
self.maxBindsPerQuery = 500
    def buildbinds(self, sequence, thename, therest=None):
        """
        Build a list of binds. Can be used recursively, e.g.:
        buildbinds(file, 'file', buildbinds(pnn, 'location'), {'lumi':123})
        TODO: replace with an appropriate map function
        """
        if therest is None:
            therest = [{}]  # avoid a mutable default argument
        binds = []
        for r in sequence:
            for i in self.makelist(therest):
                thebind = copy(i)
                thebind[thename] = r
                binds.append(thebind)
        return binds
def executebinds(self, s=None, b=None, connection=None,
returnCursor=False):
"""
_executebinds_
returns a list of sqlalchemy.engine.base.ResultProxy objects
"""
        if b is None:
resultProxy = connection.execute(s)
else:
resultProxy = connection.execute(s, b)
if returnCursor:
return resultProxy
result = ResultSet()
result.add(resultProxy)
resultProxy.close()
return result
def executemanybinds(self, s=None, b=None, connection=None,
returnCursor=False):
"""
_executemanybinds_
b is a list of dictionaries for the binds, e.g.:
b = [ {'bind1':'value1a', 'bind2': 'value2a'},
{'bind1':'value1b', 'bind2': 'value2b'} ]
see: http://www.gingerandjohn.com/archives/2004/02/26/cx_oracle-executemany-example/
Can't executemany() selects - so do each combination of binds here instead.
This will return a list of sqlalchemy.engine.base.ResultProxy object's
one for each set of binds.
returns a list of sqlalchemy.engine.base.ResultProxy objects
"""
s = s.strip()
if s.lower().endswith('select', 0, 6):
            # Trying to select many
if returnCursor:
result = []
for bind in b:
result.append(connection.execute(s, bind))
else:
result = ResultSet()
for bind in b:
resultproxy = connection.execute(s, bind)
result.add(resultproxy)
resultproxy.close()
return self.makelist(result)
        # Now inserting or updating many
result = connection.execute(s, b)
return self.makelist(result)
def connection(self):
"""
Return a connection to the engine (from the connection pool)
"""
return self.engine.connect()
def processData(self, sqlstmt, binds={}, conn=None,
transaction=False, returnCursor=False):
"""
set conn if you already have an active connection to reuse
set transaction = True if you already have an active transaction
"""
connection = None
try:
if not conn:
connection = self.connection()
else:
connection = conn
result = []
# Can take either a single statement or a list of statements and binds
sqlstmt = self.makelist(sqlstmt)
binds = self.makelist(binds)
            if len(sqlstmt) > 0 and (len(binds) == 0 or binds[0] == {} or binds[0] is None):
# Should only be run by create statements
if not transaction:
#WMCore.WMLogging.sqldebug("transaction created in DBInterface")
trans = connection.begin()
for i in sqlstmt:
r = self.executebinds(i, connection=connection,
returnCursor=returnCursor)
result.append(r)
if not transaction:
trans.commit()
elif len(binds) > len(sqlstmt) and len(sqlstmt) == 1:
#Run single SQL statement for a list of binds - use execute_many()
if not transaction:
trans = connection.begin()
for subBinds in grouper(binds, self.maxBindsPerQuery):
result.extend(self.executemanybinds(sqlstmt[0], subBinds,
connection=connection, returnCursor=returnCursor))
if not transaction:
trans.commit()
elif len(binds) == len(sqlstmt):
# Run a list of SQL for a list of binds
if not transaction:
trans = connection.begin()
for i, s in enumerate(sqlstmt):
b = binds[i]
r = self.executebinds(s, b, connection=connection,
returnCursor=returnCursor)
result.append(r)
if not transaction:
trans.commit()
else:
self.logger.exception(
"DBInterface.processData Nothing executed, problem with your arguments")
self.logger.exception(
"DBInterface.processData SQL = %s" % sqlstmt)
WMCore.WMLogging.sqldebug('DBInterface.processData sql is %s items long' % len(sqlstmt))
WMCore.WMLogging.sqldebug('DBInterface.processData binds are %s items long' % len(binds))
assert_value = False
if len(binds) == len(sqlstmt):
assert_value = True
WMCore.WMLogging.sqldebug('DBInterface.processData are binds and sql same length? : %s' % (assert_value))
WMCore.WMLogging.sqldebug('sql: %s\n binds: %s\n, connection:%s\n, transaction:%s\n' %
(sqlstmt, binds, connection, transaction))
WMCore.WMLogging.sqldebug('type check:\nsql: %s\n binds: %s\n, connection:%s\n, transaction:%s\n' %
(type(sqlstmt), type(binds), type(connection), type(transaction)))
raise Exception("""DBInterface.processData Nothing executed, problem with your arguments
Probably mismatched sizes for sql (%i) and binds (%i)""" % (len(sqlstmt), len(binds)))
finally:
            if not conn and connection is not None:
connection.close() # Return connection to the pool
return result | /reqmgr2-2.2.4rc1.tar.gz/reqmgr2-2.2.4rc1/src/python/WMCore/Database/DBCore.py | 0.414069 | 0.245401 | DBCore.py | pypi |
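processData picks one of three execution strategies from the relative lengths of the statement and bind lists; this small function restates just that dispatch logic (slightly simplified: it treats "no binds" as an empty bind list):

```python
def dispatch_mode(num_sql, num_binds):
    """Mirror the branch order in processData above: which strategy is
    chosen for a given number of statements and bind dictionaries."""
    if num_sql > 0 and num_binds == 0:
        return "execute each statement without binds"
    if num_binds > num_sql and num_sql == 1:
        return "executemany: one statement, many bind sets"
    if num_binds == num_sql:
        return "pairwise: statement i with bind set i"
    raise ValueError("mismatched sizes for sql (%d) and binds (%d)"
                     % (num_sql, num_binds))
```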
import copy
from WMCore.Database.DBCore import DBInterface
from WMCore.Database.ResultSet import ResultSet
def bindVarCompare(a):
"""
_bindVarCompare_
Bind variables are represented as a tuple with the first element being the
variable name and the second being it's position in the query. We sort on
the position in the query.
"""
return a[1]
def stringLengthCompare(a):
"""
_stringLengthCompare_
Sort comparison function to sort strings by length.
Since we want to sort from longest to shortest, this must be reversed when used
"""
return len(a)
class MySQLInterface(DBInterface):
def substitute(self, origSQL, origBindsList):
"""
_substitute_
Transform as set of bind variables from a list of dictionaries to a list
of tuples:
b = [ {'bind1':'value1a', 'bind2': 'value2a'},
{'bind1':'value1b', 'bind2': 'value2b'} ]
Will be transformed into:
b = [ ('value1a', 'value2a'), ('value1b', 'value2b')]
Don't need to substitute in the binds as executemany does that
internally. But the sql will also need to be reformatted, such that
:bind_name becomes %s.
See: http://www.devshed.com/c/a/Python/MySQL-Connectivity-With-Python/5/
"""
        if origBindsList is None:
return origSQL, None
origBindsList = self.makelist(origBindsList)
origBind = origBindsList[0]
bindVarPositionList = []
updatedSQL = copy.copy(origSQL)
# We process bind variables from longest to shortest to avoid a shorter
# bind variable matching a longer one. For example if we have two bind
# variables: RELEASE_VERSION and RELEASE_VERSION_ID the former will
# match against the latter, causing problems. We'll sort the variable
# names by length to guard against this.
bindVarNames = list(origBind)
bindVarNames.sort(key=stringLengthCompare, reverse=True)
bindPositions = {}
for bindName in bindVarNames:
searchPosition = 0
while True:
bindPosition = origSQL.lower().find(":%s" % bindName.lower(),
searchPosition)
if bindPosition == -1:
break
if bindPosition not in bindPositions:
bindPositions[bindPosition] = 0
bindVarPositionList.append((bindName, bindPosition))
searchPosition = bindPosition + 1
searchPosition = 0
while True:
bindPosition = updatedSQL.lower().find(":%s" % bindName.lower(),
searchPosition)
if bindPosition == -1:
break
left = updatedSQL[0:bindPosition]
right = updatedSQL[bindPosition + len(bindName) + 1:]
updatedSQL = left + "%s" + right
bindVarPositionList.sort(key=bindVarCompare)
mySQLBindVarsList = []
for origBind in origBindsList:
mySQLBindVars = []
for bindVarPosition in bindVarPositionList:
mySQLBindVars.append(origBind[bindVarPosition[0]])
mySQLBindVarsList.append(tuple(mySQLBindVars))
return (updatedSQL, mySQLBindVarsList)
def executebinds(self, s = None, b = None, connection = None,
returnCursor = False):
"""
_executebinds_
Execute a SQL statement that has a single set of bind variables.
Transform the bind variables into the format that MySQL expects.
"""
s, b = self.substitute(s, b)
return DBInterface.executebinds(self, s, b, connection, returnCursor)
def executemanybinds(self, s = None, b = None, connection = None,
returnCursor = False):
"""
_executemanybinds_
Execute a SQL statement that has multiple sets of bind variables.
Transform the bind variables into the format that MySQL expects.
"""
newsql, binds = self.substitute(s, b)
return DBInterface.executemanybinds(self, newsql, binds, connection,
returnCursor) | /reqmgr2-2.2.4rc1.tar.gz/reqmgr2-2.2.4rc1/src/python/WMCore/Database/MySQLCore.py | 0.637031 | 0.431105 | MySQLCore.py | pypi |
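The core of substitute — rewriting `:name` placeholders to `%s` and ordering the values by placeholder position — can be sketched with a regex. This simplified version uses a word boundary instead of the longest-first sorting above to keep e.g. `:version` from matching inside `:version_id`:

```python
import re

def substitute_sketch(sql, bind):
    """Rewrite :name placeholders to %s and return the bind values as a
    tuple ordered by placeholder position, as the MySQL driver expects."""
    positions = []
    new_sql = sql
    for name in bind:
        pattern = re.compile(r':%s\b' % re.escape(name), re.IGNORECASE)
        for match in pattern.finditer(sql):
            positions.append((name, match.start()))
        new_sql = pattern.sub('%s', new_sql)
    positions.sort(key=lambda item: item[1])
    return new_sql, tuple(bind[name] for name, _ in positions)

sql, values = substitute_sketch(
    "INSERT INTO t (a, b) VALUES (:alpha, :beta)",
    {'beta': 2, 'alpha': 1})
```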
from __future__ import print_function
from builtins import str, bytes, int
from future.utils import viewitems
from Utils.PythonVersion import PY2
import sys
import types
class _EmptyClass(object):
pass
class JSONThunker(object):
"""
_JSONThunker_
Converts an arbitrary object to <-> from a jsonable object.
Will, for the most part "do the right thing" about various instance objects
by storing their class information along with their data in a dict. Handles
a recursion limit to prevent infinite recursion.
self.passThroughTypes - stores a list of types that should be passed
through unchanged to the JSON parser
self.blackListedModules - a list of modules that should not be stored in
the JSON.
"""
def __init__(self):
self.passThroughTypes = (type(None),
bool,
int,
float,
complex,
str,
bytes,
)
        # objects that inherit from dict should be treated as a dict
        # they don't store their data in __dict__. There were enough
        # of those classes that it warranted making a special case
self.dictSortOfObjects = (('WMCore.Datastructs.Job', 'Job'),
('WMCore.WMBS.Job', 'Job'),
('WMCore.Database.CMSCouch', 'Document'))
# ditto above, but for lists
self.listSortOfObjects = (('WMCore.DataStructs.JobPackage', 'JobPackage'),
('WMCore.WMBS.JobPackage', 'JobPackage'),)
self.foundIDs = {}
# modules we don't want JSONed
self.blackListedModules = ('sqlalchemy.engine.threadlocal',
'WMCore.Database.DBCore',
'logging',
'WMCore.DAOFactory',
'WMCore.WMFactory',
'WMFactory',
'WMCore.Configuration',
'WMCore.Database.Transaction',
'threading',
'datetime')
def checkRecursion(self, data):
"""
handles checking for infinite recursion
"""
if id(data) in self.foundIDs:
if self.foundIDs[id(data)] > 5:
self.unrecurse(data)
return "**RECURSION**"
else:
self.foundIDs[id(data)] += 1
return data
else:
self.foundIDs[id(data)] = 1
return data
def unrecurse(self, data):
"""
backs off the recursion counter if we're returning from _thunk
"""
try:
self.foundIDs[id(data)] -= 1
except:
print("Could not find count for id %s of type %s data %s" % (id(data), type(data), data))
raise
def checkBlackListed(self, data):
"""
checks to see if a given object is from a blacklisted module
"""
try:
# special case
if data.__class__.__module__ == 'WMCore.Database.CMSCouch' and data.__class__.__name__ == 'Document':
data.__class__ = type({})
return data
if data.__class__.__module__ in self.blackListedModules:
return "Blacklisted JSON object: module %s, name %s, str() %s" % \
(data.__class__.__module__, data.__class__.__name__, str(data))
else:
return data
except Exception:
return data
def thunk(self, toThunk):
"""
Thunk - turns an arbitrary object into a JSONable object
"""
self.foundIDs = {}
data = self._thunk(toThunk)
return data
def unthunk(self, data):
"""
unthunk - turns a previously 'thunked' object back into a python object
"""
return self._unthunk(data)
def handleSetThunk(self, toThunk):
toThunk = self.checkRecursion(toThunk)
tempDict = {'thunker_encoded_json': True, 'type': 'set'}
tempDict['set'] = self._thunk(list(toThunk))
self.unrecurse(toThunk)
return tempDict
def handleListThunk(self, toThunk):
toThunk = self.checkRecursion(toThunk)
for k, v in enumerate(toThunk):
toThunk[k] = self._thunk(v)
self.unrecurse(toThunk)
return toThunk
def handleDictThunk(self, toThunk):
toThunk = self.checkRecursion(toThunk)
special = False
tmpdict = {}
for k, v in viewitems(toThunk):
if type(k) is int:
special = True
tmpdict['_i:%s' % k] = self._thunk(v)
elif type(k) is float:
special = True
tmpdict['_f:%s' % k] = self._thunk(v)
else:
tmpdict[k] = self._thunk(v)
if special:
toThunk['thunker_encoded_json'] = self._thunk(True)
toThunk['type'] = self._thunk('dict')
toThunk['dict'] = tmpdict
else:
toThunk.update(tmpdict)
self.unrecurse(toThunk)
return toThunk
def handleObjectThunk(self, toThunk):
toThunk = self.checkRecursion(toThunk)
toThunk = self.checkBlackListed(toThunk)
if isinstance(toThunk, (str, bytes)):
# things that got blacklisted
return toThunk
if hasattr(toThunk, '__to_json__'):
# Use classes own json thunker
toThunk2 = toThunk.__to_json__(self)
self.unrecurse(toThunk)
return toThunk2
elif isinstance(toThunk, dict):
toThunk2 = self.handleDictObjectThunk(toThunk)
self.unrecurse(toThunk)
return toThunk2
elif isinstance(toThunk, list):
# a mother thunking list
toThunk2 = self.handleListObjectThunk(toThunk)
self.unrecurse(toThunk)
return toThunk2
else:
try:
thunktype = '%s.%s' % (toThunk.__class__.__module__,
toThunk.__class__.__name__)
tempDict = {'thunker_encoded_json': True, 'type': thunktype}
tempDict[thunktype] = self._thunk(toThunk.__dict__)
self.unrecurse(toThunk)
return tempDict
except Exception as e:
tempDict = {'json_thunk_exception_': "%s" % e}
self.unrecurse(toThunk)
return tempDict
def handleDictObjectThunk(self, data):
thunktype = '%s.%s' % (data.__class__.__module__,
data.__class__.__name__)
tempDict = {'thunker_encoded_json': True,
'is_dict': True,
'type': thunktype,
thunktype: {}}
for k, v in viewitems(data.__dict__):
tempDict[k] = self._thunk(v)
for k, v in viewitems(data):
tempDict[thunktype][k] = self._thunk(v)
return tempDict
def handleDictObjectUnThunk(self, value, data):
data.pop('thunker_encoded_json', False)
data.pop('is_dict', False)
thunktype = data.pop('type', False)
for k, v in viewitems(data):
if k == thunktype:
for k2, v2 in viewitems(data[thunktype]):
value[k2] = self._unthunk(v2)
else:
value.__dict__[k] = self._unthunk(v)
return value
def handleListObjectThunk(self, data):
thunktype = '%s.%s' % (data.__class__.__module__,
data.__class__.__name__)
tempDict = {'thunker_encoded_json': True,
'is_list': True,
'type': thunktype,
thunktype: []}
for v in data:
tempDict[thunktype].append(self._thunk(v))
for k, v in viewitems(data.__dict__):
tempDict[k] = self._thunk(v)
return tempDict
def handleListObjectUnThunk(self, value, data):
data.pop('thunker_encoded_json', False)
data.pop('is_list', False)
thunktype = data.pop('type')
for k, v in viewitems(data[thunktype]):
setattr(value, k, self._unthunk(v))
for k, v in viewitems(data):
if k == thunktype:
continue
value.__dict__[k] = self._unthunk(v)
return value
def _thunk(self, toThunk):
"""
helper function for thunk, does the actual work
"""
if isinstance(toThunk, self.passThroughTypes):
return toThunk
elif type(toThunk) is list:
return self.handleListThunk(toThunk)
elif type(toThunk) is dict:
return self.handleDictThunk(toThunk)
elif type(toThunk) is set:
return self.handleSetThunk(toThunk)
elif type(toThunk) is types.FunctionType:
self.unrecurse(toThunk)
return "function reference"
elif isinstance(toThunk, object):
return self.handleObjectThunk(toThunk)
else:
self.unrecurse(toThunk)
raise RuntimeError(type(toThunk))
def _unthunk(self, jsondata):
"""
_unthunk - does the actual work for unthunk
"""
if PY2 and type(jsondata) is str:
return jsondata.encode("utf-8")
if type(jsondata) is dict:
if 'thunker_encoded_json' in jsondata:
# we've got a live one...
if jsondata['type'] == 'set':
newSet = set()
for i in self._unthunk(jsondata['set']):
newSet.add(self._unthunk(i))
return newSet
if jsondata['type'] == 'dict':
# We have a "special" dict
data = {}
for k, v in viewitems(jsondata['dict']):
tmp = self._unthunk(v)
if k.startswith('_i:'):
data[int(k[3:])] = tmp
elif k.startswith('_f:'):
data[float(k[3:])] = tmp
else:
data[k] = tmp
return data
else:
# spawn up an instance.. good luck
# here be monsters
# inspired from python's pickle code
ourClass = self.getThunkedClass(jsondata)
value = _EmptyClass()
if hasattr(ourClass, '__from_json__'):
# Use classes own json loader
try:
value.__class__ = ourClass
except Exception:
value = ourClass()
value = ourClass.__from_json__(value, jsondata, self)
elif 'thunker_encoded_json' in jsondata and 'is_dict' in jsondata:
try:
value.__class__ = ourClass
except Exception:
value = ourClass()
value = self.handleDictObjectUnThunk(value, jsondata)
elif 'thunker_encoded_json' in jsondata:
try:
value.__class__ = ourClass
except Exception:
value = ourClass()
value = self.handleListObjectUnThunk(value, jsondata)
else:
raise RuntimeError('Could not unthunk a class. Code to try was removed because it had errors.')
return value
else:
data = {}
for k, v in viewitems(jsondata):
data[k] = self._unthunk(v)
return data
else:
return jsondata
@staticmethod
def getThunkedClass(jsondata):
"""
Work out the class from its thunked json representation
"""
module = jsondata['type'].rsplit('.', 1)[0]
name = jsondata['type'].rsplit('.', 1)[1]
if (module == 'WMCore.Services.Requests') and (name == 'JSONThunker'):
raise RuntimeError("Attempted to unthunk a JSONThunker..")
__import__(module)
mod = sys.modules[module]
ourClass = getattr(mod, name)
return ourClass
| /reqmgr2-2.2.4rc1.tar.gz/reqmgr2-2.2.4rc1/src/python/WMCore/Wrappers/JsonWrapper/JSONThunker.py | 0.443118 | 0.360208 | JSONThunker.py | pypi |
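The recursion guard in `checkRecursion`/`unrecurse` above (counting visits per object id and emitting a `"**RECURSION**"` marker past a threshold) can be sketched standalone. This is an illustrative re-implementation of the idea, not the WMCore API; all names here are hypothetical.

```python
# Minimal, self-contained sketch of the recursion-guard pattern used by the
# thunker: count how many times an object id is visited, and stop descending
# once a threshold is exceeded. Names are illustrative, not WMCore's.
class RecursionGuard(object):
    def __init__(self, limit=5):
        self.limit = limit
        self.foundIDs = {}

    def check(self, data):
        """Return True if `data` may be descended into, False otherwise."""
        count = self.foundIDs.get(id(data), 0)
        if count > self.limit:
            return False
        self.foundIDs[id(data)] = count + 1
        return True

    def release(self, data):
        """Back off the counter when returning from a recursive call."""
        self.foundIDs[id(data)] -= 1


def walk(obj, guard):
    if not guard.check(obj):
        return "**RECURSION**"
    if isinstance(obj, list):
        result = [walk(item, guard) for item in obj]
    else:
        result = obj
    guard.release(obj)
    return result


payload = []
payload.append(payload)  # self-referential list
print(walk(payload, RecursionGuard(limit=2)))  # → [[['**RECURSION**']]]
```

The marker string survives JSON serialization, which is why the thunker prefers it over raising an exception deep inside `_thunk`.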
from builtins import next, str, object
from future.utils import viewitems
import xml.parsers.expat
class Node(object):
"""
_Node_
Really simple DOM like container to simplify parsing the XML file
and formatting the character data without all the whitespace guff
"""
def __init__(self, name, attrs):
self.name = str(name)
self.attrs = {}
self.text = None
for k, v in viewitems(attrs):
self.attrs.__setitem__(str(k), str(v))
self.children = []
def __str__(self):
result = " %s %s \"%s\"\n" % (self.name, self.attrs, self.text)
for child in self.children:
result += str(child)
return result
def coroutine(func):
"""
_coroutine_
Decorator method used to prime coroutines
"""
def start(*args, **kwargs):
cr = func(*args, **kwargs)
next(cr)
return cr
return start
def xmlFileToNode(reportFile):
"""
_xmlFileToNode_
Use expat and the build coroutine to parse the XML file and build
a node structure
"""
node = Node("JobReports", {})
expat_parse(open(reportFile, 'rb'),
build(node))
return node
def expat_parse(f, target):
"""
_expat_parse_
Expat based XML parsing that feeds a node building coroutine
"""
parser = xml.parsers.expat.ParserCreate()
#parser.buffer_size = 65536
parser.buffer_text = True
# a leftover from the py2py3 migration - TO BE REMOVED
# parser.returns_unicode = False
parser.StartElementHandler = \
lambda name,attrs: target.send(('start',(name,attrs)))
parser.EndElementHandler = \
lambda name: target.send(('end',name))
parser.CharacterDataHandler = \
lambda data: target.send(('text',data))
parser.ParseFile(f)
@coroutine
def build(topNode):
"""
_build_
Node structure builder that is fed from the expat_parse method
"""
nodeStack = [topNode]
charCache = []
while True:
event, value = (yield)
if event == "start":
charCache = []
newnode = Node(value[0], value[1])
nodeStack[-1].children.append(newnode)
nodeStack.append(newnode)
elif event == "text":
charCache.append(value)
else: # end
nodeStack[-1].text = str(''.join(charCache)).strip()
nodeStack.pop()
charCache = []
| /reqmgr2-2.2.4rc1.tar.gz/reqmgr2-2.2.4rc1/src/python/WMCore/Algorithms/ParseXMLFile.py | 0.592431 | 0.276608 | ParseXMLFile.py | pypi |
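The expat-feeding-a-primed-coroutine pattern in `expat_parse`/`build` above can be exercised standalone on a string instead of a file. Everything below is redefined locally so the snippet runs on its own; the event tuples mirror those the module's handlers emit.

```python
# Illustrative, self-contained use of the expat + coroutine pattern:
# expat handlers push ('start'|'text'|'end', value) events into a
# generator-based coroutine that has been primed with next().
import xml.parsers.expat

def coroutine(func):
    """Prime a generator-based coroutine so it is ready to receive .send()."""
    def start(*args, **kwargs):
        cr = func(*args, **kwargs)
        next(cr)
        return cr
    return start

@coroutine
def collector(out):
    """Accumulate (event, value) pairs fed by the expat handlers."""
    while True:
        event, value = (yield)
        out.append((event, value))

events = []
parser = xml.parsers.expat.ParserCreate()
parser.buffer_text = True  # deliver contiguous character data as one event
target = collector(events)
parser.StartElementHandler = lambda name, attrs: target.send(('start', (name, attrs)))
parser.EndElementHandler = lambda name: target.send(('end', name))
parser.CharacterDataHandler = lambda data: target.send(('text', data))
parser.Parse('<job id="1">done</job>', True)
print(events)
```

With `buffer_text = True` the `'done'` text arrives as a single `('text', 'done')` event, which is why `build` can safely join its `charCache` at each closing tag.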
from __future__ import print_function, division
from builtins import str, range
import math
import decimal
import logging
from WMCore.WMException import WMException
class MathAlgoException(WMException):
"""
Some simple math algo exceptions
"""
pass
def getAverageStdDev(numList):
"""
_getAverageStdDev_
Given a list, calculate both the average and the
standard deviation.
"""
if len(numList) == 0:
# Nothing to do here
return 0.0, 0.0
total = 0.0
average = 0.0
stdBase = 0.0
# Assemble the average
skipped = 0
for value in numList:
try:
if math.isnan(value) or math.isinf(value):
skipped += 1
continue
else:
total += value
except TypeError:
msg = "Attempted to take average of non-numerical values.\n"
msg += "Expected int or float, got %s: %s" % (value.__class__, value)
logging.error(msg)
logging.debug("FullList: %s", numList)
raise MathAlgoException(msg)
length = len(numList) - skipped
if length < 1:
return average, total
average = total / length
for value in numList:
tmpValue = value - average
stdBase += (tmpValue * tmpValue)
stdDev = math.sqrt(stdBase / length)
if math.isnan(average) or math.isinf(average):
average = 0.0
if math.isnan(stdDev) or math.isinf(stdDev) or not decimal.Decimal(str(stdDev)).is_finite():
stdDev = 0.0
if not isinstance(stdDev, (int, float)):
stdDev = 0.0
return average, stdDev
def createHistogram(numList, nBins, limit):
"""
_createHistogram_
Create a histogram proxy (a list of bins) for a
given list of numbers
"""
average, stdDev = getAverageStdDev(numList = numList)
underflow = []
overflow = []
histEvents = []
histogram = []
for value in numList:
if math.fabs(average - value) <= limit * stdDev:
# Then we counted this event
histEvents.append(value)
elif average < value:
overflow.append(value)
elif average > value:
underflow.append(value)
if len(underflow) > 0:
binAvg, binStdDev = getAverageStdDev(numList=underflow)
histogram.append({'type': 'underflow',
'average': binAvg,
'stdDev': binStdDev,
'nEvents': len(underflow)})
if len(overflow) > 0:
binAvg, binStdDev = getAverageStdDev(numList=overflow)
histogram.append({'type': 'overflow',
'average': binAvg,
'stdDev': binStdDev,
'nEvents': len(overflow)})
if len(histEvents) < 1:
# Nothing to do?
return histogram
histEvents.sort()
upperBound = max(histEvents)
lowerBound = min(histEvents)
if lowerBound == upperBound:
# This is a problem
logging.debug("Only one value in the histogram!")
nBins = 1
upperBound = upperBound + 1
lowerBound = lowerBound - 1
binSize = (upperBound - lowerBound)/nBins
binSize = floorTruncate(binSize)
for x in range(nBins):
lowerEdge = floorTruncate(lowerBound + (x * binSize))
histogram.append({'type': 'standard',
'lowerEdge': lowerEdge,
'upperEdge': lowerEdge + binSize,
'average': 0.0,
'stdDev': 0.0,
'nEvents': 0})
for bin_ in histogram:
if bin_['type'] != 'standard':
continue
binList = []
for value in histEvents:
if value >= bin_['lowerEdge'] and value <= bin_['upperEdge']:
# Then we're in the bin
binList.append(value)
elif value > bin_['upperEdge']:
# Because this is a sorted list we are now out of the bin range
# Calculate our values and break
break
else:
continue
# If we get here, it's because we're out of values in the bin
# Time to do some math
if len(binList) < 1:
# Nothing to do here, leave defaults
continue
binAvg, binStdDev = getAverageStdDev(numList=binList)
bin_['average'] = binAvg
bin_['stdDev'] = binStdDev
bin_['nEvents'] = len(binList)
return histogram
def floorTruncate(value, precision=3):
"""
_floorTruncate_
Truncate a value to a set number of decimal points
Always truncates to a LOWER value, this is so that using it for
histogram binning creates values beneath the histogram lower edge.
"""
prec = math.pow(10, precision)
return math.floor(value * prec)/prec
def sortDictionaryListByKey(dictList, key, reverse=False):
"""
_sortDictionaryListByKey_
Given a list of dictionaries and a key with a numerical
value, sort that dictionary in order of that key's value.
NOTE: If the key does not exist, this will not raise an exception
This is because this is used for sorting of performance histograms
And not all histograms have the same value
"""
return sorted(dictList, key=lambda k: float(k.get(key, 0.0)), reverse=reverse)
def getLargestValues(dictList, key, n=1):
"""
_getLargestValues_
Take a list of dictionaries, sort them by the value of a
particular key, and return the n largest entries.
Key must be a numerical key.
"""
sortedList = sortDictionaryListByKey(dictList=dictList, key=key, reverse=True)
return sortedList[:n]
def validateNumericInput(value):
"""
_validateNumericInput_
Check that the value is actually a usable number
"""
try:
value = float(value)
if math.isnan(value) or math.isinf(value):
return False
except (TypeError, ValueError):
return False
return True
def calculateRunningAverageAndQValue(newPoint, n, oldM, oldQ):
"""
_calculateRunningAverageAndQValue_
Use the algorithm described in:
Donald E. Knuth (1998). The Art of Computer Programming, volume 2: Seminumerical Algorithms, 3rd ed.., p. 232. Boston: Addison-Wesley.
To calculate an average and standard deviation while getting data, the standard deviation
can be obtained from the so-called Q value with the following equation:
sigma = sqrt(Q/n)
This is also contained in the function calculateStdDevFromQ in this module. The average is equal to M.
"""
if not validateNumericInput(newPoint): raise MathAlgoException("Provided a non-valid newPoint")
if not validateNumericInput(n): raise MathAlgoException("Provided a non-valid n")
if n == 1:
M = newPoint
Q = 0.0
else:
if not validateNumericInput(oldM): raise MathAlgoException("Provided a non-valid oldM")
if not validateNumericInput(oldQ): raise MathAlgoException("Provided a non-valid oldQ")
M = oldM + (newPoint - oldM) / n
Q = oldQ + ((n - 1) * (newPoint - oldM) * (newPoint - oldM) / n)
return M, Q
def calculateStdDevFromQ(Q, n):
"""
_calculateStdDevFromQ_
If Q is the sum of the squared differences of some points to their average,
then the standard deviation is given by:
sigma = sqrt(Q/n)
This function calculates that formula
"""
if not validateNumericInput(Q): raise MathAlgoException("Provided a non-valid Q")
if not validateNumericInput(n): raise MathAlgoException("Provided a non-valid n")
sigma = math.sqrt(Q / n)
if not validateNumericInput(sigma): return 0.0
return sigma
| /reqmgr2-2.2.4rc1.tar.gz/reqmgr2-2.2.4rc1/src/python/WMCore/Algorithms/MathAlgos.py | 0.591841 | 0.33565 | MathAlgos.py | pypi |
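The Knuth recurrence in `calculateRunningAverageAndQValue` above can be checked with a short standalone driver. This sketch redefines the update locally (nothing is imported from the module) and recovers the standard deviation via `sigma = sqrt(Q / n)`, exactly as `calculateStdDevFromQ` does.

```python
import math

# Standalone sketch of the running mean M and Q-value (sum of squared
# deviations) recurrence from Knuth, TAOCP vol. 2, as used above.
def running_average_and_q(points):
    M, Q = 0.0, 0.0
    for n, x in enumerate(points, start=1):
        if n == 1:
            M, Q = x, 0.0
        else:
            oldM = M
            M = oldM + (x - oldM) / n
            Q = Q + (n - 1) * (x - oldM) * (x - oldM) / n
    return M, Q

points = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
M, Q = running_average_and_q(points)
sigma = math.sqrt(Q / len(points))
print(M, sigma)  # → 5.0 2.0 (population mean and standard deviation)
```

The advantage over `getAverageStdDev` is that points can be folded in one at a time without keeping the whole list in memory, which is why the module exposes both styles.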
from builtins import str as newstr
import random, cherrypy
class RESTError(Exception):
"""Base class for REST errors.
.. attribute:: http_code
Integer, HTTP status code for this error. Also emitted as X-Error-HTTP
header value.
.. attribute:: app_code
Integer, application error code, to be emitted as X-REST-Status header.
.. attribute:: message
String, information about the error, to be emitted as X-Error-Detail
header. Should not contain anything sensitive, and in particular should
never include any unvalidated or unsafe data, e.g. input parameters or
data from a database. Normally a fixed label with one-to-one match with
the :obj:`app_code`. If the text exceeds 200 characters, it's truncated.
Since this is emitted as a HTTP header, it cannot contain newlines or
anything encoding-dependent.
.. attribute:: info
String, additional information beyond :obj:`message`, to be emitted as
X-Error-Info header. Like :obj:`message` should not contain anything
sensitive or unsafe, or text inappropriate for a HTTP response header,
and should be short enough to fit in 200 characters. This is normally
free form text to clarify why the error happened.
.. attribute:: errid
String, random unique identifier for this error, to be emitted as
X-Error-ID header and output into server logs when logging the error.
The purpose is that clients save this id when they receive an error,
and further error reporting or debugging can use this value to identify
the specific error, and for example to grep logs for more information.
.. attribute:: errobj
If the problem was caused by another exception being raised in the code,
reference to the original exception object. For example if the code dies
with an :class:`KeyError`, this is the original exception object. This
error is logged to the server logs when reporting the error, but no
information about it is returned to the HTTP client.
.. attribute:: trace
The origin of the exception as returned by :func:`format_exc`. The full
trace is emitted to the server logs, each line prefixed with timestamp.
This information is not returned to the HTTP client.
"""
http_code = None
app_code = None
message = None
info = None
errid = None
errobj = None
trace = None
def __init__(self, info = None, errobj = None, trace = None):
self.errid = "%032x" % random.randrange(1 << 128)
self.errobj = errobj
self.info = info
self.trace = trace
def __str__(self):
return "%s %s [HTTP %d, APP %d, MSG %s, INFO %s, ERR %s]" \
% (self.__class__.__name__, self.errid, self.http_code, self.app_code,
repr(self.message).replace("\n", " ~~ "),
repr(self.info).replace("\n", " ~~ "),
repr(self.errobj).replace("\n", " ~~ "))
class NotAcceptable(RESTError):
"Client did not specify a format it accepts, or no compatible format was found."
http_code = 406
app_code = 201
message = "Not acceptable"
class UnsupportedMethod(RESTError):
"Client used HTTP request method which isn't supported for any API call."
http_code = 405
app_code = 202
message = "Request method not supported"
class MethodWithoutQueryString(RESTError):
"Client provided a query string which isn't acceptable for this request method."
http_code = 405
app_code = 203
message = "Query arguments not supported for this request method"
class APIMethodMismatch(RESTError):
"""Both the API and HTTP request methods are supported, but not in that
combination."""
http_code = 405
app_code = 204
message = "API not supported for this request method"
class APINotSpecified(RESTError):
"The request URL is missing API argument."
http_code = 400
app_code = 205
message = "API not specified"
class NoSuchInstance(RESTError):
"""The request URL is missing instance argument or the specified instance
does not exist."""
http_code = 404
app_code = 206
message = "No such instance"
class APINotSupported(RESTError):
"The request URL provides wrong API argument."
http_code = 404
app_code = 207
message = "API not supported"
class DataCacheEmpty(RESTError):
"The wmstats data cache has not be created."
http_code = 503
app_code = 208
message = "DataCache is Empty"
class DatabaseError(RESTError):
"""Parent class for database-related errors.
.. attribute:: lastsql
A tuple of *(sql, binds, kwbinds),* where `sql` is the last SQL statement
executed and `binds`, `kwbinds` are the bind values used with it. Any
sensitive parts like passwords have already been censored from the `sql`
string. Note that for massive requests `binds` or `kwbinds` can get large.
These are logged out in the server logs when reporting the error, but no
information about these are returned to the HTTP client.
.. attribute:: instance
String, the database instance for which the error occurred. This is
reported in the error message output to server logs, but no information
about this is returned to the HTTP client."""
lastsql = None
instance = None
def __init__(self, info = None, errobj = None, trace = None,
lastsql = None, instance = None):
RESTError.__init__(self, info, errobj, trace)
self.lastsql = lastsql
self.instance = instance
class DatabaseUnavailable(DatabaseError):
"""The instance argument is correct, but cannot connect to the database.
This error will only occur at initial attempt to connect to the database,
:class:`~.DatabaseConnectionError` is raised instead if the connection
ends prematurely after the transaction has already begun successfully."""
http_code = 503
app_code = 401
message = "Database unavailable"
class DatabaseConnectionError(DatabaseError):
"""Database was available when the operation started, but the connection
was lost or otherwise failed during the application operation."""
http_code = 504
app_code = 402
message = "Database connection failure"
class DatabaseExecutionError(DatabaseError):
"""Database operation failed."""
http_code = 500
app_code = 403
message = "Execution error"
class MissingParameter(RESTError):
"Client did not supply a parameter which is required."
http_code = 400
app_code = 301
message = "Missing required parameter"
class InvalidParameter(RESTError):
"Client supplied invalid value for a parameter."
http_code = 400
app_code = 302
message = "Invalid input parameter"
class MissingObject(RESTError):
"""An object required for the operation is missing. This might be a
pre-requisite needed to create a reference, or attempt to delete
an object which does not exist."""
http_code = 400
app_code = 303
message = "Required object is missing"
class TooManyObjects(RESTError):
"""Too many objects matched specified criteria. Usually this means
more than one object was matched, deleted, or inserted, when only
exactly one should have been subject to the operation."""
http_code = 400
app_code = 304
message = "Too many objects"
class ObjectAlreadyExists(RESTError):
"""An already existing object is on the way of the operation. This
is usually caused by uniqueness constraint violations when creating
new objects."""
http_code = 400
app_code = 305
message = "Object already exists"
class InvalidObject(RESTError):
"The specified object is invalid."
http_code = 400
app_code = 306
message = "Invalid object"
class ExecutionError(RESTError):
"""Input was in principle correct but there was an error processing
the request. This normally means either programming error, timeout, or
an unusual and unexpected problem with the database. For security reasons
little additional information is returned. If the problem persists, client
should contact service operators. The returned error id can be used as a
reference."""
http_code = 500
app_code = 403
message = "Execution error"
def report_error_header(header, val):
"""If `val` is non-empty, set CherryPy response `header` to `val`.
Replaces all newlines with "; " characters. If the resulting value is
longer than 200 characters, truncates it to the first 197 characters
and leaves a trailing ellipsis "..."."""
if val:
val = val.replace("\n", "; ")
if len(val) > 200: val = val[:197] + "..."
cherrypy.response.headers[header] = val
def report_rest_error(err, trace, throw):
"""Report a REST error: generate an appropriate log message, set the
response headers and raise an appropriate :class:`~.HTTPError`.
Normally `throw` would be True to translate the exception `err` into
a HTTP server error, but the function can also be called with `throw`
set to False if the purpose is merely to log an exception message.
:arg err: exception object.
:arg trace: stack trace to use in case `err` doesn't have one.
:arg throw: raise a :class:`~.HTTPError` if True."""
if isinstance(err, DatabaseError) and err.errobj:
offset = None
sql, binds, kwbinds = err.lastsql
if sql and err.errobj.args and hasattr(err.errobj.args[0], 'offset'):
offset = err.errobj.args[0].offset
sql = sql[:offset] + "<**>" + sql[offset:]
cherrypy.log("SERVER DATABASE ERROR %d/%d %s %s.%s %s [instance: %s] (%s);"
" last statement: %s; binds: %s, %s; offset: %s"
% (err.http_code, err.app_code, err.message,
getattr(err.errobj, "__module__", "__builtins__"),
err.errobj.__class__.__name__,
err.errid, err.instance, newstr(err.errobj).rstrip(),
sql, binds, kwbinds, offset))
for line in err.trace.rstrip().split("\n"): cherrypy.log(" " + line)
cherrypy.response.headers["X-REST-Status"] = newstr(err.app_code)
cherrypy.response.headers["X-Error-HTTP"] = newstr(err.http_code)
cherrypy.response.headers["X-Error-ID"] = err.errid
report_error_header("X-Error-Detail", err.message)
report_error_header("X-Error-Info", err.info)
if throw: raise cherrypy.HTTPError(err.http_code, err.message)
elif isinstance(err, RESTError):
if err.errobj:
cherrypy.log("SERVER REST ERROR %s.%s %s (%s); derived from %s.%s (%s)"
% (err.__module__, err.__class__.__name__,
err.errid, err.message,
getattr(err.errobj, "__module__", "__builtins__"),
err.errobj.__class__.__name__,
newstr(err.errobj).rstrip()))
trace = err.trace
else:
cherrypy.log("SERVER REST ERROR %s.%s %s (%s)"
% (err.__module__, err.__class__.__name__,
err.errid, err.message))
for line in trace.rstrip().split("\n"): cherrypy.log(" " + line)
cherrypy.response.headers["X-REST-Status"] = newstr(err.app_code)
cherrypy.response.headers["X-Error-HTTP"] = newstr(err.http_code)
cherrypy.response.headers["X-Error-ID"] = err.errid
report_error_header("X-Error-Detail", err.message)
report_error_header("X-Error-Info", err.info)
if throw: raise cherrypy.HTTPError(err.http_code, err.message)
elif isinstance(err, cherrypy.HTTPError):
errid = "%032x" % random.randrange(1 << 128)
cherrypy.log("SERVER HTTP ERROR %s.%s %s (%s)"
% (err.__module__, err.__class__.__name__,
errid, newstr(err).rstrip()))
for line in trace.rstrip().split("\n"): cherrypy.log(" " + line)
cherrypy.response.headers["X-REST-Status"] = newstr(200)
cherrypy.response.headers["X-Error-HTTP"] = newstr(err.status)
cherrypy.response.headers["X-Error-ID"] = errid
report_error_header("X-Error-Detail", err._message)
if throw: raise err
else:
errid = "%032x" % random.randrange(1 << 128)
cherrypy.log("SERVER OTHER ERROR %s.%s %s (%s)"
% (getattr(err, "__module__", "__builtins__"),
err.__class__.__name__,
errid, newstr(err).rstrip()))
for line in trace.rstrip().split("\n"): cherrypy.log(" " + line)
cherrypy.response.headers["X-REST-Status"] = newstr(400)
cherrypy.response.headers["X-Error-HTTP"] = newstr(500)
cherrypy.response.headers["X-Error-ID"] = errid
report_error_header("X-Error-Detail", "Server error")
if throw: raise cherrypy.HTTPError(500, "Server error")
| /reqmgr2-2.2.4rc1.tar.gz/reqmgr2-2.2.4rc1/src/python/WMCore/REST/Error.py | 0.835752 | 0.247783 | Error.py | pypi |
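The error hierarchy above pairs fixed class-level codes with a per-instance random 128-bit id. A minimal standalone sketch of that pattern, with cherrypy deliberately left out so it runs anywhere, looks like this (the classes below are simplified stand-ins, not the full module):

```python
import random

# Minimal sketch of the RESTError pattern: subclasses pin http_code,
# app_code and message as class attributes; every instance gets a random
# 32-hex-character error id for correlating client reports with server logs.
class RESTError(Exception):
    http_code = None
    app_code = None
    message = None

    def __init__(self, info=None):
        self.errid = "%032x" % random.randrange(1 << 128)
        self.info = info

class InvalidParameter(RESTError):
    "Client supplied invalid value for a parameter."
    http_code = 400
    app_code = 302
    message = "Invalid input parameter"

try:
    raise InvalidParameter("Incorrect 'name' parameter")
except RESTError as e:
    print(e.http_code, e.app_code, e.message, len(e.errid))  # → 400 302 Invalid input parameter 32
```

Because `errid` is generated on construction, a client that records the `X-Error-ID` header gives operators an exact grep target in the server logs, as the docstrings above describe.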
from builtins import str as newstr, bytes as newbytes
from WMCore.REST.Error import *
import math
import re
import numbers
from Utils.Utilities import decodeBytesToUnicodeConditional, encodeUnicodeToBytesConditional
from Utils.PythonVersion import PY3, PY2
def return_message(main_err, custom_err):
if custom_err:
return custom_err
return main_err
def _arglist(argname, kwargs):
val = kwargs.get(argname, None)
if val is None:
return []
elif not isinstance(val, list):
return [ val ]
else:
return val
def _check_rx(argname, val, custom_err = None):
if not isinstance(val, (newstr, newbytes)):
raise InvalidParameter(return_message("Incorrect '%s' parameter" % argname, custom_err))
try:
return re.compile(val)
except Exception:
raise InvalidParameter(return_message("Incorrect '%s' parameter" % argname, custom_err))
def _check_str(argname, val, rx, custom_err = None):
"""
This does not really check that val is ASCII.
2021-09: we are now using version 17.4.0 -> we do not need to convert to
bytes here anymore, we are using a recent version of cherrypy.
We merged the functionality of _check_str and _check_ustr into a single function
:type val: str or bytes (only utf8 encoded string) in py3, unicode or str in py2
:type rx: regex, compiled from native str (unicode in py3, bytes in py2)
"""
val = decodeBytesToUnicodeConditional(val, condition=PY3)
val = encodeUnicodeToBytesConditional(val, condition=PY2)
# `val` should now be a "native str" (unicode in py3, bytes in py2)
# here str has not been redefined. it is default `str` in both py2 and py3.
if not isinstance(val, str) or not rx.match(val):
raise InvalidParameter(return_message("Incorrect '%s' parameter %s %s" % (argname, type(val), val), custom_err))
return val
def _check_num(argname, val, bare, minval, maxval, custom_err = None):
if not isinstance(val, numbers.Integral) and (not isinstance(val, (newstr, newbytes)) or (bare and not val.isdigit())):
raise InvalidParameter(return_message("Incorrect '%s' parameter" % argname, custom_err))
try:
n = int(val)
if (minval is not None and n < minval) or (maxval is not None and n > maxval):
raise InvalidParameter(return_message("Parameter '%s' value out of bounds" % argname, custom_err))
return n
except InvalidParameter:
raise
except Exception:
raise InvalidParameter(return_message("Invalid '%s' parameter" % argname, custom_err))
def _check_real(argname, val, special, minval, maxval, custom_err = None):
if not isinstance(val, numbers.Number) and not isinstance(val, (newstr, newbytes)):
raise InvalidParameter(return_message("Incorrect '%s' parameter" % argname, custom_err))
try:
n = float(val)
if not special and (math.isnan(n) or math.isinf(n)):
raise InvalidParameter(return_message("Parameter '%s' improper value" % argname, custom_err))
if (minval is not None and n < minval) or (maxval is not None and n > maxval):
raise InvalidParameter(return_message("Parameter '%s' value out of bounds" % argname, custom_err))
return n
except InvalidParameter:
raise
except Exception:
raise InvalidParameter(return_message("Invalid '%s' parameter" % argname, custom_err))
def _validate_one(argname, param, safe, checker, optional, *args):
val = param.kwargs.get(argname, None)
if optional and val is None:
safe.kwargs[argname] = None
else:
safe.kwargs[argname] = checker(argname, val, *args)
del param.kwargs[argname]
def _validate_all(argname, param, safe, checker, *args):
safe.kwargs[argname] = [checker(argname, v, *args) for v in _arglist(argname, param.kwargs)]
if argname in param.kwargs:
del param.kwargs[argname]
def validate_rx(argname, param, safe, optional = False, custom_err = None):
"""Validates that an argument is a valid regexp.
Checks that an argument named `argname` exists in `param.kwargs`,
and it a string which compiles into a python regular expression.
If successful, the regexp object (not the string) is copied into
`safe.kwargs` and the string value is removed from `param.kwargs`.
If `optional` is True, the argument is not required to exist in
`param.kwargs`; None is then inserted into `safe.kwargs`. Otherwise
a missing value raises an exception."""
_validate_one(argname, param, safe, _check_rx, optional, custom_err)
def validate_str(argname, param, safe, rx, optional = False, custom_err = None):
"""Validates that an argument is a string and matches a regexp.
Checks that an argument named `argname` exists in `param.kwargs`
and it is a string which matches regular expression `rx`. If
successful the string is copied into `safe.kwargs` and the value
is removed from `param.kwargs`.
Accepts both unicode strings and utf8-encoded bytes strings as argument
string.
Accepts regex compiled only with "native strings", which means str in both
py2 and py3 (unicode in py3, bytes of utf8-encoded strings in py2)
If `optional` is True, the argument is not required to exist in
`param.kwargs`; None is then inserted into `safe.kwargs`. Otherwise
a missing value raises an exception."""
_validate_one(argname, param, safe, _check_str, optional, rx, custom_err)
def validate_ustr(argname, param, safe, rx, optional = False, custom_err = None):
"""Validates that an argument is a string and matches a regexp,
During the py2->py3 modernization, _check_str and _check_ustr have been
merged into a single function called _check_str.
This function is now the same as validate_str, but is kept nonetheless
not to break our client's code.
Checks that an argument named `argname` exists in `param.kwargs`
and it is a string which matches regular expression `rx`. If
successful the string is copied into `safe.kwargs` and the value
is removed from `param.kwargs`.
If `optional` is True, the argument is not required to exist in
`param.kwargs`; None is then inserted into `safe.kwargs`. Otherwise
a missing value raises an exception."""
_validate_one(argname, param, safe, _check_str, optional, rx, custom_err)
def validate_num(argname, param, safe, optional = False,
bare = False, minval = None, maxval = None, custom_err = None):
"""Validates that an argument is a valid integer number.
Checks that an argument named `argname` exists in `param.kwargs`,
and it is an int or a string convertible to a valid number. If successful
the integer value (not the string) is copied into `safe.kwargs`
and the original int/string value is removed from `param.kwargs`.
If `optional` is True, the argument is not required to exist in
`param.kwargs`; None is then inserted into `safe.kwargs`. Otherwise
a missing value raises an exception.
If `bare` is True, the number is required to be a pure digit sequence if it is a string.
    Otherwise anything accepted by `int(val)` is accepted, including for
example leading white space or sign. Note that either way arbitrarily
large values are accepted; if you want to prevent abuse against big
integers, use the `minval` and `maxval` thresholds described below,
    or check the length of the string against some limit first.
If `minval` or `maxval` are given, values less than or greater than,
respectively, the threshold are rejected."""
_validate_one(argname, param, safe, _check_num, optional, bare, minval, maxval, custom_err)
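All of the scalar validators above share the same move-on-success contract: the value is converted, copied into `safe.kwargs`, and removed from `param.kwargs`. A minimal standalone sketch of that contract for the `validate_num` case; the `Args` holder, `validate_num_sketch` and the local `InvalidParameter` are illustrative stand-ins, not the WMCore implementation:

```python
class InvalidParameter(Exception):
    """Stand-in for WMCore.REST.Error.InvalidParameter."""

class Args(object):
    """Stand-in for the request argument holders passed as `param`/`safe`."""
    def __init__(self, **kwargs):
        self.args = []
        self.kwargs = kwargs

def validate_num_sketch(argname, param, safe, optional=False,
                        bare=False, minval=None, maxval=None):
    # Missing + optional -> record None; missing + required -> error.
    if argname not in param.kwargs:
        if not optional:
            raise InvalidParameter("Missing parameter '%s'" % argname)
        safe.kwargs[argname] = None
        return
    val = param.kwargs[argname]
    if bare and isinstance(val, str) and not val.isdigit():
        raise InvalidParameter("Parameter '%s' must be bare digits" % argname)
    try:
        num = int(val)
    except (TypeError, ValueError):
        raise InvalidParameter("Parameter '%s' is not an integer" % argname)
    if (minval is not None and num < minval) or \
       (maxval is not None and num > maxval):
        raise InvalidParameter("Parameter '%s' out of range" % argname)
    # On success the converted value moves from param.kwargs to safe.kwargs.
    safe.kwargs[argname] = num
    del param.kwargs[argname]

param = Args(limit="25")
safe = Args()
validate_num_sketch("limit", param, safe, minval=1, maxval=100)
assert safe.kwargs["limit"] == 25 and "limit" not in param.kwargs
```

On failure nothing moves, so `param.kwargs` still holds the rejected input.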
def validate_real(argname, param, safe, optional = False,
special = False, minval = None, maxval = None, custom_err = None):
"""Validates that an argument is a valid real number.
Checks that an argument named `argname` exists in `param.kwargs`,
    and it is a float number or a string convertible to a valid number. If successful
the float value (not the string) is copied into `safe.kwargs`
and the original float/string value is removed from `param.kwargs`.
If `optional` is True, the argument is not required to exist in
`param.kwargs`; None is then inserted into `safe.kwargs`. Otherwise
a missing value raises an exception.
Anything accepted by `float(val)` is accepted, including for example
leading white space, sign and exponent. However NaN and +/- Inf are
rejected unless `special` is True.
If `minval` or `maxval` are given, values less than or greater than,
respectively, the threshold are rejected."""
_validate_one(argname, param, safe, _check_real, optional, special, minval, maxval, custom_err)
def validate_rxlist(argname, param, safe, custom_err = None):
"""Validates that an argument is an array of strings, each of which
can be compiled into a python regexp object.
Checks that an argument named `argname` is either a single string or
an array of strings, each of which compiles into a regular expression.
If successful the array is copied into `safe.kwargs` and the value is
removed from `param.kwargs`. The value always becomes an array in
`safe.kwargs`, even if no or only one argument was provided.
Note that an array of zero length is accepted, meaning there were no
`argname` parameters at all in `param.kwargs`."""
_validate_all(argname, param, safe, _check_rx, custom_err)
def validate_strlist(argname, param, safe, rx, custom_err = None):
"""Validates that an argument is an array of strings, each of which
matches a regexp.
Checks that an argument named `argname` is either a single string or
an array of strings, each of which matches the regular expression
`rx`. If successful the array is copied into `safe.kwargs` and the
value is removed from `param.kwargs`. The value always becomes an
array in `safe.kwargs`, even if no or only one argument was provided.
Use `validate_ustrlist` instead if the argument string might need
to be converted from utf-8 into unicode first. Use this method only
for inputs which are meant to be bare strings.
Note that an array of zero length is accepted, meaning there were no
`argname` parameters at all in `param.kwargs`."""
_validate_all(argname, param, safe, _check_str, rx, custom_err)
def validate_ustrlist(argname, param, safe, rx, custom_err = None):
"""Validates that an argument is an array of strings, each of which
matches a regexp once converted from utf-8 into unicode.
Checks that an argument named `argname` is either a single string or
an array of strings, each of which matches the regular expression
`rx`. If successful the array is copied into `safe.kwargs` and the
value is removed from `param.kwargs`. The value always becomes an
array in `safe.kwargs`, even if no or only one argument was provided.
Use `validate_strlist` instead if the argument strings should always
be bare strings. This one automatically converts everything into
unicode and expects input exclusively in utf-8, which may not be
appropriate constraints for some uses.
Note that an array of zero length is accepted, meaning there were no
`argname` parameters at all in `param.kwargs`."""
_validate_all(argname, param, safe, _check_ustr, rx, custom_err)
def validate_numlist(argname, param, safe, bare=False, minval=None, maxval=None, custom_err = None):
"""Validates that an argument is an array of integers, as checked by
`validate_num()`.
Checks that an argument named `argname` is either a single string/int or
an array of strings/int, each of which validates with `validate_num` and
`bare`, `minval` and `maxval` arguments. If successful the array is
copied into `safe.kwargs` and the value is removed from `param.kwargs`.
    The value always becomes an array in `safe.kwargs`, even if no or only one
argument was provided.
Note that an array of zero length is accepted, meaning there were no
`argname` parameters at all in `param.kwargs`."""
_validate_all(argname, param, safe, _check_num, bare, minval, maxval, custom_err)
def validate_reallist(argname, param, safe, special=False, minval=None, maxval=None, custom_err = None):
"""Validates that an argument is an array of integers, as checked by
`validate_real()`.
Checks that an argument named `argname` is either a single string/float or
an array of strings/floats, each of which validates with `validate_real` and
`special`, `minval` and `maxval` arguments. If successful the array is
copied into `safe.kwargs` and the value is removed from `param.kwargs`.
The value always becomes an array in `safe.kwargs`, even if no or only
one argument was provided.
Note that an array of zero length is accepted, meaning there were no
`argname` parameters at all in `param.kwargs`."""
_validate_all(argname, param, safe, _check_real, special, minval, maxval, custom_err)
def validate_no_more_input(param):
"""Verifies no more input is left in `param.args` or `param.kwargs`."""
if param.args:
raise InvalidParameter("Excess path arguments, not validated args='%s'" % param.args)
if param.kwargs:
raise InvalidParameter("Excess keyword arguments, not validated kwargs='%s'" % param.kwargs)
def validate_lengths(safe, *names):
"""Verifies that all `names` exist in `safe.kwargs`, are lists, and
    all the lists have the same length. This is a convenience function for
checking that an API accepting multiple values receives equal number
of values for all of its parameters."""
refname = names[0]
if refname not in safe.kwargs or not isinstance(safe.kwargs[refname], list):
raise InvalidParameter("Incorrect '%s' parameter" % refname)
reflen = len(safe.kwargs[refname])
for other in names[1:]:
if other not in safe.kwargs or not isinstance(safe.kwargs[other], list):
raise InvalidParameter("Incorrect '%s' parameter" % other)
elif len(safe.kwargs[other]) != reflen:
raise InvalidParameter("Mismatched number of arguments: %d %s vs. %d %s"
                                   % (reflen, refname, len(safe.kwargs[other]), other))
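The shape check `validate_lengths` performs can be sketched standalone; the `Safe` holder, `check_lengths` and the local `InvalidParameter` below are illustrative stand-ins, not the WMCore code:

```python
class InvalidParameter(Exception):
    """Stand-in for WMCore.REST.Error.InvalidParameter."""

class Safe(object):
    """Stand-in for the validated-argument holder."""
    def __init__(self, **kwargs):
        self.kwargs = kwargs

def check_lengths(safe, *names):
    # Mirrors validate_lengths(): every name must map to a list,
    # and all the lists must share the length of the first one.
    reflen = None
    for name in names:
        val = safe.kwargs.get(name)
        if not isinstance(val, list):
            raise InvalidParameter("Incorrect '%s' parameter" % name)
        if reflen is None:
            reflen = len(val)
        elif len(val) != reflen:
            raise InvalidParameter("Mismatched number of arguments")

safe = Safe(site=["T1_US", "T2_CH"], priority=[1, 2])
check_lengths(safe, "site", "priority")   # passes silently
```

This is the pattern for APIs taking parallel lists: one value per parameter, all lists aligned index by index.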
from __future__ import print_function
import gzip
from builtins import str, bytes, object
from Utils.PythonVersion import PY3
from Utils.Utilities import encodeUnicodeToBytes, encodeUnicodeToBytesConditional
from future.utils import viewitems
import hashlib
import json
import xml.sax.saxutils
import zlib
from traceback import format_exc
import cherrypy
from WMCore.REST.Error import RESTError, ExecutionError, report_rest_error
try:
from cherrypy.lib import httputil
except ImportError:
from cherrypy.lib import http as httputil
def vary_by(header):
"""Add 'Vary' header for `header`."""
varies = cherrypy.response.headers.get('Vary', '')
varies = [x.strip() for x in varies.split(",") if x.strip()]
if header not in varies:
varies.append(header)
cherrypy.response.headers['Vary'] = ", ".join(varies)
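The header merge performed by `vary_by` boils down to split, strip, append if missing, and rejoin; a standalone sketch of just that merge, without the cherrypy response object (`merge_vary_sketch` is illustrative):

```python
def merge_vary_sketch(existing, header):
    # Same merge as vary_by(): tolerate empty values and stray spaces,
    # never duplicate an entry already present.
    varies = [x.strip() for x in existing.split(",") if x.strip()]
    if header not in varies:
        varies.append(header)
    return ", ".join(varies)

assert merge_vary_sketch("", "Accept-Encoding") == "Accept-Encoding"
assert merge_vary_sketch("Accept, Accept-Encoding",
                         "Accept-Encoding") == "Accept, Accept-Encoding"
```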
def is_iterable(obj):
"""Check if `obj` is iterable."""
try:
iter(obj)
except TypeError:
return False
else:
return True
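Note that strings, bytes and dicts all pass this test, which is why the formatters below check for those concrete types *before* falling through to the generic iterable branch. A self-contained copy for illustration:

```python
def is_iterable(obj):
    # Copy of the helper above so this snippet runs standalone:
    # iterable means iter() succeeds.
    try:
        iter(obj)
    except TypeError:
        return False
    return True

assert is_iterable([]) and is_iterable("text") and is_iterable({})
assert not is_iterable(42)
```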
class RESTFormat(object):
def __call__(self, stream, etag):
"""Main entry point for generating output for `stream` using `etag`
object to generate ETag header. Returns a generator function for
        producing a verbatim copy of each `stream` item, including any preambles
and trailers needed for the selected format. The intention is that
the caller will use the iterable to generate chunked HTTP transfer
encoding, or a simple result such as an image."""
# Make 'stream' iterable. We convert everything to chunks here.
# The final stream consumer will collapse small responses back
        # to a single string. Convert files to 512 kB chunks.
if stream is None:
stream = ['']
elif isinstance(stream, (str, bytes)):
stream = [stream]
elif hasattr(stream, "read"):
# types.FileType is not available anymore in python3,
# using it raises pylint W1624.
# Since cherrypy.lib.file_generator only uses the .read() attribute
# of a file, we simply check if stream.read() is present instead.
# https://github.com/cherrypy/cherrypy/blob/2a8aaccd649eb1011382c39f5cd93f76f980c0b1/cherrypy/lib/__init__.py#L64
stream = cherrypy.lib.file_generator(stream, 512 * 1024)
return self.stream_chunked(stream, etag, *self.chunk_args(stream))
def chunk_args(self, stream):
"""Return extra arguments needed for `stream_chunked()`. The default
return an empty tuple, so no extra arguments. Override in the derived
class if `stream_chunked()` needs preamble or trailer arguments."""
return tuple()
class XMLFormat(RESTFormat):
"""Format an iterable of objects into XML encoded in UTF-8.
Generates normally first a preamble, a stream of XML-rendered objects,
then the trailer, computing an ETag on the output string in the process.
This is designed exclusively for use with iterables for chunked transfer
encoding HTTP responses; it's not a general purpose formatting utility.
Outputs first a preamble, then XML encoded output of input stream, and
finally a trailer. Any exceptions raised by input stream are reported to
`report_rest_error` and swallowed, as this is normally used to generate
output for CherryPy responses, which cannot handle exceptions reasonably
after the output generation begins; later processing may reconvert those
back to exceptions however (cf. stream_maybe_etag()). Once the preamble
has been emitted, the trailer is also emitted even if the input stream
raises an exception, in order to make the output well-formed; the client
must inspect the X-REST-Status trailer header to find out if it got the
complete output. No ETag header is generated in case of an exception.
The ETag generation is deterministic only if iterating over input is
deterministic. Beware in particular the key order for a dict is
arbitrary and may differ for two semantically identical dicts.
A X-REST-Status trailer header is added only in case of error. There is
normally 'X-REST-Status: 100' in normal response headers, and it remains
valid in case of success.
The output is generated as an XML document whose top-level entity name
is defined by the label given at the formatter construction time. The
caller must define ``cherrypy.request.rest_generate_data`` to element
name for wrapping stream contents. Usually the top-level entity is the
application name and the ``cherrypy.request.rest_generate_data`` is
``result``.
Iterables are output as ``<array><i>ITEM</i><i>ITEM</i></array>``,
dictionaries as ``<dict><key>KEY</key><value>VALUE</value></dict>``.
`None` is output as empty contents, and hence there is no way to
distinguish `None` and an empty string from each other. Scalar types
are output as rendered by `str()`, but obviously XML encoding unsafe
characters. This class does not support formatting arbitrary types.
The formatter does not insert any spaces into the output. Although the
output is generated as a preamble, stream of objects, and trailer just
like by the `JSONFormatter`, each of which is a separate HTTP transfer
chunk, the output does *not* have guaranteed line-oriented structure
like the `JSONFormatter` produces. Note in particular that if the data
stream contains strings with newlines, the output will have arbitrary
line structure. On the other hand, as the output is well-formed XML,
virtually all SAX processors can read the stream incrementally even if
the client isn't able to fully preserve chunked HTTP transfer encoding."""
def __init__(self, label):
self.label = label
    @staticmethod
    def format_obj(obj):
        """Render an object `obj` into XML."""
        if obj is None:
            result = ""
        elif isinstance(obj, str):
            result = xml.sax.saxutils.escape(obj)
        elif isinstance(obj, bytes):
            # Decode first; escape() operates on native strings in py3.
            result = xml.sax.saxutils.escape(obj.decode("utf-8"))
        elif isinstance(obj, (int, float, bool)):
            result = xml.sax.saxutils.escape(str(obj))
        elif isinstance(obj, dict):
            result = "<dict>"
            for k, v in viewitems(obj):
                result += "<key>%s</key><value>%s</value>" % \
                          (xml.sax.saxutils.escape(k),
                           XMLFormat.format_obj(v))
            result += "</dict>"
        elif is_iterable(obj):
            result = "<array>"
            for v in obj:
                result += "<i>%s</i>" % XMLFormat.format_obj(v)
            result += "</array>"
        else:
            cherrypy.log("cannot represent object of type %s in xml (%s)"
                         % (type(obj).__name__, repr(obj)))
            raise ExecutionError("cannot represent object in xml")
        return result
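The XML mapping described above (`None` to empty contents, dicts to `<dict><key>..</key><value>..</value></dict>`, other iterables to `<array><i>..</i></array>`, scalars escaped) can be sketched standalone; `format_obj_sketch` is an illustrative unicode-only reimplementation, not the class method itself:

```python
from xml.sax.saxutils import escape

def format_obj_sketch(obj):
    # Type checks ordered so str and dict are caught before the
    # generic iterable branch, exactly as in XMLFormat.format_obj.
    if obj is None:
        return ""
    if isinstance(obj, str):
        return escape(obj)
    if isinstance(obj, (int, float, bool)):
        return escape(str(obj))
    if isinstance(obj, dict):
        return "<dict>" + "".join(
            "<key>%s</key><value>%s</value>"
            % (escape(k), format_obj_sketch(v))
            for k, v in obj.items()) + "</dict>"
    return "<array>" + "".join(
        "<i>%s</i>" % format_obj_sketch(v) for v in obj) + "</array>"

print(format_obj_sketch({"name": "a<b", "ids": [1, 2]}))
```

Because dict iteration order drives the output, the resulting ETag is deterministic only per insertion order, as the class docstring warns.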
def stream_chunked(self, stream, etag, preamble, trailer):
"""Generator for actually producing the output."""
try:
etag.update(preamble)
yield preamble
try:
for obj in stream:
chunk = XMLFormat.format_obj(obj)
etag.update(chunk)
yield chunk
except GeneratorExit:
etag.invalidate()
trailer = None
raise
finally:
if trailer:
etag.update(trailer)
yield trailer
except RESTError as e:
etag.invalidate()
report_rest_error(e, format_exc(), False)
except Exception as e:
etag.invalidate()
report_rest_error(ExecutionError(), format_exc(), False)
def chunk_args(self, stream):
"""Return header and trailer needed to wrap `stream` as XML reply."""
preamble = "<?xml version='1.0' encoding='UTF-8' standalone='yes'?>\n"
preamble += "<%s>" % self.label
if cherrypy.request.rest_generate_preamble:
desc = self.format_obj(cherrypy.request.rest_generate_preamble)
preamble += "<desc>%s</desc>" % desc
preamble += "<%s>" % cherrypy.request.rest_generate_data
trailer = "</%s></%s>" % (cherrypy.request.rest_generate_data, self.label)
return preamble, trailer
class JSONFormat(RESTFormat):
"""Format an iterable of objects into JSON.
Generates normally first a preamble, a stream of JSON-rendered objects,
then the trailer, computing an ETag on the output string in the process.
This is designed exclusively for use with iterables for chunked transfer
encoding HTTP responses; it's not a general purpose formatting utility.
Outputs first a preamble, then JSON encoded output of input stream, and
finally a trailer. Any exceptions raised by input stream are reported to
`report_rest_error` and swallowed, as this is normally used to generate
output for CherryPy responses, which cannot handle exceptions reasonably
after the output generation begins; later processing may reconvert those
back to exceptions however (cf. stream_maybe_etag()). Once the preamble
has been emitted, the trailer is also emitted even if the input stream
raises an exception, in order to make the output well-formed; the client
must inspect the X-REST-Status trailer header to find out if it got the
complete output. No ETag header is generated in case of an exception.
    The ETag generation is deterministic only if `json.dumps()` output is
deterministic for the input. Beware in particular the key order for a
dict is arbitrary and may differ for two semantically identical dicts.
A X-REST-Status trailer header is added only in case of error. There is
normally 'X-REST-Status: 100' in normal response headers, and it remains
valid in case of success.
The output is always generated as a JSON dictionary. The caller must
define ``cherrypy.request.rest_generate_data`` as the key for actual
contents, usually something like "result". The `stream` value will be
generated as an array value for that key.
If ``cherrypy.request.rest_generate_preamble`` is a non-empty list, it
is output as the ``desc`` key value in the preamble before outputting
the `stream` contents. Otherwise the output consists solely of `stream`.
A common use of ``rest_generate_preamble`` is list of column labels
with `stream` an iterable of lists of column values.
The output is guaranteed to contain one line of preamble which starts a
dictionary and an array ("``{key: [``"), one line of JSON rendering of
each object in `stream`, with the first line starting with exactly one
space and second and subsequent lines starting with a comma, and one
final trailer line consisting of "``]}``". Each line is generated as a
HTTP transfer chunk. This format is fixed so readers can be constructed
to read and parse the stream incrementally one line at a time,
facilitating maximum throughput processing of the response."""
def stream_chunked(self, stream, etag, preamble, trailer):
"""Generator for actually producing the output."""
comma = " "
try:
if preamble:
etag.update(preamble)
yield preamble
obj = None
try:
for obj in stream:
chunk = comma + json.dumps(obj) + "\n"
etag.update(chunk)
yield chunk
comma = ","
except cherrypy.HTTPError:
raise
except GeneratorExit:
etag.invalidate()
trailer = None
raise
except Exception as exp:
print("ERROR, json.dumps failed to serialize %s, type %s\nException: %s" \
% (obj, type(obj), str(exp)))
raise
finally:
if trailer:
etag.update(trailer)
yield trailer
cherrypy.response.headers["X-REST-Status"] = 100
except cherrypy.HTTPError:
raise
except RESTError as e:
etag.invalidate()
report_rest_error(e, format_exc(), False)
except Exception as e:
etag.invalidate()
report_rest_error(ExecutionError(), format_exc(), False)
def chunk_args(self, stream):
"""Return header and trailer needed to wrap `stream` as JSON reply."""
comma = ""
preamble = "{"
trailer = "]}\n"
if cherrypy.request.rest_generate_preamble:
desc = json.dumps(cherrypy.request.rest_generate_preamble)
preamble += '"desc": %s' % desc
comma = ", "
preamble += '%s"%s": [\n' % (comma, cherrypy.request.rest_generate_data)
return preamble, trailer
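The line-oriented wire format produced by `chunk_args()` plus `stream_chunked()` can be sketched as a single standalone generator; `json_stream_sketch` is illustrative, not the WMCore code, but it reproduces the documented layout — one preamble line opening a dict and array, one line per object (first prefixed with a space, the rest with a comma), and a final `]}` line:

```python
import json

def json_stream_sketch(rows, key="result", desc=None):
    comma = ""
    preamble = "{"
    if desc:
        preamble += '"desc": %s' % json.dumps(desc)
        comma = ", "
    yield preamble + '%s"%s": [\n' % (comma, key)
    sep = " "
    for obj in rows:
        yield sep + json.dumps(obj) + "\n"
        sep = ","
    yield "]}\n"

body = "".join(json_stream_sketch([{"a": 1}, {"a": 2}]))
# The chunks concatenate to a single valid JSON document.
parsed = json.loads(body)
```

A reader can therefore parse the stream incrementally one line at a time, or simply buffer and `json.loads` the whole body.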
class PrettyJSONFormat(JSONFormat):
""" Format used for human, (web browser)"""
def stream_chunked(self, stream, etag, preamble, trailer):
"""Generator for actually producing the output."""
comma = " "
try:
if preamble:
etag.update(preamble)
yield preamble
try:
for obj in stream:
chunk = comma + json.dumps(obj, indent=2)
etag.update(chunk)
yield chunk
comma = ","
except GeneratorExit:
etag.invalidate()
trailer = None
raise
finally:
if trailer:
etag.update(trailer)
yield trailer
cherrypy.response.headers["X-REST-Status"] = 100
except RESTError as e:
etag.invalidate()
report_rest_error(e, format_exc(), False)
except Exception as e:
etag.invalidate()
report_rest_error(ExecutionError(), format_exc(), False)
class PrettyJSONHTMLFormat(PrettyJSONFormat):
""" Format used for human, (web browser) wrap around html tag on json"""
@staticmethod
def format_obj(obj):
"""Render an object `obj` into HTML."""
if isinstance(obj, type(None)):
result = ""
elif isinstance(obj, str):
obj = xml.sax.saxutils.quoteattr(obj)
result = "<pre>%s</pre>" % obj if '\n' in obj else obj
elif isinstance(obj, bytes):
obj = xml.sax.saxutils.quoteattr(str(obj, "utf-8"))
result = "<pre>%s</pre>" % obj if '\n' in obj else obj
elif isinstance(obj, (int, float, bool)):
result = "%s" % obj
elif isinstance(obj, dict):
result = "<ul>"
for k, v in viewitems(obj):
result += "<li><b>%s</b>: %s</li>" % (k, PrettyJSONHTMLFormat.format_obj(v))
result += "</ul>"
elif is_iterable(obj):
empty = True
result = "<details open><ul>"
for v in obj:
empty = False
result += "<li>%s</li>" % PrettyJSONHTMLFormat.format_obj(v)
result += "</ul></details>"
if empty:
result = ""
else:
cherrypy.log("cannot represent object of type %s in xml (%s)"
% (type(obj).__class__.__name__, repr(obj)))
raise ExecutionError("cannot represent object in xml")
return result
def stream_chunked(self, stream, etag, preamble, trailer):
"""Generator for actually producing the output."""
try:
etag.update(preamble)
yield preamble
try:
for obj in stream:
chunk = PrettyJSONHTMLFormat.format_obj(obj)
etag.update(chunk)
yield chunk
except GeneratorExit:
etag.invalidate()
trailer = None
raise
finally:
if trailer:
etag.update(trailer)
yield trailer
except RESTError as e:
etag.invalidate()
report_rest_error(e, format_exc(), False)
except Exception as e:
etag.invalidate()
report_rest_error(ExecutionError(), format_exc(), False)
def chunk_args(self, stream):
"""Return header and trailer needed to wrap `stream` as XML reply."""
preamble = "<html><body>"
trailer = "</body></html>"
return preamble, trailer
class RawFormat(RESTFormat):
"""Format an iterable of objects as raw data.
Generates raw data completely unmodified, for example image data or
streaming arbitrary external data files including even plain text.
Computes an ETag on the output in the process. The result is always
chunked, even simple strings on input. Usually small enough responses
will automatically be converted back to a single string response post
compression and ETag processing.
Any exceptions raised by input stream are reported to `report_rest_error`
and swallowed, as this is normally used to generate output for CherryPy
responses, which cannot handle exceptions reasonably after the output
generation begins; later processing may reconvert those back to exceptions
however (cf. stream_maybe_etag()). A X-REST-Status trailer header is added
if (and only if) an exception occurs; the client must inspect that to find
out if it got the complete output. There is normally 'X-REST-Status: 100'
in normal response headers, and it remains valid in case of success.
No ETag header is generated in case of an exception."""
def stream_chunked(self, stream, etag):
"""Generator for actually producing the output."""
try:
for chunk in stream:
etag.update(chunk)
yield chunk
except RESTError as e:
etag.invalidate()
report_rest_error(e, format_exc(), False)
except Exception as e:
etag.invalidate()
report_rest_error(ExecutionError(), format_exc(), False)
except BaseException:
etag.invalidate()
raise
class DigestETag(object):
"""Compute hash digest over contents for ETag header."""
algorithm = None
def __init__(self, algorithm=None):
"""Prepare ETag computer."""
self.digest = hashlib.new(algorithm or self.algorithm)
def update(self, val):
"""Process response data `val`."""
if self.digest:
self.digest.update(encodeUnicodeToBytes(val))
def value(self):
"""Return ETag header value for current input."""
return self.digest and '"%s"' % self.digest.hexdigest()
def invalidate(self):
"""Invalidate the ETag calculator so value() will return None."""
self.digest = None
class MD5ETag(DigestETag):
"""Compute MD5 hash over contents for ETag header."""
algorithm = 'md5'
class SHA1ETag(DigestETag):
"""Compute SHA1 hash over contents for ETag header."""
algorithm = 'sha1'
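The ETag helpers above boil down to incremental hashing with an invalidation switch, so a partially generated (failed) response never publishes an ETag. A standalone sketch of that contract (`DigestETagSketch` is illustrative):

```python
import hashlib

class DigestETagSketch(object):
    """Feed chunks, read a quoted hex digest, or invalidate."""
    def __init__(self, algorithm="sha1"):
        self.digest = hashlib.new(algorithm)
    def update(self, chunk):
        if self.digest:
            self.digest.update(chunk)
    def value(self):
        # Falsy (None) once invalidated, quoted hex digest otherwise.
        return self.digest and '"%s"' % self.digest.hexdigest()
    def invalidate(self):
        self.digest = None

etag = DigestETagSketch()
etag.update(b"hello ")
etag.update(b"world")
# Incremental hashing matches hashing the whole body at once.
assert etag.value() == '"%s"' % hashlib.sha1(b"hello world").hexdigest()
```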
def _stream_compress_identity(reply, *args):
"""Streaming compressor which returns original data unchanged."""
return reply
def _stream_compress_deflate(reply, compress_level, max_chunk):
"""Streaming compressor for the 'deflate' method. Generates output that
is guaranteed to expand at the exact same chunk boundaries as original
reply stream."""
# Create zlib compression object, with raw data stream (negative window size)
z = zlib.compressobj(compress_level, zlib.DEFLATED, -zlib.MAX_WBITS,
zlib.DEF_MEM_LEVEL, 0)
# Data pending compression. We only take entire chunks from original
# reply. Then process reply one chunk at a time. Whenever we have enough
# data to compress, spit it out flushing the zlib engine entirely, so we
# respect original chunk boundaries.
npending = 0
pending = []
for chunk in reply:
pending.append(chunk)
npending += len(chunk)
if npending >= max_chunk:
part = z.compress(encodeUnicodeToBytes("".join(pending))) + z.flush(zlib.Z_FULL_FLUSH)
pending = []
npending = 0
yield part
# Crank the compressor one more time for remaining output.
if npending:
yield z.compress(encodeUnicodeToBytes("".join(pending))) + z.flush(zlib.Z_FINISH)
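The key property of the deflate compressor above is that a `Z_FULL_FLUSH` at each emitted boundary keeps the concatenated chunks a single valid raw-deflate stream that expands at the same boundaries. A standalone sketch demonstrating the round trip (`deflate_chunks_sketch` is illustrative, not the function above):

```python
import zlib

def deflate_chunks_sketch(chunks, level=6, max_chunk=16):
    # Raw deflate stream (negative wbits), fully flushed at each
    # emitted boundary so chunk boundaries survive compression.
    z = zlib.compressobj(level, zlib.DEFLATED, -zlib.MAX_WBITS)
    pending, npending = [], 0
    for chunk in chunks:
        pending.append(chunk)
        npending += len(chunk)
        if npending >= max_chunk:
            yield z.compress(b"".join(pending)) + z.flush(zlib.Z_FULL_FLUSH)
            pending, npending = [], 0
    # Finish the stream with whatever is left over.
    if npending:
        yield z.compress(b"".join(pending)) + z.flush(zlib.Z_FINISH)

data = [b"aaaa" * 8, b"tail"]
compressed = b"".join(deflate_chunks_sketch(data))
# The concatenated chunks form one valid raw-deflate stream.
assert zlib.decompress(compressed, -zlib.MAX_WBITS) == b"".join(data)
```

As in the original, the terminating `Z_FINISH` is only emitted when data remains pending after the last boundary flush.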
def _stream_compress_gzip(reply, compress_level, *args):
"""Streaming compressor for the 'gzip' method. Generates output that
is guaranteed to expand at the exact same chunk boundaries as original
reply stream."""
data = []
for chunk in reply:
data.append(chunk)
if data:
yield gzip.compress(encodeUnicodeToBytes("".join(data)), compress_level)
#: Stream compression methods.
_stream_compressor = {
'identity': _stream_compress_identity,
'deflate': _stream_compress_deflate,
'gzip': _stream_compress_gzip
}
def stream_compress(reply, available, compress_level, max_chunk):
"""If compression has been requested via Accept-Encoding request header,
and is granted for this response via `available` compression methods,
convert the streaming `reply` into another streaming response which is
compressed at the exact chunk boundaries of the original response,
except that individual chunks may be coalesced up to `max_chunk` size.
    The `compress_level` tells how hard to compress; zero disables the
    compression entirely."""
global _stream_compressor
for enc in cherrypy.request.headers.elements('Accept-Encoding'):
if enc.value not in available:
continue
elif enc.value in _stream_compressor and compress_level > 0:
# Add 'Vary' header for 'Accept-Encoding'.
vary_by('Accept-Encoding')
# Compress contents at original chunk boundaries.
if 'Content-Length' in cherrypy.response.headers:
del cherrypy.response.headers['Content-Length']
cherrypy.response.headers['Content-Encoding'] = enc.value
return _stream_compressor[enc.value](reply, compress_level, max_chunk)
return reply
def _etag_match(status, etagval, match, nomatch):
"""Match ETag value against any If-Match / If-None-Match headers."""
# Execute conditions only for status 2xx. We only handle GET/HEAD
# requests here, it makes no sense to try to do this for PUT etc.
# as they need to be handled as request pre-condition, not in the
# streaming out part here.
if cherrypy.request.method in ('GET', 'HEAD'):
status, dummyReason, dummyMsg = httputil.valid_status(status)
if status >= 200 and status <= 299:
if match and ("*" in match or etagval in match):
raise cherrypy.HTTPError(412, "Precondition on ETag %s failed" % etagval)
if nomatch and ("*" in nomatch or etagval in nomatch):
raise cherrypy.HTTPRedirect([], 304)
def _etag_tail(head, tail, etag):
"""Generator which first returns anything in `head`, then `tail`.
Sets ETag header at the end to value of `etag` if it's defined and
yields a value."""
for chunk in head:
yield encodeUnicodeToBytes(chunk)
for chunk in tail:
yield encodeUnicodeToBytes(chunk)
etagval = (etag and etag.value())
if etagval:
cherrypy.response.headers["ETag"] = etagval
def stream_maybe_etag(size_limit, etag, reply):
"""Maybe generate ETag header for the response, and handle If-Match
and If-None-Match request headers. Consumes the reply until at most
`size_limit` bytes. If the response fits into that size, adds the
ETag header and matches it against any If-Match / If-None-Match
request headers and replies appropriately.
If the response is fully buffered, and the `reply` generator actually
results in an error and sets X-Error-HTTP / X-Error-Detail headers,
converts that error back into a real HTTP error response. Otherwise
responds with the fully buffered body directly, without generator
and chunking. In other words, responses smaller than `size_limit`
are always fully buffered and replied immediately without chunking.
If the response is not fully buffered, it's guaranteed to be output
at original chunk boundaries.
Note that if this function is fed the output from `stream_compress()`
as it normally would be, the `size_limit` constrains the compressed
size, and chunk boundaries correspond to compressed chunks."""
req = cherrypy.request
res = cherrypy.response
match = [str(x) for x in (req.headers.elements('If-Match') or [])]
nomatch = [str(x) for x in (req.headers.elements('If-None-Match') or [])]
# If ETag is already set, match conditions and output without buffering.
etagval = res.headers.get('ETag', None)
if etagval:
_etag_match(res.status or 200, etagval, match, nomatch)
res.headers['Trailer'] = 'X-REST-Status'
return _etag_tail([], reply, None)
    # Buffer up to size_limit bytes internally. This builds up the
# ETag value inside 'etag'. In case of exceptions the ETag invalidates.
# If we exceed the limit, fall back to streaming without checking ETag
# against If-Match/If-None-Match. We'll still set the ETag in the trailer
# headers, so clients which understand trailers will get the value; most
# clients including browsers will ignore them.
size = 0
result = []
for chunk in reply:
result.append(chunk)
size += len(chunk)
if size > size_limit:
res.headers['Trailer'] = 'X-REST-Status'
return _etag_tail(result, reply, etag)
# We've buffered the entire response, but it may be an error reply. The
# generator code does not know if it's allowed to raise exceptions, so
# it swallows all errors and converts them into X-* headers. We recover
# the original HTTP response code and message from X-Error-{HTTP,Detail}
# headers, if any are present.
err = res.headers.get('X-Error-HTTP', None)
if err:
message = res.headers.get('X-Error-Detail', 'Original error lost')
raise cherrypy.HTTPError(int(err), message)
# OK, we buffered the entire reply and it's ok. Check ETag match criteria.
# The original stream generator must guarantee that if it fails it resets
# the 'etag' value, even if the error handlers above didn't run.
etagval = etag.value()
if etagval:
res.headers['ETag'] = etagval
_etag_match(res.status or 200, etagval, match, nomatch)
# OK, respond with the buffered reply as a plain string.
res.headers['Content-Length'] = size
# TODO investigate why `result` is a list of bytes strings in py3
# The current solution seems to work in both py2 and py3
resp = b"" if PY3 else ""
for item in result:
resp += encodeUnicodeToBytesConditional(item, condition=PY3)
assert len(resp) == size
    return resp
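The buffer-or-stream decision at the heart of `stream_maybe_etag` can be sketched without the cherrypy and ETag machinery; `buffer_or_stream_sketch` is an illustrative stand-in. Small replies come back as one bytes object, large ones as a generator chaining the buffered prefix with the untouched remainder:

```python
def buffer_or_stream_sketch(reply, size_limit):
    # Consume chunks until size_limit is exceeded; if it never is,
    # the whole reply fits and is returned as a single bytes value.
    size, buffered = 0, []
    for chunk in reply:
        buffered.append(chunk)
        size += len(chunk)
        if size > size_limit:
            def stream():
                # Replay what was buffered, then the rest of the iterator.
                for c in buffered:
                    yield c
                for c in reply:
                    yield c
            return stream()
    return b"".join(buffered)

small = buffer_or_stream_sketch(iter([b"ab", b"cd"]), 10)
assert small == b"abcd"
```

The real function additionally computes the ETag over the buffered prefix and matches it against `If-Match` / `If-None-Match` only in the fully buffered case.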
from Utils.Utilities import encodeUnicodeToBytes
from future.utils import viewitems, viewvalues, listitems
import os, hmac, hashlib, cherrypy
from tempfile import NamedTemporaryFile
from Utils.PythonVersion import PY3
from WMCore.REST.Main import RESTMain
from WMCore.REST.Auth import authz_canonical
from WMCore.Configuration import Configuration
def fake_authz_headers(hmac_key, method = 'HNLogin',
login='testuser', name='Test User',
dn="/test/dn", roles={}, format="list"):
"""Create fake authentication and authorisation headers compatible
with the CMSWEB front-ends. Assumes you have the HMAC signing key
the back-end will use to validate the headers.
:arg str hmac_key: binary key data for signing headers.
:arg str method: authentication method, one of X509Cert, X509Proxy,
HNLogin, HostIP, AUCookie or None.
:arg str login: account login name.
:arg str name: account user name.
:arg str dn: account X509 subject.
:arg dict roles: role dictionary, each role with 'site' and 'group' lists.
:returns: list of header name, value tuples to add to a HTTP request."""
headers = { 'cms-auth-status': 'OK', 'cms-authn-method': method }
if login:
headers['cms-authn-login'] = login
if name:
headers['cms-authn-name'] = name
if dn:
headers['cms-authn-dn'] = dn
    for role_name, role in viewitems(roles):
        hname = 'cms-authz-' + authz_canonical(role_name)
        headers[hname] = []
        for r in 'site', 'group':
            if r in role:
                headers[hname].extend(["%s:%s" % (r, authz_canonical(v)) for v in role[r]])
        headers[hname] = " ".join(headers[hname])
prefix = suffix = ""
    for hk in sorted(headers):
if hk != 'cms-auth-status':
prefix += "h%xv%x" % (len(hk), len(headers[hk]))
suffix += "%s%s" % (hk, headers[hk])
msg = prefix + "#" + suffix
if PY3:
hmac_key = encodeUnicodeToBytes(hmac_key)
msg = encodeUnicodeToBytes(msg)
cksum = hmac.new(hmac_key, msg, hashlib.sha1).hexdigest()
headers['cms-authn-hmac'] = cksum
if format == "list":
return listitems(headers)
else:
return headers
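The header-signing scheme implemented above can be exercised on its own. The sketch below mirrors the prefix/suffix construction and HMAC-SHA1 checksum; the key and header values are made-up test data, not real CMSWEB credentials:

```python
import hmac
import hashlib

def sign_headers(hmac_key, headers):
    # Mirror the scheme above: every header except 'cms-auth-status'
    # contributes "h<len(name)>v<len(value)>" (hex) to the prefix and
    # "<name><value>" to the suffix, iterated in sorted header order.
    prefix = suffix = ""
    for hk in sorted(headers):
        if hk != 'cms-auth-status':
            prefix += "h%xv%x" % (len(hk), len(headers[hk]))
            suffix += "%s%s" % (hk, headers[hk])
    msg = (prefix + "#" + suffix).encode('utf-8')
    return hmac.new(hmac_key, msg, hashlib.sha1).hexdigest()

key = b'0123456789abcdef0123'          # made-up 20-byte signing key
hdrs = {'cms-auth-status': 'OK',
        'cms-authn-method': 'HNLogin',
        'cms-authn-login': 'testuser'}
cksum = sign_headers(key, hdrs)
assert len(cksum) == 40  # SHA-1 hex digest
```

Sorting the header names makes the checksum independent of insertion order, which is what lets the back-end recompute the digest from the received headers and compare.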
def fake_authz_key_file(delete=True):
"""Create temporary file for fake authorisation hmac signing key.
:returns: Instance of :class:`~.NamedTemporaryFile`, whose *data*
attribute contains the HMAC signing binary key."""
t = NamedTemporaryFile(delete=delete)
with open("/dev/urandom", "rb") as fd:
t.data = fd.read(20)
t.write(t.data)
t.seek(0)
return t
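A self-contained sketch of the key-file helper, using `os.urandom` as a portable stand-in for opening `/dev/urandom` directly:

```python
import os
from tempfile import NamedTemporaryFile

def make_key_file():
    # As in fake_authz_key_file above: a temp file holding 20 random
    # bytes, with the raw key also kept on a `data` attribute so tests
    # can sign headers without re-reading the file.
    t = NamedTemporaryFile(delete=True)
    t.data = os.urandom(20)
    t.write(t.data)
    t.seek(0)
    return t

key_file = make_key_file()
assert key_file.read() == key_file.data
key_file.seek(0)
```

Keeping the key both on disk and in memory is what lets the server read it from `key_file.name` while the test code signs fake headers with `key_file.data`.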
def setup_dummy_server(module_name, class_name, app_name=None, authz_key_file=None, port=8888):
    """Helper function to set up a :class:`~.RESTMain` server from a given
    module and class. Creates a fake server configuration and instantiates
    the server application from it.
    :arg str module_name: module from which to import the test class.
    :arg str class_name: name of the server test class.
    :arg str app_name: optional test application name, 'test' by default.
    :arg authz_key_file: optional key file object as returned by
      :func:`fake_authz_key_file`; a new one is created if not given.
    :arg int port: TCP port the server listens on, 8888 by default.
    :returns: tuple with the server object and authz hmac signing key."""
if authz_key_file:
test_authz_key = authz_key_file
else:
test_authz_key = fake_authz_key_file()
cfg = Configuration()
main = cfg.section_('main')
main.application = app_name or 'test'
main.silent = True
main.index = 'top'
main.authz_defaults = { 'role': None, 'group': None, 'site': None }
main.section_('tools').section_('cms_auth').key_file = test_authz_key.name
app = cfg.section_(app_name or 'test')
app.admin = 'dada@example.org'
app.description = app.title = 'Test'
views = cfg.section_('views')
top = views.section_('top')
top.object = module_name + "." + class_name
server = RESTMain(cfg, os.getcwd())
server.validate_config()
server.setup_server()
server.install_application()
cherrypy.config.update({'server.socket_port': port})
cherrypy.config.update({'server.socket_host': '127.0.0.1'})
cherrypy.config.update({'request.show_tracebacks': True})
cherrypy.config.update({'environment': 'test_suite'})
for app in viewvalues(cherrypy.tree.apps):
if '/' in app.config:
app.config["/"]["request.show_tracebacks"] = True
    return server, test_authz_key

# --- End of src/python/WMCore/REST/Test.py (from reqmgr2-2.2.4rc1 on PyPI) ---