| repository_name | func_path_in_repository | func_name | language | func_code_string | func_documentation_string | split_name | func_code_url | called_functions | enclosing_scope |
|---|---|---|---|---|---|---|---|---|---|
jwodder/javaproperties | javaproperties/propfile.py | PropertiesFile.dump | python | def dump(self, fp, separator='='):
### TODO: Support setting the timestamp
for line in six.itervalues(self._lines):
if line.source is None:
print(join_key_value(line.key, line.value, separator), file=fp)
else:
fp.write(line.source) | Write the mapping to a file in simple line-oriented ``.properties``
format.
If the instance was originally created from a file or string with
`PropertiesFile.load()` or `PropertiesFile.loads()`, then the output
will include the comments and whitespace from the original input, and
any keys that haven't been deleted or reassigned will retain their
original formatting and multiplicity. Key-value pairs that have been
modified or added to the mapping will be reformatted with
`join_key_value()` using the given separator. All key-value pairs are
output in the order they were defined, with new keys added to the end.
.. note::
Serializing a `PropertiesFile` instance with the :func:`dump()`
function instead will cause all formatting information to be
ignored, as :func:`dump()` will treat the instance like a normal
mapping.
:param fp: A file-like object to write the mapping to. It must have
been opened as a text file with a Latin-1-compatible encoding.
:param separator: The string to use for separating new or modified keys
& values. Only ``" "``, ``"="``, and ``":"`` (possibly with added
whitespace) should ever be used as the separator.
:type separator: text string
:return: `None` | train | https://github.com/jwodder/javaproperties/blob/8b48f040305217ebeb80c98c4354691bbb01429b/javaproperties/propfile.py#L222-L256 | [
"def join_key_value(key, value, separator='='):\n r\"\"\"\n Join a key and value together into a single line suitable for adding to a\n simple line-oriented ``.properties`` file. No trailing newline is added.\n\n >>> join_key_value('possible separators', '= : space')\n 'possible\\\\ separators=\\\\=... | class PropertiesFile(MutableMapping):
"""
.. versionadded:: 0.3.0
A custom mapping class for reading from, editing, and writing to a
``.properties`` file while preserving comments & whitespace in the original
input.
A `PropertiesFile` instance can be constructed from another mapping and/or
iterable of pairs, after which it will act like an
`~collections.OrderedDict`. Alternatively, an instance can be constructed
from a file or string with `PropertiesFile.load()` or
`PropertiesFile.loads()`, and the resulting instance will remember the
formatting of its input and retain that formatting when written back to a
file or string with the `~PropertiesFile.dump()` or
`~PropertiesFile.dumps()` method. The formatting information attached to
an instance ``pf`` can be forgotten by constructing another mapping from it
via ``dict(pf)``, ``OrderedDict(pf)``, or even ``PropertiesFile(pf)`` (Use
the `copy()` method if you want to create another `PropertiesFile` instance
with the same data & formatting).
When not reading or writing, `PropertiesFile` behaves like a normal
`~collections.abc.MutableMapping` class (i.e., you can do ``props[key] =
value`` and so forth), except that (a) like `~collections.OrderedDict`, key
insertion order is remembered and is used when iterating & dumping (and
`reversed` is supported), and (b) like `Properties`, it may only be used to
store strings and will raise a `TypeError` if passed a non-string object as
key or value.
Two `PropertiesFile` instances compare equal iff both their key-value pairs
and comment & whitespace lines are equal and in the same order. When
comparing a `PropertiesFile` to any other type of mapping, only the
key-value pairs are considered, and order is ignored.
`PropertiesFile` currently only supports reading & writing the simple
line-oriented format, not XML.
"""
def __init__(self, mapping=None, **kwargs):
#: mapping from keys to list of line numbers
self._indices = OrderedDict()
#: mapping from line numbers to (key, value, source) tuples
self._lines = OrderedDict()
if mapping is not None:
self.update(mapping)
self.update(kwargs)
def _check(self):
"""
Assert the internal consistency of the instance's data structures.
This method is for debugging only.
"""
for k,ix in six.iteritems(self._indices):
assert k is not None, 'null key'
assert ix, 'Key does not map to any indices'
assert ix == sorted(ix), "Key's indices are not in order"
for i in ix:
assert i in self._lines, 'Key index does not map to line'
assert self._lines[i].key is not None, 'Key maps to comment'
assert self._lines[i].key == k, 'Key does not map to itself'
assert self._lines[i].value is not None, 'Key has null value'
prev = None
for i, line in six.iteritems(self._lines):
assert prev is None or prev < i, 'Line indices out of order'
prev = i
if line.key is None:
assert line.value is None, 'Comment/blank has value'
assert line.source is not None, 'Comment source not stored'
assert loads(line.source) == {}, 'Comment source is not comment'
else:
assert line.value is not None, 'Key has null value'
if line.source is not None:
assert loads(line.source) == {line.key: line.value}, \
'Key source does not deserialize to itself'
assert line.key in self._indices, 'Key is missing from map'
assert i in self._indices[line.key], \
'Key does not map to itself'
def __getitem__(self, key):
if not isinstance(key, six.string_types):
raise TypeError(_type_err)
return self._lines[self._indices[key][-1]].value
def __setitem__(self, key, value):
if not isinstance(key, six.string_types) or \
not isinstance(value, six.string_types):
raise TypeError(_type_err)
try:
ixes = self._indices[key]
except KeyError:
try:
lasti = next(reversed(self._lines))
except StopIteration:
ix = 0
else:
ix = lasti + 1
# We're adding a line to the end of the file, so make sure the
# line before it ends with a newline and (if it's not a
# comment) doesn't end with a trailing line continuation.
lastline = self._lines[lasti]
if lastline.source is not None:
lastsrc = lastline.source
if lastline.key is not None:
lastsrc=re.sub(r'(?<!\\)((?:\\\\)*)\\$', r'\1', lastsrc)
if not lastsrc.endswith(('\r', '\n')):
lastsrc += '\n'
self._lines[lasti] = lastline._replace(source=lastsrc)
else:
# Update the first occurrence of the key and discard the rest.
# This way, the order in which the keys are listed in the file and
# dict will be preserved.
ix = ixes.pop(0)
for i in ixes:
del self._lines[i]
self._indices[key] = [ix]
self._lines[ix] = PropertyLine(key, value, None)
def __delitem__(self, key):
if not isinstance(key, six.string_types):
raise TypeError(_type_err)
for i in self._indices.pop(key):
del self._lines[i]
def __iter__(self):
return iter(self._indices)
def __reversed__(self):
return reversed(self._indices)
def __len__(self):
return len(self._indices)
def _comparable(self):
return [
(None, line.source) if line.key is None else (line.key, line.value)
for i, line in six.iteritems(self._lines)
### TODO: Also include non-final repeated keys???
if line.key is None or self._indices[line.key][-1] == i
]
def __eq__(self, other):
if isinstance(other, PropertiesFile):
return self._comparable() == other._comparable()
### TODO: Special-case OrderedDict?
elif isinstance(other, Mapping):
return dict(self) == other
else:
return NotImplemented
def __ne__(self, other):
return not (self == other)
@classmethod
def load(cls, fp):
"""
Parse the contents of the `~io.IOBase.readline`-supporting file-like
object ``fp`` as a simple line-oriented ``.properties`` file and return
a `PropertiesFile` instance.
``fp`` may be either a text or binary filehandle, with or without
universal newlines enabled. If it is a binary filehandle, its contents
are decoded as Latin-1.
.. versionchanged:: 0.5.0
Invalid ``\\uXXXX`` escape sequences will now cause an
`InvalidUEscapeError` to be raised
:param fp: the file from which to read the ``.properties`` document
:type fp: file-like object
:rtype: PropertiesFile
:raises InvalidUEscapeError: if an invalid ``\\uXXXX`` escape sequence
occurs in the input
"""
obj = cls()
for i, (k, v, src) in enumerate(parse(fp)):
if k is not None:
obj._indices.setdefault(k, []).append(i)
obj._lines[i] = PropertyLine(k, v, src)
return obj
@classmethod
def loads(cls, s):
"""
Parse the contents of the string ``s`` as a simple line-oriented
``.properties`` file and return a `PropertiesFile` instance.
``s`` may be either a text string or bytes string. If it is a bytes
string, its contents are decoded as Latin-1.
.. versionchanged:: 0.5.0
Invalid ``\\uXXXX`` escape sequences will now cause an
`InvalidUEscapeError` to be raised
:param string s: the string from which to read the ``.properties``
document
:rtype: PropertiesFile
:raises InvalidUEscapeError: if an invalid ``\\uXXXX`` escape sequence
occurs in the input
"""
if isinstance(s, six.binary_type):
fp = six.BytesIO(s)
else:
fp = six.StringIO(s)
return cls.load(fp)
def dumps(self, separator='='):
"""
Convert the mapping to a text string in simple line-oriented
``.properties`` format.
If the instance was originally created from a file or string with
`PropertiesFile.load()` or `PropertiesFile.loads()`, then the output
will include the comments and whitespace from the original input, and
any keys that haven't been deleted or reassigned will retain their
original formatting and multiplicity. Key-value pairs that have been
modified or added to the mapping will be reformatted with
`join_key_value()` using the given separator. All key-value pairs are
output in the order they were defined, with new keys added to the end.
.. note::
Serializing a `PropertiesFile` instance with the :func:`dumps()`
function instead will cause all formatting information to be
ignored, as :func:`dumps()` will treat the instance like a normal
mapping.
:param separator: The string to use for separating new or modified keys
& values. Only ``" "``, ``"="``, and ``":"`` (possibly with added
whitespace) should ever be used as the separator.
:type separator: text string
:rtype: text string
"""
s = six.StringIO()
self.dump(s, separator=separator)
return s.getvalue()
def copy(self):
""" Create a copy of the mapping, including formatting information """
dup = type(self)()
dup._indices = OrderedDict(
(k, list(v)) for k,v in six.iteritems(self._indices)
)
dup._lines = self._lines.copy()
return dup
|
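The `dump()` method above writes each stored line either verbatim (when its original `source` text is still attached) or reformatted (when the pair was added or modified, so `source` is `None`). Here is a minimal standalone sketch of that loop using only the stdlib; `PropertyLine` mirrors the `(key, value, source)` tuples from the row, while `dump_lines` and the inline `key + separator + value` join are hypothetical stand-ins (the real code calls the escaping helper `join_key_value`).

```python
import io
from collections import OrderedDict, namedtuple

# Mirrors the (key, value, source) triples PropertiesFile stores per line.
PropertyLine = namedtuple('PropertyLine', 'key value source')

def dump_lines(lines, fp, separator='='):
    """Sketch of the source-preserving write loop in PropertiesFile.dump()."""
    for line in lines.values():
        if line.source is None:
            # Added/modified pair: reformat with the separator.
            # (The real code escapes via join_key_value; omitted here.)
            print(line.key + separator + line.value, file=fp)
        else:
            # Untouched line: emit the original text, comments and all.
            fp.write(line.source)

lines = OrderedDict([
    (0, PropertyLine(None, None, '# original comment\n')),   # comment line
    (1, PropertyLine('kept', 'x', 'kept   =   x\n')),        # odd spacing kept
    (2, PropertyLine('added', 'y', None)),                   # gets reformatted
])
out = io.StringIO()
dump_lines(lines, out)
print(out.getvalue())
# → '# original comment\nkept   =   x\nadded=y\n'
```

Note how the untouched pair keeps its unusual whitespace while the new pair is normalized to `key=value`, which is exactly the asymmetry the docstring describes.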
jwodder/javaproperties | javaproperties/propfile.py | PropertiesFile.dumps | python | def dumps(self, separator='='):
s = six.StringIO()
self.dump(s, separator=separator)
return s.getvalue() | Convert the mapping to a text string in simple line-oriented
``.properties`` format.
If the instance was originally created from a file or string with
`PropertiesFile.load()` or `PropertiesFile.loads()`, then the output
will include the comments and whitespace from the original input, and
any keys that haven't been deleted or reassigned will retain their
original formatting and multiplicity. Key-value pairs that have been
modified or added to the mapping will be reformatted with
`join_key_value()` using the given separator. All key-value pairs are
output in the order they were defined, with new keys added to the end.
.. note::
Serializing a `PropertiesFile` instance with the :func:`dumps()`
function instead will cause all formatting information to be
ignored, as :func:`dumps()` will treat the instance like a normal
mapping.
:param separator: The string to use for separating new or modified keys
& values. Only ``" "``, ``"="``, and ``":"`` (possibly with added
whitespace) should ever be used as the separator.
:type separator: text string
:rtype: text string | train | https://github.com/jwodder/javaproperties/blob/8b48f040305217ebeb80c98c4354691bbb01429b/javaproperties/propfile.py#L258-L287 | [
"def dump(self, fp, separator='='):\n \"\"\"\n Write the mapping to a file in simple line-oriented ``.properties``\n format.\n\n If the instance was originally created from a file or string with\n `PropertiesFile.load()` or `PropertiesFile.loads()`, then the output\n will include the comments and ... | class PropertiesFile(MutableMapping):
"""
.. versionadded:: 0.3.0
A custom mapping class for reading from, editing, and writing to a
``.properties`` file while preserving comments & whitespace in the original
input.
A `PropertiesFile` instance can be constructed from another mapping and/or
iterable of pairs, after which it will act like an
`~collections.OrderedDict`. Alternatively, an instance can be constructed
from a file or string with `PropertiesFile.load()` or
`PropertiesFile.loads()`, and the resulting instance will remember the
formatting of its input and retain that formatting when written back to a
file or string with the `~PropertiesFile.dump()` or
`~PropertiesFile.dumps()` method. The formatting information attached to
an instance ``pf`` can be forgotten by constructing another mapping from it
via ``dict(pf)``, ``OrderedDict(pf)``, or even ``PropertiesFile(pf)`` (Use
the `copy()` method if you want to create another `PropertiesFile` instance
with the same data & formatting).
When not reading or writing, `PropertiesFile` behaves like a normal
`~collections.abc.MutableMapping` class (i.e., you can do ``props[key] =
value`` and so forth), except that (a) like `~collections.OrderedDict`, key
insertion order is remembered and is used when iterating & dumping (and
`reversed` is supported), and (b) like `Properties`, it may only be used to
store strings and will raise a `TypeError` if passed a non-string object as
key or value.
Two `PropertiesFile` instances compare equal iff both their key-value pairs
and comment & whitespace lines are equal and in the same order. When
comparing a `PropertiesFile` to any other type of mapping, only the
key-value pairs are considered, and order is ignored.
`PropertiesFile` currently only supports reading & writing the simple
line-oriented format, not XML.
"""
def __init__(self, mapping=None, **kwargs):
#: mapping from keys to list of line numbers
self._indices = OrderedDict()
#: mapping from line numbers to (key, value, source) tuples
self._lines = OrderedDict()
if mapping is not None:
self.update(mapping)
self.update(kwargs)
def _check(self):
"""
Assert the internal consistency of the instance's data structures.
This method is for debugging only.
"""
for k,ix in six.iteritems(self._indices):
assert k is not None, 'null key'
assert ix, 'Key does not map to any indices'
assert ix == sorted(ix), "Key's indices are not in order"
for i in ix:
assert i in self._lines, 'Key index does not map to line'
assert self._lines[i].key is not None, 'Key maps to comment'
assert self._lines[i].key == k, 'Key does not map to itself'
assert self._lines[i].value is not None, 'Key has null value'
prev = None
for i, line in six.iteritems(self._lines):
assert prev is None or prev < i, 'Line indices out of order'
prev = i
if line.key is None:
assert line.value is None, 'Comment/blank has value'
assert line.source is not None, 'Comment source not stored'
assert loads(line.source) == {}, 'Comment source is not comment'
else:
assert line.value is not None, 'Key has null value'
if line.source is not None:
assert loads(line.source) == {line.key: line.value}, \
'Key source does not deserialize to itself'
assert line.key in self._indices, 'Key is missing from map'
assert i in self._indices[line.key], \
'Key does not map to itself'
def __getitem__(self, key):
if not isinstance(key, six.string_types):
raise TypeError(_type_err)
return self._lines[self._indices[key][-1]].value
def __setitem__(self, key, value):
if not isinstance(key, six.string_types) or \
not isinstance(value, six.string_types):
raise TypeError(_type_err)
try:
ixes = self._indices[key]
except KeyError:
try:
lasti = next(reversed(self._lines))
except StopIteration:
ix = 0
else:
ix = lasti + 1
# We're adding a line to the end of the file, so make sure the
# line before it ends with a newline and (if it's not a
# comment) doesn't end with a trailing line continuation.
lastline = self._lines[lasti]
if lastline.source is not None:
lastsrc = lastline.source
if lastline.key is not None:
lastsrc=re.sub(r'(?<!\\)((?:\\\\)*)\\$', r'\1', lastsrc)
if not lastsrc.endswith(('\r', '\n')):
lastsrc += '\n'
self._lines[lasti] = lastline._replace(source=lastsrc)
else:
# Update the first occurrence of the key and discard the rest.
# This way, the order in which the keys are listed in the file and
# dict will be preserved.
ix = ixes.pop(0)
for i in ixes:
del self._lines[i]
self._indices[key] = [ix]
self._lines[ix] = PropertyLine(key, value, None)
def __delitem__(self, key):
if not isinstance(key, six.string_types):
raise TypeError(_type_err)
for i in self._indices.pop(key):
del self._lines[i]
def __iter__(self):
return iter(self._indices)
def __reversed__(self):
return reversed(self._indices)
def __len__(self):
return len(self._indices)
def _comparable(self):
return [
(None, line.source) if line.key is None else (line.key, line.value)
for i, line in six.iteritems(self._lines)
### TODO: Also include non-final repeated keys???
if line.key is None or self._indices[line.key][-1] == i
]
def __eq__(self, other):
if isinstance(other, PropertiesFile):
return self._comparable() == other._comparable()
### TODO: Special-case OrderedDict?
elif isinstance(other, Mapping):
return dict(self) == other
else:
return NotImplemented
def __ne__(self, other):
return not (self == other)
@classmethod
def load(cls, fp):
"""
Parse the contents of the `~io.IOBase.readline`-supporting file-like
object ``fp`` as a simple line-oriented ``.properties`` file and return
a `PropertiesFile` instance.
``fp`` may be either a text or binary filehandle, with or without
universal newlines enabled. If it is a binary filehandle, its contents
are decoded as Latin-1.
.. versionchanged:: 0.5.0
Invalid ``\\uXXXX`` escape sequences will now cause an
`InvalidUEscapeError` to be raised
:param fp: the file from which to read the ``.properties`` document
:type fp: file-like object
:rtype: PropertiesFile
:raises InvalidUEscapeError: if an invalid ``\\uXXXX`` escape sequence
occurs in the input
"""
obj = cls()
for i, (k, v, src) in enumerate(parse(fp)):
if k is not None:
obj._indices.setdefault(k, []).append(i)
obj._lines[i] = PropertyLine(k, v, src)
return obj
@classmethod
def loads(cls, s):
"""
Parse the contents of the string ``s`` as a simple line-oriented
``.properties`` file and return a `PropertiesFile` instance.
``s`` may be either a text string or bytes string. If it is a bytes
string, its contents are decoded as Latin-1.
.. versionchanged:: 0.5.0
Invalid ``\\uXXXX`` escape sequences will now cause an
`InvalidUEscapeError` to be raised
:param string s: the string from which to read the ``.properties``
document
:rtype: PropertiesFile
:raises InvalidUEscapeError: if an invalid ``\\uXXXX`` escape sequence
occurs in the input
"""
if isinstance(s, six.binary_type):
fp = six.BytesIO(s)
else:
fp = six.StringIO(s)
return cls.load(fp)
def dump(self, fp, separator='='):
"""
Write the mapping to a file in simple line-oriented ``.properties``
format.
If the instance was originally created from a file or string with
`PropertiesFile.load()` or `PropertiesFile.loads()`, then the output
will include the comments and whitespace from the original input, and
any keys that haven't been deleted or reassigned will retain their
original formatting and multiplicity. Key-value pairs that have been
modified or added to the mapping will be reformatted with
`join_key_value()` using the given separator. All key-value pairs are
output in the order they were defined, with new keys added to the end.
.. note::
Serializing a `PropertiesFile` instance with the :func:`dump()`
function instead will cause all formatting information to be
ignored, as :func:`dump()` will treat the instance like a normal
mapping.
:param fp: A file-like object to write the mapping to. It must have
been opened as a text file with a Latin-1-compatible encoding.
:param separator: The string to use for separating new or modified keys
& values. Only ``" "``, ``"="``, and ``":"`` (possibly with added
whitespace) should ever be used as the separator.
:type separator: text string
:return: `None`
"""
### TODO: Support setting the timestamp
for line in six.itervalues(self._lines):
if line.source is None:
print(join_key_value(line.key, line.value, separator), file=fp)
else:
fp.write(line.source)
def copy(self):
""" Create a copy of the mapping, including formatting information """
dup = type(self)()
dup._indices = OrderedDict(
(k, list(v)) for k,v in six.iteritems(self._indices)
)
dup._lines = self._lines.copy()
return dup
|
jwodder/javaproperties | javaproperties/propfile.py | PropertiesFile.copy | python | def copy(self):
dup = type(self)()
dup._indices = OrderedDict(
(k, list(v)) for k,v in six.iteritems(self._indices)
)
dup._lines = self._lines.copy()
return dup | Create a copy of the mapping, including formatting information | train | https://github.com/jwodder/javaproperties/blob/8b48f040305217ebeb80c98c4354691bbb01429b/javaproperties/propfile.py#L289-L296 | null | class PropertiesFile(MutableMapping):
"""
.. versionadded:: 0.3.0
A custom mapping class for reading from, editing, and writing to a
``.properties`` file while preserving comments & whitespace in the original
input.
A `PropertiesFile` instance can be constructed from another mapping and/or
iterable of pairs, after which it will act like an
`~collections.OrderedDict`. Alternatively, an instance can be constructed
from a file or string with `PropertiesFile.load()` or
`PropertiesFile.loads()`, and the resulting instance will remember the
formatting of its input and retain that formatting when written back to a
file or string with the `~PropertiesFile.dump()` or
`~PropertiesFile.dumps()` method. The formatting information attached to
an instance ``pf`` can be forgotten by constructing another mapping from it
via ``dict(pf)``, ``OrderedDict(pf)``, or even ``PropertiesFile(pf)`` (Use
the `copy()` method if you want to create another `PropertiesFile` instance
with the same data & formatting).
When not reading or writing, `PropertiesFile` behaves like a normal
`~collections.abc.MutableMapping` class (i.e., you can do ``props[key] =
value`` and so forth), except that (a) like `~collections.OrderedDict`, key
insertion order is remembered and is used when iterating & dumping (and
`reversed` is supported), and (b) like `Properties`, it may only be used to
store strings and will raise a `TypeError` if passed a non-string object as
key or value.
Two `PropertiesFile` instances compare equal iff both their key-value pairs
and comment & whitespace lines are equal and in the same order. When
comparing a `PropertiesFile` to any other type of mapping, only the
key-value pairs are considered, and order is ignored.
`PropertiesFile` currently only supports reading & writing the simple
line-oriented format, not XML.
"""
def __init__(self, mapping=None, **kwargs):
#: mapping from keys to list of line numbers
self._indices = OrderedDict()
#: mapping from line numbers to (key, value, source) tuples
self._lines = OrderedDict()
if mapping is not None:
self.update(mapping)
self.update(kwargs)
def _check(self):
"""
Assert the internal consistency of the instance's data structures.
This method is for debugging only.
"""
for k,ix in six.iteritems(self._indices):
assert k is not None, 'null key'
assert ix, 'Key does not map to any indices'
assert ix == sorted(ix), "Key's indices are not in order"
for i in ix:
assert i in self._lines, 'Key index does not map to line'
assert self._lines[i].key is not None, 'Key maps to comment'
assert self._lines[i].key == k, 'Key does not map to itself'
assert self._lines[i].value is not None, 'Key has null value'
prev = None
for i, line in six.iteritems(self._lines):
assert prev is None or prev < i, 'Line indices out of order'
prev = i
if line.key is None:
assert line.value is None, 'Comment/blank has value'
assert line.source is not None, 'Comment source not stored'
assert loads(line.source) == {}, 'Comment source is not comment'
else:
assert line.value is not None, 'Key has null value'
if line.source is not None:
assert loads(line.source) == {line.key: line.value}, \
'Key source does not deserialize to itself'
assert line.key in self._indices, 'Key is missing from map'
assert i in self._indices[line.key], \
'Key does not map to itself'
def __getitem__(self, key):
if not isinstance(key, six.string_types):
raise TypeError(_type_err)
return self._lines[self._indices[key][-1]].value
def __setitem__(self, key, value):
if not isinstance(key, six.string_types) or \
not isinstance(value, six.string_types):
raise TypeError(_type_err)
try:
ixes = self._indices[key]
except KeyError:
try:
lasti = next(reversed(self._lines))
except StopIteration:
ix = 0
else:
ix = lasti + 1
# We're adding a line to the end of the file, so make sure the
# line before it ends with a newline and (if it's not a
# comment) doesn't end with a trailing line continuation.
lastline = self._lines[lasti]
if lastline.source is not None:
lastsrc = lastline.source
if lastline.key is not None:
lastsrc=re.sub(r'(?<!\\)((?:\\\\)*)\\$', r'\1', lastsrc)
if not lastsrc.endswith(('\r', '\n')):
lastsrc += '\n'
self._lines[lasti] = lastline._replace(source=lastsrc)
else:
# Update the first occurrence of the key and discard the rest.
# This way, the order in which the keys are listed in the file and
# dict will be preserved.
ix = ixes.pop(0)
for i in ixes:
del self._lines[i]
self._indices[key] = [ix]
self._lines[ix] = PropertyLine(key, value, None)
def __delitem__(self, key):
if not isinstance(key, six.string_types):
raise TypeError(_type_err)
for i in self._indices.pop(key):
del self._lines[i]
def __iter__(self):
return iter(self._indices)
def __reversed__(self):
return reversed(self._indices)
def __len__(self):
return len(self._indices)
def _comparable(self):
return [
(None, line.source) if line.key is None else (line.key, line.value)
for i, line in six.iteritems(self._lines)
### TODO: Also include non-final repeated keys???
if line.key is None or self._indices[line.key][-1] == i
]
def __eq__(self, other):
if isinstance(other, PropertiesFile):
return self._comparable() == other._comparable()
### TODO: Special-case OrderedDict?
elif isinstance(other, Mapping):
return dict(self) == other
else:
return NotImplemented
def __ne__(self, other):
return not (self == other)
@classmethod
def load(cls, fp):
"""
Parse the contents of the `~io.IOBase.readline`-supporting file-like
object ``fp`` as a simple line-oriented ``.properties`` file and return
a `PropertiesFile` instance.
``fp`` may be either a text or binary filehandle, with or without
universal newlines enabled. If it is a binary filehandle, its contents
are decoded as Latin-1.
.. versionchanged:: 0.5.0
Invalid ``\\uXXXX`` escape sequences will now cause an
`InvalidUEscapeError` to be raised
:param fp: the file from which to read the ``.properties`` document
:type fp: file-like object
:rtype: PropertiesFile
:raises InvalidUEscapeError: if an invalid ``\\uXXXX`` escape sequence
occurs in the input
"""
obj = cls()
for i, (k, v, src) in enumerate(parse(fp)):
if k is not None:
obj._indices.setdefault(k, []).append(i)
obj._lines[i] = PropertyLine(k, v, src)
return obj
@classmethod
def loads(cls, s):
"""
Parse the contents of the string ``s`` as a simple line-oriented
``.properties`` file and return a `PropertiesFile` instance.
``s`` may be either a text string or bytes string. If it is a bytes
string, its contents are decoded as Latin-1.
.. versionchanged:: 0.5.0
Invalid ``\\uXXXX`` escape sequences will now cause an
`InvalidUEscapeError` to be raised
:param string s: the string from which to read the ``.properties``
document
:rtype: PropertiesFile
:raises InvalidUEscapeError: if an invalid ``\\uXXXX`` escape sequence
occurs in the input
"""
if isinstance(s, six.binary_type):
fp = six.BytesIO(s)
else:
fp = six.StringIO(s)
return cls.load(fp)
def dump(self, fp, separator='='):
"""
Write the mapping to a file in simple line-oriented ``.properties``
format.
If the instance was originally created from a file or string with
`PropertiesFile.load()` or `PropertiesFile.loads()`, then the output
will include the comments and whitespace from the original input, and
any keys that haven't been deleted or reassigned will retain their
original formatting and multiplicity. Key-value pairs that have been
modified or added to the mapping will be reformatted with
`join_key_value()` using the given separator. All key-value pairs are
output in the order they were defined, with new keys added to the end.
.. note::
Serializing a `PropertiesFile` instance with the :func:`dump()`
function instead will cause all formatting information to be
ignored, as :func:`dump()` will treat the instance like a normal
mapping.
:param fp: A file-like object to write the mapping to. It must have
been opened as a text file with a Latin-1-compatible encoding.
:param separator: The string to use for separating new or modified keys
& values. Only ``" "``, ``"="``, and ``":"`` (possibly with added
whitespace) should ever be used as the separator.
:type separator: text string
:return: `None`
"""
### TODO: Support setting the timestamp
for line in six.itervalues(self._lines):
if line.source is None:
print(join_key_value(line.key, line.value, separator), file=fp)
else:
fp.write(line.source)
def dumps(self, separator='='):
"""
Convert the mapping to a text string in simple line-oriented
``.properties`` format.
If the instance was originally created from a file or string with
`PropertiesFile.load()` or `PropertiesFile.loads()`, then the output
will include the comments and whitespace from the original input, and
any keys that haven't been deleted or reassigned will retain their
original formatting and multiplicity. Key-value pairs that have been
modified or added to the mapping will be reformatted with
`join_key_value()` using the given separator. All key-value pairs are
output in the order they were defined, with new keys added to the end.
.. note::
Serializing a `PropertiesFile` instance with the :func:`dumps()`
function instead will cause all formatting information to be
ignored, as :func:`dumps()` will treat the instance like a normal
mapping.
:param separator: The string to use for separating new or modified keys
& values. Only ``" "``, ``"="``, and ``":"`` (possibly with added
whitespace) should ever be used as the separator.
:type separator: text string
:rtype: text string
"""
s = six.StringIO()
self.dump(s, separator=separator)
return s.getvalue()
|
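The `__setitem__` logic in the enclosing scope above has a subtle rule worth isolating: reassigning a key reuses the line slot of its *first* occurrence (so file order is stable) and discards any later duplicates, while a brand-new key is appended after the last line. A hypothetical stdlib-only sketch of just that bookkeeping (`set_key` is an invented name; the real method also guards types and patches trailing line continuations, which is omitted here):

```python
from collections import OrderedDict

def set_key(indices, lines, key, value):
    """Sketch of the key-update rule in PropertiesFile.__setitem__."""
    if key in indices:
        ixes = indices[key]
        ix = ixes.pop(0)                 # keep the first occurrence's slot
        for i in ixes:                   # drop later duplicates of the key
            del lines[i]
    else:
        ix = max(lines, default=-1) + 1  # brand-new key: append at the end
    indices[key] = [ix]
    lines[ix] = (key, value, None)       # source=None -> reformatted on dump

# 'a' appears twice (lines 0 and 2); 'b' once.
indices = OrderedDict([('a', [0, 2]), ('b', [1])])
lines = OrderedDict([(0, ('a', '1', 'a=1\n')),
                     (1, ('b', '2', 'b=2\n')),
                     (2, ('a', '3', 'a=3\n'))])

set_key(indices, lines, 'a', 'new')  # updates slot 0, deletes duplicate line 2
set_key(indices, lines, 'c', '9')    # appended as line 2 (end of file)
print(lines)
```

After these two calls, `lines` holds `('a', 'new', None)` at slot 0, the untouched `b` line, and the new `c` pair at the end, matching the "order preserved, new keys appended" behavior the `dumps()` docstring promises.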
jwodder/javaproperties | javaproperties/propclass.py | Properties.getProperty | python | def getProperty(self, key, defaultValue=None):
try:
return self[key]
except KeyError:
if self.defaults is not None:
return self.defaults.getProperty(key, defaultValue)
else:
return defaultValue | Fetch the value associated with the key ``key`` in the `Properties`
object. If the key is not present, `defaults` is checked, and then
*its* `defaults`, etc., until either a value for ``key`` is found or
the next `defaults` is `None`, in which case `defaultValue` is
returned.
:param key: the key to look up the value of
:type key: text string
:param defaultValue: the value to return if ``key`` is not found in the
`Properties` object
:rtype: text string (if ``key`` was found)
:raises TypeError: if ``key`` is not a string | train | https://github.com/jwodder/javaproperties/blob/8b48f040305217ebeb80c98c4354691bbb01429b/javaproperties/propclass.py#L91-L112 | [
"def getProperty(self, key, defaultValue=None):\n \"\"\"\n Fetch the value associated with the key ``key`` in the `Properties`\n object. If the key is not present, `defaults` is checked, and then\n *its* `defaults`, etc., until either a value for ``key`` is found or\n the next `defaults` is `None`, ... | class Properties(MutableMapping):
"""
A port of |java8properties|_ that tries to match its behavior as much as is
Pythonically possible. `Properties` behaves like a normal
`~collections.abc.MutableMapping` class (i.e., you can do ``props[key] =
value`` and so forth), except that it may only be used to store strings
(|py2str|_ and |unicode|_ in Python 2; just `str` in Python 3). Attempts
to use a non-string object as a key or value will produce a `TypeError`.
Two `Properties` instances compare equal iff both their key-value pairs and
:attr:`defaults` attributes are equal. When comparing a `Properties`
instance to any other type of mapping, only the key-value pairs are
considered.
.. versionchanged:: 0.5.0
`Properties` instances can now compare equal to `dict`\\ s and other
mapping types
:param data: A mapping or iterable of ``(key, value)`` pairs with which to
initialize the `Properties` object. All keys and values in ``data``
must be text strings.
:type data: mapping or `None`
:param defaults: a set of default properties that will be used as fallback
for `getProperty`
:type defaults: `Properties` or `None`
.. |java8properties| replace:: Java 8's ``java.util.Properties``
.. _java8properties: https://docs.oracle.com/javase/8/docs/api/java/util/Properties.html
"""
def __init__(self, data=None, defaults=None):
self.data = {}
#: A `Properties` subobject used as fallback for `getProperty`. Only
#: `getProperty`, `propertyNames`, `stringPropertyNames`, and `__eq__`
#: use this attribute; all other methods (including the standard
#: mapping methods) ignore it.
self.defaults = defaults
if data is not None:
self.update(data)
def __getitem__(self, key):
if not isinstance(key, string_types):
raise TypeError(_type_err)
return self.data[key]
def __setitem__(self, key, value):
if not isinstance(key, string_types) or \
not isinstance(value, string_types):
raise TypeError(_type_err)
self.data[key] = value
def __delitem__(self, key):
if not isinstance(key, string_types):
raise TypeError(_type_err)
del self.data[key]
def __iter__(self):
return iter(self.data)
def __len__(self):
return len(self.data)
def __repr__(self):
return '{0.__module__}.{0.__name__}({1.data!r}, defaults={1.defaults!r})'\
.format(type(self), self)
def __eq__(self, other):
if isinstance(other, Properties):
return self.data == other.data and self.defaults == other.defaults
elif isinstance(other, Mapping):
return dict(self) == other
else:
return NotImplemented
def __ne__(self, other):
return not (self == other)
def load(self, inStream):
"""
Update the `Properties` object with the entries in a ``.properties``
file or file-like object.
``inStream`` may be either a text or binary filehandle, with or without
universal newlines enabled. If it is a binary filehandle, its contents
are decoded as Latin-1.
.. versionchanged:: 0.5.0
Invalid ``\\uXXXX`` escape sequences will now cause an
`InvalidUEscapeError` to be raised
:param inStream: the file from which to read the ``.properties``
document
:type inStream: file-like object
:return: `None`
:raises InvalidUEscapeError: if an invalid ``\\uXXXX`` escape sequence
occurs in the input
"""
self.data.update(load(inStream))
def propertyNames(self):
r"""
Returns a generator of all distinct keys in the `Properties` object and
its `defaults` (and its `defaults`\ ’s `defaults`, etc.) in unspecified
order
:rtype: generator of text strings
"""
for k in self.data:
yield k
if self.defaults is not None:
for k in self.defaults.propertyNames():
if k not in self.data:
yield k
def setProperty(self, key, value):
""" Equivalent to ``self[key] = value`` """
self[key] = value
def store(self, out, comments=None):
"""
Write the `Properties` object's entries (in unspecified order) in
``.properties`` format to ``out``, including the current timestamp.
:param out: A file-like object to write the properties to. It must
have been opened as a text file with a Latin-1-compatible encoding.
:param comments: If non-`None`, ``comments`` will be written to ``out``
as a comment before any other content
:type comments: text string or `None`
:return: `None`
"""
dump(self.data, out, comments=comments)
def stringPropertyNames(self):
r"""
Returns a `set` of all keys in the `Properties` object and its
`defaults` (and its `defaults`\ ’s `defaults`, etc.)
:rtype: `set` of text strings
"""
names = set(self.data)
if self.defaults is not None:
names.update(self.defaults.stringPropertyNames())
return names
def loadFromXML(self, inStream):
"""
Update the `Properties` object with the entries in the XML properties
file ``inStream``.
Beyond basic XML well-formedness, `loadFromXML` only checks that the
root element is named ``properties`` and that all of its ``entry``
children have ``key`` attributes; no further validation is performed.
.. note::
This uses `xml.etree.ElementTree` for parsing, which does not have
decent support for |unicode|_ input in Python 2. Files containing
non-ASCII characters need to be opened in binary mode in Python 2,
while Python 3 accepts both binary and text input.
:param inStream: the file from which to read the XML properties document
:type inStream: file-like object
:return: `None`
:raises ValueError: if the root of the XML tree is not a
``<properties>`` tag or an ``<entry>`` element is missing a ``key``
attribute
"""
self.data.update(load_xml(inStream))
def storeToXML(self, out, comment=None, encoding='UTF-8'):
"""
Write the `Properties` object's entries (in unspecified order) in XML
properties format to ``out``.
:param out: a file-like object to write the properties to
:type out: binary file-like object
:param comment: if non-`None`, ``comment`` will be output as a
``<comment>`` element before the ``<entry>`` elements
:type comment: text string or `None`
:param string encoding: the name of the encoding to use for the XML
document (also included in the XML declaration)
:return: `None`
"""
dump_xml(self.data, out, comment=comment, encoding=encoding)
def copy(self):
"""
.. versionadded:: 0.5.0
Create a shallow copy of the mapping. The copy's `defaults` attribute
will be the same instance as the original's `defaults`.
"""
return type(self)(self.data, self.defaults)
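The defaults-chain lookup that `getProperty` implements can be sketched in a few lines. `Props` is a hypothetical minimal stand-in for illustration, not the full `Properties` port (it skips the string-type checks and mapping protocol):

```python
class Props:
    # Minimal sketch of getProperty's defaults-chain lookup: check own
    # data first, then walk the chain of defaults objects, and only fall
    # back to defaultValue when the whole chain is exhausted.
    def __init__(self, data=None, defaults=None):
        self.data = dict(data or {})
        self.defaults = defaults

    def getProperty(self, key, defaultValue=None):
        try:
            return self.data[key]
        except KeyError:
            if self.defaults is not None:
                return self.defaults.getProperty(key, defaultValue)
            return defaultValue

base = Props({"color": "blue"})
override = Props({"size": "large"}, defaults=base)
print(override.getProperty("size"))        # large  (own data)
print(override.getProperty("color"))       # blue   (inherited from defaults)
print(override.getProperty("shape", "?"))  # ?      (defaultValue fallback)
```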
|
jwodder/javaproperties | javaproperties/propclass.py | Properties.propertyNames | python | def propertyNames(self):
r"""
Returns a generator of all distinct keys in the `Properties` object and
its `defaults` (and its `defaults`\ ’s `defaults`, etc.) in unspecified
order
:rtype: generator of text strings
"""
for k in self.data:
yield k
if self.defaults is not None:
for k in self.defaults.propertyNames():
if k not in self.data:
yield k | r"""
Returns a generator of all distinct keys in the `Properties` object and
its `defaults` (and its `defaults`\ ’s `defaults`, etc.) in unspecified
order
:rtype: generator of text strings | train | https://github.com/jwodder/javaproperties/blob/8b48f040305217ebeb80c98c4354691bbb01429b/javaproperties/propclass.py#L136-L149 | [
"def propertyNames(self):\n r\"\"\"\n Returns a generator of all distinct keys in the `Properties` object and\n its `defaults` (and its `defaults`\\ ’s `defaults`, etc.) in unspecified\n order\n\n :rtype: generator of text strings\n \"\"\"\n for k in self.data:\n yield k\n if self.def... | class Properties(MutableMapping):
"""
A port of |java8properties|_ that tries to match its behavior as much as is
Pythonically possible. `Properties` behaves like a normal
`~collections.abc.MutableMapping` class (i.e., you can do ``props[key] =
value`` and so forth), except that it may only be used to store strings
(|py2str|_ and |unicode|_ in Python 2; just `str` in Python 3). Attempts
to use a non-string object as a key or value will produce a `TypeError`.
Two `Properties` instances compare equal iff both their key-value pairs and
:attr:`defaults` attributes are equal. When comparing a `Properties`
instance to any other type of mapping, only the key-value pairs are
considered.
.. versionchanged:: 0.5.0
`Properties` instances can now compare equal to `dict`\\ s and other
mapping types
:param data: A mapping or iterable of ``(key, value)`` pairs with which to
initialize the `Properties` object. All keys and values in ``data``
must be text strings.
:type data: mapping or `None`
:param defaults: a set of default properties that will be used as fallback
for `getProperty`
:type defaults: `Properties` or `None`
.. |java8properties| replace:: Java 8's ``java.util.Properties``
.. _java8properties: https://docs.oracle.com/javase/8/docs/api/java/util/Properties.html
"""
def __init__(self, data=None, defaults=None):
self.data = {}
#: A `Properties` subobject used as fallback for `getProperty`. Only
#: `getProperty`, `propertyNames`, `stringPropertyNames`, and `__eq__`
#: use this attribute; all other methods (including the standard
#: mapping methods) ignore it.
self.defaults = defaults
if data is not None:
self.update(data)
def __getitem__(self, key):
if not isinstance(key, string_types):
raise TypeError(_type_err)
return self.data[key]
def __setitem__(self, key, value):
if not isinstance(key, string_types) or \
not isinstance(value, string_types):
raise TypeError(_type_err)
self.data[key] = value
def __delitem__(self, key):
if not isinstance(key, string_types):
raise TypeError(_type_err)
del self.data[key]
def __iter__(self):
return iter(self.data)
def __len__(self):
return len(self.data)
def __repr__(self):
return '{0.__module__}.{0.__name__}({1.data!r}, defaults={1.defaults!r})'\
.format(type(self), self)
def __eq__(self, other):
if isinstance(other, Properties):
return self.data == other.data and self.defaults == other.defaults
elif isinstance(other, Mapping):
return dict(self) == other
else:
return NotImplemented
def __ne__(self, other):
return not (self == other)
def getProperty(self, key, defaultValue=None):
"""
Fetch the value associated with the key ``key`` in the `Properties`
object. If the key is not present, `defaults` is checked, and then
*its* `defaults`, etc., until either a value for ``key`` is found or
the next `defaults` is `None`, in which case `defaultValue` is
returned.
:param key: the key to look up the value of
:type key: text string
:param defaultValue: the value to return if ``key`` is not found in the
`Properties` object
:rtype: text string (if ``key`` was found)
:raises TypeError: if ``key`` is not a string
"""
try:
return self[key]
except KeyError:
if self.defaults is not None:
return self.defaults.getProperty(key, defaultValue)
else:
return defaultValue
def load(self, inStream):
"""
Update the `Properties` object with the entries in a ``.properties``
file or file-like object.
``inStream`` may be either a text or binary filehandle, with or without
universal newlines enabled. If it is a binary filehandle, its contents
are decoded as Latin-1.
.. versionchanged:: 0.5.0
Invalid ``\\uXXXX`` escape sequences will now cause an
`InvalidUEscapeError` to be raised
:param inStream: the file from which to read the ``.properties``
document
:type inStream: file-like object
:return: `None`
:raises InvalidUEscapeError: if an invalid ``\\uXXXX`` escape sequence
occurs in the input
"""
self.data.update(load(inStream))
def setProperty(self, key, value):
""" Equivalent to ``self[key] = value`` """
self[key] = value
def store(self, out, comments=None):
"""
Write the `Properties` object's entries (in unspecified order) in
``.properties`` format to ``out``, including the current timestamp.
:param out: A file-like object to write the properties to. It must
have been opened as a text file with a Latin-1-compatible encoding.
:param comments: If non-`None`, ``comments`` will be written to ``out``
as a comment before any other content
:type comments: text string or `None`
:return: `None`
"""
dump(self.data, out, comments=comments)
def stringPropertyNames(self):
r"""
Returns a `set` of all keys in the `Properties` object and its
`defaults` (and its `defaults`\ ’s `defaults`, etc.)
:rtype: `set` of text strings
"""
names = set(self.data)
if self.defaults is not None:
names.update(self.defaults.stringPropertyNames())
return names
def loadFromXML(self, inStream):
"""
Update the `Properties` object with the entries in the XML properties
file ``inStream``.
Beyond basic XML well-formedness, `loadFromXML` only checks that the
root element is named ``properties`` and that all of its ``entry``
children have ``key`` attributes; no further validation is performed.
.. note::
This uses `xml.etree.ElementTree` for parsing, which does not have
decent support for |unicode|_ input in Python 2. Files containing
non-ASCII characters need to be opened in binary mode in Python 2,
while Python 3 accepts both binary and text input.
:param inStream: the file from which to read the XML properties document
:type inStream: file-like object
:return: `None`
:raises ValueError: if the root of the XML tree is not a
``<properties>`` tag or an ``<entry>`` element is missing a ``key``
attribute
"""
self.data.update(load_xml(inStream))
def storeToXML(self, out, comment=None, encoding='UTF-8'):
"""
Write the `Properties` object's entries (in unspecified order) in XML
properties format to ``out``.
:param out: a file-like object to write the properties to
:type out: binary file-like object
:param comment: if non-`None`, ``comment`` will be output as a
``<comment>`` element before the ``<entry>`` elements
:type comment: text string or `None`
:param string encoding: the name of the encoding to use for the XML
document (also included in the XML declaration)
:return: `None`
"""
dump_xml(self.data, out, comment=comment, encoding=encoding)
def copy(self):
"""
.. versionadded:: 0.5.0
Create a shallow copy of the mapping. The copy's `defaults` attribute
will be the same instance as the original's `defaults`.
"""
return type(self)(self.data, self.defaults)
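The generator logic in `propertyNames` yields each key exactly once even across a multi-level defaults chain, because every level filters its parent's output against its own keys. A minimal sketch (`Props` is a hypothetical stand-in for the full class):

```python
class Props:
    # Sketch of propertyNames: yield this level's keys, then recurse
    # into the defaults chain, skipping keys this level already has.
    def __init__(self, data, defaults=None):
        self.data = dict(data)
        self.defaults = defaults

    def propertyNames(self):
        for k in self.data:
            yield k
        if self.defaults is not None:
            for k in self.defaults.propertyNames():
                if k not in self.data:
                    yield k

grandparent = Props({"a": "0", "b": "0"})
parent = Props({"b": "1", "c": "1"}, defaults=grandparent)
child = Props({"c": "2"}, defaults=parent)
print(sorted(child.propertyNames()))  # ['a', 'b', 'c'] -- each key once
```

Note that the per-level filter is sufficient: a key shadowed at two levels is suppressed once at each level, so the overall stream stays duplicate-free.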
|
jwodder/javaproperties | javaproperties/propclass.py | Properties.store | python | def store(self, out, comments=None):
dump(self.data, out, comments=comments) | Write the `Properties` object's entries (in unspecified order) in
``.properties`` format to ``out``, including the current timestamp.
:param out: A file-like object to write the properties to. It must
have been opened as a text file with a Latin-1-compatible encoding.
:param comments: If non-`None`, ``comments`` will be written to ``out``
as a comment before any other content
:type comments: text string or `None`
:return: `None` | train | https://github.com/jwodder/javaproperties/blob/8b48f040305217ebeb80c98c4354691bbb01429b/javaproperties/propclass.py#L155-L167 | [
"def dump(props, fp, separator='=', comments=None, timestamp=True,\n sort_keys=False):\n \"\"\"\n Write a series of key-value pairs to a file in simple line-oriented\n ``.properties`` format.\n\n :param props: A mapping or iterable of ``(key, value)`` pairs to write to\n ``fp``. All keys... | class Properties(MutableMapping):
"""
A port of |java8properties|_ that tries to match its behavior as much as is
Pythonically possible. `Properties` behaves like a normal
`~collections.abc.MutableMapping` class (i.e., you can do ``props[key] =
value`` and so forth), except that it may only be used to store strings
(|py2str|_ and |unicode|_ in Python 2; just `str` in Python 3). Attempts
to use a non-string object as a key or value will produce a `TypeError`.
Two `Properties` instances compare equal iff both their key-value pairs and
:attr:`defaults` attributes are equal. When comparing a `Properties`
instance to any other type of mapping, only the key-value pairs are
considered.
.. versionchanged:: 0.5.0
`Properties` instances can now compare equal to `dict`\\ s and other
mapping types
:param data: A mapping or iterable of ``(key, value)`` pairs with which to
initialize the `Properties` object. All keys and values in ``data``
must be text strings.
:type data: mapping or `None`
:param defaults: a set of default properties that will be used as fallback
for `getProperty`
:type defaults: `Properties` or `None`
.. |java8properties| replace:: Java 8's ``java.util.Properties``
.. _java8properties: https://docs.oracle.com/javase/8/docs/api/java/util/Properties.html
"""
def __init__(self, data=None, defaults=None):
self.data = {}
#: A `Properties` subobject used as fallback for `getProperty`. Only
#: `getProperty`, `propertyNames`, `stringPropertyNames`, and `__eq__`
#: use this attribute; all other methods (including the standard
#: mapping methods) ignore it.
self.defaults = defaults
if data is not None:
self.update(data)
def __getitem__(self, key):
if not isinstance(key, string_types):
raise TypeError(_type_err)
return self.data[key]
def __setitem__(self, key, value):
if not isinstance(key, string_types) or \
not isinstance(value, string_types):
raise TypeError(_type_err)
self.data[key] = value
def __delitem__(self, key):
if not isinstance(key, string_types):
raise TypeError(_type_err)
del self.data[key]
def __iter__(self):
return iter(self.data)
def __len__(self):
return len(self.data)
def __repr__(self):
return '{0.__module__}.{0.__name__}({1.data!r}, defaults={1.defaults!r})'\
.format(type(self), self)
def __eq__(self, other):
if isinstance(other, Properties):
return self.data == other.data and self.defaults == other.defaults
elif isinstance(other, Mapping):
return dict(self) == other
else:
return NotImplemented
def __ne__(self, other):
return not (self == other)
def getProperty(self, key, defaultValue=None):
"""
Fetch the value associated with the key ``key`` in the `Properties`
object. If the key is not present, `defaults` is checked, and then
*its* `defaults`, etc., until either a value for ``key`` is found or
the next `defaults` is `None`, in which case `defaultValue` is
returned.
:param key: the key to look up the value of
:type key: text string
:param defaultValue: the value to return if ``key`` is not found in the
`Properties` object
:rtype: text string (if ``key`` was found)
:raises TypeError: if ``key`` is not a string
"""
try:
return self[key]
except KeyError:
if self.defaults is not None:
return self.defaults.getProperty(key, defaultValue)
else:
return defaultValue
def load(self, inStream):
"""
Update the `Properties` object with the entries in a ``.properties``
file or file-like object.
``inStream`` may be either a text or binary filehandle, with or without
universal newlines enabled. If it is a binary filehandle, its contents
are decoded as Latin-1.
.. versionchanged:: 0.5.0
Invalid ``\\uXXXX`` escape sequences will now cause an
`InvalidUEscapeError` to be raised
:param inStream: the file from which to read the ``.properties``
document
:type inStream: file-like object
:return: `None`
:raises InvalidUEscapeError: if an invalid ``\\uXXXX`` escape sequence
occurs in the input
"""
self.data.update(load(inStream))
def propertyNames(self):
r"""
Returns a generator of all distinct keys in the `Properties` object and
its `defaults` (and its `defaults`\ ’s `defaults`, etc.) in unspecified
order
:rtype: generator of text strings
"""
for k in self.data:
yield k
if self.defaults is not None:
for k in self.defaults.propertyNames():
if k not in self.data:
yield k
def setProperty(self, key, value):
""" Equivalent to ``self[key] = value`` """
self[key] = value
def stringPropertyNames(self):
r"""
Returns a `set` of all keys in the `Properties` object and its
`defaults` (and its `defaults`\ ’s `defaults`, etc.)
:rtype: `set` of text strings
"""
names = set(self.data)
if self.defaults is not None:
names.update(self.defaults.stringPropertyNames())
return names
def loadFromXML(self, inStream):
"""
Update the `Properties` object with the entries in the XML properties
file ``inStream``.
Beyond basic XML well-formedness, `loadFromXML` only checks that the
root element is named ``properties`` and that all of its ``entry``
children have ``key`` attributes; no further validation is performed.
.. note::
This uses `xml.etree.ElementTree` for parsing, which does not have
decent support for |unicode|_ input in Python 2. Files containing
non-ASCII characters need to be opened in binary mode in Python 2,
while Python 3 accepts both binary and text input.
:param inStream: the file from which to read the XML properties document
:type inStream: file-like object
:return: `None`
:raises ValueError: if the root of the XML tree is not a
``<properties>`` tag or an ``<entry>`` element is missing a ``key``
attribute
"""
self.data.update(load_xml(inStream))
def storeToXML(self, out, comment=None, encoding='UTF-8'):
"""
Write the `Properties` object's entries (in unspecified order) in XML
properties format to ``out``.
:param out: a file-like object to write the properties to
:type out: binary file-like object
:param comment: if non-`None`, ``comment`` will be output as a
``<comment>`` element before the ``<entry>`` elements
:type comment: text string or `None`
:param string encoding: the name of the encoding to use for the XML
document (also included in the XML declaration)
:return: `None`
"""
dump_xml(self.data, out, comment=comment, encoding=encoding)
def copy(self):
"""
.. versionadded:: 0.5.0
Create a shallow copy of the mapping. The copy's `defaults` attribute
will be the same instance as the original's `defaults`.
"""
return type(self)(self.data, self.defaults)
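The output shape that `store` describes — an optional comment line, a timestamp line, then the key-value pairs — can be roughed out as follows. This is a sketch only: the real method escapes keys and values, and `time.asctime()` here is just an approximation of the Java-style timestamp line:

```python
import io
import time

def store(props, out, comments=None):
    # Rough sketch of Properties.store's output shape (no escaping;
    # time.asctime() stands in for the Java-style timestamp).
    if comments is not None:
        out.write("#" + comments + "\n")
    out.write("#" + time.asctime() + "\n")
    for key, value in props.items():
        out.write(key + "=" + value + "\n")

buf = io.StringIO()
store({"host": "localhost", "port": "8080"}, buf, comments="App config")
print(buf.getvalue())
```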
|
jwodder/javaproperties | javaproperties/propclass.py | Properties.stringPropertyNames | python | def stringPropertyNames(self):
r"""
Returns a `set` of all keys in the `Properties` object and its
`defaults` (and its `defaults`\ ’s `defaults`, etc.)
:rtype: `set` of text strings
"""
names = set(self.data)
if self.defaults is not None:
names.update(self.defaults.stringPropertyNames())
return names | r"""
Returns a `set` of all keys in the `Properties` object and its
`defaults` (and its `defaults`\ ’s `defaults`, etc.)
:rtype: `set` of text strings | train | https://github.com/jwodder/javaproperties/blob/8b48f040305217ebeb80c98c4354691bbb01429b/javaproperties/propclass.py#L169-L179 | [
"def stringPropertyNames(self):\n r\"\"\"\n Returns a `set` of all keys in the `Properties` object and its\n `defaults` (and its `defaults`\\ ’s `defaults`, etc.)\n\n :rtype: `set` of text strings\n \"\"\"\n names = set(self.data)\n if self.defaults is not None:\n names.update(self.defau... | class Properties(MutableMapping):
"""
A port of |java8properties|_ that tries to match its behavior as much as is
Pythonically possible. `Properties` behaves like a normal
`~collections.abc.MutableMapping` class (i.e., you can do ``props[key] =
value`` and so forth), except that it may only be used to store strings
(|py2str|_ and |unicode|_ in Python 2; just `str` in Python 3). Attempts
to use a non-string object as a key or value will produce a `TypeError`.
Two `Properties` instances compare equal iff both their key-value pairs and
:attr:`defaults` attributes are equal. When comparing a `Properties`
instance to any other type of mapping, only the key-value pairs are
considered.
.. versionchanged:: 0.5.0
`Properties` instances can now compare equal to `dict`\\ s and other
mapping types
:param data: A mapping or iterable of ``(key, value)`` pairs with which to
initialize the `Properties` object. All keys and values in ``data``
must be text strings.
:type data: mapping or `None`
:param defaults: a set of default properties that will be used as fallback
for `getProperty`
:type defaults: `Properties` or `None`
.. |java8properties| replace:: Java 8's ``java.util.Properties``
.. _java8properties: https://docs.oracle.com/javase/8/docs/api/java/util/Properties.html
"""
def __init__(self, data=None, defaults=None):
self.data = {}
#: A `Properties` subobject used as fallback for `getProperty`. Only
#: `getProperty`, `propertyNames`, `stringPropertyNames`, and `__eq__`
#: use this attribute; all other methods (including the standard
#: mapping methods) ignore it.
self.defaults = defaults
if data is not None:
self.update(data)
def __getitem__(self, key):
if not isinstance(key, string_types):
raise TypeError(_type_err)
return self.data[key]
def __setitem__(self, key, value):
if not isinstance(key, string_types) or \
not isinstance(value, string_types):
raise TypeError(_type_err)
self.data[key] = value
def __delitem__(self, key):
if not isinstance(key, string_types):
raise TypeError(_type_err)
del self.data[key]
def __iter__(self):
return iter(self.data)
def __len__(self):
return len(self.data)
def __repr__(self):
return '{0.__module__}.{0.__name__}({1.data!r}, defaults={1.defaults!r})'\
.format(type(self), self)
def __eq__(self, other):
if isinstance(other, Properties):
return self.data == other.data and self.defaults == other.defaults
elif isinstance(other, Mapping):
return dict(self) == other
else:
return NotImplemented
def __ne__(self, other):
return not (self == other)
def getProperty(self, key, defaultValue=None):
"""
Fetch the value associated with the key ``key`` in the `Properties`
object. If the key is not present, `defaults` is checked, and then
*its* `defaults`, etc., until either a value for ``key`` is found or
the next `defaults` is `None`, in which case `defaultValue` is
returned.
:param key: the key to look up the value of
:type key: text string
:param defaultValue: the value to return if ``key`` is not found in the
`Properties` object
:rtype: text string (if ``key`` was found)
:raises TypeError: if ``key`` is not a string
"""
try:
return self[key]
except KeyError:
if self.defaults is not None:
return self.defaults.getProperty(key, defaultValue)
else:
return defaultValue
def load(self, inStream):
"""
Update the `Properties` object with the entries in a ``.properties``
file or file-like object.
``inStream`` may be either a text or binary filehandle, with or without
universal newlines enabled. If it is a binary filehandle, its contents
are decoded as Latin-1.
.. versionchanged:: 0.5.0
Invalid ``\\uXXXX`` escape sequences will now cause an
`InvalidUEscapeError` to be raised
:param inStream: the file from which to read the ``.properties``
document
:type inStream: file-like object
:return: `None`
:raises InvalidUEscapeError: if an invalid ``\\uXXXX`` escape sequence
occurs in the input
"""
self.data.update(load(inStream))
def propertyNames(self):
r"""
Returns a generator of all distinct keys in the `Properties` object and
its `defaults` (and its `defaults`\ ’s `defaults`, etc.) in unspecified
order
:rtype: generator of text strings
"""
for k in self.data:
yield k
if self.defaults is not None:
for k in self.defaults.propertyNames():
if k not in self.data:
yield k
def setProperty(self, key, value):
""" Equivalent to ``self[key] = value`` """
self[key] = value
def store(self, out, comments=None):
"""
Write the `Properties` object's entries (in unspecified order) in
``.properties`` format to ``out``, including the current timestamp.
:param out: A file-like object to write the properties to. It must
have been opened as a text file with a Latin-1-compatible encoding.
:param comments: If non-`None`, ``comments`` will be written to ``out``
as a comment before any other content
:type comments: text string or `None`
:return: `None`
"""
dump(self.data, out, comments=comments)
def loadFromXML(self, inStream):
"""
Update the `Properties` object with the entries in the XML properties
file ``inStream``.
Beyond basic XML well-formedness, `loadFromXML` only checks that the
root element is named ``properties`` and that all of its ``entry``
children have ``key`` attributes; no further validation is performed.
.. note::
This uses `xml.etree.ElementTree` for parsing, which does not have
decent support for |unicode|_ input in Python 2. Files containing
non-ASCII characters need to be opened in binary mode in Python 2,
while Python 3 accepts both binary and text input.
:param inStream: the file from which to read the XML properties document
:type inStream: file-like object
:return: `None`
:raises ValueError: if the root of the XML tree is not a
``<properties>`` tag or an ``<entry>`` element is missing a ``key``
attribute
"""
self.data.update(load_xml(inStream))
def storeToXML(self, out, comment=None, encoding='UTF-8'):
"""
Write the `Properties` object's entries (in unspecified order) in XML
properties format to ``out``.
:param out: a file-like object to write the properties to
:type out: binary file-like object
:param comment: if non-`None`, ``comment`` will be output as a
``<comment>`` element before the ``<entry>`` elements
:type comment: text string or `None`
:param string encoding: the name of the encoding to use for the XML
document (also included in the XML declaration)
:return: `None`
"""
dump_xml(self.data, out, comment=comment, encoding=encoding)
def copy(self):
"""
.. versionadded:: 0.5.0
Create a shallow copy of the mapping. The copy's `defaults` attribute
will be the same instance as the original's `defaults`.
"""
return type(self)(self.data, self.defaults)
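Where `propertyNames` is a lazy generator, `stringPropertyNames` eagerly builds a `set` by taking the union of each level's keys down the defaults chain. A minimal sketch (`Props` is a hypothetical stand-in for the full class):

```python
class Props:
    # Sketch of stringPropertyNames: the union of this object's keys
    # with every level of the defaults chain, returned as a set.
    def __init__(self, data, defaults=None):
        self.data = dict(data)
        self.defaults = defaults

    def stringPropertyNames(self):
        names = set(self.data)
        if self.defaults is not None:
            names.update(self.defaults.stringPropertyNames())
        return names

base = Props({"user": "alice", "host": "example.com"})
local = Props({"host": "localhost"}, defaults=base)
print(sorted(local.stringPropertyNames()))  # ['host', 'user']
```

Because a set deduplicates automatically, no explicit shadowing check is needed here, unlike in the generator version.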
|
jwodder/javaproperties | javaproperties/propclass.py | Properties.storeToXML | python | def storeToXML(self, out, comment=None, encoding='UTF-8'):
dump_xml(self.data, out, comment=comment, encoding=encoding) | Write the `Properties` object's entries (in unspecified order) in XML
properties format to ``out``.
:param out: a file-like object to write the properties to
:type out: binary file-like object
:param comment: if non-`None`, ``comment`` will be output as a
``<comment>`` element before the ``<entry>`` elements
:type comment: text string or `None`
:param string encoding: the name of the encoding to use for the XML
document (also included in the XML declaration)
:return: `None` | train | https://github.com/jwodder/javaproperties/blob/8b48f040305217ebeb80c98c4354691bbb01429b/javaproperties/propclass.py#L206-L220 | [
"def dump_xml(props, fp, comment=None, encoding='UTF-8', sort_keys=False):\n \"\"\"\n Write a series ``props`` of key-value pairs to a binary filehandle ``fp``\n in the format of an XML properties file. The file will include both an XML\n declaration and a doctype declaration.\n\n :param props: A ma... | class Properties(MutableMapping):
"""
A port of |java8properties|_ that tries to match its behavior as much as is
Pythonically possible. `Properties` behaves like a normal
`~collections.abc.MutableMapping` class (i.e., you can do ``props[key] =
value`` and so forth), except that it may only be used to store strings
(|py2str|_ and |unicode|_ in Python 2; just `str` in Python 3). Attempts
to use a non-string object as a key or value will produce a `TypeError`.
Two `Properties` instances compare equal iff both their key-value pairs and
:attr:`defaults` attributes are equal. When comparing a `Properties`
instance to any other type of mapping, only the key-value pairs are
considered.
.. versionchanged:: 0.5.0
`Properties` instances can now compare equal to `dict`\\ s and other
mapping types
:param data: A mapping or iterable of ``(key, value)`` pairs with which to
initialize the `Properties` object. All keys and values in ``data``
must be text strings.
:type data: mapping or `None`
:param defaults: a set of default properties that will be used as fallback
for `getProperty`
:type defaults: `Properties` or `None`
.. |java8properties| replace:: Java 8's ``java.util.Properties``
.. _java8properties: https://docs.oracle.com/javase/8/docs/api/java/util/Properties.html
"""
def __init__(self, data=None, defaults=None):
self.data = {}
#: A `Properties` subobject used as fallback for `getProperty`. Only
#: `getProperty`, `propertyNames`, `stringPropertyNames`, and `__eq__`
#: use this attribute; all other methods (including the standard
#: mapping methods) ignore it.
self.defaults = defaults
if data is not None:
self.update(data)
def __getitem__(self, key):
if not isinstance(key, string_types):
raise TypeError(_type_err)
return self.data[key]
def __setitem__(self, key, value):
if not isinstance(key, string_types) or \
not isinstance(value, string_types):
raise TypeError(_type_err)
self.data[key] = value
def __delitem__(self, key):
if not isinstance(key, string_types):
raise TypeError(_type_err)
del self.data[key]
def __iter__(self):
return iter(self.data)
def __len__(self):
return len(self.data)
def __repr__(self):
return '{0.__module__}.{0.__name__}({1.data!r}, defaults={1.defaults!r})'\
.format(type(self), self)
def __eq__(self, other):
if isinstance(other, Properties):
return self.data == other.data and self.defaults == other.defaults
elif isinstance(other, Mapping):
return dict(self) == other
else:
return NotImplemented
def __ne__(self, other):
return not (self == other)
def getProperty(self, key, defaultValue=None):
"""
Fetch the value associated with the key ``key`` in the `Properties`
object. If the key is not present, `defaults` is checked, and then
*its* `defaults`, etc., until either a value for ``key`` is found or
the next `defaults` is `None`, in which case `defaultValue` is
returned.
:param key: the key to look up the value of
:type key: text string
:param defaultValue: the value to return if ``key`` is not found in the
`Properties` object
:rtype: text string (if ``key`` was found)
:raises TypeError: if ``key`` is not a string
"""
try:
return self[key]
except KeyError:
if self.defaults is not None:
return self.defaults.getProperty(key, defaultValue)
else:
return defaultValue
def load(self, inStream):
"""
Update the `Properties` object with the entries in a ``.properties``
file or file-like object.
``inStream`` may be either a text or binary filehandle, with or without
universal newlines enabled. If it is a binary filehandle, its contents
are decoded as Latin-1.
.. versionchanged:: 0.5.0
Invalid ``\\uXXXX`` escape sequences will now cause an
`InvalidUEscapeError` to be raised
:param inStream: the file from which to read the ``.properties``
document
:type inStream: file-like object
:return: `None`
:raises InvalidUEscapeError: if an invalid ``\\uXXXX`` escape sequence
occurs in the input
"""
self.data.update(load(inStream))
def propertyNames(self):
r"""
Returns a generator of all distinct keys in the `Properties` object and
its `defaults` (and its `defaults`\ ’s `defaults`, etc.) in unspecified
order
:rtype: generator of text strings
"""
for k in self.data:
yield k
if self.defaults is not None:
for k in self.defaults.propertyNames():
if k not in self.data:
yield k
def setProperty(self, key, value):
""" Equivalent to ``self[key] = value`` """
self[key] = value
def store(self, out, comments=None):
"""
Write the `Properties` object's entries (in unspecified order) in
``.properties`` format to ``out``, including the current timestamp.
:param out: A file-like object to write the properties to. It must
have been opened as a text file with a Latin-1-compatible encoding.
:param comments: If non-`None`, ``comments`` will be written to ``out``
as a comment before any other content
:type comments: text string or `None`
:return: `None`
"""
dump(self.data, out, comments=comments)
def stringPropertyNames(self):
r"""
Returns a `set` of all keys in the `Properties` object and its
`defaults` (and its `defaults`\ ’s `defaults`, etc.)
:rtype: `set` of text strings
"""
names = set(self.data)
if self.defaults is not None:
names.update(self.defaults.stringPropertyNames())
return names
def loadFromXML(self, inStream):
"""
Update the `Properties` object with the entries in the XML properties
file ``inStream``.
Beyond basic XML well-formedness, `loadFromXML` only checks that the
root element is named ``properties`` and that all of its ``entry``
children have ``key`` attributes; no further validation is performed.
.. note::
This uses `xml.etree.ElementTree` for parsing, which does not have
decent support for |unicode|_ input in Python 2. Files containing
non-ASCII characters need to be opened in binary mode in Python 2,
while Python 3 accepts both binary and text input.
:param inStream: the file from which to read the XML properties document
:type inStream: file-like object
:return: `None`
:raises ValueError: if the root of the XML tree is not a
``<properties>`` tag or an ``<entry>`` element is missing a ``key``
attribute
"""
self.data.update(load_xml(inStream))
def copy(self):
"""
.. versionadded:: 0.5.0
Create a shallow copy of the mapping. The copy's `defaults` attribute
will be the same instance as the original's `defaults`.
"""
return type(self)(self.data, self.defaults)
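The `defaults` chaining described in `getProperty` above can be exercised independently of the library; this is a minimal sketch of the same fallback behaviour (the `ChainedProps` name is illustrative, not part of `javaproperties`):

```python
class ChainedProps:
    """Minimal sketch of Java-style Properties defaults chaining."""

    def __init__(self, data, defaults=None):
        self.data = dict(data)
        self.defaults = defaults  # another ChainedProps, or None

    def get_property(self, key, default_value=None):
        # Look in our own data first, then walk the defaults chain.
        if key in self.data:
            return self.data[key]
        if self.defaults is not None:
            return self.defaults.get_property(key, default_value)
        return default_value

base = ChainedProps({"colour": "red"})
override = ChainedProps({"size": "large"}, defaults=base)
```

`override.get_property("colour")` falls through to the base mapping, while an unknown key yields the supplied default.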
|
alphagov/performanceplatform-collector | performanceplatform/collector/pingdom/__init__.py | parse_time_range | python | def parse_time_range(start_dt, end_dt):
now = datetime.now()
if start_dt and not end_dt:
end_dt = now
elif end_dt and not start_dt:
start_dt = _EARLIEST_DATE
elif not start_dt and not end_dt: # last 24 hours
end_dt = now
start_dt = end_dt - timedelta(days=1)
return tuple(map(truncate_hour_fraction, (start_dt, end_dt))) | Convert the start/end datetimes specified by the user, specifically:
- truncate any minutes/seconds
- for a missing end time, use the current time
- for a missing start time, fall back to the earliest supported date (1 Jan 2005)
- for missing start and end, use the last 24 hours | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/pingdom/__init__.py#L26-L46 | null | from datetime import datetime, timedelta
import json
import logging
from performanceplatform.client import DataSet
from performanceplatform.collector.pingdom.core import Pingdom
_EARLIEST_DATE = datetime(2005, 1, 1)
def main(credentials, data_set_config, query, options, start_at, end_at):
start_dt, end_dt = parse_time_range(start_at, end_at)
pingdom = Pingdom(credentials)
check_name = query['name']
pingdom_stats = pingdom.stats(check_name, start_dt, end_dt)
push_stats_to_data_set(
pingdom_stats,
check_name,
data_set_config)
def push_stats_to_data_set(pingdom_stats, check_name, data_set_config):
data_set = DataSet.from_config(data_set_config)
data_set.post(
[convert_from_pingdom_to_performanceplatform(thing, check_name) for
thing in pingdom_stats])
def get_contents_as_json(path_to_file):
with open(path_to_file) as file_to_load:
logging.debug(path_to_file)
return json.load(file_to_load)
def convert_from_pingdom_to_performanceplatform(pingdom_stats, name_of_check):
timestamp = pingdom_stats['starttime'].isoformat()
name_for_id = name_of_check.replace(' ', '_')
return {
'_id': "%s.%s" % (name_for_id, timestamp),
'_timestamp': timestamp,
'avgresponse': pingdom_stats['avgresponse'],
'uptime': pingdom_stats['uptime'],
'downtime': pingdom_stats['downtime'],
'unmonitored': pingdom_stats['unmonitored'],
'check': name_of_check
}
def truncate_hour_fraction(a_datetime):
return a_datetime.replace(minute=0, second=0, microsecond=0)
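The truncation and defaulting rules in `parse_time_range` can be checked in isolation; this sketch re-implements the same logic against fixed datetimes (the `time_range` name is illustrative):

```python
from datetime import datetime, timedelta

EARLIEST = datetime(2005, 1, 1)

def truncate_to_hour(dt):
    # Drop minutes, seconds and microseconds, as truncate_hour_fraction does.
    return dt.replace(minute=0, second=0, microsecond=0)

def time_range(start, end, now):
    # Mirror of parse_time_range: fill in whichever endpoint is missing.
    if start and not end:
        end = now
    elif end and not start:
        start = EARLIEST
    elif not start and not end:  # default to the last 24 hours
        end = now
        start = end - timedelta(days=1)
    return truncate_to_hour(start), truncate_to_hour(end)

now = datetime(2014, 6, 1, 10, 30, 59)
```

With no endpoints, the result is the truncated last-24-hour window ending at `now`.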
|
alphagov/performanceplatform-collector | performanceplatform/collector/ga/plugins/aggregate.py | group | python | def group(iterable, key):
for _, grouped in groupby(sorted(iterable, key=key), key=key):
yield list(grouped) | groupby which sorts the input, discards the key and returns the output
as a sequence of lists. | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/ga/plugins/aggregate.py#L41-L47 | null | from __future__ import division
from itertools import groupby
class AggregateKey(object):
"""
Given a set of documents, find all records with equal "key" where the "key"
is all values which are not being aggregated over.
For example:
doc1 = {"foo": "bar", "count": 2}
doc2 = {"foo": "bar", "count": 2}
plugin = AggregateKey(aggregate_count("count"))
plugin([doc1, doc2]) => {"foo": "bar", "count": 4}
Aggregate rates are also supported through
`aggregate_rate(rate_key, count_key)`.
"""
def __init__(self, *aggregations):
self.aggregations = aggregations
def __call__(self, documents):
first = documents[0]
aggregate_keys = [k for k, _ in self.aggregations]
groupkeys = set(first) - set(aggregate_keys)
def key(doc):
return tuple(doc[key] for key in groupkeys)
return [make_aggregate(grouped, self.aggregations)
for grouped in group(documents, key)]
def aggregate_count(keyname):
"""
Straightforward sum of the given keyname.
"""
def inner(docs):
return sum(doc[keyname] for doc in docs)
return keyname, inner
def aggregate_rate(rate_key, count_key):
"""
Compute an aggregate rate for `rate_key` weighted according to
`count_key`.
"""
def inner(docs):
total = sum(doc[count_key] for doc in docs)
weighted_total = sum(doc[rate_key] * doc[count_key] for doc in docs)
total_rate = weighted_total / total
return total_rate
return rate_key, inner
def make_aggregate(docs, aggregations):
"""
Given `docs` and `aggregations` return a single document with the
aggregations applied.
"""
new_doc = dict(docs[0])
for keyname, aggregation_function in aggregations:
new_doc[keyname] = aggregation_function(docs)
return new_doc
def test_make_aggregate_sum():
"""
test_make_aggregate_sum()
Straight test that summation works over the field specified in
aggregate_count(keyname).
"""
from nose.tools import assert_equal
doc1 = {"a": 2, "b": 2, "c": 2, "visits": 201}
doc2 = {"a": 2, "b": 2, "c": 2, "visits": 103}
docs = [doc1, doc2]
aggregate_doc = make_aggregate(docs, [aggregate_count("visits")])
expected_aggregate = {"a": 2, "b": 2, "c": 2, "visits": 304}
assert_equal(aggregate_doc, expected_aggregate)
def test_make_aggregate_rate():
"""
test_make_aggregate_rate()
Test that aggregation works when there are two fields summed over, and
that rate aggregation works correctly. It isn't easy to test independently
because one must aggregate over the two fields to get the correct result.
"""
from nose.tools import assert_equal
doc1 = {"a": 2, "b": 2, "c": 2, "visits": 100, "rate": 0.25}
doc2 = {"a": 2, "b": 2, "c": 2, "visits": 100, "rate": 0.75}
docs = [doc1, doc2]
aggregate_doc = make_aggregate(docs, [aggregate_count("visits"),
aggregate_rate("rate", "visits")])
expected_aggregate = {
"a": 2, "b": 2, "c": 2,
"visits": 200,
"rate": (0.25 * 100 + 0.75 * 100) / (100 + 100)}
assert_equal(aggregate_doc, expected_aggregate)
def test_AggregateKeyPlugin():
"""
test_AggregateKeyPlugin()
Test that the AggregateKey class behaves as a plugin correctly.
"""
from nose.tools import assert_equal
doc1 = {"a": 2, "b": 2, "c": 2, "visits": 100, "rate": 0.25}
doc2 = {"a": 2, "b": 2, "c": 2, "visits": 100, "rate": 0.75}
docs = [doc1, doc2]
plugin = AggregateKey(aggregate_count("visits"),
aggregate_rate("rate", "visits"))
output_docs = plugin(docs)
expected_aggregate = {
"a": 2, "b": 2, "c": 2,
"visits": 200,
"rate": (0.25 * 100 + 0.75 * 100) / (100 + 100)}
assert_equal(output_docs, [expected_aggregate])
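A quick way to see what `group` yields — input sorted, keys discarded, each group returned as a list — restated as a self-contained sketch:

```python
from itertools import groupby

def group(iterable, key):
    # Sort first so groupby sees equal keys adjacently, then drop the key.
    for _, grouped in groupby(sorted(iterable, key=key), key=key):
        yield list(grouped)

words = ["apple", "bat", "cat", "avocado", "dog"]
by_initial = list(group(words, key=lambda w: w[0]))
```

The pre-sort matters: `itertools.groupby` only merges adjacent equal keys.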
|
alphagov/performanceplatform-collector | performanceplatform/collector/ga/plugins/aggregate.py | aggregate_count | python | def aggregate_count(keyname):
def inner(docs):
return sum(doc[keyname] for doc in docs)
return keyname, inner | Straightforward sum of the given keyname. | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/ga/plugins/aggregate.py#L50-L57 | null | from __future__ import division
from itertools import groupby
class AggregateKey(object):
"""
Given a set of documents, find all records with equal "key" where the "key"
is all values which are not being aggregated over.
For example:
doc1 = {"foo": "bar", "count": 2}
doc2 = {"foo": "bar", "count": 2}
plugin = AggregateKey(aggregate_count("count"))
plugin([doc1, doc2]) => {"foo": "bar", "count": 4}
Aggregate rates are also supported through
`aggregate_rate(rate_key, count_key)`.
"""
def __init__(self, *aggregations):
self.aggregations = aggregations
def __call__(self, documents):
first = documents[0]
aggregate_keys = [k for k, _ in self.aggregations]
groupkeys = set(first) - set(aggregate_keys)
def key(doc):
return tuple(doc[key] for key in groupkeys)
return [make_aggregate(grouped, self.aggregations)
for grouped in group(documents, key)]
def group(iterable, key):
"""
groupby which sorts the input, discards the key and returns the output
as a sequence of lists.
"""
for _, grouped in groupby(sorted(iterable, key=key), key=key):
yield list(grouped)
def aggregate_rate(rate_key, count_key):
"""
Compute an aggregate rate for `rate_key` weighted according to
`count_key`.
"""
def inner(docs):
total = sum(doc[count_key] for doc in docs)
weighted_total = sum(doc[rate_key] * doc[count_key] for doc in docs)
total_rate = weighted_total / total
return total_rate
return rate_key, inner
def make_aggregate(docs, aggregations):
"""
Given `docs` and `aggregations` return a single document with the
aggregations applied.
"""
new_doc = dict(docs[0])
for keyname, aggregation_function in aggregations:
new_doc[keyname] = aggregation_function(docs)
return new_doc
def test_make_aggregate_sum():
"""
test_make_aggregate_sum()
Straight test that summation works over the field specified in
aggregate_count(keyname).
"""
from nose.tools import assert_equal
doc1 = {"a": 2, "b": 2, "c": 2, "visits": 201}
doc2 = {"a": 2, "b": 2, "c": 2, "visits": 103}
docs = [doc1, doc2]
aggregate_doc = make_aggregate(docs, [aggregate_count("visits")])
expected_aggregate = {"a": 2, "b": 2, "c": 2, "visits": 304}
assert_equal(aggregate_doc, expected_aggregate)
def test_make_aggregate_rate():
"""
test_make_aggregate_rate()
Test that aggregation works when there are two fields summed over, and
that rate aggregation works correctly. It isn't easy to test independently
because one must aggregate over the two fields to get the correct result.
"""
from nose.tools import assert_equal
doc1 = {"a": 2, "b": 2, "c": 2, "visits": 100, "rate": 0.25}
doc2 = {"a": 2, "b": 2, "c": 2, "visits": 100, "rate": 0.75}
docs = [doc1, doc2]
aggregate_doc = make_aggregate(docs, [aggregate_count("visits"),
aggregate_rate("rate", "visits")])
expected_aggregate = {
"a": 2, "b": 2, "c": 2,
"visits": 200,
"rate": (0.25 * 100 + 0.75 * 100) / (100 + 100)}
assert_equal(aggregate_doc, expected_aggregate)
def test_AggregateKeyPlugin():
"""
test_AggregateKeyPlugin()
Test that the AggregateKey class behaves as a plugin correctly.
"""
from nose.tools import assert_equal
doc1 = {"a": 2, "b": 2, "c": 2, "visits": 100, "rate": 0.25}
doc2 = {"a": 2, "b": 2, "c": 2, "visits": 100, "rate": 0.75}
docs = [doc1, doc2]
plugin = AggregateKey(aggregate_count("visits"),
aggregate_rate("rate", "visits"))
output_docs = plugin(docs)
expected_aggregate = {
"a": 2, "b": 2, "c": 2,
"visits": 200,
"rate": (0.25 * 100 + 0.75 * 100) / (100 + 100)}
assert_equal(output_docs, [expected_aggregate])
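The `(keyname, function)` pair returned by `aggregate_count` is what `make_aggregate` later consumes; a standalone sketch of that closure shape:

```python
def aggregate_count(keyname):
    # Returns the key name plus a closure that sums that key across docs.
    def inner(docs):
        return sum(doc[keyname] for doc in docs)
    return keyname, inner

key, summer = aggregate_count("visits")
docs = [{"visits": 201}, {"visits": 103}]
```

Returning the key alongside the function lets the caller know which field to overwrite with the aggregated value.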
|
alphagov/performanceplatform-collector | performanceplatform/collector/ga/plugins/aggregate.py | aggregate_rate | python | def aggregate_rate(rate_key, count_key):
def inner(docs):
total = sum(doc[count_key] for doc in docs)
weighted_total = sum(doc[rate_key] * doc[count_key] for doc in docs)
total_rate = weighted_total / total
return total_rate
return rate_key, inner | Compute an aggregate rate for `rate_key` weighted according to
`count_key`. | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/ga/plugins/aggregate.py#L60-L71 | null | from __future__ import division
from itertools import groupby
class AggregateKey(object):
"""
Given a set of documents, find all records with equal "key" where the "key"
is all values which are not being aggregated over.
For example:
doc1 = {"foo": "bar", "count": 2}
doc2 = {"foo": "bar", "count": 2}
plugin = AggregateKey(aggregate_count("count"))
plugin([doc1, doc2]) => {"foo": "bar", "count": 4}
Aggregate rates are also supported through
`aggregate_rate(rate_key, count_key)`.
"""
def __init__(self, *aggregations):
self.aggregations = aggregations
def __call__(self, documents):
first = documents[0]
aggregate_keys = [k for k, _ in self.aggregations]
groupkeys = set(first) - set(aggregate_keys)
def key(doc):
return tuple(doc[key] for key in groupkeys)
return [make_aggregate(grouped, self.aggregations)
for grouped in group(documents, key)]
def group(iterable, key):
"""
groupby which sorts the input, discards the key and returns the output
as a sequence of lists.
"""
for _, grouped in groupby(sorted(iterable, key=key), key=key):
yield list(grouped)
def aggregate_count(keyname):
"""
Straightforward sum of the given keyname.
"""
def inner(docs):
return sum(doc[keyname] for doc in docs)
return keyname, inner
def make_aggregate(docs, aggregations):
"""
Given `docs` and `aggregations` return a single document with the
aggregations applied.
"""
new_doc = dict(docs[0])
for keyname, aggregation_function in aggregations:
new_doc[keyname] = aggregation_function(docs)
return new_doc
def test_make_aggregate_sum():
"""
test_make_aggregate_sum()
Straight test that summation works over the field specified in
aggregate_count(keyname).
"""
from nose.tools import assert_equal
doc1 = {"a": 2, "b": 2, "c": 2, "visits": 201}
doc2 = {"a": 2, "b": 2, "c": 2, "visits": 103}
docs = [doc1, doc2]
aggregate_doc = make_aggregate(docs, [aggregate_count("visits")])
expected_aggregate = {"a": 2, "b": 2, "c": 2, "visits": 304}
assert_equal(aggregate_doc, expected_aggregate)
def test_make_aggregate_rate():
"""
test_make_aggregate_rate()
Test that aggregation works when there are two fields summed over, and
that rate aggregation works correctly. It isn't easy to test independently
because one must aggregate over the two fields to get the correct result.
"""
from nose.tools import assert_equal
doc1 = {"a": 2, "b": 2, "c": 2, "visits": 100, "rate": 0.25}
doc2 = {"a": 2, "b": 2, "c": 2, "visits": 100, "rate": 0.75}
docs = [doc1, doc2]
aggregate_doc = make_aggregate(docs, [aggregate_count("visits"),
aggregate_rate("rate", "visits")])
expected_aggregate = {
"a": 2, "b": 2, "c": 2,
"visits": 200,
"rate": (0.25 * 100 + 0.75 * 100) / (100 + 100)}
assert_equal(aggregate_doc, expected_aggregate)
def test_AggregateKeyPlugin():
"""
test_AggregateKeyPlugin()
Test that the AggregateKey class behaves as a plugin correctly.
"""
from nose.tools import assert_equal
doc1 = {"a": 2, "b": 2, "c": 2, "visits": 100, "rate": 0.25}
doc2 = {"a": 2, "b": 2, "c": 2, "visits": 100, "rate": 0.75}
docs = [doc1, doc2]
plugin = AggregateKey(aggregate_count("visits"),
aggregate_rate("rate", "visits"))
output_docs = plugin(docs)
expected_aggregate = {
"a": 2, "b": 2, "c": 2,
"visits": 200,
"rate": (0.25 * 100 + 0.75 * 100) / (100 + 100)}
assert_equal(output_docs, [expected_aggregate])
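The calculation in `aggregate_rate` is a count-weighted mean of the per-document rates; sketched standalone:

```python
def aggregate_rate(rate_key, count_key):
    # The aggregate rate is the mean of the rates weighted by each doc's count.
    def inner(docs):
        total = sum(doc[count_key] for doc in docs)
        weighted = sum(doc[rate_key] * doc[count_key] for doc in docs)
        return weighted / total
    return rate_key, inner

_, rate_fn = aggregate_rate("rate", "visits")
docs = [{"visits": 100, "rate": 0.25}, {"visits": 300, "rate": 0.75}]
```

Here the 0.75 rate carries three times the weight, so the aggregate sits at 0.625 rather than the unweighted midpoint of 0.5.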
|
alphagov/performanceplatform-collector | performanceplatform/collector/ga/plugins/aggregate.py | make_aggregate | python | def make_aggregate(docs, aggregations):
new_doc = dict(docs[0])
for keyname, aggregation_function in aggregations:
new_doc[keyname] = aggregation_function(docs)
return new_doc | Given `docs` and `aggregations` return a single document with the
aggregations applied. | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/ga/plugins/aggregate.py#L74-L84 | null | from __future__ import division
from itertools import groupby
class AggregateKey(object):
"""
Given a set of documents, find all records with equal "key" where the "key"
is all values which are not being aggregated over.
For example:
doc1 = {"foo": "bar", "count": 2}
doc2 = {"foo": "bar", "count": 2}
plugin = AggregateKey(aggregate_count("count"))
plugin([doc1, doc2]) => {"foo": "bar", "count": 4}
Aggregate rates are also supported through
`aggregate_rate(rate_key, count_key)`.
"""
def __init__(self, *aggregations):
self.aggregations = aggregations
def __call__(self, documents):
first = documents[0]
aggregate_keys = [k for k, _ in self.aggregations]
groupkeys = set(first) - set(aggregate_keys)
def key(doc):
return tuple(doc[key] for key in groupkeys)
return [make_aggregate(grouped, self.aggregations)
for grouped in group(documents, key)]
def group(iterable, key):
"""
groupby which sorts the input, discards the key and returns the output
as a sequence of lists.
"""
for _, grouped in groupby(sorted(iterable, key=key), key=key):
yield list(grouped)
def aggregate_count(keyname):
"""
Straightforward sum of the given keyname.
"""
def inner(docs):
return sum(doc[keyname] for doc in docs)
return keyname, inner
def aggregate_rate(rate_key, count_key):
"""
Compute an aggregate rate for `rate_key` weighted according to
`count_key`.
"""
def inner(docs):
total = sum(doc[count_key] for doc in docs)
weighted_total = sum(doc[rate_key] * doc[count_key] for doc in docs)
total_rate = weighted_total / total
return total_rate
return rate_key, inner
def test_make_aggregate_sum():
"""
test_make_aggregate_sum()
Straight test that summation works over the field specified in
aggregate_count(keyname).
"""
from nose.tools import assert_equal
doc1 = {"a": 2, "b": 2, "c": 2, "visits": 201}
doc2 = {"a": 2, "b": 2, "c": 2, "visits": 103}
docs = [doc1, doc2]
aggregate_doc = make_aggregate(docs, [aggregate_count("visits")])
expected_aggregate = {"a": 2, "b": 2, "c": 2, "visits": 304}
assert_equal(aggregate_doc, expected_aggregate)
def test_make_aggregate_rate():
"""
test_make_aggregate_rate()
Test that aggregation works when there are two fields summed over, and
that rate aggregation works correctly. It isn't easy to test independently
because one must aggregate over the two fields to get the correct result.
"""
from nose.tools import assert_equal
doc1 = {"a": 2, "b": 2, "c": 2, "visits": 100, "rate": 0.25}
doc2 = {"a": 2, "b": 2, "c": 2, "visits": 100, "rate": 0.75}
docs = [doc1, doc2]
aggregate_doc = make_aggregate(docs, [aggregate_count("visits"),
aggregate_rate("rate", "visits")])
expected_aggregate = {
"a": 2, "b": 2, "c": 2,
"visits": 200,
"rate": (0.25 * 100 + 0.75 * 100) / (100 + 100)}
assert_equal(aggregate_doc, expected_aggregate)
def test_AggregateKeyPlugin():
"""
test_AggregateKeyPlugin()
Test that the AggregateKey class behaves as a plugin correctly.
"""
from nose.tools import assert_equal
doc1 = {"a": 2, "b": 2, "c": 2, "visits": 100, "rate": 0.25}
doc2 = {"a": 2, "b": 2, "c": 2, "visits": 100, "rate": 0.75}
docs = [doc1, doc2]
plugin = AggregateKey(aggregate_count("visits"),
aggregate_rate("rate", "visits"))
output_docs = plugin(docs)
expected_aggregate = {
"a": 2, "b": 2, "c": 2,
"visits": 200,
"rate": (0.25 * 100 + 0.75 * 100) / (100 + 100)}
assert_equal(output_docs, [expected_aggregate])
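Putting the pieces together — copy one document from the group, then overwrite each aggregated key with its computed total — is all `make_aggregate` does; a compact, self-contained sketch:

```python
def make_aggregate(docs, aggregations):
    # Copy the first doc, then overwrite each aggregated key with its total.
    new_doc = dict(docs[0])
    for keyname, fn in aggregations:
        new_doc[keyname] = fn(docs)
    return new_doc

def aggregate_count(keyname):
    def inner(docs):
        return sum(doc[keyname] for doc in docs)
    return keyname, inner

docs = [{"dept": "HMRC", "visits": 100}, {"dept": "HMRC", "visits": 150}]
merged = make_aggregate(docs, [aggregate_count("visits")])
```

Copying `docs[0]` works because, by construction, every non-aggregated field is equal across the group.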
|
alphagov/performanceplatform-collector | performanceplatform/collector/webtrends/keymetrics.py | Collector.date_range_for_webtrends | python | def date_range_for_webtrends(cls, start_at=None, end_at=None):
if start_at and end_at:
start_date = cls.parse_standard_date_string_to_date(
start_at)
end_date = cls.parse_standard_date_string_to_date(
end_at)
return [(
cls.parse_date_for_query(start_date),
cls.parse_date_for_query(end_date))]
else:
return [("current_hour-1", "current_hour-1")] | Get the start and end formatted for the query,
or the last hour if neither is specified.
Unlike reports, this does not aggregate periods
and so it is possible to just query a range and parse out the
individual hours. | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/webtrends/keymetrics.py#L15-L32 | [
"def parse_date_for_query(cls, date):\n return date.strftime(\"%Ym%md%d\")\n",
"def parse_standard_date_string_to_date(cls, date_string):\n if type(date_string) == datetime:\n return date_string\n return datetime.strptime(date_string, \"%Y-%m-%d\")\n"
] | class Collector(BaseCollector):
def __init__(self, credentials, query, start_at, end_at):
self.api_version = credentials.get('api_version')
self.base_url = credentials['keymetrics_url']
super(Collector, self).__init__(credentials, query, start_at, end_at)
@classmethod
def _make_request(self, start_at_for_webtrends, end_at_for_webtrends):
return requests_with_backoff.get(
url="{base_url}".format(
base_url=self.base_url),
auth=(self.user, self.password),
params={
'start_period': start_at_for_webtrends,
'end_period': end_at_for_webtrends,
'format': self.query_format,
"userealtime": True
}
)
def build_parser(self, data_set_config, options):
if self.api_version == 'v3':
return V3Parser(options, data_set_config['data-type'])
else:
return V2Parser(options, data_set_config['data-type'])
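The `called_functions` column shows that query dates are rendered with `strftime("%Ym%md%d")`; the range construction can be sketched as a plain function (no Webtrends client needed):

```python
from datetime import datetime

def parse_date_for_query(date):
    # Webtrends expects e.g. "2014m06d01" for 1 June 2014.
    return date.strftime("%Ym%md%d")

def date_range_for_webtrends(start_at=None, end_at=None):
    # With both endpoints given, format them; otherwise fall back to the
    # previous hour, as the collector above does.
    if start_at and end_at:
        return [(parse_date_for_query(start_at), parse_date_for_query(end_at))]
    return [("current_hour-1", "current_hour-1")]

rng = date_range_for_webtrends(datetime(2014, 6, 1), datetime(2014, 6, 2))
```

Note the `m` and `d` in the format string are literals, not directives.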
|
alphagov/performanceplatform-collector | performanceplatform/collector/arguments.py | parse_args | python | def parse_args(name="", args=None):
def _load_json_file(path):
with open(path) as f:
json_data = json.load(f)
json_data['path_to_json_file'] = path
return json_data
parser = argparse.ArgumentParser(description="%s collector for sending"
" data to the performance"
" platform" % name)
parser.add_argument('-c', '--credentials', dest='credentials',
type=_load_json_file,
help='JSON file containing credentials '
'for the collector',
required=True)
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('-l', '--collector', dest='collector_slug',
type=str,
help='Collector slug to query the API for the '
'collector config')
group.add_argument('-q', '--query', dest='query',
type=_load_json_file,
help='JSON file containing details '
'about the query to make '
'against the source API '
'and the target data-set')
parser.add_argument('-t', '--token', dest='token',
type=_load_json_file,
help='JSON file containing token '
'for the collector',
required=True)
parser.add_argument('-b', '--performanceplatform',
dest='performanceplatform',
type=_load_json_file,
help='JSON file containing the Performance Platform '
'config for the collector',
required=True)
parser.add_argument('-s', '--start', dest='start_at',
type=parse_date,
help='Date to start collection from')
parser.add_argument('-e', '--end', dest='end_at',
type=parse_date,
help='Date to end collection')
parser.add_argument('--console-logging', dest='console_logging',
action='store_true',
help='Output logging to the console rather than file')
parser.add_argument('--dry-run', dest='dry_run',
action='store_true',
help='Instead of pushing to the Performance Platform '
'the collector will print out what would have '
'been pushed')
parser.set_defaults(console_logging=False, dry_run=False)
args = parser.parse_args(args)
return args | Parse command line arguments for a collector
Returns an argparse.Namespace with 'config' and 'query' options | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/arguments.py#L6-L63 | null | import argparse
import json
from dateutil.parser import parse as parse_date
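The `--collector`/`--query` pair above relies on argparse's mutually exclusive groups; a trimmed-down sketch of that pattern (flag names here are illustrative and the JSON-file loading is omitted):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="demo collector")
    # Exactly one of the two sources of collector config must be supplied.
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument('-l', '--collector', dest='collector_slug')
    group.add_argument('-q', '--query', dest='query')
    parser.add_argument('--dry-run', dest='dry_run', action='store_true')
    parser.set_defaults(dry_run=False)
    return parser

args = build_parser().parse_args(['-l', 'ga-collector'])
```

Passing both `-l` and `-q`, or neither, makes `parse_args` exit with a usage error.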
|
alphagov/performanceplatform-collector | setup.py | _get_requirements | python | def _get_requirements(fname):
packages = _read(fname).split('\n')
packages = (p.strip() for p in packages)
packages = (p for p in packages if p and not p.startswith('#'))
return list(packages) | Create a list of requirements from the output of the pip freeze command
saved in a text file. | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/setup.py#L41-L49 | [
"def _read(fname, fail_silently=False):\n \"\"\"\n Read the content of the given file. The path is evaluated from the\n directory containing this file.\n \"\"\"\n try:\n filepath = os.path.join(os.path.dirname(__file__), fname)\n with io.open(filepath, 'rt', encoding='utf8') as f:\n ... | import io
import os
import re
from setuptools import setup, find_packages
from performanceplatform import collector
# multiprocessing and logging don't get on with each other in Python
# versions < 2.7.4. The following unused import is a workaround. See:
# http://bugs.python.org/issue15881#msg170215
import multiprocessing
def _read(fname, fail_silently=False):
"""
Read the content of the given file. The path is evaluated from the
directory containing this file.
"""
try:
filepath = os.path.join(os.path.dirname(__file__), fname)
with io.open(filepath, 'rt', encoding='utf8') as f:
return f.read()
except:
if not fail_silently:
raise
return ''
def _get_version():
data = _read(
'performanceplatform/collector/__init__.py'
)
version = re.search(
r"^__version__ = ['\"]([^'\"]*)['\"]",
data,
re.M | re.I
).group(1).strip()
return version
def _get_long_description():
return _read('README.rst')
if __name__ == '__main__':
setup(
name='performanceplatform-collector',
version=_get_version(),
packages=find_packages(exclude=['test*']),
namespace_packages=['performanceplatform'],
# metadata for upload to PyPI
author=collector.__author__,
author_email=collector.__author_email__,
maintainer='Government Digital Service',
url='https://github.com/alphagov/performanceplatform-collector',
description='performanceplatform-collector: tools for sending '
'data to the Performance Platform',
long_description=_get_long_description(),
license='MIT',
keywords='api data performance_platform',
install_requires=_get_requirements('requirements.txt'),
tests_require=_get_requirements('requirements_for_tests.txt'),
test_suite='nose.collector',
entry_points={
'console_scripts': [
'pp-collector=performanceplatform.collector.main:main'
]
}
)
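`_get_requirements` is just blank-line and comment filtering over the file text; the same logic applied to an in-memory string:

```python
def parse_requirements(text):
    # Strip whitespace, then drop empty lines and '#' comments.
    packages = (line.strip() for line in text.split('\n'))
    return [p for p in packages if p and not p.startswith('#')]

reqs = parse_requirements("""
# HTTP clients
requests==2.3.0

argparse
""")
```

Pinned and unpinned requirement lines pass through untouched; only comments and blanks are removed.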
|
alphagov/performanceplatform-collector | performanceplatform/utils/http_with_backoff.py | parse_reason | python | def parse_reason(response, content):
def first_error(data):
errors = data['error']['errors']
if len(errors) > 1:
# we have more than one error. We should capture that?
logging.info('Received {} errors'.format(len(errors)))
return errors[0]
try:
json_data = json.loads(content)
return first_error(json_data)['reason']
except:
return response.reason | Google returns a JSON document describing the error. We should parse this
and extract the error reason. See
https://developers.google.com/analytics/devguides/reporting/core/v3/coreErrors | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/utils/http_with_backoff.py#L29-L46 | [
"def first_error(data):\n errors = data['error']['errors']\n if len(errors) > 1:\n # we have more than one error. We should capture that?\n logging.info('Received {} errors'.format(len(errors)))\n return errors[0]\n"
] | from httplib2 import *
from httplib2 import DEFAULT_MAX_REDIRECTS
import json
import logging
import time
from performanceplatform.collector.logging_setup import (
extra_fields_from_exception)
_MAX_RETRIES = 5
def abort_message(method, uri, status, reason):
return (
'{} request for {} failed with code {}, '
'reason {}. Aborting. See '
'https://developers.google.com'
'/analytics/devguides/reporting/core/v3/coreErrors'
' for more details'.format(
method,
uri,
status,
reason))
def notify_operator(method, uri, status, reason):
logging.error(abort_message(method, uri, status, reason))
class ResponseAction:
def __init__(self):
self.should_retry = False
self.retry_info = ''
def GABackoff(response, content, method, uri):
response_action = ResponseAction()
if response.status not in [400, 401, 403, 503]:
return response_action
reason = parse_reason(response, content)
if response.status in [400, 401]:
# --------------------------------------------------------------------
# Do not retry without fixing the problem
# notify someone with appropriate actionable data
# --------------------------------------------------------------------
notify_operator(method, uri, response.status, reason)
return response_action
if response.status == 503:
# --------------------------------------------------------------------
# Server returned an error. Do not retry this query more than once.
# --------------------------------------------------------------------
# Not sure we can action anything as a result of this? Just note it.
logging.info(abort_message(method, uri, response.status, reason))
return response_action
if response.status == 403:
if reason in [
"insufficientPermissions",
"dailyLimitExceeded",
"usageLimits.userRateLimitExceededUnreg",
]:
# ----------------------------------------------------------------
# insufficientPermissions indicates that the user does not have
# sufficient permissions for the entity specified in the query.
#
# Do not retry without fixing the problem. You need to get
# sufficient permissions to perform the operation on the specified
# entity.
# ----------------------------------------------------------------
# dailyLimitExceeded indicates that user has exceeded the daily
# quota (either per project or per view (profile)).
#
# Do not retry without fixing the problem. You have used up your
# daily quota. See
# https://developers.google.com/analytics/devguides/reporting/core/v3/limits-quotas
# ----------------------------------------------------------------
# usageLimits.userRateLimitExceededUnreg indicates that the
# application needs to be registered in the Google Developers
# Console.
#
# Do not retry without fixing the problem. You need to register in
# Developers Console to get the full API quota.
# See https://console.developers.google.com/
# ----------------------------------------------------------------
notify_operator(method, uri, response.status, reason)
return response_action
elif reason in ["userRateLimitExceeded", "quotaExceeded"]:
# ----------------------------------------------------------------
# userRateLimitExceeded indicates that the user rate limit has
# been exceeded. The maximum rate limit is 10 qps per IP address.
# The default value set in Google Developers Console is 1 qps per
# IP address. You can increase this limit in the Google Developers
# Console to a maximum of 10 qps.
#
# Retry using exponential back-off. You need to slow down the rate
# at which you are sending the requests
# ----------------------------------------------------------------
# quotaExceeded indicates that the 10 concurrent requests per view
# (profile) in the Core Reporting API has been reached.
#
# Retry using exponential back-off. You need to wait for at least
# one in-progress request for this view (profile) to complete.
# ----------------------------------------------------------------
# fall through to retry handling below
response_action.should_retry = True
else:
# Unhandled 403 status
notify_operator(method, uri, response.status, reason)
return response_action
# If we got this far, then we're going to retry the request. Capture more
# detail about why.
retry_info = ('{} request for {} failed with code {}, '
'reason {}'.format(method,
uri,
response.status,
reason))
response_action.retry_info = retry_info
return response_action
class HttpWithBackoff(Http):
def __init__(self, cache=None, timeout=None,
proxy_info=None,
ca_certs=None, disable_ssl_certificate_validation=False,
backoff_strategy_predicate=GABackoff):
dscv = disable_ssl_certificate_validation
super(HttpWithBackoff, self).__init__(
cache=cache, timeout=timeout,
proxy_info=proxy_info,
ca_certs=ca_certs,
disable_ssl_certificate_validation=dscv)
if backoff_strategy_predicate:
self._backoff_strategy_predicate = backoff_strategy_predicate
else:
self._backoff_strategy_predicate = GABackoff
def request(self,
uri,
method="GET",
body=None,
headers=None,
redirections=DEFAULT_MAX_REDIRECTS,
connection_type=None):
delay = 10
for n in range(_MAX_RETRIES):
response, content = super(HttpWithBackoff, self).request(
uri,
method,
body,
headers,
redirections,
connection_type)
response_action = self._backoff_strategy_predicate(
response, content, method, uri)
if response_action.should_retry:
retry_info = response_action.retry_info
retry_info += '(Attempt {} of {})'.format(n + 1, _MAX_RETRIES)
if n + 1 < _MAX_RETRIES:
retry_info += ' Retrying in {} seconds...'.format(delay)
logging.info(retry_info)
time.sleep(delay)
delay *= 2
else:
return response, content
# we made _MAX_RETRIES requests but none worked
logging.error(
'Max retries exceeded for {}'.format(uri),
# HttpLibException doesn't exist but this fits with the format
# for logging other request failures
extra={
'exception_class': 'HttpLibException',
'exception_message': '{}: {}'.format(response.status,
response.reason)
}
)
return response, content
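The `request` loop above retries up to `_MAX_RETRIES` times, doubling the delay after each retryable failure. A minimal standalone sketch of the same exponential back-off pattern (function and parameter names are hypothetical, not part of this module):

```python
import time


def retry_with_backoff(do_request, should_retry, max_retries=5, initial_delay=10):
    """Call do_request() until should_retry(result) is False or retries run out.

    Mirrors HttpWithBackoff.request: the delay starts at `initial_delay`
    seconds and doubles after every retryable failure.
    """
    delay = initial_delay
    result = None
    for attempt in range(max_retries):
        result = do_request()
        if not should_retry(result):
            return result
        if attempt + 1 < max_retries:
            time.sleep(delay)
            delay *= 2
    # All retries exhausted; return the last result, as the class above does.
    return result
```

With a stubbed request that succeeds on the third call and the default delay, this sleeps 10 s and then 20 s before returning.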
|
alphagov/performanceplatform-collector | performanceplatform/utils/data_parser.py | DataParser.get_data | python | def get_data(self, special_fields=None):
docs = build_document_set(self.data, self.data_type, self.mappings,
special_fields,
self.idMapping,
additionalFields=self.additionalFields)
if self.plugins:
docs = run_plugins(self.plugins, list(docs))
return docs | special_fields: "a dict of data specific to collector type
that should be added to the data returned by the parser.
This will also be operated on by idMapping, mappings and
plugins"
This method loops through the data provided to the instance.
For each item it will return a dict of the format:
{
"_timestamp": "the item start_date",
"dataType": "self.data_type for the instance",
...
"any additional fields": "from self.additionalFields",
"any special_fields": "from special fields argument",
"any item dimensions": "from item.dimensions",
...
mappings changing keys in this dict from self.mappings
are then applied on the above
...
"_humanId": "derived from either the values corresponding to
idMapping concatenated or the data_type, item.start_date,
timeSpan and item.dimensions values if any concatenated"
"_id": "derived from either the values corresponding to
idMapping concatenated or the data_type, item.start_date,
timeSpan and item.dimensions values if any concatenated and
then base64 encoded"
} | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/utils/data_parser.py#L45-L82 | [
"def build_document_set(results, data_type, mappings, special_fields,\n idMapping=None,\n additionalFields={}):\n if special_fields and len(results) != len(special_fields):\n raise ValueError(\n \"There must be same number of special fields as results... | class DataParser(object):
def __init__(self, data, options, data_type):
# it would be nice to have some json schemas or something to validate
# for now here are some docs
"""
data: Can be any array of dicts
options: {
mappings: "a dict: converts keys matching key of the mappings dict
to the value in the mapping dict",
additionalFields: "a dict:
key value pairs will be added into the returned data",
idMapping: "a list of keys or single key string: for each key
the corresponding values will be concatenated in order to create
a unique _humanId field on the returned data and base64
encoded in order to create an _id field. If no idMapping
is provided then the item start_date and any available
'dimensions' will be used instead",
dataType: "a value to be set on a dataType attribute.
Overrides the data_type argument"
plugins: "a list of string respresentations of python classes being
instantiated. e.g. ComputeIdFrom('_timestamp', '_timespan').
To be run on the passed in data to mutate it.
See performanceplatform.collector.ga.plugins for more details"
}
data_type: "a string - the data_type to be set as a data_type
attribute on the returned data unless it is overridden in options"
"""
self.data = list(data)
if "plugins" in options:
self.plugins = options["plugins"]
else:
self.plugins = None
self.data_type = options.get('dataType', data_type)
self.mappings = options.get("mappings", {})
self.idMapping = options.get("idMapping", None)
self.additionalFields = options.get('additionalFields', {})
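The `_humanId`/`_id` derivation described in the docstrings above — concatenate the id-field values for the human-readable id, then base64-encode that for `_id` — can be sketched as follows (the helper name and the exact concatenation are illustrative assumptions, not the collector's verbatim implementation):

```python
import base64


def make_ids(record, id_fields):
    # Concatenate the values of the configured id fields to build the
    # human-readable id, then base64-encode it (URL-safe) for the opaque _id.
    human_id = ''.join(str(record[field]) for field in id_fields)
    encoded_id = base64.urlsafe_b64encode(human_id.encode('utf-8')).decode('ascii')
    return {'_humanId': human_id, '_id': encoded_id}
```

Decoding `_id` recovers `_humanId` exactly, which is what makes the encoded id stable for repeated collections of the same period.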
|
alphagov/performanceplatform-collector | performanceplatform/collector/webtrends/reports.py | Collector.date_range_for_webtrends | python | def date_range_for_webtrends(cls, start_at=None, end_at=None):
if start_at and end_at:
start_date = cls.parse_standard_date_string_to_date(
start_at)
end_date = cls.parse_standard_date_string_to_date(
end_at)
numdays = (end_date - start_date).days + 1
start_dates = [end_date - timedelta(days=x)
for x in reversed(range(0, numdays))]
date_range = []
for i, date in enumerate(start_dates):
query_date = cls.parse_date_for_query(date)
date_range.append((query_date, query_date))
return date_range
else:
return [("current_day-1", "current_day-1")] | Get the day dates in between start and end formatted for query.
This returns dates inclusive, e.g. the final pair is (end_at, end_at) | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/webtrends/reports.py#L16-L35 | [
"def parse_date_for_query(cls, date):\n return date.strftime(\"%Ym%md%d\")\n",
"def parse_standard_date_string_to_date(cls, date_string):\n if type(date_string) == datetime:\n return date_string\n return datetime.strptime(date_string, \"%Y-%m-%d\")\n"
] | class Collector(BaseCollector):
def __init__(self, credentials, query, start_at, end_at):
self.api_version = credentials.get('api_version')
self.base_url = credentials['reports_url']
self.report_id = query.pop('report_id')
super(Collector, self).__init__(credentials, query, start_at, end_at)
@classmethod
def _make_request(self, start_at_for_webtrends, end_at_for_webtrends):
return requests_with_backoff.get(
url="{base_url}{report_id}".format(
base_url=self.base_url,
report_id=self.report_id),
auth=(self.user, self.password),
params={
'start_period': start_at_for_webtrends,
'end_period': end_at_for_webtrends,
'format': self.query_format
}
)
def build_parser(self, data_set_config, options):
if self.api_version == 'v3':
return V3Parser(options, data_set_config['data-type'])
else:
return V2Parser(options, data_set_config['data-type'])
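`date_range_for_webtrends` above expands a start/end pair into one `(day, day)` tuple per day, inclusive of both endpoints. A self-contained sketch of that expansion (assuming ISO `YYYY-MM-DD` string inputs; the real method also reformats each date for the WebTrends query):

```python
from datetime import datetime, timedelta


def inclusive_day_pairs(start_at, end_at):
    # Parse the endpoints, then emit one (day, day) tuple for every day in
    # the closed interval [start_at, end_at].
    start = datetime.strptime(start_at, '%Y-%m-%d').date()
    end = datetime.strptime(end_at, '%Y-%m-%d').date()
    numdays = (end - start).days + 1  # +1 makes the range inclusive
    days = [start + timedelta(days=i) for i in range(numdays)]
    return [(day, day) for day in days]
```

So a three-day span yields three pairs, and each pair's start equals its end, matching the `(query_date, query_date)` tuples built above.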
|
alphagov/performanceplatform-collector | performanceplatform/collector/gcloud/aggregate.py | get_cumulative_spend | python | def get_cumulative_spend(key):
query = ('ROUND(SUM(total_ex_vat), 2) AS total '
'FROM {table} '
'WHERE date <= "{year}-{month:02}-01" '
'AND lot="{lot}" '
'AND customer_sector="{sector}" '
'AND supplier_type="{sme_large}"'.format(
table=_RAW_SALES_TABLE,
year=key.year,
month=key.month,
lot=key.lot,
sector=key.sector,
sme_large=key.sme_large))
logging.debug(query)
result = scraperwiki.sqlite.select(query)
logging.debug(result)
value = result[0]['total']
return float(result[0]['total']) if value is not None else 0.0 | Get the sum of spending for this category up to and including the given
month. | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/gcloud/aggregate.py#L130-L151 | null | #!/usr/bin/env python
# encoding: utf-8
from __future__ import unicode_literals
import itertools
import datetime
import logging
from collections import OrderedDict, namedtuple
import scraperwiki
from table_names import _RAW_SALES_TABLE
_SECTOR_NAME = {
'central-gov': 'Central government',
'local-gov': 'Local government',
'not-for-profit': 'Not for profit',
'wider-public-sector': 'Other wider public sector',
'unknown-sector': 'Unknown',
}
_SME_LARGE_NAME = {
'sme': 'Small and medium enterprises',
'large': 'Large enterprises',
}
_LOT_NAME = {
'iaas': 'Cloud Infrastructure as a Service (IaaS)',
'paas': 'Cloud Platform as a Service (PaaS)',
'saas': 'Cloud Software as a Service (SaaS)',
'css': 'Cloud Support Services (CSS)',
}
class SpendingGroupKey(namedtuple('SpendingGroupKey',
'month,year,lot,sector,sme_large')):
"""
A 'spending group' is a specific combination of month, year, lot (type),
government sector and sme/large.
"""
def __str__(self):
return str(self.key())
def __unicode__(self):
return unicode(self.key())
def key(self):
return "{year}_{month:02d}_lot{lot}_{sector}_{sme_large}".format(
year=self.year, month=self.month, lot=self.lot, sector=self.sector,
sme_large=self.sme_large)
class CompanyTypeKey(namedtuple('CompanyTypeKey',
'month,year,sme_large')):
def __str__(self):
return str(self.key())
def __unicode__(self):
return unicode(self.key())
def key(self):
return "{year}_{month:02d}_{sme_large}".format(
year=self.year, month=self.month, sme_large=self.sme_large)
def calculate_aggregated_sales(keys):
for key in keys:
monthly_spend, transaction_count = get_monthly_spend_and_count(key)
cumulative_spend = get_cumulative_spend(key)
cumulative_count = get_cumulative_count(key)
yield make_aggregated_sales_row(key, monthly_spend, transaction_count,
cumulative_spend, cumulative_count)
def make_aggregated_sales_row(key, monthly_total, transaction_count,
cumulative_spend, cumulative_count):
return OrderedDict([
('_id', unicode(key)),
('_timestamp', datetime.datetime(key.year, key.month, 1)),
('lot', _LOT_NAME[key.lot]),
('sector', _SECTOR_NAME[key.sector]),
('sme_large', _SME_LARGE_NAME[key.sme_large]),
('monthly_spend', monthly_total),
('count', transaction_count),
('cumulative_spend', cumulative_spend),
('cumulative_count', cumulative_count),
])
def get_distinct_month_years():
for row in scraperwiki.sqlite.select(
'month, year FROM {table} GROUP BY year, month'
' ORDER BY year, month'.format(table=_RAW_SALES_TABLE)):
yield (row['month'], row['year'])
def get_monthly_spend_and_count(key):
query = ('ROUND(SUM(total_ex_vat), 2) AS total_spend, '
'COUNT(*) AS invoice_count '
'FROM {table} '
'WHERE year={year} '
'AND month={month} '
'AND lot="{lot}" '
'AND customer_sector="{sector}" '
'AND supplier_type="{sme_large}"'.format(
table=_RAW_SALES_TABLE,
year=key.year,
month=key.month,
lot=key.lot,
sector=key.sector,
sme_large=key.sme_large))
logging.debug(query)
result = scraperwiki.sqlite.select(query)[0]
logging.debug(result)
spend, count = 0.0, 0
if result['total_spend'] is not None:
spend = float(result['total_spend'])
if result['invoice_count'] is not None:
count = int(result['invoice_count'])
return (spend, count)
def get_cumulative_count(key):
"""
Get the count of invoices for this category up to and including the given
month.
"""
query = ('COUNT(*) AS total '
'FROM {table} '
'WHERE date <= "{year}-{month:02}-01" '
'AND lot="{lot}" '
'AND customer_sector="{sector}" '
'AND supplier_type="{sme_large}"'.format(
table=_RAW_SALES_TABLE,
year=key.year,
month=key.month,
lot=key.lot,
sector=key.sector,
sme_large=key.sme_large))
logging.debug(query)
result = scraperwiki.sqlite.select(query)
logging.debug(result)
value = result[0]['total']
return int(result[0]['total']) if value is not None else 0
def make_spending_group_keys():
month_and_years = get_distinct_month_years()
all_lots = _LOT_NAME.keys() # ie ['iaas', 'paas', 'saas', 'css']
all_sectors = _SECTOR_NAME.keys() # eg ['central-gov', 'local-gov']
all_sme_large = _SME_LARGE_NAME.keys() # ie ['sme', 'large']
for (month, year), lot, sector, sme_large in itertools.product(
month_and_years, all_lots, all_sectors, all_sme_large):
yield SpendingGroupKey(month=month, year=year, lot=lot, sector=sector,
sme_large=sme_large)
def make_company_type_keys():
month_and_years = get_distinct_month_years()
all_sme_large = _SME_LARGE_NAME.keys() # ie ['sme', 'large']
for (month, year), sme_large in itertools.product(
month_and_years, all_sme_large):
yield CompanyTypeKey(month=month, year=year, sme_large=sme_large)
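`get_cumulative_spend` and `get_cumulative_count` above run SQL aggregates over the raw sales table, filtering on lot, sector, and supplier type with a date cutoff at the first of the key's month. A pure-Python sketch of that cumulative-to-month logic over an in-memory list of rows (the row shape is an assumption based on the `process_csv` output elsewhere in this repo):

```python
import datetime


def cumulative_spend_and_count(rows, key):
    # Sum and count every row in this lot/sector/supplier-type category whose
    # date falls on or before the first day of the key's month.
    cutoff = datetime.date(key['year'], key['month'], 1)
    matching = [
        row for row in rows
        if row['date'] <= cutoff
        and row['lot'] == key['lot']
        and row['customer_sector'] == key['sector']
        and row['supplier_type'] == key['sme_large']
    ]
    spend = round(sum(row['total_ex_vat'] for row in matching), 2)
    return spend, len(matching)
```

This mirrors the `SUM(total_ex_vat)` / `COUNT(*)` pair with the `date <= "{year}-{month:02}-01"` predicate, without needing a sqlite table.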
|
alphagov/performanceplatform-collector | performanceplatform/collector/main.py | _get_query_params | python | def _get_query_params(query):
query_params = OrderedDict(sorted(query['query'].items()))
return ' '.join(['{}={}'.format(k, v) for k, v in query_params.items()]) | >>> _get_query_params({'query': {'a': 1, 'c': 3, 'b': 5}})
'a=1 b=5 c=3' | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/main.py#L25-L31 | null | import os
import logging
import importlib
from collections import OrderedDict
from performanceplatform.collector import arguments
from performanceplatform.collector.logging_setup import (
set_up_logging, close_down_logging)
from performanceplatform.utils.collector import get_config
def _get_data_group(query):
return query['data-set']['data-group']
def _get_data_type(query):
return query['data-set']['data-type']
def _get_data_group_data_type(query):
return '{}/{}'.format(_get_data_group(query), _get_data_type(query))
def _get_path_to_json_file(query):
return query['path_to_json_file']
def make_extra_json_fields(args):
"""
From the parsed command-line arguments, generate a dictionary of additional
fields to be inserted into JSON logs (logstash_formatter module)
"""
extra_json_fields = {
'data_group': _get_data_group(args.query),
'data_type': _get_data_type(args.query),
'data_group_data_type': _get_data_group_data_type(args.query),
'query': _get_query_params(args.query),
}
if "path_to_json_file" in args.query:
extra_json_fields['path_to_query'] = _get_path_to_json_file(args.query)
return extra_json_fields
def logging_for_entrypoint(
entrypoint, json_fields, logfile_path, logfile_name):
if logfile_path is None:
logfile_path = os.path.join(
os.path.dirname(os.path.realpath(__file__)), '..', '..', 'log')
loglevel = getattr(logging, os.environ.get('LOGLEVEL', 'INFO').upper())
set_up_logging(
entrypoint, loglevel, logfile_path, logfile_name, json_fields)
def _log_collector_instead_of_running(entrypoint, args):
logged_args = {
'start_at': args.start_at,
'end_at': args.end_at,
'query': {k: args.query[k] for k in ('data-set', 'query', 'options')}
}
logging.info(
'Collector {} NOT run with the following {}'.format(entrypoint,
logged_args))
def merge_performanceplatform_config(
performanceplatform, data_set, token, dry_run=False):
return {
'url': '{0}/{1}/{2}'.format(
performanceplatform['backdrop_url'],
data_set['data-group'],
data_set['data-type']
),
'token': token['token'],
'data-group': data_set['data-group'],
'data-type': data_set['data-type'],
'dry_run': dry_run
}
def _run_collector(entrypoint, args, logfile_path=None, logfile_name=None):
if args.console_logging:
logging.basicConfig(level=logging.INFO)
else:
logging_for_entrypoint(
entrypoint,
make_extra_json_fields(args),
logfile_path,
logfile_name
)
if os.environ.get('DISABLE_COLLECTORS', 'false') == 'true':
_log_collector_instead_of_running(entrypoint, args)
else:
entrypoint_module = importlib.import_module(entrypoint)
logging.info('Running collection into {}/{}'.format(
args.query.get('data-set')['data-group'],
args.query.get('data-set')['data-type']))
entrypoint_module.main(
args.credentials,
merge_performanceplatform_config(
args.performanceplatform,
args.query['data-set'],
args.token,
args.dry_run
),
args.query['query'],
args.query['options'],
args.start_at,
args.end_at
)
if not args.console_logging:
close_down_logging()
def main():
args = arguments.parse_args('Performance Platform Collector')
if args.collector_slug:
args.query = get_config(args.collector_slug, args.performanceplatform)
_run_collector(args.query['entrypoint'], args)
if __name__ == '__main__':
main()
|
alphagov/performanceplatform-collector | performanceplatform/collector/main.py | make_extra_json_fields | python | def make_extra_json_fields(args):
extra_json_fields = {
'data_group': _get_data_group(args.query),
'data_type': _get_data_type(args.query),
'data_group_data_type': _get_data_group_data_type(args.query),
'query': _get_query_params(args.query),
}
if "path_to_json_file" in args.query:
extra_json_fields['path_to_query'] = _get_path_to_json_file(args.query)
return extra_json_fields | From the parsed command-line arguments, generate a dictionary of additional
fields to be inserted into JSON logs (logstash_formatter module) | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/main.py#L38-L51 | [
"def _get_data_group(query):\n return query['data-set']['data-group']\n",
"def _get_data_type(query):\n return query['data-set']['data-type']\n",
"def _get_data_group_data_type(query):\n return '{}/{}'.format(_get_data_group(query), _get_data_type(query))\n",
"def _get_query_params(query):\n \"\"\... | import os
import logging
import importlib
from collections import OrderedDict
from performanceplatform.collector import arguments
from performanceplatform.collector.logging_setup import (
set_up_logging, close_down_logging)
from performanceplatform.utils.collector import get_config
def _get_data_group(query):
return query['data-set']['data-group']
def _get_data_type(query):
return query['data-set']['data-type']
def _get_data_group_data_type(query):
return '{}/{}'.format(_get_data_group(query), _get_data_type(query))
def _get_query_params(query):
"""
>>> _get_query_params({'query': {'a': 1, 'c': 3, 'b': 5}})
'a=1 b=5 c=3'
"""
query_params = OrderedDict(sorted(query['query'].items()))
return ' '.join(['{}={}'.format(k, v) for k, v in query_params.items()])
def _get_path_to_json_file(query):
return query['path_to_json_file']
def logging_for_entrypoint(
entrypoint, json_fields, logfile_path, logfile_name):
if logfile_path is None:
logfile_path = os.path.join(
os.path.dirname(os.path.realpath(__file__)), '..', '..', 'log')
loglevel = getattr(logging, os.environ.get('LOGLEVEL', 'INFO').upper())
set_up_logging(
entrypoint, loglevel, logfile_path, logfile_name, json_fields)
def _log_collector_instead_of_running(entrypoint, args):
logged_args = {
'start_at': args.start_at,
'end_at': args.end_at,
'query': {k: args.query[k] for k in ('data-set', 'query', 'options')}
}
logging.info(
'Collector {} NOT run with the following {}'.format(entrypoint,
logged_args))
def merge_performanceplatform_config(
performanceplatform, data_set, token, dry_run=False):
return {
'url': '{0}/{1}/{2}'.format(
performanceplatform['backdrop_url'],
data_set['data-group'],
data_set['data-type']
),
'token': token['token'],
'data-group': data_set['data-group'],
'data-type': data_set['data-type'],
'dry_run': dry_run
}
def _run_collector(entrypoint, args, logfile_path=None, logfile_name=None):
if args.console_logging:
logging.basicConfig(level=logging.INFO)
else:
logging_for_entrypoint(
entrypoint,
make_extra_json_fields(args),
logfile_path,
logfile_name
)
if os.environ.get('DISABLE_COLLECTORS', 'false') == 'true':
_log_collector_instead_of_running(entrypoint, args)
else:
entrypoint_module = importlib.import_module(entrypoint)
logging.info('Running collection into {}/{}'.format(
args.query.get('data-set')['data-group'],
args.query.get('data-set')['data-type']))
entrypoint_module.main(
args.credentials,
merge_performanceplatform_config(
args.performanceplatform,
args.query['data-set'],
args.token,
args.dry_run
),
args.query['query'],
args.query['options'],
args.start_at,
args.end_at
)
if not args.console_logging:
close_down_logging()
def main():
args = arguments.parse_args('Performance Platform Collector')
if args.collector_slug:
args.query = get_config(args.collector_slug, args.performanceplatform)
_run_collector(args.query['entrypoint'], args)
if __name__ == '__main__':
main()
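`merge_performanceplatform_config` above simply joins the backdrop URL with the data group and data type and carries the token through. A self-contained restatement of that merge (function name and example values are hypothetical):

```python
def build_data_set_config(backdrop_url, data_set, token, dry_run=False):
    # Hypothetical restatement of merge_performanceplatform_config:
    # the data-set URL is <backdrop_url>/<data-group>/<data-type>.
    return {
        'url': '{0}/{1}/{2}'.format(
            backdrop_url, data_set['data-group'], data_set['data-type']),
        'token': token['token'],
        'data-group': data_set['data-group'],
        'data-type': data_set['data-type'],
        'dry_run': dry_run,
    }
```

For example, a `carers-allowance`/`volumetrics` data set against a backdrop at `https://example.performance.gov.uk/data` produces the URL `https://example.performance.gov.uk/data/carers-allowance/volumetrics`.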
|
alphagov/performanceplatform-collector | performanceplatform/collector/gcloud/sales_parser.py | process_csv | python | def process_csv(f):
reader = unicodecsv.DictReader(f, encoding=_ENCODING)
for row in reader:
month, year = parse_month_year(row['Return Month'])
yield OrderedDict([
('customer_name', row['CustomerName']),
('supplier_name', row['SupplierName']),
('month', month),
('year', year),
('date', datetime.date(year, month, 1)),
('total_ex_vat', parse_price(row['EvidencedSpend'])),
('lot', parse_lot_name(row['LotDescription'])),
('customer_sector', parse_customer_sector(row['Sector'])),
('supplier_type', parse_sme_or_large(row['SME or Large'])),
]) | Take a file-like object and yield OrderedDicts to be inserted into raw
spending database. | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/gcloud/sales_parser.py#L71-L90 | [
"def parse_month_year(date_string):\n \"\"\"\n >>> parse_month_year('01/10/2012')\n (10, 2012)\n \"\"\"\n match = re.match('\\d{2}/(?P<month>\\d{2})/(?P<year>\\d{4})$',\n date_string.lower())\n if not match:\n raise ValueError(\"Not format 'dd/mm/yyyy': '{}'\".format(dat... | #!/usr/bin/env python
# encoding: utf-8
from __future__ import unicode_literals
import datetime
import logging
import re
from collections import OrderedDict
import lxml.html
import unicodecsv
from dshelpers import download_url
_ENCODING = 'cp1252'
UNKNOWN_SECTOR_KEY = 'unknown-sector'
_SECTOR_KEY = {
'Central Government': 'central-gov',
'Local Government': 'local-gov',
'Not for Profit': 'not-for-profit',
'Health': 'wider-public-sector',
'Fire and Rescue': 'wider-public-sector',
'Devolved Administrations': 'wider-public-sector',
'Education': 'wider-public-sector',
'Police': 'wider-public-sector',
'Defence': 'wider-public-sector',
'Wider Public Sector': 'wider-public-sector',
'Utility (Historic)': 'wider-public-sector',
'Private Sector': UNKNOWN_SECTOR_KEY,
}
_SME_LARGE_KEY = {
'SME': 'sme',
'Large': 'large',
}
_LOT_KEY = {
'Infrastructure as a Service (IaaS)': 'iaas',
'Platform as a Service (PaaS)': 'paas',
'Software as a Service (SaaS)': 'saas',
'Specialist Cloud Services': 'css'
}
def get_latest_csv_url(index_page_url):
"""
Download the index page and extract the URL of the current CSV.
"""
index_fobj = download_url(index_page_url)
csv_url = parse_latest_csv_url(index_fobj)
logging.info("Latest CSV URL: {}".format(csv_url))
return csv_url
def parse_latest_csv_url(index_fobj):
root = lxml.html.fromstring(index_fobj.read())
elements = root.xpath("//*[contains(text(), 'Current')]"
"/a[contains(@href, '.csv')]/@href")
assert len(elements) == 1, elements
return elements[0]
def parse_month_year(date_string):
"""
>>> parse_month_year('01/10/2012')
(10, 2012)
"""
match = re.match('\d{2}/(?P<month>\d{2})/(?P<year>\d{4})$',
date_string.lower())
if not match:
raise ValueError("Not format 'dd/mm/yyyy': '{}'".format(date_string))
month = int(match.group('month'))
year = int(match.group('year'))
return month, year
def parse_price(price_string):
"""
>>> parse_price('£16,000.00')
16000.0
>>> parse_price('-£16,000.00')
-16000.0
"""
match = re.match('(?P<amount>-?£[\d,]+(\.\d{2})?)', price_string)
if not match:
raise ValueError("Charge not in format '(-)£16,000(.00)' : {}".format(
repr(price_string)))
return float(re.sub('[£,]', '', match.group('amount')))
def parse_customer_sector(raw_sector):
if raw_sector not in _SECTOR_KEY:
logging.warning('Unknown sector: "{}"'.format(raw_sector))
return _SECTOR_KEY.get(raw_sector, UNKNOWN_SECTOR_KEY)
def parse_sme_or_large(raw_sme_large):
try:
return _SME_LARGE_KEY[raw_sme_large]
except KeyError:
raise RuntimeError('Unknown sme/large: "{}"'.format(raw_sme_large))
def parse_lot_name(raw_lot_name):
return _LOT_KEY[raw_lot_name]
|
alphagov/performanceplatform-collector | performanceplatform/collector/gcloud/sales_parser.py | parse_month_year | python | def parse_month_year(date_string):
match = re.match('\d{2}/(?P<month>\d{2})/(?P<year>\d{4})$',
date_string.lower())
if not match:
raise ValueError("Not format 'dd/mm/yyyy': '{}'".format(date_string))
month = int(match.group('month'))
year = int(match.group('year'))
return month, year | >>> parse_month_year('01/10/2012')
(10, 2012) | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/gcloud/sales_parser.py#L93-L104 | null | #!/usr/bin/env python
# encoding: utf-8
from __future__ import unicode_literals
import datetime
import logging
import re
from collections import OrderedDict
import lxml.html
import unicodecsv
from dshelpers import download_url
_ENCODING = 'cp1252'
UNKNOWN_SECTOR_KEY = 'unknown-sector'
_SECTOR_KEY = {
'Central Government': 'central-gov',
'Local Government': 'local-gov',
'Not for Profit': 'not-for-profit',
'Health': 'wider-public-sector',
'Fire and Rescue': 'wider-public-sector',
'Devolved Administrations': 'wider-public-sector',
'Education': 'wider-public-sector',
'Police': 'wider-public-sector',
'Defence': 'wider-public-sector',
'Wider Public Sector': 'wider-public-sector',
'Utility (Historic)': 'wider-public-sector',
'Private Sector': UNKNOWN_SECTOR_KEY,
}
_SME_LARGE_KEY = {
'SME': 'sme',
'Large': 'large',
}
_LOT_KEY = {
'Infrastructure as a Service (IaaS)': 'iaas',
'Platform as a Service (PaaS)': 'paas',
'Software as a Service (SaaS)': 'saas',
'Specialist Cloud Services': 'css'
}
def get_latest_csv_url(index_page_url):
"""
Download the index page and extract the URL of the current CSV.
"""
index_fobj = download_url(index_page_url)
csv_url = parse_latest_csv_url(index_fobj)
logging.info("Latest CSV URL: {}".format(csv_url))
return csv_url
def parse_latest_csv_url(index_fobj):
root = lxml.html.fromstring(index_fobj.read())
elements = root.xpath("//*[contains(text(), 'Current')]"
"/a[contains(@href, '.csv')]/@href")
assert len(elements) == 1, elements
return elements[0]
def process_csv(f):
"""
Take a file-like object and yield OrderedDicts to be inserted into raw
spending database.
"""
reader = unicodecsv.DictReader(f, encoding=_ENCODING)
for row in reader:
month, year = parse_month_year(row['Return Month'])
yield OrderedDict([
('customer_name', row['CustomerName']),
('supplier_name', row['SupplierName']),
('month', month),
('year', year),
('date', datetime.date(year, month, 1)),
('total_ex_vat', parse_price(row['EvidencedSpend'])),
('lot', parse_lot_name(row['LotDescription'])),
('customer_sector', parse_customer_sector(row['Sector'])),
('supplier_type', parse_sme_or_large(row['SME or Large'])),
])
def parse_price(price_string):
"""
>>> parse_price('£16,000.00')
16000.0
>>> parse_price('-£16,000.00')
-16000.0
"""
match = re.match('(?P<amount>-?£[\d,]+(\.\d{2})?)', price_string)
if not match:
raise ValueError("Charge not in format '(-)£16,000(.00)' : {}".format(
repr(price_string)))
return float(re.sub('[£,]', '', match.group('amount')))
def parse_customer_sector(raw_sector):
if raw_sector not in _SECTOR_KEY:
logging.warning('Unknown sector: "{}"'.format(raw_sector))
return _SECTOR_KEY.get(raw_sector, UNKNOWN_SECTOR_KEY)
def parse_sme_or_large(raw_sme_large):
try:
return _SME_LARGE_KEY[raw_sme_large]
except KeyError:
raise RuntimeError('Unknown sme/large: "{}"'.format(raw_sme_large))
def parse_lot_name(raw_lot_name):
return _LOT_KEY[raw_lot_name]
|
alphagov/performanceplatform-collector | performanceplatform/collector/gcloud/sales_parser.py | parse_price | python | def parse_price(price_string):
match = re.match('(?P<amount>-?£[\d,]+(\.\d{2})?)', price_string)
if not match:
raise ValueError("Charge not in format '(-)£16,000(.00)' : {}".format(
repr(price_string)))
return float(re.sub('[£,]', '', match.group('amount'))) | >>> parse_price('£16,000.00')
16000.0
>>> parse_price('-£16,000.00')
-16000.0 | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/gcloud/sales_parser.py#L107-L120 | null | #!/usr/bin/env python
# encoding: utf-8
from __future__ import unicode_literals
import datetime
import logging
import re
from collections import OrderedDict
import lxml.html
import unicodecsv
from dshelpers import download_url
_ENCODING = 'cp1252'
UNKNOWN_SECTOR_KEY = 'unknown-sector'
_SECTOR_KEY = {
'Central Government': 'central-gov',
'Local Government': 'local-gov',
'Not for Profit': 'not-for-profit',
'Health': 'wider-public-sector',
'Fire and Rescue': 'wider-public-sector',
'Devolved Administrations': 'wider-public-sector',
'Education': 'wider-public-sector',
'Police': 'wider-public-sector',
'Defence': 'wider-public-sector',
'Wider Public Sector': 'wider-public-sector',
'Utility (Historic)': 'wider-public-sector',
'Private Sector': UNKNOWN_SECTOR_KEY,
}
_SME_LARGE_KEY = {
'SME': 'sme',
'Large': 'large',
}
_LOT_KEY = {
'Infrastructure as a Service (IaaS)': 'iaas',
'Platform as a Service (PaaS)': 'paas',
'Software as a Service (SaaS)': 'saas',
'Specialist Cloud Services': 'css'
}
def get_latest_csv_url(index_page_url):
"""
Download the index page and extract the URL of the current CSV.
"""
index_fobj = download_url(index_page_url)
csv_url = parse_latest_csv_url(index_fobj)
logging.info("Latest CSV URL: {}".format(csv_url))
return csv_url
def parse_latest_csv_url(index_fobj):
root = lxml.html.fromstring(index_fobj.read())
elements = root.xpath("//*[contains(text(), 'Current')]"
"/a[contains(@href, '.csv')]/@href")
assert len(elements) == 1, elements
return elements[0]
def process_csv(f):
"""
Take a file-like object and yield OrderedDicts to be inserted into raw
spending database.
"""
reader = unicodecsv.DictReader(f, encoding=_ENCODING)
for row in reader:
month, year = parse_month_year(row['Return Month'])
yield OrderedDict([
('customer_name', row['CustomerName']),
('supplier_name', row['SupplierName']),
('month', month),
('year', year),
('date', datetime.date(year, month, 1)),
('total_ex_vat', parse_price(row['EvidencedSpend'])),
('lot', parse_lot_name(row['LotDescription'])),
('customer_sector', parse_customer_sector(row['Sector'])),
('supplier_type', parse_sme_or_large(row['SME or Large'])),
])
def parse_month_year(date_string):
"""
>>> parse_month_year('01/10/2012')
(10, 2012)
"""
match = re.match(r'\d{2}/(?P<month>\d{2})/(?P<year>\d{4})$',
                 date_string.lower())
if not match:
raise ValueError("Not format 'dd/mm/yyyy': '{}'".format(date_string))
month = int(match.group('month'))
year = int(match.group('year'))
return month, year
def parse_customer_sector(raw_sector):
if raw_sector not in _SECTOR_KEY:
logging.warning('Unknown sector: "{}"'.format(raw_sector))
return _SECTOR_KEY.get(raw_sector, UNKNOWN_SECTOR_KEY)
def parse_sme_or_large(raw_sme_large):
try:
return _SME_LARGE_KEY[raw_sme_large]
except KeyError:
raise RuntimeError('Unknown sme/large: "{}"'.format(raw_sme_large))
def parse_lot_name(raw_lot_name):
return _LOT_KEY[raw_lot_name]
|
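The `parse_month_year` helper in the record above can be exercised standalone; a minimal sketch (regex switched to a raw string, otherwise as in the record):

```python
import re

def parse_month_year(date_string):
    # Extract (month, year) from a 'dd/mm/yyyy' string, as in the record above.
    match = re.match(r'\d{2}/(?P<month>\d{2})/(?P<year>\d{4})$', date_string)
    if not match:
        raise ValueError("Not format 'dd/mm/yyyy': '{}'".format(date_string))
    return int(match.group('month')), int(match.group('year'))

assert parse_month_year('01/10/2012') == (10, 2012)
```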
alphagov/performanceplatform-collector | performanceplatform/collector/ga/core.py | try_number | python | def try_number(value):
for cast_function in [int, float]:
try:
return cast_function(value)
except ValueError:
pass
raise ValueError("Unable to use value as int or float: {0!r}"
.format(value)) | Attempt to cast the string `value` to an int, and failing that, a float,
failing that, raise a ValueError. | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/ga/core.py#L37-L50 | null | import json
import logging
from performanceplatform.utils.data_parser import DataParser
from performanceplatform.utils.datetimeutil \
import period_range
def query_ga(client, config, start_date, end_date):
logging.info("Querying GA for data in the period: %s - %s"
% (str(start_date), str(end_date)))
# If maxResults is 0, don't include it in the query.
# Same for a sort == [].
maxResults = config.get("maxResults", None)
if maxResults == 0:
maxResults = None
sort = config.get("sort", None)
if sort == []:
sort = None
return client.query.get(
config["id"].replace("ga:", ""),
start_date,
end_date,
config["metrics"],
config.get("dimensions"),
config.get("filters"),
maxResults,
sort,
config.get("segment")
)
def convert_durations(metric):
"""
Convert session duration metrics from seconds to milliseconds.
"""
if metric[0] == 'avgSessionDuration' and metric[1]:
new_metric = (metric[0], metric[1] * 1000)
else:
new_metric = metric
return new_metric
def build_document(item):
metrics = [(key, try_number(value))
for key, value in item["metrics"].items()]
metrics = [convert_durations(metric) for metric in metrics]
return dict(metrics)
def pretty_print(obj):
return json.dumps(obj, indent=2)
def build_document_set(results):
return (build_document(item)
for item in results)
def query_for_range(client, query, range_start, range_end):
frequency = query.get('frequency', 'weekly')
for start, end in period_range(range_start, range_end, frequency):
for record in query_ga(client, query, start, end):
yield record
def query_documents_for(client, query, options,
data_type, start_date, end_date):
results = query_for_range(client, query, start_date, end_date)
results = list(results)
frequency = query.get('frequency', 'weekly')
special_fields = add_timeSpan(frequency, build_document_set(results))
return DataParser(results, options, data_type).get_data(
special_fields
)
def add_timeSpan(frequency, special_fields):
frequency_to_timespan_mapping = {
'daily': 'day',
'weekly': 'week',
'monthly': 'month',
}
timespan = frequency_to_timespan_mapping[frequency]
return [dict(item.items() + [('timeSpan', timespan)])
for item in special_fields]
|
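The casting behaviour described in the `try_number` docstring (int first, then float, else raise) can be sketched and checked directly:

```python
def try_number(value):
    # Try int first, then float; anything else raises ValueError.
    for cast_function in (int, float):
        try:
            return cast_function(value)
        except ValueError:
            pass
    raise ValueError("Unable to use value as int or float: {0!r}".format(value))

assert try_number("3") == 3 and isinstance(try_number("3"), int)
assert try_number("3.5") == 3.5
```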
alphagov/performanceplatform-collector | performanceplatform/collector/ga/core.py | convert_durations | python | def convert_durations(metric):
if metric[0] == 'avgSessionDuration' and metric[1]:
new_metric = (metric[0], metric[1] * 1000)
else:
new_metric = metric
return new_metric | Convert session duration metrics from seconds to milliseconds. | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/ga/core.py#L53-L61 | null | import json
import logging
from performanceplatform.utils.data_parser import DataParser
from performanceplatform.utils.datetimeutil \
import period_range
def query_ga(client, config, start_date, end_date):
logging.info("Querying GA for data in the period: %s - %s"
% (str(start_date), str(end_date)))
# If maxResults is 0, don't include it in the query.
# Same for a sort == [].
maxResults = config.get("maxResults", None)
if maxResults == 0:
maxResults = None
sort = config.get("sort", None)
if sort == []:
sort = None
return client.query.get(
config["id"].replace("ga:", ""),
start_date,
end_date,
config["metrics"],
config.get("dimensions"),
config.get("filters"),
maxResults,
sort,
config.get("segment")
)
def try_number(value):
"""
Attempt to cast the string `value` to an int, and failing that, a float,
failing that, raise a ValueError.
"""
for cast_function in [int, float]:
try:
return cast_function(value)
except ValueError:
pass
raise ValueError("Unable to use value as int or float: {0!r}"
.format(value))
def build_document(item):
metrics = [(key, try_number(value))
for key, value in item["metrics"].items()]
metrics = [convert_durations(metric) for metric in metrics]
return dict(metrics)
def pretty_print(obj):
return json.dumps(obj, indent=2)
def build_document_set(results):
return (build_document(item)
for item in results)
def query_for_range(client, query, range_start, range_end):
frequency = query.get('frequency', 'weekly')
for start, end in period_range(range_start, range_end, frequency):
for record in query_ga(client, query, start, end):
yield record
def query_documents_for(client, query, options,
data_type, start_date, end_date):
results = query_for_range(client, query, start_date, end_date)
results = list(results)
frequency = query.get('frequency', 'weekly')
special_fields = add_timeSpan(frequency, build_document_set(results))
return DataParser(results, options, data_type).get_data(
special_fields
)
def add_timeSpan(frequency, special_fields):
frequency_to_timespan_mapping = {
'daily': 'day',
'weekly': 'week',
'monthly': 'month',
}
timespan = frequency_to_timespan_mapping[frequency]
return [dict(item.items() + [('timeSpan', timespan)])
for item in special_fields]
|
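The seconds-to-milliseconds conversion in `convert_durations` above is small enough to verify in isolation:

```python
def convert_durations(metric):
    # avgSessionDuration arrives in seconds; downstream storage expects
    # milliseconds. Other metrics (and falsy durations) pass through unchanged.
    if metric[0] == 'avgSessionDuration' and metric[1]:
        return (metric[0], metric[1] * 1000)
    return metric

assert convert_durations(('avgSessionDuration', 2.5)) == ('avgSessionDuration', 2500.0)
assert convert_durations(('sessions', 7)) == ('sessions', 7)
```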
alphagov/performanceplatform-collector | performanceplatform/collector/piwik/core.py | Parser.to_datetime | python | def to_datetime(date_key):
'''
Extract the first date from 'key' matching YYYY-MM-DD
or YYYY-MM, and convert to datetime.
'''
match = re.search(r'\d{4}-\d{2}(-\d{2})?', date_key)
formatter = '%Y-%m'
if len(match.group()) == 10:
formatter += '-%d'
return datetime.strptime(
match.group(), formatter).replace(tzinfo=pytz.UTC) | Extract the first date from 'key' matching YYYY-MM-DD
or YYYY-MM, and convert to datetime. | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/piwik/core.py#L77-L87 | null | class Parser():
def __init__(self, query, options, data_type):
self.options = options
self.data_type = data_type
self.period = FREQUENCY_TO_PERIOD_MAPPING[
query.get('frequency', 'weekly')]
def parse(self, data):
base_items = []
special_fields = []
for date_key, data_points in data.items():
if type(data_points) == dict:
data_points = [data_points]
data = self._parse_item(date_key, data_points)
base_items += data[0]
special_fields += data[1]
return DataParser(
base_items, self.options, self.data_type
).get_data(special_fields)
def _parse_item(self, date_key, data_points):
base_items = []
special_fields = []
for data_point in data_points:
start_date = Parser.to_datetime(date_key)
base_items.append({'start_date': start_date})
mapped_fields = {
v: data_point[k] for k, v in self.options['mappings'].items()}
mapped_fields['timeSpan'] = self.period
special_fields.append(mapped_fields)
return base_items, special_fields
@staticmethod
|
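The date-extraction logic of `Parser.to_datetime` above can be sketched without the class; here `pytz.UTC` is replaced by the stdlib `datetime.timezone.utc` so the snippet is self-contained (an assumption, not the record's exact dependency):

```python
import re
from datetime import datetime, timezone

def to_datetime(date_key):
    # Find the first YYYY-MM-DD (or YYYY-MM) substring and parse it as UTC.
    match = re.search(r'\d{4}-\d{2}(-\d{2})?', date_key)
    formatter = '%Y-%m' + ('-%d' if len(match.group()) == 10 else '')
    return datetime.strptime(match.group(), formatter).replace(tzinfo=timezone.utc)

assert to_datetime('2014-03-10,2014-03-16').day == 10
assert to_datetime('2014-03').month == 3
```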
alphagov/performanceplatform-collector | performanceplatform/collector/crontab.py | parse_job_line | python | def parse_job_line(line):
parsed = None
if not ignore_line_re.match(line):
parsed = tuple(line.strip().split(','))
return parsed | >>> parse_job_line( \
"* * * *,myquery,mycredentials,mytoken,performanceplatform\\n")
('* * * *', 'myquery', 'mycredentials', 'mytoken', 'performanceplatform')
>>> parse_job_line(" ") is None
True
>>> parse_job_line("# comment") is None
True | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/crontab.py#L39-L54 | null | import argparse
import os
import re
import sys
import socket
# Number of hosts we are running the crons on
NUMBER_OF_HOSTS = 2
ignore_line_re = re.compile(r"^#.*|\s*$")
class ParseError(StandardError):
pass
def crontab_begin_comment(unique_id):
return '# Begin performanceplatform.collector jobs for %s' % unique_id
def crontab_end_comment(unique_id):
return '# End performanceplatform.collector jobs for %s' % unique_id
def remove_existing_crontab_for_app(crontab, unique_id):
new_crontab = []
should_append = True
for line in crontab:
if line == crontab_begin_comment(unique_id):
should_append = False
if should_append:
new_crontab.append(line)
if line == crontab_end_comment(unique_id):
should_append = True
return new_crontab
def skip_job(counter):
""" Should we skip the job based on its number
If the machine has a number at the end of its hostname (n), and we have m
machines in the pool -
on machine 1, run job 1, 1 + m, 1 + 2m etc
on machine 2, run job 2, 2 + m, 2 + 2m etc
Else run all the jobs
"""
try:
host_number = int(socket.gethostname().split('-')[-1])
except ValueError:
return False
if (counter + host_number - (NUMBER_OF_HOSTS - 1)) % NUMBER_OF_HOSTS == 0:
return False
return True
def generate_crontab(current_crontab, path_to_jobs, path_to_app, unique_id):
"""Returns a crontab with jobs from job path
It replaces jobs previously generated by this function
It preserves jobs not generated by this function
"""
set_disable_envar = ''
if os.environ.get('DISABLE_COLLECTORS') == 'true':
set_disable_envar = 'DISABLE_COLLECTORS={} '.format(
os.environ.get('DISABLE_COLLECTORS'))
job_template = '{schedule} ' \
'{set_disable_envar}' \
'{app_path}/venv/bin/pp-collector ' \
'-l {collector_slug} ' \
'-c {app_path}/config/{credentials} ' \
'-t {app_path}/config/{token} ' \
'-b {app_path}/config/{performanceplatform} ' \
'>> {app_path}/log/out.log 2>> {app_path}/log/error.log'
crontab = [line.strip() for line in current_crontab]
crontab = remove_existing_crontab_for_app(crontab, unique_id)
additional_crontab = []
job_number = 0
with open(path_to_jobs) as jobs:
try:
for job in jobs:
parsed = parse_job_line(job)
if parsed is not None:
job_number += 1
if skip_job(job_number):
continue
schedule, collector_slug, credentials, \
token, performanceplatform = parsed
cronjob = job_template.format(
schedule=schedule,
set_disable_envar=set_disable_envar,
app_path=path_to_app,
collector_slug=collector_slug,
credentials=credentials,
token=token,
performanceplatform=performanceplatform
)
additional_crontab.append(cronjob)
except ValueError as e:
raise ParseError(str(e))
if additional_crontab:
crontab.append(crontab_begin_comment(unique_id))
crontab.extend(additional_crontab)
crontab.append(crontab_end_comment(unique_id))
return crontab
if __name__ == '__main__':
current_crontab = sys.stdin.readlines()
try:
parser = argparse.ArgumentParser()
parser.add_argument('path_to_app',
help='Path to where the application is deployed')
parser.add_argument('path_to_jobs',
help='Path to the file where job templates are')
parser.add_argument('app_unique_id',
help='Unique id of the application '
'used to update crontab')
args = parser.parse_args()
crontab = generate_crontab(current_crontab,
args.path_to_jobs,
args.path_to_app,
args.app_unique_id)
sys.stdout.write("\n".join(crontab) + "\n")
sys.exit(0)
except StandardError as e:
sys.stderr.write(str(e))
sys.stdout.write("\n".join(current_crontab))
sys.exit(1)
|
alphagov/performanceplatform-collector | performanceplatform/collector/crontab.py | skip_job | python | def skip_job(counter):
try:
host_number = int(socket.gethostname().split('-')[-1])
except ValueError:
return False
if (counter + host_number - (NUMBER_OF_HOSTS - 1)) % NUMBER_OF_HOSTS == 0:
return False
return True | Should we skip the job based on its number
If the machine has a number at the end of its hostname (n), and we have m
machines in the pool -
on machine 1, run job 1, 1 + m, 1 + 2m etc
on machine 2, run job 2, 2 + m, 2 + 2m etc
Else run all the jobs | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/crontab.py#L57-L75 | null | import argparse
import os
import re
import sys
import socket
# Number of hosts we are running the crons on
NUMBER_OF_HOSTS = 2
ignore_line_re = re.compile(r"^#.*|\s*$")
class ParseError(StandardError):
pass
def crontab_begin_comment(unique_id):
return '# Begin performanceplatform.collector jobs for %s' % unique_id
def crontab_end_comment(unique_id):
return '# End performanceplatform.collector jobs for %s' % unique_id
def remove_existing_crontab_for_app(crontab, unique_id):
new_crontab = []
should_append = True
for line in crontab:
if line == crontab_begin_comment(unique_id):
should_append = False
if should_append:
new_crontab.append(line)
if line == crontab_end_comment(unique_id):
should_append = True
return new_crontab
def parse_job_line(line):
"""
>>> parse_job_line( \
"* * * *,myquery,mycredentials,mytoken,performanceplatform\\n")
('* * * *', 'myquery', 'mycredentials', 'mytoken', 'performanceplatform')
>>> parse_job_line(" ") is None
True
>>> parse_job_line("# comment") is None
True
"""
parsed = None
if not ignore_line_re.match(line):
parsed = tuple(line.strip().split(','))
return parsed
def generate_crontab(current_crontab, path_to_jobs, path_to_app, unique_id):
"""Returns a crontab with jobs from job path
It replaces jobs previously generated by this function
It preserves jobs not generated by this function
"""
set_disable_envar = ''
if os.environ.get('DISABLE_COLLECTORS') == 'true':
set_disable_envar = 'DISABLE_COLLECTORS={} '.format(
os.environ.get('DISABLE_COLLECTORS'))
job_template = '{schedule} ' \
'{set_disable_envar}' \
'{app_path}/venv/bin/pp-collector ' \
'-l {collector_slug} ' \
'-c {app_path}/config/{credentials} ' \
'-t {app_path}/config/{token} ' \
'-b {app_path}/config/{performanceplatform} ' \
'>> {app_path}/log/out.log 2>> {app_path}/log/error.log'
crontab = [line.strip() for line in current_crontab]
crontab = remove_existing_crontab_for_app(crontab, unique_id)
additional_crontab = []
job_number = 0
with open(path_to_jobs) as jobs:
try:
for job in jobs:
parsed = parse_job_line(job)
if parsed is not None:
job_number += 1
if skip_job(job_number):
continue
schedule, collector_slug, credentials, \
token, performanceplatform = parsed
cronjob = job_template.format(
schedule=schedule,
set_disable_envar=set_disable_envar,
app_path=path_to_app,
collector_slug=collector_slug,
credentials=credentials,
token=token,
performanceplatform=performanceplatform
)
additional_crontab.append(cronjob)
except ValueError as e:
raise ParseError(str(e))
if additional_crontab:
crontab.append(crontab_begin_comment(unique_id))
crontab.extend(additional_crontab)
crontab.append(crontab_end_comment(unique_id))
return crontab
if __name__ == '__main__':
current_crontab = sys.stdin.readlines()
try:
parser = argparse.ArgumentParser()
parser.add_argument('path_to_app',
help='Path to where the application is deployed')
parser.add_argument('path_to_jobs',
help='Path to the file where job templates are')
parser.add_argument('app_unique_id',
help='Unique id of the application '
'used to update crontab')
args = parser.parse_args()
crontab = generate_crontab(current_crontab,
args.path_to_jobs,
args.path_to_app,
args.app_unique_id)
sys.stdout.write("\n".join(crontab) + "\n")
sys.exit(0)
except StandardError as e:
sys.stderr.write(str(e))
sys.stdout.write("\n".join(current_crontab))
sys.exit(1)
|
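The modulo arithmetic in `skip_job` above determines which host runs which job; in this sketch the host number is passed as a parameter rather than parsed from `socket.gethostname()` (an assumption made here so the logic is testable without a real hostname):

```python
NUMBER_OF_HOSTS = 2

def skip_job(counter, host_number):
    # Same modulo test as the record; host_number stands in for the digit
    # normally taken from the end of the machine's hostname.
    return (counter + host_number - (NUMBER_OF_HOSTS - 1)) % NUMBER_OF_HOSTS != 0

# With two hosts, jobs alternate between them:
assert [j for j in range(1, 7) if not skip_job(j, 1)] == [2, 4, 6]
assert [j for j in range(1, 7) if not skip_job(j, 2)] == [1, 3, 5]
```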
alphagov/performanceplatform-collector | performanceplatform/collector/crontab.py | generate_crontab | python | def generate_crontab(current_crontab, path_to_jobs, path_to_app, unique_id):
set_disable_envar = ''
if os.environ.get('DISABLE_COLLECTORS') == 'true':
set_disable_envar = 'DISABLE_COLLECTORS={} '.format(
os.environ.get('DISABLE_COLLECTORS'))
job_template = '{schedule} ' \
'{set_disable_envar}' \
'{app_path}/venv/bin/pp-collector ' \
'-l {collector_slug} ' \
'-c {app_path}/config/{credentials} ' \
'-t {app_path}/config/{token} ' \
'-b {app_path}/config/{performanceplatform} ' \
'>> {app_path}/log/out.log 2>> {app_path}/log/error.log'
crontab = [line.strip() for line in current_crontab]
crontab = remove_existing_crontab_for_app(crontab, unique_id)
additional_crontab = []
job_number = 0
with open(path_to_jobs) as jobs:
try:
for job in jobs:
parsed = parse_job_line(job)
if parsed is not None:
job_number += 1
if skip_job(job_number):
continue
schedule, collector_slug, credentials, \
token, performanceplatform = parsed
cronjob = job_template.format(
schedule=schedule,
set_disable_envar=set_disable_envar,
app_path=path_to_app,
collector_slug=collector_slug,
credentials=credentials,
token=token,
performanceplatform=performanceplatform
)
additional_crontab.append(cronjob)
except ValueError as e:
raise ParseError(str(e))
if additional_crontab:
crontab.append(crontab_begin_comment(unique_id))
crontab.extend(additional_crontab)
crontab.append(crontab_end_comment(unique_id))
return crontab | Returns a crontab with jobs from job path
It replaces jobs previously generated by this function
It preserves jobs not generated by this function | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/crontab.py#L78-L135 | [
"def crontab_begin_comment(unique_id):\n return '# Begin performanceplatform.collector jobs for %s' % unique_id\n",
"def crontab_end_comment(unique_id):\n return '# End performanceplatform.collector jobs for %s' % unique_id\n",
"def remove_existing_crontab_for_app(crontab, unique_id):\n new_crontab = [... | import argparse
import os
import re
import sys
import socket
# Number of hosts we are running the crons on
NUMBER_OF_HOSTS = 2
ignore_line_re = re.compile(r"^#.*|\s*$")
class ParseError(StandardError):
pass
def crontab_begin_comment(unique_id):
return '# Begin performanceplatform.collector jobs for %s' % unique_id
def crontab_end_comment(unique_id):
return '# End performanceplatform.collector jobs for %s' % unique_id
def remove_existing_crontab_for_app(crontab, unique_id):
new_crontab = []
should_append = True
for line in crontab:
if line == crontab_begin_comment(unique_id):
should_append = False
if should_append:
new_crontab.append(line)
if line == crontab_end_comment(unique_id):
should_append = True
return new_crontab
def parse_job_line(line):
"""
>>> parse_job_line( \
"* * * *,myquery,mycredentials,mytoken,performanceplatform\\n")
('* * * *', 'myquery', 'mycredentials', 'mytoken', 'performanceplatform')
>>> parse_job_line(" ") is None
True
>>> parse_job_line("# comment") is None
True
"""
parsed = None
if not ignore_line_re.match(line):
parsed = tuple(line.strip().split(','))
return parsed
def skip_job(counter):
""" Should we skip the job based on its number
If the machine has a number at the end of its hostname (n), and we have m
machines in the pool -
on machine 1, run job 1, 1 + m, 1 + 2m etc
on machine 2, run job 2, 2 + m, 2 + 2m etc
Else run all the jobs
"""
try:
host_number = int(socket.gethostname().split('-')[-1])
except ValueError:
return False
if (counter + host_number - (NUMBER_OF_HOSTS - 1)) % NUMBER_OF_HOSTS == 0:
return False
return True
if __name__ == '__main__':
current_crontab = sys.stdin.readlines()
try:
parser = argparse.ArgumentParser()
parser.add_argument('path_to_app',
help='Path to where the application is deployed')
parser.add_argument('path_to_jobs',
help='Path to the file where job templates are')
parser.add_argument('app_unique_id',
help='Unique id of the application '
'used to update crontab')
args = parser.parse_args()
crontab = generate_crontab(current_crontab,
args.path_to_jobs,
args.path_to_app,
args.app_unique_id)
sys.stdout.write("\n".join(crontab) + "\n")
sys.exit(0)
except StandardError as e:
sys.stderr.write(str(e))
sys.stdout.write("\n".join(current_crontab))
sys.exit(1)
|
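The marker-delimited section handling that `generate_crontab` relies on (`remove_existing_crontab_for_app` plus the begin/end comment helpers) can be reproduced and checked on its own:

```python
def crontab_begin_comment(unique_id):
    return '# Begin performanceplatform.collector jobs for %s' % unique_id

def crontab_end_comment(unique_id):
    return '# End performanceplatform.collector jobs for %s' % unique_id

def remove_existing_crontab_for_app(crontab, unique_id):
    # Drop every line between (and including) the begin/end markers for
    # unique_id, preserving all other crontab entries.
    new_crontab, keep = [], True
    for line in crontab:
        if line == crontab_begin_comment(unique_id):
            keep = False
        if keep:
            new_crontab.append(line)
        if line == crontab_end_comment(unique_id):
            keep = True
    return new_crontab

old = ['0 * * * * other-job',
       crontab_begin_comment('app'),
       '0 * * * * pp-collector',
       crontab_end_comment('app')]
assert remove_existing_crontab_for_app(old, 'app') == ['0 * * * * other-job']
```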
alphagov/performanceplatform-collector | performanceplatform/collector/ga/plugins/department.py | try_get_department | python | def try_get_department(department_or_code):
try:
value = take_first_department_code(department_or_code)
except AssertionError:
value = department_or_code
if value in DEPARTMENT_MAPPING:
value = DEPARTMENT_MAPPING[value]
return value | Try to take the first department code, or fall back to string as passed | train | https://github.com/alphagov/performanceplatform-collector/blob/de68ab4aa500c31e436e050fa1268fa928c522a5/performanceplatform/collector/ga/plugins/department.py#L62-L74 | [
"def take_first_department_code(department_codes):\n get_first_re = re.compile(\"^(<[^>]+>).*$\")\n match = get_first_re.match(department_codes)\n assert match is not None\n (department_code, ) = match.groups()\n return department_code\n"
] | import re
class ComputeDepartmentKey(object):
"""
Adds a 'department' key to a dictionary by looking up the department from
the specified, key_name. It takes the first department code of form
<[code]> from document[key_name].
"""
def __init__(self, key_name):
self.key_name = key_name
def __call__(self, documents):
def compute_department(document):
assert self.key_name in document, (
'key "{}" not found "{}"'.format(self.key_name, document))
department_codes = document[self.key_name]
department_code = take_first_department_code(department_codes)
document["department"] = DEPARTMENT_MAPPING.get(
department_code, department_code)
return document
return [compute_department(document) for document in documents]
class SetDepartment(object):
"""
Adds a 'department' key to a dictionary by using the department specified
in the constructor. It takes the first department code of form <[code]>,
and if that doesn't exist it uses the string verbatim.
This assumes that all of the data being processed belongs to the same
single department.
For example:
SetDepartment("<D3>")
SetDepartment("Department for fooing the bar")
"""
def __init__(self, department_or_code):
self.department_or_code = department_or_code
self.value = try_get_department(self.department_or_code)
def __call__(self, documents):
for document in documents:
document["department"] = self.value
return documents
def take_first_department_code(department_codes):
get_first_re = re.compile("^(<[^>]+>).*$")
match = get_first_re.match(department_codes)
assert match is not None
(department_code, ) = match.groups()
return department_code
DEPARTMENT_MAPPING = {
"<D1>": "attorney-generals-office",
"<D2>": "cabinet-office",
"<D3>": "department-for-business-innovation-skills",
"<D4>": "department-for-communities-and-local-government",
"<D5>": "department-for-culture-media-sport",
"<D6>": "department-for-education",
"<D7>": "department-for-environment-food-rural-affairs",
"<D8>": "department-for-international-development",
"<D9>": "department-for-transport",
"<D10>": "department-for-work-pensions",
"<D11>": "department-of-energy-climate-change",
"<D12>": "department-of-health",
"<D13>": "foreign-commonwealth-office",
"<D15>": "hm-treasury",
"<D16>": "home-office",
"<D17>": "ministry-of-defence",
"<D18>": "ministry-of-justice",
"<D19>": "northern-ireland-office",
"<D20>": "office-of-the-advocate-general-for-scotland",
"<D21>": "the-office-of-the-leader-of-the-house-of-commons",
"<D22>": "office-of-the-leader-of-the-house-of-lords",
"<D23>": "scotland-office",
"<D24>": "wales-office",
"<D25>": "hm-revenue-customs",
"<D30>": "treasury-solicitor-s-department",
"<D38>": "ordnance-survey",
"<D85>": "forestry-commission",
"<D98>": "the-charity-commission-for-england-and-wales",
"<D101>": "crown-prosecution-service",
"<D102>": "food-standards-agency",
"<D106>": "ofsted",
"<D108>": "ofgem",
"<D109>": "office-of-qualifications-and-examinations-regulation",
"<D110>": "office-of-rail-regulation",
"<D115>": "serious-fraud-office",
"<D116>": "uk-statistics-authority",
"<D117>": "uk-trade-investment",
"<D240>": "the-water-services-regulation-authority",
"<D241>": "uk-export-finance",
"<D303>": "office-for-national-statistics",
"<D346>": "public-works-loan-board",
"<D352>": "office-of-fair-trading",
"<D435>": "supreme-court-of-the-united-kingdom",
"<D1196>": "department-for-international-trade",
"<D1197>": "department-for-exiting-the-european-union",
"<D1198>": "department-for-business-energy-and-industrial-strategy",
"<EA26>": "companies-house",
"<EA31>": "uk-space-agency",
"<EA32>": "insolvency-service",
"<EA33>": "national-measurement-office",
"<EA34>": "intellectual-property-office",
"<EA37>": "fire-service-college",
"<EA39>": "planning-inspectorate",
"<EA40>": "queen-elizabeth-ii-conference-centre",
"<EA41>": "royal-parks",
"<EA42>": "defence-science-and-technology-laboratory",
"<EA44>": "defence-support-group",
"<EA46>": "met-office",
"<EA47>": "ministry-of-defence-police-and-guarding-agency",
"<EA49>": "service-children-s-education",
"<EA50>": "service-personnel-and-veterans-agency",
"<EA51>": "uk-hydrographic-office",
"<EA52>": "animal-health-and-veterinary-laboratories-agency",
"<EA53>": "centre-for-environment-fisheries-and-aquaculture-science",
"<EA54>": "forest-research",
"<EA55>": "forest-enterprise-england",
"<EA56>": "the-food-and-environment-research-agency",
"<EA58>": "rural-payments-agency",
"<EA60>": "veterinary-medicines-directorate",
"<EA61>": "fco-services",
"<EA62>": "wilton-park",
"<EA63>": "medicines-and-healthcare-products-regulatory-agency",
"<EA66>": "identity-and-passport-service",
"<EA67>": "uk-border-agency",
"<EA70>": "national-offender-management-service",
"<EA71>": "the-national-archives",
"<EA72>": "office-of-the-public-guardian",
"<EA73>": "hm-courts-and-tribunals-service",
"<EA74>": "driver-and-vehicle-licensing-agency",
"<EA75>": "driving-standards-agency",
"<EA77>": "highways-agency",
"<EA78>": "maritime-and-coastguard-agency",
"<EA79>": "vehicle-and-operator-services-agency",
"<EA80>": "vehicle-certification-agency",
"<EA82>": "uk-debt-management-office",
"<EA86>": "skills-funding-agency",
"<EA87>": "valuation-office-agency",
"<EA104>": "ns-i",
"<EA114>": "royal-mint",
"<EA199>": "environment-agency",
"<EA242>": "education-funding-agency",
"<EA243>": "standards-and-testing-agency",
"<EA245>": "national-college-for-school-leadership",
"<EA321>": "hm-prison-service",
"<EA365>": "government-procurement-service",
"<EA480>": "public-health-england",
"<EA541>": "national-college-for-teaching-and-leadership",
"<EA570>": "driver-and-vehicle-standards-agency",
"<IM320>": "criminal-cases-review-commission",
"<IM324>": "hm-inspectorate-of-prisons",
"<IM325>": "hm-inspectorate-of-probation",
"<IM327>": "prisons-and-probation-ombudsman",
"<IM329>": "the-legal-ombudsman",
"<IM332>": "legal-services-board",
"<IM333>": "judicial-appointments-and-conduct-ombudsman",
"<IM335>": "independent-commission-for-aid-impact",
"<IM341>": "house-of-lords-appointments-commission",
"<OT100>": "the-crown-estate",
"<OT152>": "industrial-development-advisory-board",
"<OT204>": "agriculture-and-horticulture-development-board",
"<OT205>": "sea-fish-industry-authority",
"<OT214>": "covent-garden-market-authority",
"<OT216>": "broads-authority",
"<OT217>": "dartmoor-national-park-authority",
"<OT219>": "exmoor-national-park-authority",
"<OT220>": "lake-district-national-park-authority",
"<OT222>": "new-forest-national-park-authority",
"<OT223>": "north-york-moors-national-park",
"<OT237>": "national-employment-savings-trust",
"<OT248>": "air-accidents-investigation-branch",
"<OT249>": "rail-accidents-investigation-branch",
"<OT261>": "office-of-tax-simplification",
"<OT269>": "equality-and-human-rights-commission",
"<OT281>": "hm-inspectorate-of-constabulary",
"<OT284>": "chief-inspector-of-the-uk-border-agency",
"<OT304>": "the-security-service-mi5",
"<OT305>": "secret-intelligence-service",
"<OT306>": "government-communications-headquarters",
"<OT313>": "british-council",
"<OT315>": "marshall-aid-commemoration-commission",
"<OT316>": "westminster-foundation-for-democracy",
"<OT328>": "official-solicitor-and-public-trustee",
"<OT342>": "privy-council-office",
"<OT347>": "hm-crown-prosecution-service-inspectorate",
"<OT360>": "uk-green-investment-bank",
"<OT385>": "ofcom",
"<OT404>": "service-complaints-commissioner",
"<OT405>": "defence-academy",
"<OT406>": "service-prosecuting-authority",
"<OT408>": "defence-press-and-broadcasting-advisory-committee",
"<OT409>": "royal-navy-submarine-museum",
"<OT411>": "defence-sixth-form-college",
"<OT425>": "drinking-water-inspectorate",
"<OT428>": "boundary-commission-for-northern-ireland",
"<OT429>": "northern-ireland-human-rights-commission",
"<OT430>": "parades-commission-for-northern-ireland",
"<OT431>": "boundary-commission-for-scotland",
"<OT432>": "boundary-commission-for-wales",
"<OT433>": "the-adjudicator-s-office",
"<OT444>": "independent-monitoring-boards-of-prisons-immigration-removal-"
"centres-and-short-term-holding-rooms",
"<OT483>": "health-research-authority",
"<OT484>": "nhs-trust-development-authority",
"<OT486>": "nhs-blood-and-transplant",
"<OT488>": "nhs-litigation-authority",
"<OT492>": "chevening-foundation",
"<OT494>": "airports-commission",
"<OT495>": "defence-equipment-and-support",
"<OT496>": "defence-infrastructure-organisation",
"<OT498>": "joint-forces-command",
"<OT502>": "office-for-life-sciences",
"<OT504>": "chief-fire-and-rescue-adviser-unit",
"<OT505>": "national-security",
"<OT506>": "government-equalities-office",
"<OT507>": "efficiency-and-reform-group",
"<OT511>": "independent-reviewer-of-terrorism-legislation",
"<OT512>": "intelligence-services-commissioner",
"<OT513>": "interception-of-communications-commissioner",
"<OT514>": "office-for-low-emission-vehicles",
"<OT515>": "office-of-the-parliamentary-counsel",
"<OT517>": "commissioner-for-public-appointments",
"<OT518>": "mckay-commission",
"<OT519>": "the-parliamentary-and-health-service-ombudsman",
"<OT520>": "behavioural-insights-team",
"<OT522>": "advisory-committee-on-clinical-excellence-awards",
"<OT529>": "nhs-business-services-authority",
"<OT532>": "prime-ministers-office-10-downing-street",
"<OT533>": "investigation-into-the-role-of-jimmy-savile-at-broadmoor-"
"hospital",
"<OT535>": "border-force",
"<OT536>": "forensic-science-regulator",
"<OT537>": "deputy-prime-ministers-office",
"<OT538>": "health-and-social-care-information-centre",
"<OT539>": "health-education-england",
"<OT540>": "infrastructure-uk",
"<OT554>": "uk-visas-and-immigration",
"<OT555>": "government-office-for-science",
"<PB27>": "student-loans-company",
"<PB28>": "acas",
"<PB29>": "national-fraud-authority",
"<PB57>": "marine-management-organisation",
"<PB118>": "horserace-betting-levy-board",
"<PB120>": "higher-education-funding-council-for-england",
"<PB121>": "council-for-science-and-technology",
"<PB122>": "low-pay-commission",
"<PB123>": "arts-and-humanities-research-council",
"<PB124>": "british-hallmarking-council",
"<PB126>": "construction-industry-training-board",
"<PB129>": "economic-and-social-research-council",
"<PB130>": "engineering-and-physical-sciences-research-council",
"<PB131>": "engineering-construction-industry-training-board",
"<PB132>": "medical-research-council",
"<PB133>": "natural-environment-research-council",
"<PB134>": "office-for-fair-access",
"<PB135>": "science-and-technology-facilities-council",
"<PB136>": "technology-strategy-board",
"<PB137>": "uk-atomic-energy-authority",
"<PB138>": "uk-commission-for-employment-and-skills",
"<PB139>": "certification-office",
"<PB140>": "competition-appeal-tribunal",
"<PB147>": "capital-for-enterprise-ltd",
"<PB148>": "central-arbitration-committee",
"<PB158>": "the-committee-on-standards-in-public-life",
"<PB160>": "building-regulations-advisory-committee",
"<PB161>": "homes-and-communities-agency",
"<PB165>": "arts-council-england",
"<PB166>": "british-library",
"<PB167>": "british-museum",
"<PB168>": "english-heritage",
"<PB169>": "gambling-commission",
"<PB170>": "geffrye-museum",
"<PB171>": "horniman-museum",
"<PB172>": "imperial-war-museum",
"<PB174>": "national-gallery",
"<PB175>": "national-heritage-memorial-fund",
"<PB176>": "national-lottery-commission",
"<PB177>": "science-museum-group",
"<PB178>": "national-museums-liverpool",
"<PB179>": "national-portrait-gallery",
"<PB180>": "natural-history-museum",
"<PB181>": "olympic-delivery-authority",
"<PB182>": "royal-armouries-museum",
"<PB183>": "sir-john-soane-s-museum",
"<PB184>": "sports-grounds-safety-authority",
"<PB185>": "uk-sport",
"<PB186>": "visitbritain",
"<PB187>": "wallace-collection",
"<PB188>": "big-lottery-fund",
"<PB189>": "british-film-institute",
"<PB190>": "sport-england",
"<PB191>": "committee-on-radioactive-waste-management",
"<PB192>": "the-fuel-poverty-advisory-group",
"<PB193>": "nuclear-liabilities-financing-assurance-board",
"<PB194>": "civil-nuclear-police-authority",
"<PB195>": "the-coal-authority",
"<PB196>": "committee-on-climate-change",
"<PB197>": "nuclear-decommissioning-authority",
"<PB198>": "consumer-council-for-water",
"<PB200>": "gangmasters-licensing-authority",
"<PB201>": "joint-nature-conservation-committee",
"<PB202>": "natural-england",
"<PB207>": "health-and-safety-executive",
"<PB208>": "advisory-committee-on-pesticides",
"<PB209>": "advisory-committee-on-releases-to-the-environment",
"<PB210>": "independent-agricultural-appeals-panel",
"<PB211>": "science-advisory-council",
"<PB212>": "veterinary-products-committee",
"<PB213>": "agricultural-land-tribunal",
"<PB226>": "commonwealth-scholarship-commission-in-the-uk",
"<PB230>": "equality-2025",
"<PB231>": "industrial-injuries-advisory-council",
"<PB232>": "social-security-advisory-committee",
"<PB233>": "the-pensions-advisory-service",
"<PB234>": "the-pensions-regulator",
"<PB235>": "pension-protection-fund-ombudsman",
"<PB236>": "pensions-ombudsman",
"<PB246>": "children-and-family-court-advisory-and-support-service",
"<PB247>": "disabled-persons-transport-advisory-committee",
"<PB251>": "care-quality-commission",
"<PB253>": "human-fertilisation-and-embryology-authority",
"<PB254>": "human-tissue-authority",
"<PB255>": "monitor",
"<PB257>": "foreign-compensation-commission",
"<PB260>": "office-for-budget-responsibility",
"<PB263>": "office-of-the-immigration-services-commissioner",
"<PB264>": "security-industry-authority",
"<PB265>": "serious-organised-crime-agency",
"<PB266>": "office-of-manpower-economics",
"<PB270>": "investigatory-powers-tribunal",
"<PB271>": "advisory-council-on-the-misuse-of-drugs",
"<PB273>": "police-advisory-board-for-england-and-wales",
"<PB274>": "technical-advisory-board",
"<PB275>": "migration-advisory-committee",
"<PB276>": "national-dna-database-ethics-group",
"<PB277>": "police-arbitration-tribunal",
"<PB278>": "police-discipline-appeals-tribunal",
"<PB290>": "royal-naval-museum",
"<PB291>": "royal-museums-greenwich",
"<PB293>": "civil-justice-council",
"<PB294>": "law-commission",
"<PB295>": "the-sentencing-council-for-england-and-wales",
"<PB296>": "victim-s-advisory-panel",
"<PB297>": "criminal-injuries-compensation-authority",
"<PB298>": "information-commissioner-s-office",
"<PB299>": "judicial-appointments-commission",
"<PB300>": "legal-services-commission",
"<PB301>": "parole-board",
"<PB302>": "youth-justice-board-for-england-and-wales",
"<PB307>": "national-army-museum",
"<PB308>": "royal-air-force-museum",
"<PB310>": "royal-marines-museum",
"<PB311>": "fleet-air-arm-museum",
"<PB317>": "biotechnology-biological-sciences-research-council",
"<PB318>": "competition-service",
"<PB326>": "victims-commissioner",
"<PB336>": "advisory-committee-on-business-appointments",
"<PB337>": "boundary-commission-for-england",
"<PB348>": "pension-protection-fund",
"<PB349>": "independent-living-fund",
"<PB350>": "remploy-ltd",
"<PB353>": "competition-commission",
"<PB354>": "regulatory-policy-committee",
"<PB355>": "copyright-tribunal",
"<PB356>": "consumer-focus",
"<PB357>": "export-guarantees-advisory-council",
"<PB358>": "land-registration-rule-committee",
"<PB366>": "civil-service-commission",
"<PB367>": "security-vetting-appeals-panel",
"<PB368>": "review-body-on-senior-salaries",
"<PB372>": "housing-ombudsman",
"<PB373>": "leasehold-advisory-service",
"<PB374>": "london-thames-gateway-development-corporation",
"<PB375>": "valuation-tribunal-service-for-england-valuation-tribunal-"
"service",
"<PB376>": "visitengland",
"<PB377>": "uk-anti-doping",
"<PB378>": "victoria-and-albert-museum",
"<PB380>": "the-theatres-trust",
"<PB381>": "olympic-lottery-distributor",
"<PB382>": "the-reviewing-committee-on-the-export-of-works-of-art-and-"
"objects-of-cultural-interest",
"<PB383>": "treasure-valuation-committee",
"<PB392>": "advisory-committee-on-conscientious-objectors",
"<PB393>": "advisory-group-on-military-medicine",
"<PB394>": "armed-forces-pay-review-body",
"<PB395>": "central-advisory-committee-on-pensions-and-compensation",
"<PB396>": "defence-nuclear-safety-committee",
"<PB397>": "defence-scientific-advisory-council",
"<PB399>": "national-employer-advisory-board",
"<PB400>": "nuclear-research-advisory-council",
"<PB401>": "oil-and-pipelines-agency",
"<PB402>": "science-advisory-committee-on-the-medical-implications-of-"
"less-lethal-weapons",
"<PB403>": "veterans-advisory-and-pensions-committees-x13",
"<PB413>": "office-of-the-children-s-commissioner",
"<PB414>": "school-teachers-review-body",
"<PB415>": "national-forest-company",
"<PB420>": "agricultural-wages-committee-x13",
"<PB421>": "agricultural-dwelling-house-advisory-committees-x16",
"<PB423>": "plant-varieties-and-seeds-tribunal",
"<PB426>": "great-britain-china-centre",
"<PB434>": "royal-mint-advisory-committee",
"<PB436>": "administrative-justice-and-tribunals-council",
"<PB437>": "advisory-committees-on-justices-of-the-peace",
"<PB438>": "the-advisory-council-on-national-records-and-archives",
"<PB440>": "civil-procedure-rules-committee",
"<PB441>": "family-justice-council",
"<PB442>": "family-procedure-rule-committee",
"<PB443>": "independent-advisory-panel-on-deaths-in-custody",
"<PB445>": "insolvency-rules-committee",
"<PB446>": "prison-services-pay-review-body",
"<PB448>": "tribunal-procedure-committee",
"<PB456>": "independent-police-complaint-commission",
"<PB457>": "office-of-surveillance-commissioners",
"<PB458>": "police-negotiating-board",
"<PB459>": "directly-operated-railways-limited",
"<PB460>": "high-speed-two-limited",
"<PB461>": "british-transport-police-authority",
"<PB462>": "trinity-house",
"<PB463>": "northern-lighthouse-board",
"<PB464>": "passenger-focus",
"<PB465>": "traffic-commissioners",
"<PB466>": "railway-heritage-committee",
"<PB474>": "national-institute-for-clinical-excellence",
"<PB477>": "review-board-for-government-contracts",
"<PB481>": "nhs-commissioning-board",
"<PB491>": "advisory-panel-on-public-sector-information",
"<PB500>": "the-shareholder-executive",
"<PB501>": "the-west-northamptonshire-development-corporation",
"<PB503>": "local-government-ombudsman",
"<PB508>": "board-of-trustees-of-the-royal-botanic-gardens-kew",
"<PB509>": "disclosure-and-barring-service",
"<PB510>": "social-mobility-and-child-poverty-commission",
"<PB516>": "criminal-procedure-rule-committee",
"<PB521>": "insolvency-practitioners-tribunal",
"<PB523>": "administration-of-radioactive-substances-advisory-committee",
"<PB524>": "british-pharmacopoeia",
"<PB525>": "commission-on-human-medicines",
"<PB526>": "committee-on-mutagenicity-of-chemicals-in-food-consumer-"
"products-and-the-environment",
"<PB527>": "independent-reconfiguration-panel",
"<PB530>": "review-body-on-doctors-and-dentists-remuneration",
"<PB531>": "nhs-pay-review-body",
"<PB534>": "animals-in-science-committee",
"<PB542>": "the-office-of-the-schools-commissioner",
"<PB1062>": "company-names-tribunal",
"<PC163>": "architects-registration-board",
"<PC259>": "uk-financial-investments-limited",
"<PC343>": "audit-commission",
"<PC386>": "channel-4",
"<PC387>": "s4c",
"<PC388>": "bbc",
"<PC389>": "historic-royal-palaces",
"<PC390>": "heritage-lottery-fund",
"<PC427>": "bbc-world-service",
"<PC467>": "brb-residuary-ltd",
"<PC468>": "trust-ports",
"<PC469>": "civil-aviation-authority",
"<PC472>": "marine-accident-investigation-branch",
"<PC493>": "london-and-continental-railways-ltd",
}
|
idlesign/django-sitecats | sitecats/models.py | ModelWithCategory.get_category_lists | python | def get_category_lists(self, init_kwargs=None, additional_parents_aliases=None):
if self._category_editor is not None: # Return editor lists instead of plain lists if it's enabled.
return self._category_editor.get_lists()
from .toolbox import get_category_lists
init_kwargs = init_kwargs or {}
catlist_kwargs = {}
if self._category_lists_init_kwargs is not None:
catlist_kwargs.update(self._category_lists_init_kwargs)
catlist_kwargs.update(init_kwargs)
lists = get_category_lists(catlist_kwargs, additional_parents_aliases, obj=self)
return lists | Returns a list of CategoryList objects, associated with
this model instance.
:param dict|None init_kwargs:
:param list|None additional_parents_aliases:
:rtype: list|CategoryRequestHandler
:return: | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/models.py#L229-L252 | [
"def get_category_lists(init_kwargs=None, additional_parents_aliases=None, obj=None):\n \"\"\"Returns a list of CategoryList objects, optionally associated with\n a given model instance.\n\n :param dict|None init_kwargs:\n :param list|None additional_parents_aliases:\n :param Model|None obj: Model in... | class ModelWithCategory(models.Model):
"""Helper class for models with tags.
Mix in this helper to your model class to be able to categorize model instances.
"""
categories = GenericRelation(MODEL_TIE)
class Meta(object):
abstract = True
_category_lists_init_kwargs = None
_category_editor = None
def set_category_lists_init_kwargs(self, kwa_dict):
"""Sets keyword arguments for category lists which can be spawned
by get_category_lists().
:param dict|None kwa_dict:
:return:
"""
self._category_lists_init_kwargs = kwa_dict
def enable_category_lists_editor(self, request, editor_init_kwargs=None, additional_parents_aliases=None,
lists_init_kwargs=None, handler_init_kwargs=None):
"""Enables editor functionality for categories of this object.
:param Request request: Django request object
:param dict editor_init_kwargs: Keyword args to initialize category lists editor with.
See CategoryList.enable_editor()
:param list additional_parents_aliases: Aliases of categories for editor to render
even if this object has no tie to them.
:param dict lists_init_kwargs: Keyword args to initialize CategoryList objects with
:param dict handler_init_kwargs: Keyword args to initialize CategoryRequestHandler object with
:return:
"""
from .toolbox import CategoryRequestHandler
additional_parents_aliases = additional_parents_aliases or []
lists_init_kwargs = lists_init_kwargs or {}
editor_init_kwargs = editor_init_kwargs or {}
handler_init_kwargs = handler_init_kwargs or {}
handler = CategoryRequestHandler(request, self, **handler_init_kwargs)
lists = self.get_category_lists(
init_kwargs=lists_init_kwargs, additional_parents_aliases=additional_parents_aliases)
handler.register_lists(lists, lists_init_kwargs=lists_init_kwargs, editor_init_kwargs=editor_init_kwargs)
self._category_editor = handler # Set link to handler to mutate get_category_lists() behaviour.
return handler.listen()
def add_to_category(self, category, user):
"""Add this model instance to a category.
:param Category category: Category to add this object to
:param User user: User who adds
:return:
"""
init_kwargs = {
'category': category,
'creator': user,
'linked_object': self
}
tie = self.categories.model(**init_kwargs) # That's a model of Tie.
tie.save()
return tie
def remove_from_category(self, category):
"""Removes this object from a given category.
:param Category category:
:return:
"""
ctype = ContentType.objects.get_for_model(self)
self.categories.model.objects.filter(category=category, content_type=ctype, object_id=self.id).delete()
@classmethod
def get_ties_for_categories_qs(cls, categories, user=None, status=None):
"""Returns a QuerySet of Ties for the given categories.
:param list|Category categories:
:param User|None user:
:param int|None status:
:return:
"""
if not isinstance(categories, list):
categories = [categories]
category_ids = []
for category in categories:
if isinstance(category, models.Model):
category_ids.append(category.id)
else:
category_ids.append(category)
filter_kwargs = {
'content_type': ContentType.objects.get_for_model(cls, for_concrete_model=False),
'category_id__in': category_ids
}
if user is not None:
filter_kwargs['creator'] = user
if status is not None:
filter_kwargs['status'] = status
ties = get_tie_model().objects.filter(**filter_kwargs)
return ties
@classmethod
def get_from_category_qs(cls, category):
"""Returns a QuerySet of objects of this type associated with the given category.
:param Category category:
:rtype: list
:return:
"""
ids = cls.get_ties_for_categories_qs(category).values_list('object_id').distinct()
filter_kwargs = {'id__in': [i[0] for i in ids]}
return cls.objects.filter(**filter_kwargs)
|
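The `get_category_lists()` record above merges instance-level defaults (`_category_lists_init_kwargs`) with the per-call `init_kwargs`, and the per-call values win. A minimal, ORM-free sketch of just that precedence rule (the helper name is illustrative, not part of sitecats):

```python
def merge_catlist_kwargs(instance_defaults, call_kwargs):
    """Mirror the kwargs precedence in get_category_lists():
    instance defaults first, then per-call kwargs override them."""
    merged = {}
    if instance_defaults is not None:
        merged.update(instance_defaults)
    merged.update(call_kwargs or {})
    return merged

# Per-call 'show_title' overrides the instance-level default.
merged = merge_catlist_kwargs(
    {'show_title': False, 'cls': 'list-group'},  # as if set via set_category_lists_init_kwargs()
    {'show_title': True},                        # as if passed to get_category_lists()
)
# merged == {'show_title': True, 'cls': 'list-group'}
```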
idlesign/django-sitecats | sitecats/models.py | ModelWithCategory.enable_category_lists_editor | python | def enable_category_lists_editor(self, request, editor_init_kwargs=None, additional_parents_aliases=None,
lists_init_kwargs=None, handler_init_kwargs=None):
from .toolbox import CategoryRequestHandler
additional_parents_aliases = additional_parents_aliases or []
lists_init_kwargs = lists_init_kwargs or {}
editor_init_kwargs = editor_init_kwargs or {}
handler_init_kwargs = handler_init_kwargs or {}
handler = CategoryRequestHandler(request, self, **handler_init_kwargs)
lists = self.get_category_lists(
init_kwargs=lists_init_kwargs, additional_parents_aliases=additional_parents_aliases)
handler.register_lists(lists, lists_init_kwargs=lists_init_kwargs, editor_init_kwargs=editor_init_kwargs)
self._category_editor = handler # Set link to handler to mutate get_category_lists() behaviour.
return handler.listen() | Enables editor functionality for categories of this object.
:param Request request: Django request object
:param dict editor_init_kwargs: Keyword args to initialize category lists editor with.
See CategoryList.enable_editor()
:param list additional_parents_aliases: Aliases of categories for editor to render
even if this object has no tie to them.
:param dict lists_init_kwargs: Keyword args to initialize CategoryList objects with
:param dict handler_init_kwargs: Keyword args to initialize CategoryRequestHandler object with
:return: | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/models.py#L254-L277 | [
"def get_category_lists(self, init_kwargs=None, additional_parents_aliases=None):\n \"\"\"Returns a list of CategoryList objects, associated with\n this model instance.\n\n :param dict|None init_kwargs:\n :param list|None additional_parents_aliases:\n :rtype: list|CategoryRequestHandler\n :return:... | class ModelWithCategory(models.Model):
"""Helper class for models with tags.
Mix in this helper to your model class to be able to categorize model instances.
"""
categories = GenericRelation(MODEL_TIE)
class Meta(object):
abstract = True
_category_lists_init_kwargs = None
_category_editor = None
def set_category_lists_init_kwargs(self, kwa_dict):
"""Sets keyword arguments for category lists which can be spawned
by get_category_lists().
:param dict|None kwa_dict:
:return:
"""
self._category_lists_init_kwargs = kwa_dict
def get_category_lists(self, init_kwargs=None, additional_parents_aliases=None):
"""Returns a list of CategoryList objects, associated with
this model instance.
:param dict|None init_kwargs:
:param list|None additional_parents_aliases:
:rtype: list|CategoryRequestHandler
:return:
"""
if self._category_editor is not None: # Return editor lists instead of plain lists if it's enabled.
return self._category_editor.get_lists()
from .toolbox import get_category_lists
init_kwargs = init_kwargs or {}
catlist_kwargs = {}
if self._category_lists_init_kwargs is not None:
catlist_kwargs.update(self._category_lists_init_kwargs)
catlist_kwargs.update(init_kwargs)
lists = get_category_lists(catlist_kwargs, additional_parents_aliases, obj=self)
return lists
def add_to_category(self, category, user):
"""Add this model instance to a category.
:param Category category: Category to add this object to
:param User user: User who adds
:return:
"""
init_kwargs = {
'category': category,
'creator': user,
'linked_object': self
}
tie = self.categories.model(**init_kwargs) # That's a model of Tie.
tie.save()
return tie
def remove_from_category(self, category):
"""Removes this object from a given category.
:param Category category:
:return:
"""
ctype = ContentType.objects.get_for_model(self)
self.categories.model.objects.filter(category=category, content_type=ctype, object_id=self.id).delete()
@classmethod
def get_ties_for_categories_qs(cls, categories, user=None, status=None):
"""Returns a QuerySet of Ties for the given categories.
:param list|Category categories:
:param User|None user:
:param int|None status:
:return:
"""
if not isinstance(categories, list):
categories = [categories]
category_ids = []
for category in categories:
if isinstance(category, models.Model):
category_ids.append(category.id)
else:
category_ids.append(category)
filter_kwargs = {
'content_type': ContentType.objects.get_for_model(cls, for_concrete_model=False),
'category_id__in': category_ids
}
if user is not None:
filter_kwargs['creator'] = user
if status is not None:
filter_kwargs['status'] = status
ties = get_tie_model().objects.filter(**filter_kwargs)
return ties
@classmethod
def get_from_category_qs(cls, category):
"""Returns a QuerySet of objects of this type associated with the given category.
:param Category category:
:rtype: list
:return:
"""
ids = cls.get_ties_for_categories_qs(category).values_list('object_id').distinct()
filter_kwargs = {'id__in': [i[0] for i in ids]}
return cls.objects.filter(**filter_kwargs)
|
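`enable_category_lists_editor()` stores the request handler in `_category_editor`, which flips a later `get_category_lists()` call into returning the editor's lists instead of plain ones. The stub classes below only simulate that switch; they are not the real sitecats objects:

```python
class StubEditor:
    """Stand-in for CategoryRequestHandler: supplies editor lists."""
    def get_lists(self):
        return ['editor-list']

class StubModel:
    """Stand-in for ModelWithCategory's editor-switching behaviour."""
    _category_editor = None

    def get_category_lists(self):
        # Mirrors: return editor lists instead of plain lists if enabled.
        if self._category_editor is not None:
            return self._category_editor.get_lists()
        return ['plain-list']

    def enable_editor(self):
        # Mirrors: self._category_editor = handler
        self._category_editor = StubEditor()

obj = StubModel()
before = obj.get_category_lists()  # plain lists while no editor is set
obj.enable_editor()
after = obj.get_category_lists()   # editor lists afterwards
```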
idlesign/django-sitecats | sitecats/models.py | ModelWithCategory.add_to_category | python | def add_to_category(self, category, user):
init_kwargs = {
'category': category,
'creator': user,
'linked_object': self
}
tie = self.categories.model(**init_kwargs) # That's a model of Tie.
tie.save()
return tie | Add this model instance to a category.
:param Category category: Category to add this object to
:param User user: User who adds
:return: | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/models.py#L279-L293 | null | class ModelWithCategory(models.Model):
"""Helper class for models with tags.
Mix in this helper to your model class to be able to categorize model instances.
"""
categories = GenericRelation(MODEL_TIE)
class Meta(object):
abstract = True
_category_lists_init_kwargs = None
_category_editor = None
def set_category_lists_init_kwargs(self, kwa_dict):
"""Sets keyword arguments for category lists which can be spawned
by get_category_lists().
:param dict|None kwa_dict:
:return:
"""
self._category_lists_init_kwargs = kwa_dict
def get_category_lists(self, init_kwargs=None, additional_parents_aliases=None):
"""Returns a list of CategoryList objects, associated with
this model instance.
:param dict|None init_kwargs:
:param list|None additional_parents_aliases:
:rtype: list|CategoryRequestHandler
:return:
"""
if self._category_editor is not None: # Return editor lists instead of plain lists if it's enabled.
return self._category_editor.get_lists()
from .toolbox import get_category_lists
init_kwargs = init_kwargs or {}
catlist_kwargs = {}
if self._category_lists_init_kwargs is not None:
catlist_kwargs.update(self._category_lists_init_kwargs)
catlist_kwargs.update(init_kwargs)
lists = get_category_lists(catlist_kwargs, additional_parents_aliases, obj=self)
return lists
def enable_category_lists_editor(self, request, editor_init_kwargs=None, additional_parents_aliases=None,
lists_init_kwargs=None, handler_init_kwargs=None):
"""Enables editor functionality for categories of this object.
:param Request request: Django request object
:param dict editor_init_kwargs: Keyword args to initialize category lists editor with.
See CategoryList.enable_editor()
:param list additional_parents_aliases: Aliases of categories for editor to render
even if this object has no tie to them.
:param dict lists_init_kwargs: Keyword args to initialize CategoryList objects with
:param dict handler_init_kwargs: Keyword args to initialize CategoryRequestHandler object with
:return:
"""
from .toolbox import CategoryRequestHandler
additional_parents_aliases = additional_parents_aliases or []
lists_init_kwargs = lists_init_kwargs or {}
editor_init_kwargs = editor_init_kwargs or {}
handler_init_kwargs = handler_init_kwargs or {}
handler = CategoryRequestHandler(request, self, **handler_init_kwargs)
lists = self.get_category_lists(
init_kwargs=lists_init_kwargs, additional_parents_aliases=additional_parents_aliases)
handler.register_lists(lists, lists_init_kwargs=lists_init_kwargs, editor_init_kwargs=editor_init_kwargs)
self._category_editor = handler # Set link to handler to mutate get_category_lists() behaviour.
return handler.listen()
def remove_from_category(self, category):
"""Removes this object from a given category.
:param Category category:
:return:
"""
ctype = ContentType.objects.get_for_model(self)
self.categories.model.objects.filter(category=category, content_type=ctype, object_id=self.id).delete()
@classmethod
def get_ties_for_categories_qs(cls, categories, user=None, status=None):
"""Returns a QuerySet of Ties for the given categories.
:param list|Category categories:
:param User|None user:
:param int|None status:
:return:
"""
if not isinstance(categories, list):
categories = [categories]
category_ids = []
for category in categories:
if isinstance(category, models.Model):
category_ids.append(category.id)
else:
category_ids.append(category)
filter_kwargs = {
'content_type': ContentType.objects.get_for_model(cls, for_concrete_model=False),
'category_id__in': category_ids
}
if user is not None:
filter_kwargs['creator'] = user
if status is not None:
filter_kwargs['status'] = status
ties = get_tie_model().objects.filter(**filter_kwargs)
return ties
@classmethod
def get_from_category_qs(cls, category):
"""Returns a QuerySet of objects of this type associated with the given category.
:param Category category:
:rtype: list
:return:
"""
ids = cls.get_ties_for_categories_qs(category).values_list('object_id').distinct()
filter_kwargs = {'id__in': [i[0] for i in ids]}
return cls.objects.filter(**filter_kwargs)
|
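`add_to_category()` builds the Tie through three keyword arguments — `category`, `creator` and `linked_object`. The stand-in class below exercises just that field mapping (`StubTie` is an assumption, not the real Tie model reached via `self.categories.model`):

```python
class StubTie:
    """Stand-in for the Tie model reached via self.categories.model."""
    def __init__(self, category, creator, linked_object):
        self.category = category
        self.creator = creator
        self.linked_object = linked_object

def build_tie(tie_model, category, user, obj):
    # Same init_kwargs shape as in add_to_category().
    init_kwargs = {
        'category': category,
        'creator': user,
        'linked_object': obj,
    }
    return tie_model(**init_kwargs)

tie = build_tie(StubTie, 'books', 'alice', 'my-article')
```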
idlesign/django-sitecats | sitecats/models.py | ModelWithCategory.remove_from_category | python | def remove_from_category(self, category):
ctype = ContentType.objects.get_for_model(self)
self.categories.model.objects.filter(category=category, content_type=ctype, object_id=self.id).delete() | Removes this object from a given category.
:param Category category:
:return: | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/models.py#L295-L302 | null | class ModelWithCategory(models.Model):
"""Helper class for models with tags.
Mix in this helper to your model class to be able to categorize model instances.
"""
categories = GenericRelation(MODEL_TIE)
class Meta(object):
abstract = True
_category_lists_init_kwargs = None
_category_editor = None
def set_category_lists_init_kwargs(self, kwa_dict):
"""Sets keyword arguments for category lists which can be spawned
by get_category_lists().
:param dict|None kwa_dict:
:return:
"""
self._category_lists_init_kwargs = kwa_dict
def get_category_lists(self, init_kwargs=None, additional_parents_aliases=None):
"""Returns a list of CategoryList objects, associated with
this model instance.
:param dict|None init_kwargs:
:param list|None additional_parents_aliases:
:rtype: list|CategoryRequestHandler
:return:
"""
if self._category_editor is not None: # Return editor lists instead of plain lists if it's enabled.
return self._category_editor.get_lists()
from .toolbox import get_category_lists
init_kwargs = init_kwargs or {}
catlist_kwargs = {}
if self._category_lists_init_kwargs is not None:
catlist_kwargs.update(self._category_lists_init_kwargs)
catlist_kwargs.update(init_kwargs)
lists = get_category_lists(catlist_kwargs, additional_parents_aliases, obj=self)
return lists
def enable_category_lists_editor(self, request, editor_init_kwargs=None, additional_parents_aliases=None,
lists_init_kwargs=None, handler_init_kwargs=None):
"""Enables editor functionality for categories of this object.
:param Request request: Django request object
:param dict editor_init_kwargs: Keyword args to initialize category lists editor with.
See CategoryList.enable_editor()
:param list additional_parents_aliases: Aliases of categories for editor to render
even if this object has no tie to them.
:param dict lists_init_kwargs: Keyword args to initialize CategoryList objects with
:param dict handler_init_kwargs: Keyword args to initialize CategoryRequestHandler object with
:return:
"""
from .toolbox import CategoryRequestHandler
additional_parents_aliases = additional_parents_aliases or []
lists_init_kwargs = lists_init_kwargs or {}
editor_init_kwargs = editor_init_kwargs or {}
handler_init_kwargs = handler_init_kwargs or {}
handler = CategoryRequestHandler(request, self, **handler_init_kwargs)
lists = self.get_category_lists(
init_kwargs=lists_init_kwargs, additional_parents_aliases=additional_parents_aliases)
handler.register_lists(lists, lists_init_kwargs=lists_init_kwargs, editor_init_kwargs=editor_init_kwargs)
self._category_editor = handler # Set link to handler to mutate get_category_lists() behaviour.
return handler.listen()
def add_to_category(self, category, user):
"""Add this model instance to a category.
:param Category category: Category to add this object to
:param User user: User who adds
:return:
"""
init_kwargs = {
'category': category,
'creator': user,
'linked_object': self
}
tie = self.categories.model(**init_kwargs) # That's a model of Tie.
tie.save()
return tie
@classmethod
def get_ties_for_categories_qs(cls, categories, user=None, status=None):
"""Returns a QuerySet of Ties for the given categories.
:param list|Category categories:
:param User|None user:
:param int|None status:
:return:
"""
if not isinstance(categories, list):
categories = [categories]
category_ids = []
for category in categories:
if isinstance(category, models.Model):
category_ids.append(category.id)
else:
category_ids.append(category)
filter_kwargs = {
'content_type': ContentType.objects.get_for_model(cls, for_concrete_model=False),
'category_id__in': category_ids
}
if user is not None:
filter_kwargs['creator'] = user
if status is not None:
filter_kwargs['status'] = status
ties = get_tie_model().objects.filter(**filter_kwargs)
return ties
@classmethod
def get_from_category_qs(cls, category):
"""Returns a QuerySet of objects of this type associated with the given category.
:param Category category:
:rtype: list
:return:
"""
ids = cls.get_ties_for_categories_qs(category).values_list('object_id').distinct()
filter_kwargs = {'id__in': [i[0] for i in ids]}
return cls.objects.filter(**filter_kwargs)
|
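`remove_from_category()` deletes ties matched on three keys: the category, the object's `ContentType` and its primary key. The helper below only assembles that filter dict (the function name is illustrative, not part of the library):

```python
def build_removal_filter(category, content_type, object_id):
    """Keys mirror the .filter(...) call inside remove_from_category()."""
    return {
        'category': category,
        'content_type': content_type,
        'object_id': object_id,
    }

flt = build_removal_filter('books', 'app.article', 42)
```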
idlesign/django-sitecats | sitecats/models.py | ModelWithCategory.get_ties_for_categories_qs | python | def get_ties_for_categories_qs(cls, categories, user=None, status=None):
if not isinstance(categories, list):
categories = [categories]
category_ids = []
for category in categories:
if isinstance(category, models.Model):
category_ids.append(category.id)
else:
category_ids.append(category)
filter_kwargs = {
'content_type': ContentType.objects.get_for_model(cls, for_concrete_model=False),
'category_id__in': category_ids
}
if user is not None:
filter_kwargs['creator'] = user
if status is not None:
filter_kwargs['status'] = status
ties = get_tie_model().objects.filter(**filter_kwargs)
return ties | Returns a QuerySet of Ties for the given categories.
:param list|Category categories:
:param User|None user:
:param int|None status:
:return: | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/models.py#L305-L331 | [
"def get_tie_model():\n \"\"\"Returns the Tie model, set for the project.\"\"\"\n return get_model_class_from_string(MODEL_TIE)\n"
] | class ModelWithCategory(models.Model):
"""Helper class for models with tags.
Mix in this helper to your model class to be able to categorize model instances.
"""
categories = GenericRelation(MODEL_TIE)
class Meta(object):
abstract = True
_category_lists_init_kwargs = None
_category_editor = None
def set_category_lists_init_kwargs(self, kwa_dict):
"""Sets keyword arguments for category lists which can be spawned
by get_category_lists().
:param dict|None kwa_dict:
:return:
"""
self._category_lists_init_kwargs = kwa_dict
def get_category_lists(self, init_kwargs=None, additional_parents_aliases=None):
"""Returns a list of CategoryList objects, associated with
this model instance.
:param dict|None init_kwargs:
:param list|None additional_parents_aliases:
:rtype: list|CategoryRequestHandler
:return:
"""
if self._category_editor is not None: # Return editor lists instead of plain lists if it's enabled.
return self._category_editor.get_lists()
from .toolbox import get_category_lists
init_kwargs = init_kwargs or {}
catlist_kwargs = {}
if self._category_lists_init_kwargs is not None:
catlist_kwargs.update(self._category_lists_init_kwargs)
catlist_kwargs.update(init_kwargs)
lists = get_category_lists(catlist_kwargs, additional_parents_aliases, obj=self)
return lists
def enable_category_lists_editor(self, request, editor_init_kwargs=None, additional_parents_aliases=None,
lists_init_kwargs=None, handler_init_kwargs=None):
"""Enables editor functionality for categories of this object.
:param Request request: Django request object
:param dict editor_init_kwargs: Keyword args to initialize category lists editor with.
See CategoryList.enable_editor()
:param list additional_parents_aliases: Aliases of categories for editor to render
even if this object has no tie to them.
:param dict lists_init_kwargs: Keyword args to initialize CategoryList objects with
:param dict handler_init_kwargs: Keyword args to initialize CategoryRequestHandler object with
:return:
"""
from .toolbox import CategoryRequestHandler
additional_parents_aliases = additional_parents_aliases or []
lists_init_kwargs = lists_init_kwargs or {}
editor_init_kwargs = editor_init_kwargs or {}
handler_init_kwargs = handler_init_kwargs or {}
handler = CategoryRequestHandler(request, self, **handler_init_kwargs)
lists = self.get_category_lists(
init_kwargs=lists_init_kwargs, additional_parents_aliases=additional_parents_aliases)
handler.register_lists(lists, lists_init_kwargs=lists_init_kwargs, editor_init_kwargs=editor_init_kwargs)
self._category_editor = handler # Set link to handler to mutate get_category_lists() behaviour.
return handler.listen()
def add_to_category(self, category, user):
"""Add this model instance to a category.
:param Category category: Category to add this object to
:param User user: User who adds
:return:
"""
init_kwargs = {
'category': category,
'creator': user,
'linked_object': self
}
tie = self.categories.model(**init_kwargs) # That's a model of Tie.
tie.save()
return tie
def remove_from_category(self, category):
"""Removes this object from a given category.
:param Category category:
:return:
"""
ctype = ContentType.objects.get_for_model(self)
self.categories.model.objects.filter(category=category, content_type=ctype, object_id=self.id).delete()
@classmethod
def get_from_category_qs(cls, category):
"""Returns a QuerySet of objects of this type associated with the given category.
:param Category category:
:rtype: QuerySet
:return:
"""
ids = cls.get_ties_for_categories_qs(category).values_list('object_id').distinct()
filter_kwargs = {'id__in': [i[0] for i in ids]}
return cls.objects.filter(**filter_kwargs)
|
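The `add_to_category()`/`remove_from_category()` pair above boils down to inserting and deleting generic "tie" rows keyed by content type, object id and category. A minimal in-memory sketch of that bookkeeping (all names here are illustrative, no Django involved):

```python
# Minimal stand-in for the Tie model used by ModelWithCategory:
# each tie links a (content_type, object_id) pair to a category id.

class TieStore:
    def __init__(self):
        self._ties = []  # list of (content_type, object_id, category_id)

    def add(self, content_type, object_id, category_id):
        # Mirrors add_to_category(): create and "save" a tie row.
        tie = (content_type, object_id, category_id)
        self._ties.append(tie)
        return tie

    def remove(self, content_type, object_id, category_id):
        # Mirrors the .filter(...).delete() call in remove_from_category().
        self._ties = [
            t for t in self._ties
            if t != (content_type, object_id, category_id)
        ]

    def categories_for(self, content_type, object_id):
        return [cat for ct, oid, cat in self._ties
                if (ct, oid) == (content_type, object_id)]

store = TieStore()
store.add('blog.article', 1, 10)
store.add('blog.article', 1, 11)
store.remove('blog.article', 1, 10)
```

Deleting by the full `(content_type, object_id, category)` triple is what lets unrelated models reuse the same tie table without clashing.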
idlesign/django-sitecats | sitecats/models.py | ModelWithCategory.get_from_category_qs | python | def get_from_category_qs(cls, category):
ids = cls.get_ties_for_categories_qs(category).values_list('object_id').distinct()
filter_kwargs = {'id__in': [i[0] for i in ids]}
return cls.objects.filter(**filter_kwargs) | Returns a QuerySet of objects of this type associated with the given category.
:param Category category:
:rtype: QuerySet
:return: | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/models.py#L334-L343 | [
"def get_ties_for_categories_qs(cls, categories, user=None, status=None):\n \"\"\"Returns a QuerySet of Ties for the given categories.\n\n :param list|Category categories:\n :param User|None user:\n :param int|None status:\n :return:\n \"\"\"\n if not isinstance(categories, list):\n cate... | class ModelWithCategory(models.Model):
"""Helper class for models with categories.
Mix in this helper to your model class to be able to categorize model instances.
"""
categories = GenericRelation(MODEL_TIE)
class Meta(object):
abstract = True
_category_lists_init_kwargs = None
_category_editor = None
def set_category_lists_init_kwargs(self, kwa_dict):
"""Sets keyword arguments for category lists which can be spawned
by get_category_lists().
:param dict|None kwa_dict:
:return:
"""
self._category_lists_init_kwargs = kwa_dict
def get_category_lists(self, init_kwargs=None, additional_parents_aliases=None):
"""Returns a list of CategoryList objects associated with
this model instance.
:param dict|None init_kwargs:
:param list|None additional_parents_aliases:
:rtype: list|CategoryRequestHandler
:return:
"""
if self._category_editor is not None: # Return editor lists instead of plain lists if it's enabled.
return self._category_editor.get_lists()
from .toolbox import get_category_lists
init_kwargs = init_kwargs or {}
catlist_kwargs = {}
if self._category_lists_init_kwargs is not None:
catlist_kwargs.update(self._category_lists_init_kwargs)
catlist_kwargs.update(init_kwargs)
lists = get_category_lists(catlist_kwargs, additional_parents_aliases, obj=self)
return lists
def enable_category_lists_editor(self, request, editor_init_kwargs=None, additional_parents_aliases=None,
lists_init_kwargs=None, handler_init_kwargs=None):
"""Enables editor functionality for categories of this object.
:param Request request: Django request object
:param dict editor_init_kwargs: Keyword args to initialize category lists editor with.
See CategoryList.enable_editor()
:param list additional_parents_aliases: Aliases of categories for editor to render
even if this object has no tie to them.
:param dict lists_init_kwargs: Keyword args to initialize CategoryList objects with
:param dict handler_init_kwargs: Keyword args to initialize CategoryRequestHandler object with
:return:
"""
from .toolbox import CategoryRequestHandler
additional_parents_aliases = additional_parents_aliases or []
lists_init_kwargs = lists_init_kwargs or {}
editor_init_kwargs = editor_init_kwargs or {}
handler_init_kwargs = handler_init_kwargs or {}
handler = CategoryRequestHandler(request, self, **handler_init_kwargs)
lists = self.get_category_lists(
init_kwargs=lists_init_kwargs, additional_parents_aliases=additional_parents_aliases)
handler.register_lists(lists, lists_init_kwargs=lists_init_kwargs, editor_init_kwargs=editor_init_kwargs)
self._category_editor = handler # Set link to handler to mutate get_category_lists() behaviour.
return handler.listen()
def add_to_category(self, category, user):
"""Add this model instance to a category.
:param Category category: Category to add this object to
:param User user: User who adds the object
:return:
"""
init_kwargs = {
'category': category,
'creator': user,
'linked_object': self
}
tie = self.categories.model(**init_kwargs) # That's a model of Tie.
tie.save()
return tie
def remove_from_category(self, category):
"""Removes this object from a given category.
:param Category category:
:return:
"""
ctype = ContentType.objects.get_for_model(self)
self.categories.model.objects.filter(category=category, content_type=ctype, object_id=self.id).delete()
@classmethod
def get_ties_for_categories_qs(cls, categories, user=None, status=None):
"""Returns a QuerySet of Ties for the given categories.
:param list|Category categories:
:param User|None user:
:param int|None status:
:return:
"""
if not isinstance(categories, list):
categories = [categories]
category_ids = []
for category in categories:
if isinstance(category, models.Model):
category_ids.append(category.id)
else:
category_ids.append(category)
filter_kwargs = {
'content_type': ContentType.objects.get_for_model(cls, for_concrete_model=False),
'category_id__in': category_ids
}
if user is not None:
filter_kwargs['creator'] = user
if status is not None:
filter_kwargs['status'] = status
ties = get_tie_model().objects.filter(**filter_kwargs)
return ties
|
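`get_from_category_qs()` above is a two-step lookup: collect the distinct object ids tied to the category, then fetch objects with an `id__in` filter. A sketch of the same logic with plain lists standing in for querysets (names are illustrative):

```python
# (category_id, object_id) pairs, as produced by the ties queryset;
# note the duplicate tie for object 2, which must be collapsed.
ties = [
    (5, 1), (5, 2), (5, 2), (7, 3),
]
objects = {1: 'first', 2: 'second', 3: 'third'}

def objects_in_category(category_id):
    # Step 1: distinct object ids for the category (the .distinct() call).
    ids = {obj_id for cat_id, obj_id in ties if cat_id == category_id}
    # Step 2: fetch by id, standing in for objects.filter(id__in=ids).
    return [objects[i] for i in sorted(ids)]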
idlesign/django-sitecats | sitecats/templatetags/sitecats.py | detect_clause | python | def detect_clause(parser, clause_name, tokens, as_filter_expr=True):
if clause_name in tokens:
t_index = tokens.index(clause_name)
clause_value = tokens[t_index + 1]
if as_filter_expr:
clause_value = parser.compile_filter(clause_value)
del tokens[t_index:t_index + 2]
else:
clause_value = None
return clause_value | Helper function that detects a certain clause in the tag tokens list.
Returns its value. | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/templatetags/sitecats.py#L102-L115 | null | from django import template, VERSION
from django.conf import settings
from django.template.loader import get_template
from django.template.base import FilterExpression
from ..models import ModelWithCategory
from ..toolbox import CategoryRequestHandler, CategoryList
from ..exceptions import SitecatsConfigurationError
register = template.Library()
_CONTEXT_FLATTEN = VERSION >= (1, 11)
@register.tag
def sitecats_url(parser, token):
tokens = token.split_contents()
as_clause = detect_clause(parser, 'as', tokens, as_filter_expr=False)
target_list = detect_clause(parser, 'using', tokens)
category = detect_clause(parser, 'for', tokens)
if category is None or target_list is None:
raise template.TemplateSyntaxError(
'`sitecats_url` tag expects the following notation: '
'{% sitecats_url for my_category using my_categories_list as someurl %}.')
return sitecats_urlNode(category, target_list, as_clause)
@register.tag
def sitecats_categories(parser, token):
tokens = token.split_contents()
use_template = detect_clause(parser, 'template', tokens)
target_obj = detect_clause(parser, 'from', tokens)
if target_obj is None:
raise template.TemplateSyntaxError(
'`sitecats_categories` tag expects the following notation: '
'{% sitecats_categories from my_categories_list template "sitecats/my_categories.html" %}.')
return sitecats_categoriesNode(target_obj, use_template)
class sitecats_urlNode(template.Node):
def __init__(self, category, target_list, as_var):
self.as_var = as_var
self.target_list = target_list
self.category = category
def render(self, context):
resolve = lambda arg: arg.resolve(context) if isinstance(arg, FilterExpression) else arg
category = resolve(self.category)
target_obj = resolve(self.target_list)
url = target_obj.get_category_url(category)
if not self.as_var:
return url
context[self.as_var] = url
return ''
class sitecats_categoriesNode(template.Node):
def __init__(self, target_obj, use_template):
self.use_template = use_template
self.target_obj = target_obj
def render(self, context):
resolve = lambda arg: arg.resolve(context) if isinstance(arg, FilterExpression) else arg
target_obj = resolve(self.target_obj)
if isinstance(target_obj, CategoryRequestHandler):
target_obj = target_obj.get_lists()
elif isinstance(target_obj, ModelWithCategory):
target_obj = target_obj.get_category_lists()
elif isinstance(target_obj, (list, tuple)): # Simple list of CategoryList items.
pass
elif isinstance(target_obj, CategoryList):
target_obj = (target_obj,)
else:
if settings.DEBUG:
raise SitecatsConfigurationError(
'`sitecats_categories` template tag can\'t accept `%s` type '
'from `%s` template variable.' % (type(target_obj), self.target_obj))
return '' # Silent fall.
context.push()
context['sitecats_categories'] = target_obj
template_path = resolve(self.use_template) or 'sitecats/categories.html'
contents = get_template(template_path).render(context.flatten() if _CONTEXT_FLATTEN else context)
context.pop()
return contents
|
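The clause detection in `detect_clause()` above is pure list surgery: find the clause keyword among the split tag tokens, take the token after it as the value, and cut both out of the list. A standalone sketch of that step (the `as_filter_expr` compilation path is Django-specific and skipped here):

```python
def detect_clause_value(clause_name, tokens):
    # Find the clause keyword; the value is the very next token.
    if clause_name in tokens:
        t_index = tokens.index(clause_name)
        clause_value = tokens[t_index + 1]
        # Remove both the keyword and its value from the token list,
        # so later clauses are detected against the shrunken list.
        del tokens[t_index:t_index + 2]
        return clause_value
    return None

# Tokens as produced by token.split_contents() for:
# {% sitecats_url for my_category using my_list as someurl %}
tokens = ['sitecats_url', 'for', 'my_category', 'using', 'my_list', 'as', 'someurl']
as_var = detect_clause_value('as', tokens)
target = detect_clause_value('using', tokens)
```

Mutating the shared token list is deliberate: each call strips its clause, so the tag function can check that only the expected tokens remain.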
idlesign/django-sitecats | sitecats/utils.py | Cache._cache_init | python | def _cache_init(self):
cache_ = cache.get(self.CACHE_ENTRY_NAME)
if cache_ is None:
categories = get_category_model().objects.order_by('sort_order')
ids = {category.id: category for category in categories}
aliases = {category.alias: category for category in categories if category.alias}
parent_to_children = OrderedDict() # Preserve aliases order.
for category in categories:
parent_category = ids.get(category.parent_id, False)
parent_alias = None
if parent_category:
parent_alias = parent_category.alias
if parent_alias not in parent_to_children:
parent_to_children[parent_alias] = []
parent_to_children[parent_alias].append(category.id)
cache_ = {
self.CACHE_NAME_IDS: ids,
self.CACHE_NAME_PARENTS: parent_to_children,
self.CACHE_NAME_ALIASES: aliases
}
cache.set(self.CACHE_ENTRY_NAME, cache_, self.CACHE_TIMEOUT)
self._cache = cache_ | Initializes local cache from Django cache if required. | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/utils.py#L57-L85 | [
"def get_category_model():\n \"\"\"Returns the Category model, set for the project.\"\"\"\n return get_model_class_from_string(MODEL_CATEGORY)\n"
] | class Cache(object):
# Sitecats objects are stored in Django cache for a year (60 * 60 * 24 * 365 = 31536000 sec).
# Cache is only invalidated on sitecats Category model save/delete.
CACHE_TIMEOUT = 31536000
CACHE_ENTRY_NAME = 'sitecats'
CACHE_NAME_IDS = 'ids'
CACHE_NAME_ALIASES = 'aliases'
CACHE_NAME_PARENTS = 'parents'
def __init__(self):
self._cache = None
# Listen for signals from the models.
category_model = get_category_model()
signals.post_save.connect(self._cache_empty, sender=category_model)
signals.post_delete.connect(self._cache_empty, sender=category_model)
def _cache_empty(self, **kwargs):
"""Empties cached sitecats data."""
self._cache = None
cache.delete(self.CACHE_ENTRY_NAME)
ENTIRE_ENTRY_KEY = object()
def _cache_get_entry(self, entry_name, key=ENTIRE_ENTRY_KEY, default=False):
"""Returns cache entry parameter value by its name.
:param str entry_name:
:param str key:
:param type default:
:return:
"""
if key is self.ENTIRE_ENTRY_KEY:
return self._cache[entry_name]
return self._cache[entry_name].get(key, default)
def sort_aliases(self, aliases):
"""Sorts the given aliases list, returns a sorted list.
:param list aliases:
:return: sorted aliases list
"""
self._cache_init()
if not aliases:
return aliases
parent_aliases = self._cache_get_entry(self.CACHE_NAME_PARENTS).keys()
return [parent_alias for parent_alias in parent_aliases if parent_alias in aliases]
def get_parents_for(self, child_ids):
"""Returns parent aliases for a list of child IDs.
:param list child_ids:
:rtype: set
:return: a set of parent aliases
"""
self._cache_init()
parent_candidates = []
for parent, children in self._cache_get_entry(self.CACHE_NAME_PARENTS).items():
if set(children).intersection(child_ids):
parent_candidates.append(parent)
return set(parent_candidates) # Make unique.
def get_children_for(self, parent_alias=None, only_with_aliases=False):
"""Returns a list with categories under the given parent.
:param str|None parent_alias: Parent category alias or None for categories under root
:param bool only_with_aliases: Flag to return only children with aliases
:return: a list of category objects
"""
self._cache_init()
child_ids = self.get_child_ids(parent_alias)
if only_with_aliases:
children = []
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.alias:
children.append(category)
return children
return [self.get_category_by_id(cid) for cid in child_ids]
def get_child_ids(self, parent_alias):
"""Returns child IDs of the given parent category
:param str parent_alias: Parent category alias
:rtype: list
:return: a list of child IDs
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_PARENTS, parent_alias, [])
def get_category_by_alias(self, alias):
"""Returns Category object by its alias.
:param str alias:
:rtype: Category|None
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_ALIASES, alias, None)
def get_category_by_id(self, cid):
"""Returns Category object by its id.
:param int cid:
:rtype: Category
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_IDS, cid)
def find_category(self, parent_alias, title):
"""Searches parent category children for the given title (case-insensitive).
:param str parent_alias:
:param str title:
:rtype: Category|None
:return: None if not found; otherwise - found Category
"""
found = None
child_ids = self.get_child_ids(parent_alias)
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.title.lower() == title.lower():
found = category
break
return found
def get_ties_stats(self, categories, target_model=None):
"""Returns a dict with categories popularity stats.
:param list categories:
:param Model|None target_model:
:return:
"""
filter_kwargs = {
'category_id__in': categories
}
if target_model is not None:
is_cls = hasattr(target_model, '__name__')
if is_cls:
concrete = False
else:
concrete = True
filter_kwargs['object_id'] = target_model.id
filter_kwargs['content_type'] = ContentType.objects.get_for_model(
target_model, for_concrete_model=concrete
)
return {
item['category_id']: item['ties_num'] for item in
get_tie_model().objects.filter(**filter_kwargs).values('category_id').annotate(ties_num=Count('category'))
}
def get_categories(self, parent_aliases=None, target_object=None, tied_only=True):
"""Returns subcategories (or ties if `target_object` is set)
for the given parent category.
:param str|None|list parent_aliases:
:param ModelWithCategory|Model target_object:
:param bool tied_only: Flag to get only categories with ties. Ties stats are stored in `ties_num` attrs.
:return: a list of category objects or tie objects extended with information from their categories.
"""
single_mode = False
if not isinstance(parent_aliases, list):
single_mode = parent_aliases
parent_aliases = [parent_aliases]
all_children = []
parents_to_children = OrderedDict()
for parent_alias in parent_aliases:
child_ids = self.get_child_ids(parent_alias)
parents_to_children[parent_alias] = child_ids
if tied_only:
all_children.extend(child_ids)
ties = {}
if tied_only:
source = OrderedDict()
ties = self.get_ties_stats(all_children, target_object)
for parent_alias, child_ids in parents_to_children.items():
common = set(ties.keys()).intersection(child_ids)
if common:
source[parent_alias] = common
else:
source = parents_to_children
categories = OrderedDict()
for parent_alias, child_ids in source.items():
for cat_id in child_ids:
cat = self.get_category_by_id(cat_id)
if tied_only:
cat.ties_num = ties.get(cat_id, 0)
if parent_alias not in categories:
categories[parent_alias] = []
categories[parent_alias].append(cat)
if single_mode != False: # sic!
return categories[single_mode]
return categories
|
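The core of `_cache_init()` above is the parent-to-children index: each category row carries a `parent_id`, and the cache keys children lists by the parent's *alias* (with `None` for root-level categories). A sketch of that transformation, with tuples standing in for Category instances and input already ordered by `sort_order` as in the real query:

```python
from collections import OrderedDict

# (id, alias, parent_id) rows, already ordered by sort_order.
categories = [
    (1, 'languages', None),
    (2, 'python', 1),
    (3, 'rust', 1),
    (4, None, 2),  # un-aliased leaf under 'python'
]

# id -> row index, as built before the parents pass.
ids = {cid: (cid, alias, parent_id) for cid, alias, parent_id in categories}

parent_to_children = OrderedDict()  # preserves alias order, as in the cache
for cid, alias, parent_id in categories:
    parent = ids.get(parent_id)
    parent_alias = parent[1] if parent else None
    parent_to_children.setdefault(parent_alias, []).append(cid)
```

Keying by alias rather than id is what lets the rest of the API (`get_child_ids()`, `get_children_for()`) address subtrees by the human-readable aliases configured in the project.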
idlesign/django-sitecats | sitecats/utils.py | Cache._cache_get_entry | python | def _cache_get_entry(self, entry_name, key=ENTIRE_ENTRY_KEY, default=False):
if key is self.ENTIRE_ENTRY_KEY:
return self._cache[entry_name]
return self._cache[entry_name].get(key, default) | Returns cache entry parameter value by its name.
:param str entry_name:
:param str key:
:param type default:
:return: | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/utils.py#L94-L104 | null | class Cache(object):
# Sitecats objects are stored in Django cache for a year (60 * 60 * 24 * 365 = 31536000 sec).
# Cache is only invalidated on sitecats Category model save/delete.
CACHE_TIMEOUT = 31536000
CACHE_ENTRY_NAME = 'sitecats'
CACHE_NAME_IDS = 'ids'
CACHE_NAME_ALIASES = 'aliases'
CACHE_NAME_PARENTS = 'parents'
def __init__(self):
self._cache = None
# Listen for signals from the models.
category_model = get_category_model()
signals.post_save.connect(self._cache_empty, sender=category_model)
signals.post_delete.connect(self._cache_empty, sender=category_model)
def _cache_init(self):
"""Initializes local cache from Django cache if required."""
cache_ = cache.get(self.CACHE_ENTRY_NAME)
if cache_ is None:
categories = get_category_model().objects.order_by('sort_order')
ids = {category.id: category for category in categories}
aliases = {category.alias: category for category in categories if category.alias}
parent_to_children = OrderedDict() # Preserve aliases order.
for category in categories:
parent_category = ids.get(category.parent_id, False)
parent_alias = None
if parent_category:
parent_alias = parent_category.alias
if parent_alias not in parent_to_children:
parent_to_children[parent_alias] = []
parent_to_children[parent_alias].append(category.id)
cache_ = {
self.CACHE_NAME_IDS: ids,
self.CACHE_NAME_PARENTS: parent_to_children,
self.CACHE_NAME_ALIASES: aliases
}
cache.set(self.CACHE_ENTRY_NAME, cache_, self.CACHE_TIMEOUT)
self._cache = cache_
def _cache_empty(self, **kwargs):
"""Empties cached sitecats data."""
self._cache = None
cache.delete(self.CACHE_ENTRY_NAME)
ENTIRE_ENTRY_KEY = object()
def sort_aliases(self, aliases):
"""Sorts the given aliases list, returns a sorted list.
:param list aliases:
:return: sorted aliases list
"""
self._cache_init()
if not aliases:
return aliases
parent_aliases = self._cache_get_entry(self.CACHE_NAME_PARENTS).keys()
return [parent_alias for parent_alias in parent_aliases if parent_alias in aliases]
def get_parents_for(self, child_ids):
"""Returns parent aliases for a list of child IDs.
:param list child_ids:
:rtype: set
:return: a set of parent aliases
"""
self._cache_init()
parent_candidates = []
for parent, children in self._cache_get_entry(self.CACHE_NAME_PARENTS).items():
if set(children).intersection(child_ids):
parent_candidates.append(parent)
return set(parent_candidates) # Make unique.
def get_children_for(self, parent_alias=None, only_with_aliases=False):
"""Returns a list with categories under the given parent.
:param str|None parent_alias: Parent category alias or None for categories under root
:param bool only_with_aliases: Flag to return only children with aliases
:return: a list of category objects
"""
self._cache_init()
child_ids = self.get_child_ids(parent_alias)
if only_with_aliases:
children = []
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.alias:
children.append(category)
return children
return [self.get_category_by_id(cid) for cid in child_ids]
def get_child_ids(self, parent_alias):
"""Returns child IDs of the given parent category
:param str parent_alias: Parent category alias
:rtype: list
:return: a list of child IDs
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_PARENTS, parent_alias, [])
def get_category_by_alias(self, alias):
"""Returns Category object by its alias.
:param str alias:
:rtype: Category|None
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_ALIASES, alias, None)
def get_category_by_id(self, cid):
"""Returns Category object by its id.
:param int cid:
:rtype: Category
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_IDS, cid)
def find_category(self, parent_alias, title):
"""Searches parent category children for the given title (case-insensitive).
:param str parent_alias:
:param str title:
:rtype: Category|None
:return: None if not found; otherwise - found Category
"""
found = None
child_ids = self.get_child_ids(parent_alias)
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.title.lower() == title.lower():
found = category
break
return found
def get_ties_stats(self, categories, target_model=None):
"""Returns a dict with categories popularity stats.
:param list categories:
:param Model|None target_model:
:return:
"""
filter_kwargs = {
'category_id__in': categories
}
if target_model is not None:
is_cls = hasattr(target_model, '__name__')
if is_cls:
concrete = False
else:
concrete = True
filter_kwargs['object_id'] = target_model.id
filter_kwargs['content_type'] = ContentType.objects.get_for_model(
target_model, for_concrete_model=concrete
)
return {
item['category_id']: item['ties_num'] for item in
get_tie_model().objects.filter(**filter_kwargs).values('category_id').annotate(ties_num=Count('category'))
}
def get_categories(self, parent_aliases=None, target_object=None, tied_only=True):
"""Returns subcategories (or ties if `target_object` is set)
for the given parent category.
:param str|None|list parent_aliases:
:param ModelWithCategory|Model target_object:
:param bool tied_only: Flag to get only categories with ties. Ties stats are stored in `ties_num` attrs.
:return: a list of category objects or tie objects extended with information from their categories.
"""
single_mode = False
if not isinstance(parent_aliases, list):
single_mode = parent_aliases
parent_aliases = [parent_aliases]
all_children = []
parents_to_children = OrderedDict()
for parent_alias in parent_aliases:
child_ids = self.get_child_ids(parent_alias)
parents_to_children[parent_alias] = child_ids
if tied_only:
all_children.extend(child_ids)
ties = {}
if tied_only:
source = OrderedDict()
ties = self.get_ties_stats(all_children, target_object)
for parent_alias, child_ids in parents_to_children.items():
common = set(ties.keys()).intersection(child_ids)
if common:
source[parent_alias] = common
else:
source = parents_to_children
categories = OrderedDict()
for parent_alias, child_ids in source.items():
for cat_id in child_ids:
cat = self.get_category_by_id(cat_id)
if tied_only:
cat.ties_num = ties.get(cat_id, 0)
if parent_alias not in categories:
categories[parent_alias] = []
categories[parent_alias].append(cat)
if single_mode != False: # sic!
return categories[single_mode]
return categories
|
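`_cache_get_entry()` above relies on a sentinel object (`ENTIRE_ENTRY_KEY`) rather than `None` as the default, so that "no key given, return the whole entry" stays distinguishable from any real key, including `None` (which is a legitimate parent alias for root categories). A standalone sketch of that pattern:

```python
_ENTIRE = object()  # unique sentinel: no caller-supplied key can be `is` it

def cache_get(store, entry_name, key=_ENTIRE, default=False):
    if key is _ENTIRE:
        return store[entry_name]            # whole sub-dict requested
    return store[entry_name].get(key, default)

# None is a real key here (root-level parent alias), so a None default
# for `key` would be ambiguous; the sentinel avoids that.
store = {'parents': {'python': [4], None: [1]}}
```

Identity comparison (`is`) against a private `object()` instance is the standard way to get this behavior in Python.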
idlesign/django-sitecats | sitecats/utils.py | Cache.sort_aliases | python | def sort_aliases(self, aliases):
self._cache_init()
if not aliases:
return aliases
parent_aliases = self._cache_get_entry(self.CACHE_NAME_PARENTS).keys()
return [parent_alias for parent_alias in parent_aliases if parent_alias in aliases] | Sorts the given aliases list, returns a sorted list.
:param list aliases:
:return: sorted aliases list | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/utils.py#L106-L116 | [
"def _cache_init(self):\n \"\"\"Initializes local cache from Django cache if required.\"\"\"\n cache_ = cache.get(self.CACHE_ENTRY_NAME)\n\n if cache_ is None:\n categories = get_category_model().objects.order_by('sort_order')\n\n ids = {category.id: category for category in categories}\n ... | class Cache(object):
# Sitecats objects are stored in Django cache for a year (60 * 60 * 24 * 365 = 31536000 sec).
# Cache is only invalidated on sitecats Category model save/delete.
CACHE_TIMEOUT = 31536000
CACHE_ENTRY_NAME = 'sitecats'
CACHE_NAME_IDS = 'ids'
CACHE_NAME_ALIASES = 'aliases'
CACHE_NAME_PARENTS = 'parents'
def __init__(self):
self._cache = None
# Listen for signals from the models.
category_model = get_category_model()
signals.post_save.connect(self._cache_empty, sender=category_model)
signals.post_delete.connect(self._cache_empty, sender=category_model)
def _cache_init(self):
"""Initializes local cache from Django cache if required."""
cache_ = cache.get(self.CACHE_ENTRY_NAME)
if cache_ is None:
categories = get_category_model().objects.order_by('sort_order')
ids = {category.id: category for category in categories}
aliases = {category.alias: category for category in categories if category.alias}
parent_to_children = OrderedDict() # Preserve aliases order.
for category in categories:
parent_category = ids.get(category.parent_id, False)
parent_alias = None
if parent_category:
parent_alias = parent_category.alias
if parent_alias not in parent_to_children:
parent_to_children[parent_alias] = []
parent_to_children[parent_alias].append(category.id)
cache_ = {
self.CACHE_NAME_IDS: ids,
self.CACHE_NAME_PARENTS: parent_to_children,
self.CACHE_NAME_ALIASES: aliases
}
cache.set(self.CACHE_ENTRY_NAME, cache_, self.CACHE_TIMEOUT)
self._cache = cache_
def _cache_empty(self, **kwargs):
"""Empties cached sitecats data."""
self._cache = None
cache.delete(self.CACHE_ENTRY_NAME)
ENTIRE_ENTRY_KEY = object()
def _cache_get_entry(self, entry_name, key=ENTIRE_ENTRY_KEY, default=False):
"""Returns cache entry parameter value by its name.
:param str entry_name:
:param str key:
:param type default:
:return:
"""
if key is self.ENTIRE_ENTRY_KEY:
return self._cache[entry_name]
return self._cache[entry_name].get(key, default)
def get_parents_for(self, child_ids):
"""Returns parent aliases for a list of child IDs.
:param list child_ids:
:rtype: set
:return: a set of parent aliases
"""
self._cache_init()
parent_candidates = []
for parent, children in self._cache_get_entry(self.CACHE_NAME_PARENTS).items():
if set(children).intersection(child_ids):
parent_candidates.append(parent)
return set(parent_candidates) # Make unique.
def get_children_for(self, parent_alias=None, only_with_aliases=False):
"""Returns a list with categories under the given parent.
:param str|None parent_alias: Parent category alias or None for categories under root
:param bool only_with_aliases: Flag to return only children with aliases
:return: a list of category objects
"""
self._cache_init()
child_ids = self.get_child_ids(parent_alias)
if only_with_aliases:
children = []
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.alias:
children.append(category)
return children
return [self.get_category_by_id(cid) for cid in child_ids]
def get_child_ids(self, parent_alias):
"""Returns child IDs of the given parent category
:param str parent_alias: Parent category alias
:rtype: list
:return: a list of child IDs
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_PARENTS, parent_alias, [])
def get_category_by_alias(self, alias):
"""Returns Category object by its alias.
:param str alias:
:rtype: Category|None
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_ALIASES, alias, None)
def get_category_by_id(self, cid):
"""Returns Category object by its id.
:param int cid:
:rtype: Category
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_IDS, cid)
def find_category(self, parent_alias, title):
"""Searches parent category children for the given title (case-insensitive).
:param str parent_alias:
:param str title:
:rtype: Category|None
:return: None if not found; otherwise - found Category
"""
found = None
child_ids = self.get_child_ids(parent_alias)
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.title.lower() == title.lower():
found = category
break
return found
def get_ties_stats(self, categories, target_model=None):
"""Returns a dict with categories popularity stats.
:param list categories:
:param Model|None target_model:
:return:
"""
filter_kwargs = {
'category_id__in': categories
}
if target_model is not None:
is_cls = hasattr(target_model, '__name__')
if is_cls:
concrete = False
else:
concrete = True
filter_kwargs['object_id'] = target_model.id
filter_kwargs['content_type'] = ContentType.objects.get_for_model(
target_model, for_concrete_model=concrete
)
return {
item['category_id']: item['ties_num'] for item in
get_tie_model().objects.filter(**filter_kwargs).values('category_id').annotate(ties_num=Count('category'))
}
def get_categories(self, parent_aliases=None, target_object=None, tied_only=True):
"""Returns subcategories (or ties if `target_object` is set)
for the given parent category.
:param str|None|list parent_aliases:
:param ModelWithCategory|Model target_object:
:param bool tied_only: Flag to get only categories with ties. Ties stats are stored in `ties_num` attrs.
:return: a list of category objects or tie objects extended with information from their categories.
"""
single_mode = False
if not isinstance(parent_aliases, list):
single_mode = parent_aliases
parent_aliases = [parent_aliases]
all_children = []
parents_to_children = OrderedDict()
for parent_alias in parent_aliases:
child_ids = self.get_child_ids(parent_alias)
parents_to_children[parent_alias] = child_ids
if tied_only:
all_children.extend(child_ids)
ties = {}
if tied_only:
source = OrderedDict()
ties = self.get_ties_stats(all_children, target_object)
for parent_alias, child_ids in parents_to_children.items():
common = set(ties.keys()).intersection(child_ids)
if common:
source[parent_alias] = common
else:
source = parents_to_children
categories = OrderedDict()
for parent_alias, child_ids in source.items():
for cat_id in child_ids:
cat = self.get_category_by_id(cat_id)
if tied_only:
cat.ties_num = ties.get(cat_id, 0)
if parent_alias not in categories:
categories[parent_alias] = []
categories[parent_alias].append(cat)
if single_mode != False: # sic!
return categories[single_mode]
return categories
|
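`sort_aliases()` above sorts by reference, not by value: the key order of the cached parents mapping fixes a canonical sequence, and the input list is reordered by filtering that sequence. A sketch of the idea; note that aliases absent from the canonical order are silently dropped, exactly as in the method:

```python
# Canonical alias order, standing in for the keys of the parents cache
# (an OrderedDict in the real code, so key order is stable).
canonical = ['languages', 'tools', 'topics']

def sort_aliases(aliases):
    if not aliases:
        return aliases  # empty/None input passes through unchanged
    # Filter the canonical sequence, keeping only the requested aliases.
    return [a for a in canonical if a in aliases]
```

Filtering the canonical list instead of sorting the input keeps the operation O(parents) and guarantees the output order matches the configured category ordering.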
idlesign/django-sitecats | sitecats/utils.py | Cache.get_parents_for | python | def get_parents_for(self, child_ids):
self._cache_init()
parent_candidates = []
for parent, children in self._cache_get_entry(self.CACHE_NAME_PARENTS).items():
if set(children).intersection(child_ids):
parent_candidates.append(parent)
return set(parent_candidates) | Returns parent aliases for a list of child IDs.
:param list child_ids:
:rtype: set
:return: a set of parent aliases | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/utils.py#L118-L130 | [
"def _cache_init(self):\n \"\"\"Initializes local cache from Django cache if required.\"\"\"\n cache_ = cache.get(self.CACHE_ENTRY_NAME)\n\n if cache_ is None:\n categories = get_category_model().objects.order_by('sort_order')\n\n ids = {category.id: category for category in categories}\n ... | class Cache(object):
# Sitecats objects are stored in Django cache for a year (60 * 60 * 24 * 365 = 31536000 sec).
# Cache is only invalidated on sitecats Category model save/delete.
CACHE_TIMEOUT = 31536000
CACHE_ENTRY_NAME = 'sitecats'
CACHE_NAME_IDS = 'ids'
CACHE_NAME_ALIASES = 'aliases'
CACHE_NAME_PARENTS = 'parents'
def __init__(self):
self._cache = None
# Listen for signals from the models.
category_model = get_category_model()
signals.post_save.connect(self._cache_empty, sender=category_model)
signals.post_delete.connect(self._cache_empty, sender=category_model)
def _cache_init(self):
"""Initializes local cache from Django cache if required."""
cache_ = cache.get(self.CACHE_ENTRY_NAME)
if cache_ is None:
categories = get_category_model().objects.order_by('sort_order')
ids = {category.id: category for category in categories}
aliases = {category.alias: category for category in categories if category.alias}
parent_to_children = OrderedDict() # Preserve aliases order.
for category in categories:
parent_category = ids.get(category.parent_id, False)
parent_alias = None
if parent_category:
parent_alias = parent_category.alias
if parent_alias not in parent_to_children:
parent_to_children[parent_alias] = []
parent_to_children[parent_alias].append(category.id)
cache_ = {
self.CACHE_NAME_IDS: ids,
self.CACHE_NAME_PARENTS: parent_to_children,
self.CACHE_NAME_ALIASES: aliases
}
cache.set(self.CACHE_ENTRY_NAME, cache_, self.CACHE_TIMEOUT)
self._cache = cache_
def _cache_empty(self, **kwargs):
"""Empties cached sitecats data."""
self._cache = None
cache.delete(self.CACHE_ENTRY_NAME)
ENTIRE_ENTRY_KEY = object()
def _cache_get_entry(self, entry_name, key=ENTIRE_ENTRY_KEY, default=False):
"""Returns cache entry parameter value by its name.
:param str entry_name:
:param str key:
:param type default:
:return:
"""
if key is self.ENTIRE_ENTRY_KEY:
return self._cache[entry_name]
return self._cache[entry_name].get(key, default)
def sort_aliases(self, aliases):
"""Sorts the given aliases list, returns a sorted list.
:param list aliases:
:return: sorted aliases list
"""
self._cache_init()
if not aliases:
return aliases
parent_aliases = self._cache_get_entry(self.CACHE_NAME_PARENTS).keys()
return [parent_alias for parent_alias in parent_aliases if parent_alias in aliases]
# Make unique.
def get_children_for(self, parent_alias=None, only_with_aliases=False):
"""Returns a list with with categories under the given parent.
:param str|None parent_alias: Parent category alias or None for categories under root
:param bool only_with_aliases: Flag to return only children with aliases
:return: a list of category objects
"""
self._cache_init()
child_ids = self.get_child_ids(parent_alias)
if only_with_aliases:
children = []
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.alias:
children.append(category)
return children
return [self.get_category_by_id(cid) for cid in child_ids]
def get_child_ids(self, parent_alias):
"""Returns child IDs of the given parent category
:param str parent_alias: Parent category alias
:rtype: list
:return: a list of child IDs
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_PARENTS, parent_alias, [])
def get_category_by_alias(self, alias):
"""Returns Category object by its alias.
:param str alias:
:rtype: Category|None
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_ALIASES, alias, None)
def get_category_by_id(self, cid):
"""Returns Category object by its id.
:param str cid:
:rtype: Category
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_IDS, cid)
def find_category(self, parent_alias, title):
"""Searches parent category children for the given title (case independent).
:param str parent_alias:
:param str title:
:rtype: Category|None
:return: None if not found; otherwise - found Category
"""
found = None
child_ids = self.get_child_ids(parent_alias)
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.title.lower() == title.lower():
found = category
break
return found
def get_ties_stats(self, categories, target_model=None):
"""Returns a dict with categories popularity stats.
:param list categories:
:param Model|None target_model:
:return:
"""
filter_kwargs = {
'category_id__in': categories
}
if target_model is not None:
is_cls = hasattr(target_model, '__name__')
if is_cls:
concrete = False
else:
concrete = True
filter_kwargs['object_id'] = target_model.id
filter_kwargs['content_type'] = ContentType.objects.get_for_model(
target_model, for_concrete_model=concrete
)
return {
item['category_id']: item['ties_num'] for item in
get_tie_model().objects.filter(**filter_kwargs).values('category_id').annotate(ties_num=Count('category'))
}
def get_categories(self, parent_aliases=None, target_object=None, tied_only=True):
"""Returns subcategories (or ties if `target_object` is set)
for the given parent category.
:param str|None|list parent_aliases:
:param ModelWithCategory|Model target_object:
:param bool tied_only: Flag to get only categories with ties. Ties stats are stored in `ties_num` attrs.
:return: a list of category objects or tie objects extended with information from their categories.
"""
single_mode = False
if not isinstance(parent_aliases, list):
single_mode = parent_aliases
parent_aliases = [parent_aliases]
all_children = []
parents_to_children = OrderedDict()
for parent_alias in parent_aliases:
child_ids = self.get_child_ids(parent_alias)
parents_to_children[parent_alias] = child_ids
if tied_only:
all_children.extend(child_ids)
ties = {}
if tied_only:
source = OrderedDict()
ties = self.get_ties_stats(all_children, target_object)
for parent_alias, child_ids in parents_to_children.items():
common = set(ties.keys()).intersection(child_ids)
if common:
source[parent_alias] = common
else:
source = parents_to_children
categories = OrderedDict()
for parent_alias, child_ids in source.items():
for cat_id in child_ids:
cat = self.get_category_by_id(cat_id)
if tied_only:
cat.ties_num = ties.get(cat_id, 0)
if parent_alias not in categories:
categories[parent_alias] = []
categories[parent_alias].append(cat)
if single_mode != False: # sic!
return categories[single_mode]
return categories
|
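`Cache.get_parents_for` above is a reverse lookup over the parent-alias-to-child-IDs mapping, implemented with set intersection. A framework-free sketch of the same pattern (function and argument names are illustrative):

```python
def parents_for(parent_to_children, child_ids):
    """Return the set of parent aliases owning any of the given child IDs.

    Standalone version of Cache.get_parents_for: scan the
    parent -> [child ids] mapping and keep parents whose children
    intersect the query set. A set comprehension makes the result unique.
    """
    child_ids = set(child_ids)
    return {
        parent
        for parent, children in parent_to_children.items()
        if child_ids & set(children)
    }
```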
idlesign/django-sitecats | sitecats/utils.py | Cache.get_children_for | python | def get_children_for(self, parent_alias=None, only_with_aliases=False):
self._cache_init()
child_ids = self.get_child_ids(parent_alias)
if only_with_aliases:
children = []
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.alias:
children.append(category)
return children
return [self.get_category_by_id(cid) for cid in child_ids] | Returns a list of categories under the given parent.
:param str|None parent_alias: Parent category alias or None for categories under root
:param bool only_with_aliases: Flag to return only children with aliases
:return: a list of category objects | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/utils.py#L132-L148 | [
"def _cache_init(self):\n \"\"\"Initializes local cache from Django cache if required.\"\"\"\n cache_ = cache.get(self.CACHE_ENTRY_NAME)\n\n if cache_ is None:\n categories = get_category_model().objects.order_by('sort_order')\n\n ids = {category.id: category for category in categories}\n ... | class Cache(object):
# Sitecats objects are stored in Django cache for a year (60 * 60 * 24 * 365 = 31536000 sec).
# Cache is only invalidated on sitecats Category model save/delete.
CACHE_TIMEOUT = 31536000
CACHE_ENTRY_NAME = 'sitecats'
CACHE_NAME_IDS = 'ids'
CACHE_NAME_ALIASES = 'aliases'
CACHE_NAME_PARENTS = 'parents'
def __init__(self):
self._cache = None
# Listen for signals from the models.
category_model = get_category_model()
signals.post_save.connect(self._cache_empty, sender=category_model)
signals.post_delete.connect(self._cache_empty, sender=category_model)
def _cache_init(self):
"""Initializes local cache from Django cache if required."""
cache_ = cache.get(self.CACHE_ENTRY_NAME)
if cache_ is None:
categories = get_category_model().objects.order_by('sort_order')
ids = {category.id: category for category in categories}
aliases = {category.alias: category for category in categories if category.alias}
parent_to_children = OrderedDict() # Preserve aliases order.
for category in categories:
parent_category = ids.get(category.parent_id, False)
parent_alias = None
if parent_category:
parent_alias = parent_category.alias
if parent_alias not in parent_to_children:
parent_to_children[parent_alias] = []
parent_to_children[parent_alias].append(category.id)
cache_ = {
self.CACHE_NAME_IDS: ids,
self.CACHE_NAME_PARENTS: parent_to_children,
self.CACHE_NAME_ALIASES: aliases
}
cache.set(self.CACHE_ENTRY_NAME, cache_, self.CACHE_TIMEOUT)
self._cache = cache_
def _cache_empty(self, **kwargs):
"""Empties cached sitecats data."""
self._cache = None
cache.delete(self.CACHE_ENTRY_NAME)
ENTIRE_ENTRY_KEY = object()
def _cache_get_entry(self, entry_name, key=ENTIRE_ENTRY_KEY, default=False):
"""Returns cache entry parameter value by its name.
:param str entry_name:
:param str key:
:param type default:
:return:
"""
if key is self.ENTIRE_ENTRY_KEY:
return self._cache[entry_name]
return self._cache[entry_name].get(key, default)
def sort_aliases(self, aliases):
"""Sorts the given aliases list, returns a sorted list.
:param list aliases:
:return: sorted aliases list
"""
self._cache_init()
if not aliases:
return aliases
parent_aliases = self._cache_get_entry(self.CACHE_NAME_PARENTS).keys()
return [parent_alias for parent_alias in parent_aliases if parent_alias in aliases]
def get_parents_for(self, child_ids):
"""Returns parent aliases for a list of child IDs.
:param list child_ids:
:rtype: set
:return: a set of parent aliases
"""
self._cache_init()
parent_candidates = []
for parent, children in self._cache_get_entry(self.CACHE_NAME_PARENTS).items():
if set(children).intersection(child_ids):
parent_candidates.append(parent)
return set(parent_candidates) # Make unique.
def get_child_ids(self, parent_alias):
"""Returns child IDs of the given parent category
:param str parent_alias: Parent category alias
:rtype: list
:return: a list of child IDs
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_PARENTS, parent_alias, [])
def get_category_by_alias(self, alias):
"""Returns Category object by its alias.
:param str alias:
:rtype: Category|None
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_ALIASES, alias, None)
def get_category_by_id(self, cid):
"""Returns Category object by its id.
:param str cid:
:rtype: Category
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_IDS, cid)
def find_category(self, parent_alias, title):
"""Searches parent category children for the given title (case independent).
:param str parent_alias:
:param str title:
:rtype: Category|None
:return: None if not found; otherwise - found Category
"""
found = None
child_ids = self.get_child_ids(parent_alias)
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.title.lower() == title.lower():
found = category
break
return found
def get_ties_stats(self, categories, target_model=None):
"""Returns a dict with categories popularity stats.
:param list categories:
:param Model|None target_model:
:return:
"""
filter_kwargs = {
'category_id__in': categories
}
if target_model is not None:
is_cls = hasattr(target_model, '__name__')
if is_cls:
concrete = False
else:
concrete = True
filter_kwargs['object_id'] = target_model.id
filter_kwargs['content_type'] = ContentType.objects.get_for_model(
target_model, for_concrete_model=concrete
)
return {
item['category_id']: item['ties_num'] for item in
get_tie_model().objects.filter(**filter_kwargs).values('category_id').annotate(ties_num=Count('category'))
}
def get_categories(self, parent_aliases=None, target_object=None, tied_only=True):
"""Returns subcategories (or ties if `target_object` is set)
for the given parent category.
:param str|None|list parent_aliases:
:param ModelWithCategory|Model target_object:
:param bool tied_only: Flag to get only categories with ties. Ties stats are stored in `ties_num` attrs.
:return: a list of category objects or tie objects extended with information from their categories.
"""
single_mode = False
if not isinstance(parent_aliases, list):
single_mode = parent_aliases
parent_aliases = [parent_aliases]
all_children = []
parents_to_children = OrderedDict()
for parent_alias in parent_aliases:
child_ids = self.get_child_ids(parent_alias)
parents_to_children[parent_alias] = child_ids
if tied_only:
all_children.extend(child_ids)
ties = {}
if tied_only:
source = OrderedDict()
ties = self.get_ties_stats(all_children, target_object)
for parent_alias, child_ids in parents_to_children.items():
common = set(ties.keys()).intersection(child_ids)
if common:
source[parent_alias] = common
else:
source = parents_to_children
categories = OrderedDict()
for parent_alias, child_ids in source.items():
for cat_id in child_ids:
cat = self.get_category_by_id(cat_id)
if tied_only:
cat.ties_num = ties.get(cat_id, 0)
if parent_alias not in categories:
categories[parent_alias] = []
categories[parent_alias].append(cat)
if single_mode != False: # sic!
return categories[single_mode]
return categories
|
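`Cache.get_children_for` above resolves child IDs to category objects and can filter out categories without an alias. A self-contained sketch using a namedtuple stand-in for the Category model (the type and names are illustrative assumptions):

```python
from collections import namedtuple

# Illustrative stand-in for the sitecats Category model.
Category = namedtuple('Category', 'id alias title')

def children_for(categories_by_id, child_ids, only_with_aliases=False):
    """Resolve child IDs to category objects; optionally keep only those
    with a truthy alias, as Cache.get_children_for does."""
    children = [categories_by_id[cid] for cid in child_ids]
    if only_with_aliases:
        children = [c for c in children if c.alias]
    return children
```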
idlesign/django-sitecats | sitecats/utils.py | Cache.get_child_ids | python | def get_child_ids(self, parent_alias):
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_PARENTS, parent_alias, []) | Returns child IDs of the given parent category.
:param str parent_alias: Parent category alias
:rtype: list
:return: a list of child IDs | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/utils.py#L150-L158 | [
"def _cache_init(self):\n \"\"\"Initializes local cache from Django cache if required.\"\"\"\n cache_ = cache.get(self.CACHE_ENTRY_NAME)\n\n if cache_ is None:\n categories = get_category_model().objects.order_by('sort_order')\n\n ids = {category.id: category for category in categories}\n ... | class Cache(object):
# Sitecats objects are stored in Django cache for a year (60 * 60 * 24 * 365 = 31536000 sec).
# Cache is only invalidated on sitecats Category model save/delete.
CACHE_TIMEOUT = 31536000
CACHE_ENTRY_NAME = 'sitecats'
CACHE_NAME_IDS = 'ids'
CACHE_NAME_ALIASES = 'aliases'
CACHE_NAME_PARENTS = 'parents'
def __init__(self):
self._cache = None
# Listen for signals from the models.
category_model = get_category_model()
signals.post_save.connect(self._cache_empty, sender=category_model)
signals.post_delete.connect(self._cache_empty, sender=category_model)
def _cache_init(self):
"""Initializes local cache from Django cache if required."""
cache_ = cache.get(self.CACHE_ENTRY_NAME)
if cache_ is None:
categories = get_category_model().objects.order_by('sort_order')
ids = {category.id: category for category in categories}
aliases = {category.alias: category for category in categories if category.alias}
parent_to_children = OrderedDict() # Preserve aliases order.
for category in categories:
parent_category = ids.get(category.parent_id, False)
parent_alias = None
if parent_category:
parent_alias = parent_category.alias
if parent_alias not in parent_to_children:
parent_to_children[parent_alias] = []
parent_to_children[parent_alias].append(category.id)
cache_ = {
self.CACHE_NAME_IDS: ids,
self.CACHE_NAME_PARENTS: parent_to_children,
self.CACHE_NAME_ALIASES: aliases
}
cache.set(self.CACHE_ENTRY_NAME, cache_, self.CACHE_TIMEOUT)
self._cache = cache_
def _cache_empty(self, **kwargs):
"""Empties cached sitecats data."""
self._cache = None
cache.delete(self.CACHE_ENTRY_NAME)
ENTIRE_ENTRY_KEY = object()
def _cache_get_entry(self, entry_name, key=ENTIRE_ENTRY_KEY, default=False):
"""Returns cache entry parameter value by its name.
:param str entry_name:
:param str key:
:param type default:
:return:
"""
if key is self.ENTIRE_ENTRY_KEY:
return self._cache[entry_name]
return self._cache[entry_name].get(key, default)
def sort_aliases(self, aliases):
"""Sorts the given aliases list, returns a sorted list.
:param list aliases:
:return: sorted aliases list
"""
self._cache_init()
if not aliases:
return aliases
parent_aliases = self._cache_get_entry(self.CACHE_NAME_PARENTS).keys()
return [parent_alias for parent_alias in parent_aliases if parent_alias in aliases]
def get_parents_for(self, child_ids):
"""Returns parent aliases for a list of child IDs.
:param list child_ids:
:rtype: set
:return: a set of parent aliases
"""
self._cache_init()
parent_candidates = []
for parent, children in self._cache_get_entry(self.CACHE_NAME_PARENTS).items():
if set(children).intersection(child_ids):
parent_candidates.append(parent)
return set(parent_candidates) # Make unique.
def get_children_for(self, parent_alias=None, only_with_aliases=False):
"""Returns a list with with categories under the given parent.
:param str|None parent_alias: Parent category alias or None for categories under root
:param bool only_with_aliases: Flag to return only children with aliases
:return: a list of category objects
"""
self._cache_init()
child_ids = self.get_child_ids(parent_alias)
if only_with_aliases:
children = []
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.alias:
children.append(category)
return children
return [self.get_category_by_id(cid) for cid in child_ids]
def get_category_by_alias(self, alias):
"""Returns Category object by its alias.
:param str alias:
:rtype: Category|None
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_ALIASES, alias, None)
def get_category_by_id(self, cid):
"""Returns Category object by its id.
:param str cid:
:rtype: Category
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_IDS, cid)
def find_category(self, parent_alias, title):
"""Searches parent category children for the given title (case independent).
:param str parent_alias:
:param str title:
:rtype: Category|None
:return: None if not found; otherwise - found Category
"""
found = None
child_ids = self.get_child_ids(parent_alias)
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.title.lower() == title.lower():
found = category
break
return found
def get_ties_stats(self, categories, target_model=None):
"""Returns a dict with categories popularity stats.
:param list categories:
:param Model|None target_model:
:return:
"""
filter_kwargs = {
'category_id__in': categories
}
if target_model is not None:
is_cls = hasattr(target_model, '__name__')
if is_cls:
concrete = False
else:
concrete = True
filter_kwargs['object_id'] = target_model.id
filter_kwargs['content_type'] = ContentType.objects.get_for_model(
target_model, for_concrete_model=concrete
)
return {
item['category_id']: item['ties_num'] for item in
get_tie_model().objects.filter(**filter_kwargs).values('category_id').annotate(ties_num=Count('category'))
}
def get_categories(self, parent_aliases=None, target_object=None, tied_only=True):
"""Returns subcategories (or ties if `target_object` is set)
for the given parent category.
:param str|None|list parent_aliases:
:param ModelWithCategory|Model target_object:
:param bool tied_only: Flag to get only categories with ties. Ties stats are stored in `ties_num` attrs.
:return: a list of category objects or tie objects extended with information from their categories.
"""
single_mode = False
if not isinstance(parent_aliases, list):
single_mode = parent_aliases
parent_aliases = [parent_aliases]
all_children = []
parents_to_children = OrderedDict()
for parent_alias in parent_aliases:
child_ids = self.get_child_ids(parent_alias)
parents_to_children[parent_alias] = child_ids
if tied_only:
all_children.extend(child_ids)
ties = {}
if tied_only:
source = OrderedDict()
ties = self.get_ties_stats(all_children, target_object)
for parent_alias, child_ids in parents_to_children.items():
common = set(ties.keys()).intersection(child_ids)
if common:
source[parent_alias] = common
else:
source = parents_to_children
categories = OrderedDict()
for parent_alias, child_ids in source.items():
for cat_id in child_ids:
cat = self.get_category_by_id(cat_id)
if tied_only:
cat.ties_num = ties.get(cat_id, 0)
if parent_alias not in categories:
categories[parent_alias] = []
categories[parent_alias].append(cat)
if single_mode != False: # sic!
return categories[single_mode]
return categories
|
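The `_cache_init` method shown in each enclosing scope builds three lookup tables in one pass: id to category, alias to category, and parent alias to child IDs (root categories keyed by `None`). A minimal sketch of that construction, with a namedtuple standing in for the model and input order standing in for the `sort_order` queryset ordering:

```python
from collections import OrderedDict, namedtuple

# Illustrative stand-in for the sitecats Category model.
Category = namedtuple('Category', 'id alias parent_id')

def build_cache(categories):
    """Assemble the lookup tables Cache._cache_init stores in Django cache."""
    ids = {c.id: c for c in categories}
    aliases = {c.alias: c for c in categories if c.alias}
    parent_to_children = OrderedDict()  # preserve alias order
    for c in categories:
        parent = ids.get(c.parent_id)
        # Root-level categories are grouped under the None key.
        parent_alias = parent.alias if parent else None
        parent_to_children.setdefault(parent_alias, []).append(c.id)
    return {'ids': ids, 'aliases': aliases, 'parents': parent_to_children}
```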
idlesign/django-sitecats | sitecats/utils.py | Cache.get_category_by_alias | python | def get_category_by_alias(self, alias):
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_ALIASES, alias, None) | Returns Category object by its alias.
:param str alias:
:rtype: Category|None
:return: category object | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/utils.py#L160-L168 | [
"def _cache_init(self):\n \"\"\"Initializes local cache from Django cache if required.\"\"\"\n cache_ = cache.get(self.CACHE_ENTRY_NAME)\n\n if cache_ is None:\n categories = get_category_model().objects.order_by('sort_order')\n\n ids = {category.id: category for category in categories}\n ... | class Cache(object):
# Sitecats objects are stored in Django cache for a year (60 * 60 * 24 * 365 = 31536000 sec).
# Cache is only invalidated on sitecats Category model save/delete.
CACHE_TIMEOUT = 31536000
CACHE_ENTRY_NAME = 'sitecats'
CACHE_NAME_IDS = 'ids'
CACHE_NAME_ALIASES = 'aliases'
CACHE_NAME_PARENTS = 'parents'
def __init__(self):
self._cache = None
# Listen for signals from the models.
category_model = get_category_model()
signals.post_save.connect(self._cache_empty, sender=category_model)
signals.post_delete.connect(self._cache_empty, sender=category_model)
def _cache_init(self):
"""Initializes local cache from Django cache if required."""
cache_ = cache.get(self.CACHE_ENTRY_NAME)
if cache_ is None:
categories = get_category_model().objects.order_by('sort_order')
ids = {category.id: category for category in categories}
aliases = {category.alias: category for category in categories if category.alias}
parent_to_children = OrderedDict() # Preserve aliases order.
for category in categories:
parent_category = ids.get(category.parent_id, False)
parent_alias = None
if parent_category:
parent_alias = parent_category.alias
if parent_alias not in parent_to_children:
parent_to_children[parent_alias] = []
parent_to_children[parent_alias].append(category.id)
cache_ = {
self.CACHE_NAME_IDS: ids,
self.CACHE_NAME_PARENTS: parent_to_children,
self.CACHE_NAME_ALIASES: aliases
}
cache.set(self.CACHE_ENTRY_NAME, cache_, self.CACHE_TIMEOUT)
self._cache = cache_
def _cache_empty(self, **kwargs):
"""Empties cached sitecats data."""
self._cache = None
cache.delete(self.CACHE_ENTRY_NAME)
ENTIRE_ENTRY_KEY = object()
def _cache_get_entry(self, entry_name, key=ENTIRE_ENTRY_KEY, default=False):
"""Returns cache entry parameter value by its name.
:param str entry_name:
:param str key:
:param type default:
:return:
"""
if key is self.ENTIRE_ENTRY_KEY:
return self._cache[entry_name]
return self._cache[entry_name].get(key, default)
def sort_aliases(self, aliases):
"""Sorts the given aliases list, returns a sorted list.
:param list aliases:
:return: sorted aliases list
"""
self._cache_init()
if not aliases:
return aliases
parent_aliases = self._cache_get_entry(self.CACHE_NAME_PARENTS).keys()
return [parent_alias for parent_alias in parent_aliases if parent_alias in aliases]
def get_parents_for(self, child_ids):
"""Returns parent aliases for a list of child IDs.
:param list child_ids:
:rtype: set
:return: a set of parent aliases
"""
self._cache_init()
parent_candidates = []
for parent, children in self._cache_get_entry(self.CACHE_NAME_PARENTS).items():
if set(children).intersection(child_ids):
parent_candidates.append(parent)
return set(parent_candidates) # Make unique.
def get_children_for(self, parent_alias=None, only_with_aliases=False):
"""Returns a list with with categories under the given parent.
:param str|None parent_alias: Parent category alias or None for categories under root
:param bool only_with_aliases: Flag to return only children with aliases
:return: a list of category objects
"""
self._cache_init()
child_ids = self.get_child_ids(parent_alias)
if only_with_aliases:
children = []
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.alias:
children.append(category)
return children
return [self.get_category_by_id(cid) for cid in child_ids]
def get_child_ids(self, parent_alias):
"""Returns child IDs of the given parent category
:param str parent_alias: Parent category alias
:rtype: list
:return: a list of child IDs
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_PARENTS, parent_alias, [])
def get_category_by_id(self, cid):
"""Returns Category object by its id.
:param str cid:
:rtype: Category
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_IDS, cid)
def find_category(self, parent_alias, title):
"""Searches parent category children for the given title (case independent).
:param str parent_alias:
:param str title:
:rtype: Category|None
:return: None if not found; otherwise - found Category
"""
found = None
child_ids = self.get_child_ids(parent_alias)
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.title.lower() == title.lower():
found = category
break
return found
def get_ties_stats(self, categories, target_model=None):
"""Returns a dict with categories popularity stats.
:param list categories:
:param Model|None target_model:
:return:
"""
filter_kwargs = {
'category_id__in': categories
}
if target_model is not None:
is_cls = hasattr(target_model, '__name__')
if is_cls:
concrete = False
else:
concrete = True
filter_kwargs['object_id'] = target_model.id
filter_kwargs['content_type'] = ContentType.objects.get_for_model(
target_model, for_concrete_model=concrete
)
return {
item['category_id']: item['ties_num'] for item in
get_tie_model().objects.filter(**filter_kwargs).values('category_id').annotate(ties_num=Count('category'))
}
def get_categories(self, parent_aliases=None, target_object=None, tied_only=True):
"""Returns subcategories (or ties if `target_object` is set)
for the given parent category.
:param str|None|list parent_aliases:
:param ModelWithCategory|Model target_object:
:param bool tied_only: Flag to get only categories with ties. Ties stats are stored in `ties_num` attrs.
:return: a list of category objects or tie objects extended with information from their categories.
"""
single_mode = False
if not isinstance(parent_aliases, list):
single_mode = parent_aliases
parent_aliases = [parent_aliases]
all_children = []
parents_to_children = OrderedDict()
for parent_alias in parent_aliases:
child_ids = self.get_child_ids(parent_alias)
parents_to_children[parent_alias] = child_ids
if tied_only:
all_children.extend(child_ids)
ties = {}
if tied_only:
source = OrderedDict()
ties = self.get_ties_stats(all_children, target_object)
for parent_alias, child_ids in parents_to_children.items():
common = set(ties.keys()).intersection(child_ids)
if common:
source[parent_alias] = common
else:
source = parents_to_children
categories = OrderedDict()
for parent_alias, child_ids in source.items():
for cat_id in child_ids:
cat = self.get_category_by_id(cat_id)
if tied_only:
cat.ties_num = ties.get(cat_id, 0)
if parent_alias not in categories:
categories[parent_alias] = []
categories[parent_alias].append(cat)
if single_mode != False: # sic!
return categories[single_mode]
return categories
|
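`Cache.find_category` in the scopes above performs a case-insensitive title search among a parent's children. The same linear scan, extracted as a standalone sketch (names are illustrative):

```python
from collections import namedtuple

# Illustrative stand-in for the sitecats Category model.
Category = namedtuple('Category', 'id title')

def find_by_title(categories, title):
    """Case-insensitive title lookup among sibling categories;
    returns None when nothing matches, mirroring Cache.find_category."""
    wanted = title.lower()
    for category in categories:
        if category.title.lower() == wanted:
            return category
    return None
```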
idlesign/django-sitecats | sitecats/utils.py | Cache.get_category_by_id | python | def get_category_by_id(self, cid):
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_IDS, cid) | Returns Category object by its id.
:param str cid:
:rtype: Category
:return: category object | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/utils.py#L170-L178 | [
"def _cache_init(self):\n \"\"\"Initializes local cache from Django cache if required.\"\"\"\n cache_ = cache.get(self.CACHE_ENTRY_NAME)\n\n if cache_ is None:\n categories = get_category_model().objects.order_by('sort_order')\n\n ids = {category.id: category for category in categories}\n ... | class Cache(object):
# Sitecats objects are stored in Django cache for a year (60 * 60 * 24 * 365 = 31536000 sec).
# Cache is only invalidated on sitecats Category model save/delete.
CACHE_TIMEOUT = 31536000
CACHE_ENTRY_NAME = 'sitecats'
CACHE_NAME_IDS = 'ids'
CACHE_NAME_ALIASES = 'aliases'
CACHE_NAME_PARENTS = 'parents'
def __init__(self):
self._cache = None
# Listen for signals from the models.
category_model = get_category_model()
signals.post_save.connect(self._cache_empty, sender=category_model)
signals.post_delete.connect(self._cache_empty, sender=category_model)
def _cache_init(self):
"""Initializes local cache from Django cache if required."""
cache_ = cache.get(self.CACHE_ENTRY_NAME)
if cache_ is None:
categories = get_category_model().objects.order_by('sort_order')
ids = {category.id: category for category in categories}
aliases = {category.alias: category for category in categories if category.alias}
parent_to_children = OrderedDict() # Preserve aliases order.
for category in categories:
parent_category = ids.get(category.parent_id, False)
parent_alias = None
if parent_category:
parent_alias = parent_category.alias
if parent_alias not in parent_to_children:
parent_to_children[parent_alias] = []
parent_to_children[parent_alias].append(category.id)
cache_ = {
self.CACHE_NAME_IDS: ids,
self.CACHE_NAME_PARENTS: parent_to_children,
self.CACHE_NAME_ALIASES: aliases
}
cache.set(self.CACHE_ENTRY_NAME, cache_, self.CACHE_TIMEOUT)
self._cache = cache_
def _cache_empty(self, **kwargs):
"""Empties cached sitecats data."""
self._cache = None
cache.delete(self.CACHE_ENTRY_NAME)
ENTIRE_ENTRY_KEY = object()
def _cache_get_entry(self, entry_name, key=ENTIRE_ENTRY_KEY, default=False):
"""Returns cache entry parameter value by its name.
:param str entry_name:
:param str key:
:param type default:
:return:
"""
if key is self.ENTIRE_ENTRY_KEY:
return self._cache[entry_name]
return self._cache[entry_name].get(key, default)
def sort_aliases(self, aliases):
"""Sorts the given aliases list, returns a sorted list.
:param list aliases:
:return: sorted aliases list
"""
self._cache_init()
if not aliases:
return aliases
parent_aliases = self._cache_get_entry(self.CACHE_NAME_PARENTS).keys()
return [parent_alias for parent_alias in parent_aliases if parent_alias in aliases]
def get_parents_for(self, child_ids):
"""Returns parent aliases for a list of child IDs.
:param list child_ids:
:rtype: set
:return: a set of parent aliases
"""
self._cache_init()
parent_candidates = []
for parent, children in self._cache_get_entry(self.CACHE_NAME_PARENTS).items():
if set(children).intersection(child_ids):
parent_candidates.append(parent)
return set(parent_candidates) # Make unique.
def get_children_for(self, parent_alias=None, only_with_aliases=False):
"""Returns a list with with categories under the given parent.
:param str|None parent_alias: Parent category alias or None for categories under root
:param bool only_with_aliases: Flag to return only children with aliases
:return: a list of category objects
"""
self._cache_init()
child_ids = self.get_child_ids(parent_alias)
if only_with_aliases:
children = []
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.alias:
children.append(category)
return children
return [self.get_category_by_id(cid) for cid in child_ids]
def get_child_ids(self, parent_alias):
"""Returns child IDs of the given parent category
:param str parent_alias: Parent category alias
:rtype: list
:return: a list of child IDs
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_PARENTS, parent_alias, [])
def get_category_by_alias(self, alias):
"""Returns Category object by its alias.
:param str alias:
:rtype: Category|None
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_ALIASES, alias, None)
def find_category(self, parent_alias, title):
"""Searches parent category children for the given title (case independent).
:param str parent_alias:
:param str title:
:rtype: Category|None
:return: None if not found; otherwise - found Category
"""
found = None
child_ids = self.get_child_ids(parent_alias)
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.title.lower() == title.lower():
found = category
break
return found
def get_ties_stats(self, categories, target_model=None):
"""Returns a dict with categories popularity stats.
:param list categories:
:param Model|None target_model:
:return:
"""
filter_kwargs = {
'category_id__in': categories
}
if target_model is not None:
is_cls = hasattr(target_model, '__name__')
if is_cls:
concrete = False
else:
concrete = True
filter_kwargs['object_id'] = target_model.id
filter_kwargs['content_type'] = ContentType.objects.get_for_model(
target_model, for_concrete_model=concrete
)
return {
item['category_id']: item['ties_num'] for item in
get_tie_model().objects.filter(**filter_kwargs).values('category_id').annotate(ties_num=Count('category'))
}
def get_categories(self, parent_aliases=None, target_object=None, tied_only=True):
"""Returns subcategories (or ties if `target_object` is set)
for the given parent category.
:param str|None|list parent_aliases:
:param ModelWithCategory|Model target_object:
:param bool tied_only: Flag to get only categories with ties. Ties stats are stored in `ties_num` attrs.
:return: a list of category objects or tie objects extended with information from their categories.
"""
single_mode = False
if not isinstance(parent_aliases, list):
single_mode = parent_aliases
parent_aliases = [parent_aliases]
all_children = []
parents_to_children = OrderedDict()
for parent_alias in parent_aliases:
child_ids = self.get_child_ids(parent_alias)
parents_to_children[parent_alias] = child_ids
if tied_only:
all_children.extend(child_ids)
ties = {}
if tied_only:
source = OrderedDict()
ties = self.get_ties_stats(all_children, target_object)
for parent_alias, child_ids in parents_to_children.items():
common = set(ties.keys()).intersection(child_ids)
if common:
source[parent_alias] = common
else:
source = parents_to_children
categories = OrderedDict()
for parent_alias, child_ids in source.items():
for cat_id in child_ids:
cat = self.get_category_by_id(cat_id)
if tied_only:
cat.ties_num = ties.get(cat_id, 0)
if parent_alias not in categories:
categories[parent_alias] = []
categories[parent_alias].append(cat)
if single_mode != False: # sic!
return categories[single_mode]
return categories
|
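The parent-to-children cache used throughout the `Cache` class above is just a mapping from a parent alias to a list of child IDs. A minimal standalone sketch of the `get_child_ids()`/`get_children_for()` logic follows; the `CATEGORIES` and `PARENTS` structures are hypothetical stand-ins for the entries built by `Cache._cache_init()`:

```python
from collections import OrderedDict

# Hypothetical stand-ins for the structures built by Cache._cache_init().
CATEGORIES = {
    1: {"id": 1, "alias": "colors", "title": "Colors"},
    2: {"id": 2, "alias": "red", "title": "Red"},
    3: {"id": 3, "alias": None, "title": "Green"},
}
PARENTS = OrderedDict([(None, [1]), ("colors", [2, 3])])  # parent alias -> child IDs

def get_child_ids(parent_alias):
    """Mirror of Cache.get_child_ids(): child IDs for a parent alias."""
    return PARENTS.get(parent_alias, [])

def get_children_for(parent_alias=None, only_with_aliases=False):
    """Mirror of Cache.get_children_for(): category records under a parent."""
    children = [CATEGORIES[cid] for cid in get_child_ids(parent_alias)]
    if only_with_aliases:
        children = [cat for cat in children if cat["alias"]]
    return children

print([c["title"] for c in get_children_for("colors")])        # ['Red', 'Green']
print([c["title"] for c in get_children_for("colors", True)])  # ['Red']
```

An unknown parent alias simply yields an empty list, matching the `default=[]` passed to `_cache_get_entry()` in the original.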
idlesign/django-sitecats | sitecats/utils.py | Cache.find_category | python | def find_category(self, parent_alias, title):
found = None
child_ids = self.get_child_ids(parent_alias)
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.title.lower() == title.lower():
found = category
break
return found | Searches parent category children for the given title (case-insensitive).
:param str parent_alias:
:param str title:
:rtype: Category|None
:return: None if not found; otherwise - found Category | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/utils.py#L180-L195 | [
"def get_child_ids(self, parent_alias):\n \"\"\"Returns child IDs of the given parent category\n\n :param str parent_alias: Parent category alias\n :rtype: list\n :return: a list of child IDs\n \"\"\"\n self._cache_init()\n return self._cache_get_entry(self.CACHE_NAME_PARENTS, parent_alias, [])... | class Cache(object):
# Sitecats objects are stored in Django cache for a year (60 * 60 * 24 * 365 = 31536000 sec).
# Cache is only invalidated on sitecats Category model save/delete.
CACHE_TIMEOUT = 31536000
CACHE_ENTRY_NAME = 'sitecats'
CACHE_NAME_IDS = 'ids'
CACHE_NAME_ALIASES = 'aliases'
CACHE_NAME_PARENTS = 'parents'
def __init__(self):
self._cache = None
# Listen for signals from the models.
category_model = get_category_model()
signals.post_save.connect(self._cache_empty, sender=category_model)
signals.post_delete.connect(self._cache_empty, sender=category_model)
def _cache_init(self):
"""Initializes local cache from Django cache if required."""
cache_ = cache.get(self.CACHE_ENTRY_NAME)
if cache_ is None:
categories = get_category_model().objects.order_by('sort_order')
ids = {category.id: category for category in categories}
aliases = {category.alias: category for category in categories if category.alias}
parent_to_children = OrderedDict() # Preserve aliases order.
for category in categories:
parent_category = ids.get(category.parent_id, False)
parent_alias = None
if parent_category:
parent_alias = parent_category.alias
if parent_alias not in parent_to_children:
parent_to_children[parent_alias] = []
parent_to_children[parent_alias].append(category.id)
cache_ = {
self.CACHE_NAME_IDS: ids,
self.CACHE_NAME_PARENTS: parent_to_children,
self.CACHE_NAME_ALIASES: aliases
}
cache.set(self.CACHE_ENTRY_NAME, cache_, self.CACHE_TIMEOUT)
self._cache = cache_
def _cache_empty(self, **kwargs):
"""Empties cached sitecats data."""
self._cache = None
cache.delete(self.CACHE_ENTRY_NAME)
ENTIRE_ENTRY_KEY = object()
def _cache_get_entry(self, entry_name, key=ENTIRE_ENTRY_KEY, default=False):
"""Returns cache entry parameter value by its name.
:param str entry_name:
:param str key:
:param type default:
:return:
"""
if key is self.ENTIRE_ENTRY_KEY:
return self._cache[entry_name]
return self._cache[entry_name].get(key, default)
def sort_aliases(self, aliases):
"""Sorts the given aliases list, returns a sorted list.
:param list aliases:
:return: sorted aliases list
"""
self._cache_init()
if not aliases:
return aliases
parent_aliases = self._cache_get_entry(self.CACHE_NAME_PARENTS).keys()
return [parent_alias for parent_alias in parent_aliases if parent_alias in aliases]
def get_parents_for(self, child_ids):
"""Returns parent aliases for a list of child IDs.
:param list child_ids:
:rtype: set
:return: a set of parent aliases
"""
self._cache_init()
parent_candidates = []
for parent, children in self._cache_get_entry(self.CACHE_NAME_PARENTS).items():
if set(children).intersection(child_ids):
parent_candidates.append(parent)
return set(parent_candidates) # Make unique.
def get_children_for(self, parent_alias=None, only_with_aliases=False):
"""Returns a list with with categories under the given parent.
:param str|None parent_alias: Parent category alias or None for categories under root
:param bool only_with_aliases: Flag to return only children with aliases
:return: a list of category objects
"""
self._cache_init()
child_ids = self.get_child_ids(parent_alias)
if only_with_aliases:
children = []
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.alias:
children.append(category)
return children
return [self.get_category_by_id(cid) for cid in child_ids]
def get_child_ids(self, parent_alias):
"""Returns child IDs of the given parent category
:param str parent_alias: Parent category alias
:rtype: list
:return: a list of child IDs
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_PARENTS, parent_alias, [])
def get_category_by_alias(self, alias):
"""Returns Category object by its alias.
:param str alias:
:rtype: Category|None
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_ALIASES, alias, None)
def get_category_by_id(self, cid):
"""Returns Category object by its id.
:param int cid:
:rtype: Category
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_IDS, cid)
def get_ties_stats(self, categories, target_model=None):
"""Returns a dict with categories popularity stats.
:param list categories:
:param Model|None target_model:
:return:
"""
filter_kwargs = {
'category_id__in': categories
}
if target_model is not None:
is_cls = hasattr(target_model, '__name__')
if is_cls:
concrete = False
else:
concrete = True
filter_kwargs['object_id'] = target_model.id
filter_kwargs['content_type'] = ContentType.objects.get_for_model(
target_model, for_concrete_model=concrete
)
return {
item['category_id']: item['ties_num'] for item in
get_tie_model().objects.filter(**filter_kwargs).values('category_id').annotate(ties_num=Count('category'))
}
def get_categories(self, parent_aliases=None, target_object=None, tied_only=True):
"""Returns subcategories (or ties if `target_object` is set)
for the given parent category.
:param str|None|list parent_aliases:
:param ModelWithCategory|Model target_object:
:param bool tied_only: Flag to get only categories with ties. Ties stats are stored in `ties_num` attrs.
:return: a list of category objects or tie objects extended with information from their categories.
"""
single_mode = False
if not isinstance(parent_aliases, list):
single_mode = parent_aliases
parent_aliases = [parent_aliases]
all_children = []
parents_to_children = OrderedDict()
for parent_alias in parent_aliases:
child_ids = self.get_child_ids(parent_alias)
parents_to_children[parent_alias] = child_ids
if tied_only:
all_children.extend(child_ids)
ties = {}
if tied_only:
source = OrderedDict()
ties = self.get_ties_stats(all_children, target_object)
for parent_alias, child_ids in parents_to_children.items():
common = set(ties.keys()).intersection(child_ids)
if common:
source[parent_alias] = common
else:
source = parents_to_children
categories = OrderedDict()
for parent_alias, child_ids in source.items():
for cat_id in child_ids:
cat = self.get_category_by_id(cat_id)
if tied_only:
cat.ties_num = ties.get(cat_id, 0)
if parent_alias not in categories:
categories[parent_alias] = []
categories[parent_alias].append(cat)
if single_mode != False: # sic!
return categories[single_mode]
return categories
|
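`find_category()` in the entry above is a linear scan comparing lowercased titles. The same case-insensitive lookup in isolation (the category records here are made up for illustration):

```python
CATEGORIES = [
    {"id": 1, "title": "Python"},
    {"id": 2, "title": "Django"},
]

def find_category(categories, title):
    """Return the first category whose title matches case-insensitively, else None."""
    needle = title.lower()
    for category in categories:
        if category["title"].lower() == needle:
            return category
    return None

print(find_category(CATEGORIES, "django"))  # matches despite the case difference
print(find_category(CATEGORIES, "Flask"))   # None
```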
idlesign/django-sitecats | sitecats/utils.py | Cache.get_ties_stats | python | def get_ties_stats(self, categories, target_model=None):
filter_kwargs = {
'category_id__in': categories
}
if target_model is not None:
is_cls = hasattr(target_model, '__name__')
if is_cls:
concrete = False
else:
concrete = True
filter_kwargs['object_id'] = target_model.id
filter_kwargs['content_type'] = ContentType.objects.get_for_model(
target_model, for_concrete_model=concrete
)
return {
item['category_id']: item['ties_num'] for item in
get_tie_model().objects.filter(**filter_kwargs).values('category_id').annotate(ties_num=Count('category'))
} | Returns a dict with categories popularity stats.
:param list categories:
:param Model|None target_model:
:return: | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/utils.py#L197-L221 | [
"def get_tie_model():\n \"\"\"Returns the Tie model, set for the project.\"\"\"\n return get_model_class_from_string(MODEL_TIE)\n"
] | class Cache(object):
# Sitecats objects are stored in Django cache for a year (60 * 60 * 24 * 365 = 31536000 sec).
# Cache is only invalidated on sitecats Category model save/delete.
CACHE_TIMEOUT = 31536000
CACHE_ENTRY_NAME = 'sitecats'
CACHE_NAME_IDS = 'ids'
CACHE_NAME_ALIASES = 'aliases'
CACHE_NAME_PARENTS = 'parents'
def __init__(self):
self._cache = None
# Listen for signals from the models.
category_model = get_category_model()
signals.post_save.connect(self._cache_empty, sender=category_model)
signals.post_delete.connect(self._cache_empty, sender=category_model)
def _cache_init(self):
"""Initializes local cache from Django cache if required."""
cache_ = cache.get(self.CACHE_ENTRY_NAME)
if cache_ is None:
categories = get_category_model().objects.order_by('sort_order')
ids = {category.id: category for category in categories}
aliases = {category.alias: category for category in categories if category.alias}
parent_to_children = OrderedDict() # Preserve aliases order.
for category in categories:
parent_category = ids.get(category.parent_id, False)
parent_alias = None
if parent_category:
parent_alias = parent_category.alias
if parent_alias not in parent_to_children:
parent_to_children[parent_alias] = []
parent_to_children[parent_alias].append(category.id)
cache_ = {
self.CACHE_NAME_IDS: ids,
self.CACHE_NAME_PARENTS: parent_to_children,
self.CACHE_NAME_ALIASES: aliases
}
cache.set(self.CACHE_ENTRY_NAME, cache_, self.CACHE_TIMEOUT)
self._cache = cache_
def _cache_empty(self, **kwargs):
"""Empties cached sitecats data."""
self._cache = None
cache.delete(self.CACHE_ENTRY_NAME)
ENTIRE_ENTRY_KEY = object()
def _cache_get_entry(self, entry_name, key=ENTIRE_ENTRY_KEY, default=False):
"""Returns cache entry parameter value by its name.
:param str entry_name:
:param str key:
:param type default:
:return:
"""
if key is self.ENTIRE_ENTRY_KEY:
return self._cache[entry_name]
return self._cache[entry_name].get(key, default)
def sort_aliases(self, aliases):
"""Sorts the given aliases list, returns a sorted list.
:param list aliases:
:return: sorted aliases list
"""
self._cache_init()
if not aliases:
return aliases
parent_aliases = self._cache_get_entry(self.CACHE_NAME_PARENTS).keys()
return [parent_alias for parent_alias in parent_aliases if parent_alias in aliases]
def get_parents_for(self, child_ids):
"""Returns parent aliases for a list of child IDs.
:param list child_ids:
:rtype: set
:return: a set of parent aliases
"""
self._cache_init()
parent_candidates = []
for parent, children in self._cache_get_entry(self.CACHE_NAME_PARENTS).items():
if set(children).intersection(child_ids):
parent_candidates.append(parent)
return set(parent_candidates) # Make unique.
def get_children_for(self, parent_alias=None, only_with_aliases=False):
"""Returns a list with with categories under the given parent.
:param str|None parent_alias: Parent category alias or None for categories under root
:param bool only_with_aliases: Flag to return only children with aliases
:return: a list of category objects
"""
self._cache_init()
child_ids = self.get_child_ids(parent_alias)
if only_with_aliases:
children = []
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.alias:
children.append(category)
return children
return [self.get_category_by_id(cid) for cid in child_ids]
def get_child_ids(self, parent_alias):
"""Returns child IDs of the given parent category
:param str parent_alias: Parent category alias
:rtype: list
:return: a list of child IDs
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_PARENTS, parent_alias, [])
def get_category_by_alias(self, alias):
"""Returns Category object by its alias.
:param str alias:
:rtype: Category|None
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_ALIASES, alias, None)
def get_category_by_id(self, cid):
"""Returns Category object by its id.
:param int cid:
:rtype: Category
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_IDS, cid)
def find_category(self, parent_alias, title):
"""Searches parent category children for the given title (case independent).
:param str parent_alias:
:param str title:
:rtype: Category|None
:return: None if not found; otherwise the found Category
"""
found = None
child_ids = self.get_child_ids(parent_alias)
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.title.lower() == title.lower():
found = category
break
return found
def get_categories(self, parent_aliases=None, target_object=None, tied_only=True):
"""Returns subcategories (or ties if `target_object` is set)
for the given parent category.
:param str|None|list parent_aliases:
:param ModelWithCategory|Model target_object:
:param bool tied_only: Flag to get only categories with ties. Ties stats are stored in `ties_num` attrs.
:return: a list of category objects or tie objects extended with information from their categories.
"""
single_mode = False
if not isinstance(parent_aliases, list):
single_mode = parent_aliases
parent_aliases = [parent_aliases]
all_children = []
parents_to_children = OrderedDict()
for parent_alias in parent_aliases:
child_ids = self.get_child_ids(parent_alias)
parents_to_children[parent_alias] = child_ids
if tied_only:
all_children.extend(child_ids)
ties = {}
if tied_only:
source = OrderedDict()
ties = self.get_ties_stats(all_children, target_object)
for parent_alias, child_ids in parents_to_children.items():
common = set(ties.keys()).intersection(child_ids)
if common:
source[parent_alias] = common
else:
source = parents_to_children
categories = OrderedDict()
for parent_alias, child_ids in source.items():
for cat_id in child_ids:
cat = self.get_category_by_id(cat_id)
if tied_only:
cat.ties_num = ties.get(cat_id, 0)
if parent_alias not in categories:
categories[parent_alias] = []
categories[parent_alias].append(cat)
if single_mode != False: # sic!
return categories[single_mode]
return categories
|
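`get_ties_stats()` above counts ties per category with a single annotated queryset (`values('category_id').annotate(ties_num=Count('category'))`). Outside the ORM, the same aggregation is a `Counter` over the tie rows' category IDs; the tie data below is invented for the sketch:

```python
from collections import Counter

# Hypothetical tie rows as (category_id, object_id) pairs.
TIES = [(10, 1), (10, 2), (11, 1), (12, 3), (10, 3)]

def get_ties_stats(categories, ties):
    """Map each requested category ID to its number of ties (zero-count IDs omitted)."""
    wanted = set(categories)
    return dict(Counter(cat_id for cat_id, _ in ties if cat_id in wanted))

print(get_ties_stats([10, 11, 99], TIES))  # {10: 3, 11: 1}
```

As in the original, categories with no ties simply do not appear in the result, which is why callers fall back to `ties.get(cat_id, 0)`.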
idlesign/django-sitecats | sitecats/utils.py | Cache.get_categories | python | def get_categories(self, parent_aliases=None, target_object=None, tied_only=True):
single_mode = False
if not isinstance(parent_aliases, list):
single_mode = parent_aliases
parent_aliases = [parent_aliases]
all_children = []
parents_to_children = OrderedDict()
for parent_alias in parent_aliases:
child_ids = self.get_child_ids(parent_alias)
parents_to_children[parent_alias] = child_ids
if tied_only:
all_children.extend(child_ids)
ties = {}
if tied_only:
source = OrderedDict()
ties = self.get_ties_stats(all_children, target_object)
for parent_alias, child_ids in parents_to_children.items():
common = set(ties.keys()).intersection(child_ids)
if common:
source[parent_alias] = common
else:
source = parents_to_children
categories = OrderedDict()
for parent_alias, child_ids in source.items():
for cat_id in child_ids:
cat = self.get_category_by_id(cat_id)
if tied_only:
cat.ties_num = ties.get(cat_id, 0)
if parent_alias not in categories:
categories[parent_alias] = []
categories[parent_alias].append(cat)
if single_mode != False: # sic!
return categories[single_mode]
return categories | Returns subcategories (or ties if `target_object` is set)
for the given parent category.
:param str|None|list parent_aliases:
:param ModelWithCategory|Model target_object:
:param bool tied_only: Flag to get only categories with ties. Ties stats are stored in `ties_num` attrs.
:return: a list of category objects or tie objects extended with information from their categories. | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/utils.py#L223-L273 | [
"def get_child_ids(self, parent_alias):\n \"\"\"Returns child IDs of the given parent category\n\n :param str parent_alias: Parent category alias\n :rtype: list\n :return: a list of child IDs\n \"\"\"\n self._cache_init()\n return self._cache_get_entry(self.CACHE_NAME_PARENTS, parent_alias, [])... | class Cache(object):
# Sitecats objects are stored in Django cache for a year (60 * 60 * 24 * 365 = 31536000 sec).
# Cache is only invalidated on sitecats Category model save/delete.
CACHE_TIMEOUT = 31536000
CACHE_ENTRY_NAME = 'sitecats'
CACHE_NAME_IDS = 'ids'
CACHE_NAME_ALIASES = 'aliases'
CACHE_NAME_PARENTS = 'parents'
def __init__(self):
self._cache = None
# Listen for signals from the models.
category_model = get_category_model()
signals.post_save.connect(self._cache_empty, sender=category_model)
signals.post_delete.connect(self._cache_empty, sender=category_model)
def _cache_init(self):
"""Initializes local cache from Django cache if required."""
cache_ = cache.get(self.CACHE_ENTRY_NAME)
if cache_ is None:
categories = get_category_model().objects.order_by('sort_order')
ids = {category.id: category for category in categories}
aliases = {category.alias: category for category in categories if category.alias}
parent_to_children = OrderedDict() # Preserve aliases order.
for category in categories:
parent_category = ids.get(category.parent_id, False)
parent_alias = None
if parent_category:
parent_alias = parent_category.alias
if parent_alias not in parent_to_children:
parent_to_children[parent_alias] = []
parent_to_children[parent_alias].append(category.id)
cache_ = {
self.CACHE_NAME_IDS: ids,
self.CACHE_NAME_PARENTS: parent_to_children,
self.CACHE_NAME_ALIASES: aliases
}
cache.set(self.CACHE_ENTRY_NAME, cache_, self.CACHE_TIMEOUT)
self._cache = cache_
def _cache_empty(self, **kwargs):
"""Empties cached sitecats data."""
self._cache = None
cache.delete(self.CACHE_ENTRY_NAME)
ENTIRE_ENTRY_KEY = object()
def _cache_get_entry(self, entry_name, key=ENTIRE_ENTRY_KEY, default=False):
"""Returns cache entry parameter value by its name.
:param str entry_name:
:param str key:
:param type default:
:return:
"""
if key is self.ENTIRE_ENTRY_KEY:
return self._cache[entry_name]
return self._cache[entry_name].get(key, default)
def sort_aliases(self, aliases):
"""Sorts the given aliases list, returns a sorted list.
:param list aliases:
:return: sorted aliases list
"""
self._cache_init()
if not aliases:
return aliases
parent_aliases = self._cache_get_entry(self.CACHE_NAME_PARENTS).keys()
return [parent_alias for parent_alias in parent_aliases if parent_alias in aliases]
def get_parents_for(self, child_ids):
"""Returns parent aliases for a list of child IDs.
:param list child_ids:
:rtype: set
:return: a set of parent aliases
"""
self._cache_init()
parent_candidates = []
for parent, children in self._cache_get_entry(self.CACHE_NAME_PARENTS).items():
if set(children).intersection(child_ids):
parent_candidates.append(parent)
return set(parent_candidates) # Make unique.
def get_children_for(self, parent_alias=None, only_with_aliases=False):
"""Returns a list with with categories under the given parent.
:param str|None parent_alias: Parent category alias or None for categories under root
:param bool only_with_aliases: Flag to return only children with aliases
:return: a list of category objects
"""
self._cache_init()
child_ids = self.get_child_ids(parent_alias)
if only_with_aliases:
children = []
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.alias:
children.append(category)
return children
return [self.get_category_by_id(cid) for cid in child_ids]
def get_child_ids(self, parent_alias):
"""Returns child IDs of the given parent category
:param str parent_alias: Parent category alias
:rtype: list
:return: a list of child IDs
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_PARENTS, parent_alias, [])
def get_category_by_alias(self, alias):
"""Returns Category object by its alias.
:param str alias:
:rtype: Category|None
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_ALIASES, alias, None)
def get_category_by_id(self, cid):
"""Returns Category object by its id.
:param int cid:
:rtype: Category
:return: category object
"""
self._cache_init()
return self._cache_get_entry(self.CACHE_NAME_IDS, cid)
def find_category(self, parent_alias, title):
"""Searches parent category children for the given title (case independent).
:param str parent_alias:
:param str title:
:rtype: Category|None
:return: None if not found; otherwise the found Category
"""
found = None
child_ids = self.get_child_ids(parent_alias)
for cid in child_ids:
category = self.get_category_by_id(cid)
if category.title.lower() == title.lower():
found = category
break
return found
def get_ties_stats(self, categories, target_model=None):
"""Returns a dict with categories popularity stats.
:param list categories:
:param Model|None target_model:
:return:
"""
filter_kwargs = {
'category_id__in': categories
}
if target_model is not None:
is_cls = hasattr(target_model, '__name__')
if is_cls:
concrete = False
else:
concrete = True
filter_kwargs['object_id'] = target_model.id
filter_kwargs['content_type'] = ContentType.objects.get_for_model(
target_model, for_concrete_model=concrete
)
return {
item['category_id']: item['ties_num'] for item in
get_tie_model().objects.filter(**filter_kwargs).values('category_id').annotate(ties_num=Count('category'))
}
|
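`get_categories()` above groups child categories under their parent alias and, in `tied_only` mode, drops untied children while attaching a `ties_num` count. The grouping step can be sketched on plain data; all structures here are hypothetical:

```python
from collections import OrderedDict

PARENTS_TO_CHILDREN = OrderedDict([("colors", [2, 3]), ("sizes", [4])])
TIES = {2: 5, 4: 1}  # category_id -> number of ties

def group_categories(parents_to_children, ties, tied_only=True):
    """Group child records per parent alias; with tied_only, keep tied children only."""
    grouped = OrderedDict()
    for parent_alias, child_ids in parents_to_children.items():
        for cat_id in child_ids:
            if tied_only and cat_id not in ties:
                continue  # mirrors the intersection with the tie stats
            grouped.setdefault(parent_alias, []).append(
                {"id": cat_id, "ties_num": ties.get(cat_id, 0)}
            )
    return grouped

result = group_categories(PARENTS_TO_CHILDREN, TIES)
print({alias: [c["id"] for c in cats] for alias, cats in result.items()})  # {'colors': [2], 'sizes': [4]}
```

Note the parent order is preserved via `OrderedDict`, just as the original preserves alias order from the cache.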
idlesign/django-sitecats | sitecats/toolbox.py | get_category_aliases_under | python | def get_category_aliases_under(parent_alias=None):
return [ch.alias for ch in get_cache().get_children_for(parent_alias, only_with_aliases=True)] | Returns a list of category aliases under the given parent.
Could be useful to pass to `ModelWithCategory.enable_category_lists_editor`
in `additional_parents_aliases` parameter.
:param str|None parent_alias: Parent alias or None to categories under root
:rtype: list
:return: a list of category aliases | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/toolbox.py#L16-L26 | [
"def get_cache():\n \"\"\"Returns global cache object.\n\n :rtype: Cache\n :return: cache object\n \"\"\"\n global _SITECATS_CACHE\n\n if _SITECATS_CACHE is None:\n _SITECATS_CACHE = apps.get_app_config('sitecats').get_categories_cache()\n\n return _SITECATS_CACHE\n"
] | from inspect import getargspec
from collections import namedtuple, OrderedDict, Callable
from django.utils.encoding import python_2_unicode_compatible
from django.utils.translation import ugettext_lazy as _, ungettext_lazy
from django.utils.six import string_types
from django.contrib.contenttypes.models import ContentType
from django.contrib import messages
from .settings import UNRESOLVED_URL_MARKER
from .utils import get_category_model, get_tie_model, get_cache
from .exceptions import SitecatsConfigurationError, SitecatsSecurityException, SitecatsNewCategoryException, \
SitecatsValidationError
def get_category_lists(init_kwargs=None, additional_parents_aliases=None, obj=None):
"""Returns a list of CategoryList objects, optionally associated with
a given model instance.
:param dict|None init_kwargs:
:param list|None additional_parents_aliases:
:param Model|None obj: Model instance to get categories for
:rtype: list
:return:
"""
init_kwargs = init_kwargs or {}
additional_parents_aliases = additional_parents_aliases or []
parent_aliases = additional_parents_aliases
if obj is not None:
ctype = ContentType.objects.get_for_model(obj)
cat_ids = [
item[0] for item in
get_tie_model().objects.filter(content_type=ctype, object_id=obj.id).values_list('category_id').all()
]
parent_aliases = list(get_cache().get_parents_for(cat_ids).union(additional_parents_aliases))
lists = []
aliases = get_cache().sort_aliases(parent_aliases)
categories_cache = get_cache().get_categories(aliases, obj)
for parent_alias in aliases:
catlist = CategoryList(parent_alias, **init_kwargs) # TODO Burned in class name. Make more customizable.
if obj is not None:
catlist.set_obj(obj)
# Optimization. To get DB hits down.
cache = []
try:
cache = categories_cache[parent_alias]
except KeyError:
pass
catlist.set_get_categories_cache(cache)
lists.append(catlist)
return lists
@python_2_unicode_compatible
class CategoryList(object):
"""Represents a set on categories under a parent category on page."""
_cache_category = None
_cache_get_categories = None
#TODO custom template
def __init__(self, alias=None, show_title=False, show_links=True, cat_html_class=''):
"""
:param str alias: Alias of a category to construct a list from (list will include subcategories)
:param bool show_title: Flag to render parent category title
:param bool|callable show_links: Boolean flag to render links for category pages,
or a callable which accepts Category instance and returns an URL for it.
If boolean and True links will be set to UNRESOLVED_URL_MARKER (useful
for client-side links generation based on data-* attrs of HTML elements).
:param str cat_html_class: HTML classes to be added to categories
:return:
"""
self.alias = alias
self.show_title = show_title
self._url_resolver = None
if isinstance(show_links, Callable):
self._url_resolver = show_links
show_links = True
self.show_links = show_links
self.cat_html_class = cat_html_class
self.obj = None
self.editor = None
def __str__(self):
"""Returns alias.
:rtype: str
:return: alias
"""
s = self.alias
if s is None:
s = ''
return s
def set_get_categories_cache(self, val):
"""Sets prefetched data to be returned by `get_categories()` later on.
:param list val:
:return:
"""
self._cache_get_categories = val
def get_category_url(self, category):
"""Returns URL for a given Category object from this list.
First tries to get it with a callable passed as `show_links` init param of this list.
Secondly tries to get it with `get_category_absolute_url` method of an object associated with this list.
:param Category category:
:rtype: str
:return: URL
"""
if self._url_resolver is not None:
return self._url_resolver(category)
if self.obj and hasattr(self.obj, 'get_category_absolute_url'):
return self.obj.get_category_absolute_url(category)
return UNRESOLVED_URL_MARKER
def set_obj(self, obj):
"""Sets a target object for categories to be filtered upon.
`ModelWithCategory` heir is expected.
If not set CategoryList will render actual categories.
If set CategoryList will render just object-to-categories ties.
:param obj: `ModelWithCategory` heir
"""
self.obj = obj
def enable_editor(self, allow_add=True, allow_remove=False, allow_new=False, min_num=None, max_num=None,
render_button=True, category_separator=None, show_category_choices=True):
"""Enables editor controls for this category list.
:param bool allow_add: Flag to allow adding object-to-categories ties
:param bool allow_remove: Flag to allow remove of object-to-categories ties or categories themselves
:param bool allow_new: Flag to allow new categories creation
:param None|int min_num: Child items minimum for this list
(object-to-categories ties or categories themselves)
:param None|int max_num: Child items maximum for this list
(object-to-categories ties or categories themselves)
:param bool render_button: Flag to allow buttons rendering for forms of this list
:param str|None category_separator: String to consider it a category separator.
:param bool show_category_choices: Flag to render a choice list of available subcategories
for each CategoryList
"""
# DRY: translate method args into namedtuple args.
args, n, n, n = getargspec(self.enable_editor)
locals_ = locals()
self.editor = namedtuple('CategoryEditor', args)(**{arg: locals_[arg] for arg in args})
def get_category_model(self):
"""Returns category model for this list (parent category for categories in the list) or None.
:return: CategoryModel|None
"""
if self._cache_category is None:
self._cache_category = get_cache().get_category_by_alias(self.alias)
return self._cache_category
def get_category_attr(self, name, default=False):
"""Returns a custom attribute of a category model for this list.
:param str name: Attribute name
:param object default: Default value if attribute is not found
:return: attribute value
"""
category = self.get_category_model()
return getattr(category, name, default)
def get_id(self):
"""Returns ID attribute of a category of this list.
:rtype: int|None
:return: id
"""
return self.get_category_attr('id', None)
def get_title(self):
"""Returns `title` attribute of a category of this list.
:rtype: str
:return: title
"""
return self.get_category_attr('title', _('Categories'))
def get_note(self):
"""Returns `note` attribute of a category of this list.
:rtype: str
:return: note
"""
return self.get_category_attr('note', '')
def get_categories(self, tied_only=None):
"""Returns a list of actual subcategories.
:param bool tied_only: Flag to get only categories with ties. Ties stats are stored in `ties_num` attrs.
:rtype: list
:return: a list of actual subcategories
"""
if self._cache_get_categories is not None:
return self._cache_get_categories
if tied_only is None:
tied_only = self.obj is not None
return get_cache().get_categories(self.alias, self.obj, tied_only=tied_only)
def get_choices(self):
"""Returns available subcategories choices list.
:rtype: list
:return: list of Category objects
"""
return get_cache().get_children_for(self.alias)
class CategoryRequestHandler(object):
"""This one can handle requests issued by CategoryList editors. Can be used in views."""
list_cls = CategoryList # For customization purposes.
KNOWN_ACTIONS = ('add', 'remove')
def __init__(self, request, obj=None, error_messages_extra_tags=None):
"""
:param Request request: Django request object
:param Model obj: `ModelWithCategory` heir to bind CategoryList objects upon.
"""
self._request = request
self._lists = OrderedDict()
self._obj = obj
self.error_messages_extra_tags = error_messages_extra_tags or ''
def register_lists(self, category_lists, lists_init_kwargs=None, editor_init_kwargs=None):
"""Registers CategoryList objects to handle their requests.
:param list category_lists: CategoryList objects
:param dict lists_init_kwargs: Attributes to apply to each of CategoryList objects
"""
lists_init_kwargs = lists_init_kwargs or {}
editor_init_kwargs = editor_init_kwargs or {}
for lst in category_lists:
if isinstance(lst, string_types): # Spawn CategoryList object from base category alias.
lst = self.list_cls(lst, **lists_init_kwargs)
elif not isinstance(lst, CategoryList):
raise SitecatsConfigurationError(
'`CategoryRequestHandler.register_lists()` accepts only '
'`CategoryList` objects or category aliases.'
)
if self._obj:
lst.set_obj(self._obj)
for name, val in lists_init_kwargs.items(): # Setting CategoryList attributes from kwargs.
setattr(lst, name, val)
lst.enable_editor(**editor_init_kwargs)
self._lists[lst.get_id()] = lst
@classmethod
def action_remove(cls, request, category_list):
"""Handles `remove` action from CategoryList editor.
Removes an actual category if a target object is not set for the list.
Removes a tie-to-category object if a target object is set for the list.
:param Request request: Django request object
:param CategoryList category_list: CategoryList object to operate upon.
:return: True on success; otherwise an exception from the SitecatsException family is raised.
"""
if not category_list.editor.allow_remove:
raise SitecatsSecurityException(
'`action_remove()` is not supported by parent `%s` category.' % category_list.alias)
category_id = int(request.POST.get('category_id', 0))
if not category_id:
raise SitecatsSecurityException(
'Unsupported `category_id` value - `%s` - is passed to `action_remove()`.' % category_id)
category = get_cache().get_category_by_id(category_id)
if not category:
raise SitecatsSecurityException('Unable to get `%s` category in `action_remove()`.' % category_id)
cat_ident = category.alias or category.id
if category.is_locked:
raise SitecatsSecurityException('`action_remove()` is not supported by `%s` category.' % cat_ident)
if category.parent_id != category_list.get_id():
raise SitecatsSecurityException(
'`action_remove()` is unable to remove `%s`: '
'not a child of parent `%s` category.' % (cat_ident, category_list.alias)
)
min_num = category_list.editor.min_num
def check_min_num(num):
if min_num is not None and num-1 < min_num:
subcats_str = ungettext_lazy('subcategory', 'subcategories', min_num)
error_msg = _(
'Unable to remove "%(target_category)s" category from "%(parent_category)s": '
'parent category requires at least %(num)s %(subcats_str)s.'
) % {
'target_category': category.title,
'parent_category': category_list.get_title(),
'num': min_num,
'subcats_str': subcats_str
}
raise SitecatsValidationError(error_msg)
child_ids = get_cache().get_child_ids(category_list.alias)
check_min_num(len(child_ids))
if category_list.obj is None: # Remove category itself and children.
category.delete()
else: # Remove just a category-to-object tie.
# TODO filter user/status
check_min_num(category_list.obj.get_ties_for_categories_qs(child_ids).count())
category_list.obj.remove_from_category(category)
return True
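The `min_num` guard inside `action_remove()` can be exercised in isolation. The sketch below restates the `check_min_num()` count test as a pure function (the name is illustrative, not part of the sitecats API):

```python
def violates_min_num(current_count, min_num):
    """True when removing one child would drop below the configured minimum.

    Mirrors check_min_num(): the test is `num - 1 < min_num`,
    with min_num=None meaning 'no lower bound'.
    """
    return min_num is not None and current_count - 1 < min_num
```

With `min_num=2`, a list of three children tolerates one removal, while a list of two does not; the real method raises `SitecatsValidationError` in that case.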
@classmethod
def action_add(cls, request, category_list):
"""Handles `add` action from CategoryList editor.
Adds an actual category if a target object is not set for the list.
Adds a tie-to-category object if a target object is set for the list.
:param Request request: Django request object
:param CategoryList category_list: CategoryList object to operate upon.
:return: CategoryModel object on success; otherwise an exception from the SitecatsException family is raised.
"""
if not category_list.editor.allow_add:
raise SitecatsSecurityException('`action_add()` is not supported by `%s` category.' % category_list.alias)
titles = request.POST.get('category_title', '').strip()
if not titles:
raise SitecatsSecurityException(
'Unsupported `category_title` value - `%s` - is passed to `action_add()`.' % titles)
if category_list.editor.category_separator is None:
titles = [titles]
else:
titles = [
title.strip() for title in titles.split(category_list.editor.category_separator) if title.strip()
]
def check_max_num(num, max_num, category_title):
if max_num is not None and num+1 > max_num:
subcats_str = ungettext_lazy('subcategory', 'subcategories', max_num)
error_msg = _(
'Unable to add "%(target_category)s" category into "%(parent_category)s": '
'parent category can have at most %(num)s %(subcats_str)s.'
) % {
'target_category': category_title,
'parent_category': category_list.get_title(),
'num': max_num,
'subcats_str': subcats_str
}
raise SitecatsValidationError(error_msg)
target_category = None
for category_title in titles:
exists = get_cache().find_category(category_list.alias, category_title)
if exists and category_list.obj is None: # Already exists.
return exists
if not exists and not category_list.editor.allow_new:
error_msg = _(
'Unable to create a new "%(new_category)s" category inside of "%(parent_category)s": '
'parent category does not support this action.'
) % {
'new_category': category_title,
'parent_category': category_list.get_title()
}
raise SitecatsNewCategoryException(error_msg)
max_num = category_list.editor.max_num
child_ids = get_cache().get_child_ids(category_list.alias)
if not exists: # Add new category.
if category_list.obj is None:
check_max_num(len(child_ids), max_num, category_title)
# TODO status
target_category = get_category_model().add(
category_title, request.user, parent=category_list.get_category_model()
)
else:
target_category = exists # Use existing one for a tie.
if category_list.obj is not None:
# TODO status
check_max_num(category_list.obj.get_ties_for_categories_qs(child_ids).count(), max_num, category_title)
category_list.obj.add_to_category(target_category, request.user)
return target_category
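The title parsing at the top of `action_add()` — one title when `category_separator` is None, otherwise split, strip, and drop empties — can be sketched standalone (illustrative name; the real method raises `SitecatsSecurityException` on empty input rather than returning an empty list):

```python
def split_titles(raw, separator=None):
    """Parse a raw `category_title` POST value into a list of clean titles."""
    raw = raw.strip()
    if not raw:
        return []  # action_add() raises here instead
    if separator is None:
        return [raw]
    return [title.strip() for title in raw.split(separator) if title.strip()]
```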
def listen(self):
"""Instructs handler to listen to Django request and handle
CategoryList editor requests (if any).
:return: None on success; otherwise an exception from the SitecatsException family is raised.
"""
requested_action = self._request.POST.get('category_action', False)
if not requested_action:
return None # No action supplied. Pass.
if requested_action not in self.KNOWN_ACTIONS:
raise SitecatsSecurityException('Unknown `category_action` - `%s` - requested.' % requested_action)
category_base_id = self._request.POST.get('category_base_id', False)
if category_base_id == 'None':
category_base_id = None
else:
category_base_id = int(category_base_id)
if category_base_id not in self._lists.keys():
raise SitecatsSecurityException('Unknown `category_base_id` - `%s` - requested.' % category_base_id)
category_list = self._lists[category_base_id]
if category_list.editor is None:
raise SitecatsSecurityException('Editor is disabled for `%s` category.' % category_list.alias)
action_method = getattr(self, 'action_%s' % requested_action)
try:
return action_method(self._request, category_list)
except SitecatsNewCategoryException as e:
messages.error(self._request, e, extra_tags=self.error_messages_extra_tags, fail_silently=True)
return None
except SitecatsValidationError as e:
messages.error(self._request, e.messages[0], extra_tags=self.error_messages_extra_tags, fail_silently=True)
return None
finally:
self._request.POST = {}  # Prevent other forms from failing.
def get_lists(self):
"""Returns a list of previously registered CategoryList objects.
:rtype: list
:return: A list of CategoryList objects.
"""
return list(self._lists.values())
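The dispatch performed by `listen()` — validate the action name, normalize `category_base_id`, look up the registered list, then route to `action_<name>` via `getattr` — can be shown with a toy stand-in that needs no Django request object (all names here are illustrative, not the sitecats API):

```python
from collections import OrderedDict

KNOWN_ACTIONS = ('add', 'remove')


class StubHandler:
    """Toy stand-in for CategoryRequestHandler.listen()'s dispatch logic."""

    def __init__(self):
        # Registered lists keyed by base category id, as in register_lists().
        self._lists = OrderedDict([(1, 'news-list')])

    def listen(self, post):
        action = post.get('category_action', False)
        if not action:
            return None  # No action supplied. Pass.
        if action not in KNOWN_ACTIONS:
            raise ValueError('Unknown `category_action` - `%s` - requested.' % action)
        base_id = post.get('category_base_id')
        base_id = None if base_id == 'None' else int(base_id)
        if base_id not in self._lists:
            raise ValueError('Unknown `category_base_id` - `%s` - requested.' % base_id)
        return getattr(self, 'action_%s' % action)(self._lists[base_id])

    def action_add(self, category_list):
        return 'added to %s' % category_list
```

The real handler additionally catches `SitecatsNewCategoryException` / `SitecatsValidationError`, surfaces them via the Django messages framework, and clears `request.POST` in a `finally` block.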
|
idlesign/django-sitecats | sitecats/toolbox.py | get_category_lists | python | def get_category_lists(init_kwargs=None, additional_parents_aliases=None, obj=None):
init_kwargs = init_kwargs or {}
additional_parents_aliases = additional_parents_aliases or []
parent_aliases = additional_parents_aliases
if obj is not None:
ctype = ContentType.objects.get_for_model(obj)
cat_ids = [
item[0] for item in
get_tie_model().objects.filter(content_type=ctype, object_id=obj.id).values_list('category_id').all()
]
parent_aliases = list(get_cache().get_parents_for(cat_ids).union(additional_parents_aliases))
lists = []
aliases = get_cache().sort_aliases(parent_aliases)
categories_cache = get_cache().get_categories(aliases, obj)
for parent_alias in aliases:
catlist = CategoryList(parent_alias, **init_kwargs) # TODO Burned in class name. Make more customizable.
if obj is not None:
catlist.set_obj(obj)
# Optimization. To get DB hits down.
cache = []
try:
cache = categories_cache[parent_alias]
except KeyError:
pass
catlist.set_get_categories_cache(cache)
lists.append(catlist)
return lists | Returns a list of CategoryList objects, optionally associated with
a given model instance.
:param dict|None init_kwargs:
:param list|None additional_parents_aliases:
:param Model|None obj: Model instance to get categories for
:rtype: list
:return: | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/toolbox.py#L29-L68 | [
"def get_tie_model():\n \"\"\"Returns the Tie model, set for the project.\"\"\"\n return get_model_class_from_string(MODEL_TIE)\n",
"def get_cache():\n \"\"\"Returns global cache object.\n\n :rtype: Cache\n :return: cache object\n \"\"\"\n global _SITECATS_CACHE\n\n if _SITECATS_CACHE is N... | from inspect import getargspec
from collections import namedtuple, OrderedDict, Callable
from django.utils.encoding import python_2_unicode_compatible
from django.utils.translation import ugettext_lazy as _, ungettext_lazy
from django.utils.six import string_types
from django.contrib.contenttypes.models import ContentType
from django.contrib import messages
from .settings import UNRESOLVED_URL_MARKER
from .utils import get_category_model, get_tie_model, get_cache
from .exceptions import SitecatsConfigurationError, SitecatsSecurityException, SitecatsNewCategoryException, \
SitecatsValidationError
def get_category_aliases_under(parent_alias=None):
"""Returns a list of category aliases under the given parent.
Could be useful to pass to `ModelWithCategory.enable_category_lists_editor`
in `additional_parents_aliases` parameter.
:param str|None parent_alias: Parent alias or None to categories under root
:rtype: list
:return: a list of category aliases
"""
return [ch.alias for ch in get_cache().get_children_for(parent_alias, only_with_aliases=True)]
@python_2_unicode_compatible
class CategoryList(object):
"""Represents a set on categories under a parent category on page."""
_cache_category = None
_cache_get_categories = None
#TODO custom template
def __init__(self, alias=None, show_title=False, show_links=True, cat_html_class=''):
"""
:param str alias: Alias of a category to construct a list from (list will include subcategories)
:param bool show_title: Flag to render parent category title
:param bool|callable show_links: Boolean flag to render links for category pages,
or a callable which accepts Category instance and returns an URL for it.
If boolean and True links will be set to UNRESOLVED_URL_MARKER (useful
for client-side links generation based on data-* attrs of HTML elements).
:param str cat_html_class: HTML classes to be added to categories
:return:
"""
self.alias = alias
self.show_title = show_title
self._url_resolver = None
if isinstance(show_links, Callable):
self._url_resolver = show_links
show_links = True
self.show_links = show_links
self.cat_html_class = cat_html_class
self.obj = None
self.editor = None
def __str__(self):
"""Returns alias.
:rtype: str
:return: alias
"""
s = self.alias
if s is None:
s = ''
return s
def set_get_categories_cache(self, val):
"""Sets prefetched data to be returned by `get_categories()` later on.
:param list val:
:return:
"""
self._cache_get_categories = val
def get_category_url(self, category):
"""Returns URL for a given Category object from this list.
First tries to get it with a callable passed as `show_links` init param of this list.
Secondly tries to get it with `get_category_absolute_url` method of an object associated with this list.
:param Category category:
:rtype: str
:return: URL
"""
if self._url_resolver is not None:
return self._url_resolver(category)
if self.obj and hasattr(self.obj, 'get_category_absolute_url'):
return self.obj.get_category_absolute_url(category)
return UNRESOLVED_URL_MARKER
def set_obj(self, obj):
"""Sets a target object for categories to be filtered upon.
`ModelWithCategory` heir is expected.
If not set CategoryList will render actual categories.
If set CategoryList will render just object-to-categories ties.
:param obj: `ModelWithCategory` heir
"""
self.obj = obj
def enable_editor(self, allow_add=True, allow_remove=False, allow_new=False, min_num=None, max_num=None,
render_button=True, category_separator=None, show_category_choices=True):
"""Enables editor controls for this category list.
:param bool allow_add: Flag to allow adding object-to-categories ties
:param bool allow_remove: Flag to allow remove of object-to-categories ties or categories themselves
:param bool allow_new: Flag to allow new categories creation
:param None|int min_num: Child items minimum for this list
(object-to-categories ties or categories themselves)
:param None|int max_num: Child items maximum for this list
(object-to-categories ties or categories themselves)
:param bool render_button: Flag to allow buttons rendering for forms of this list
:param str|None category_separator: String to consider it a category separator.
:param bool show_category_choices: Flag to render a choice list of available subcategories
for each CategoryList
"""
# DRY: translate method args into namedtuple args.
args, _, _, _ = getargspec(self.enable_editor)
locals_ = locals()
self.editor = namedtuple('CategoryEditor', args)(**{arg: locals_[arg] for arg in args})
def get_category_model(self):
"""Returns category model for this list (parent category for categories in the list) or None.
:return: CategoryModel|None
"""
if self._cache_category is None:
self._cache_category = get_cache().get_category_by_alias(self.alias)
return self._cache_category
def get_category_attr(self, name, default=False):
"""Returns a custom attribute of a category model for this list.
:param str name: Attribute name
:param object default: Default value if attribute is not found
:return: attribute value
"""
category = self.get_category_model()
return getattr(category, name, default)
def get_id(self):
"""Returns ID attribute of a category of this list.
:rtype: int|None
:return: id
"""
return self.get_category_attr('id', None)
def get_title(self):
"""Returns `title` attribute of a category of this list.
:rtype: str
:return: title
"""
return self.get_category_attr('title', _('Categories'))
def get_note(self):
"""Returns `note` attribute of a category of this list.
:rtype: str
:return: note
"""
return self.get_category_attr('note', '')
def get_categories(self, tied_only=None):
"""Returns a list of actual subcategories.
:param bool tied_only: Flag to get only categories with ties. Ties stats are stored in `ties_num` attrs.
:rtype: list
:return: a list of actual subcategories
"""
if self._cache_get_categories is not None:
return self._cache_get_categories
if tied_only is None:
tied_only = self.obj is not None
return get_cache().get_categories(self.alias, self.obj, tied_only=tied_only)
def get_choices(self):
"""Returns available subcategories choices list.
:rtype: list
:return: list of Category objects
"""
return get_cache().get_children_for(self.alias)
class CategoryRequestHandler(object):
"""This one can handle requests issued by CategoryList editors. Can be used in views."""
list_cls = CategoryList # For customization purposes.
KNOWN_ACTIONS = ('add', 'remove')
def __init__(self, request, obj=None, error_messages_extra_tags=None):
"""
:param Request request: Django request object
:param Model obj: `ModelWithCategory` heir to bind CategoryList objects upon.
"""
self._request = request
self._lists = OrderedDict()
self._obj = obj
self.error_messages_extra_tags = error_messages_extra_tags or ''
def register_lists(self, category_lists, lists_init_kwargs=None, editor_init_kwargs=None):
"""Registers CategoryList objects to handle their requests.
:param list category_lists: CategoryList objects
:param dict lists_init_kwargs: Attributes to apply to each of CategoryList objects
"""
lists_init_kwargs = lists_init_kwargs or {}
editor_init_kwargs = editor_init_kwargs or {}
for lst in category_lists:
if isinstance(lst, string_types): # Spawn CategoryList object from base category alias.
lst = self.list_cls(lst, **lists_init_kwargs)
elif not isinstance(lst, CategoryList):
raise SitecatsConfigurationError(
'`CategoryRequestHandler.register_lists()` accepts only '
'`CategoryList` objects or category aliases.'
)
if self._obj:
lst.set_obj(self._obj)
for name, val in lists_init_kwargs.items(): # Setting CategoryList attributes from kwargs.
setattr(lst, name, val)
lst.enable_editor(**editor_init_kwargs)
self._lists[lst.get_id()] = lst
@classmethod
def action_remove(cls, request, category_list):
"""Handles `remove` action from CategoryList editor.
Removes an actual category if a target object is not set for the list.
Removes a tie-to-category object if a target object is set for the list.
:param Request request: Django request object
:param CategoryList category_list: CategoryList object to operate upon.
:return: True on success otherwise and exception from SitecatsException family is raised.
"""
if not category_list.editor.allow_remove:
raise SitecatsSecurityException(
'`action_remove()` is not supported by parent `%s` category.' % category_list.alias)
category_id = int(request.POST.get('category_id', 0))
if not category_id:
raise SitecatsSecurityException(
'Unsupported `category_id` value - `%s` - is passed to `action_remove()`.' % category_id)
category = get_cache().get_category_by_id(category_id)
if not category:
raise SitecatsSecurityException('Unable to get `%s` category in `action_remove()`.' % category_id)
cat_ident = category.alias or category.id
if category.is_locked:
raise SitecatsSecurityException('`action_remove()` is not supported by `%s` category.' % cat_ident)
if category.parent_id != category_list.get_id():
raise SitecatsSecurityException(
'`action_remove()` is unable to remove `%s`: '
'not a child of parent `%s` category.' % (cat_ident, category_list.alias)
)
min_num = category_list.editor.min_num
def check_min_num(num):
if min_num is not None and num-1 < min_num:
subcats_str = ungettext_lazy('subcategory', 'subcategories', min_num)
error_msg = _(
'Unable to remove "%(target_category)s" category from "%(parent_category)s": '
'parent category requires at least %(num)s %(subcats_str)s.'
) % {
'target_category': category.title,
'parent_category': category_list.get_title(),
'num': min_num,
'subcats_str': subcats_str
}
raise SitecatsValidationError(error_msg)
child_ids = get_cache().get_child_ids(category_list.alias)
check_min_num(len(child_ids))
if category_list.obj is None: # Remove category itself and children.
category.delete()
else: # Remove just a category-to-object tie.
# TODO filter user/status
check_min_num(category_list.obj.get_ties_for_categories_qs(child_ids).count())
category_list.obj.remove_from_category(category)
return True
@classmethod
def action_add(cls, request, category_list):
"""Handles `add` action from CategoryList editor.
Adds an actual category if a target object is not set for the list.
Adds a tie-to-category object if a target object is set for the list.
:param Request request: Django request object
:param CategoryList category_list: CategoryList object to operate upon.
:return: CategoryModel object on success otherwise and exception from SitecatsException family is raised.
"""
if not category_list.editor.allow_add:
raise SitecatsSecurityException('`action_add()` is not supported by `%s` category.' % category_list.alias)
titles = request.POST.get('category_title', '').strip()
if not titles:
raise SitecatsSecurityException(
'Unsupported `category_title` value - `%s` - is passed to `action_add()`.' % titles)
if category_list.editor.category_separator is None:
titles = [titles]
else:
titles = [
title.strip() for title in titles.split(category_list.editor.category_separator) if title.strip()
]
def check_max_num(num, max_num, category_title):
if max_num is not None and num+1 > max_num:
subcats_str = ungettext_lazy('subcategory', 'subcategories', max_num)
error_msg = _(
'Unable to add "%(target_category)s" category into "%(parent_category)s": '
'parent category can have at most %(num)s %(subcats_str)s.'
) % {
'target_category': category_title,
'parent_category': category_list.get_title(),
'num': max_num,
'subcats_str': subcats_str
}
raise SitecatsValidationError(error_msg)
target_category = None
for category_title in titles:
exists = get_cache().find_category(category_list.alias, category_title)
if exists and category_list.obj is None: # Already exists.
return exists
if not exists and not category_list.editor.allow_new:
error_msg = _(
'Unable to create a new "%(new_category)s" category inside of "%(parent_category)s": '
'parent category does not support this action.'
) % {
'new_category': category_title,
'parent_category': category_list.get_title()
}
raise SitecatsNewCategoryException(error_msg)
max_num = category_list.editor.max_num
child_ids = get_cache().get_child_ids(category_list.alias)
if not exists: # Add new category.
if category_list.obj is None:
check_max_num(len(child_ids), max_num, category_title)
# TODO status
target_category = get_category_model().add(
category_title, request.user, parent=category_list.get_category_model()
)
else:
target_category = exists # Use existing one for a tie.
if category_list.obj is not None:
# TODO status
check_max_num(category_list.obj.get_ties_for_categories_qs(child_ids).count(), max_num, category_title)
category_list.obj.add_to_category(target_category, request.user)
return target_category
def listen(self):
"""Instructs handler to listen to Django request and handle
CategoryList editor requests (if any).
:return: None on success; otherwise an exception from the SitecatsException family is raised.
"""
requested_action = self._request.POST.get('category_action', False)
if not requested_action:
return None # No action supplied. Pass.
if requested_action not in self.KNOWN_ACTIONS:
raise SitecatsSecurityException('Unknown `category_action` - `%s` - requested.' % requested_action)
category_base_id = self._request.POST.get('category_base_id', False)
if category_base_id == 'None':
category_base_id = None
else:
category_base_id = int(category_base_id)
if category_base_id not in self._lists.keys():
raise SitecatsSecurityException('Unknown `category_base_id` - `%s` - requested.' % category_base_id)
category_list = self._lists[category_base_id]
if category_list.editor is None:
raise SitecatsSecurityException('Editor is disabled for `%s` category.' % category_list.alias)
action_method = getattr(self, 'action_%s' % requested_action)
try:
return action_method(self._request, category_list)
except SitecatsNewCategoryException as e:
messages.error(self._request, e, extra_tags=self.error_messages_extra_tags, fail_silently=True)
return None
except SitecatsValidationError as e:
messages.error(self._request, e.messages[0], extra_tags=self.error_messages_extra_tags, fail_silently=True)
return None
finally:
self._request.POST = {}  # Prevent other forms from failing.
def get_lists(self):
"""Returns a list of previously registered CategoryList objects.
:rtype: list
:return: A list of CategoryList objects.
"""
return list(self._lists.values())
|
idlesign/django-sitecats | sitecats/toolbox.py | CategoryRequestHandler.register_lists | python | def register_lists(self, category_lists, lists_init_kwargs=None, editor_init_kwargs=None):
lists_init_kwargs = lists_init_kwargs or {}
editor_init_kwargs = editor_init_kwargs or {}
for lst in category_lists:
if isinstance(lst, string_types): # Spawn CategoryList object from base category alias.
lst = self.list_cls(lst, **lists_init_kwargs)
elif not isinstance(lst, CategoryList):
raise SitecatsConfigurationError(
'`CategoryRequestHandler.register_lists()` accepts only '
'`CategoryList` objects or category aliases.'
)
if self._obj:
lst.set_obj(self._obj)
for name, val in lists_init_kwargs.items(): # Setting CategoryList attributes from kwargs.
setattr(lst, name, val)
lst.enable_editor(**editor_init_kwargs)
self._lists[lst.get_id()] = lst | Registers CategoryList objects to handle their requests.
:param list category_lists: CategoryList objects
:param dict lists_init_kwargs: Attributes to apply to each of CategoryList objects | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/toolbox.py#L254-L280 | null | class CategoryRequestHandler(object):
"""This one can handle requests issued by CategoryList editors. Can be used in views."""
list_cls = CategoryList # For customization purposes.
KNOWN_ACTIONS = ('add', 'remove')
def __init__(self, request, obj=None, error_messages_extra_tags=None):
"""
:param Request request: Django request object
:param Model obj: `ModelWithCategory` heir to bind CategoryList objects upon.
"""
self._request = request
self._lists = OrderedDict()
self._obj = obj
self.error_messages_extra_tags = error_messages_extra_tags or ''
@classmethod
def action_remove(cls, request, category_list):
"""Handles `remove` action from CategoryList editor.
Removes an actual category if a target object is not set for the list.
Removes a tie-to-category object if a target object is set for the list.
:param Request request: Django request object
:param CategoryList category_list: CategoryList object to operate upon.
:return: True on success otherwise and exception from SitecatsException family is raised.
"""
if not category_list.editor.allow_remove:
raise SitecatsSecurityException(
'`action_remove()` is not supported by parent `%s` category.' % category_list.alias)
category_id = int(request.POST.get('category_id', 0))
if not category_id:
raise SitecatsSecurityException(
'Unsupported `category_id` value - `%s` - is passed to `action_remove()`.' % category_id)
category = get_cache().get_category_by_id(category_id)
if not category:
raise SitecatsSecurityException('Unable to get `%s` category in `action_remove()`.' % category_id)
cat_ident = category.alias or category.id
if category.is_locked:
raise SitecatsSecurityException('`action_remove()` is not supported by `%s` category.' % cat_ident)
if category.parent_id != category_list.get_id():
raise SitecatsSecurityException(
'`action_remove()` is unable to remove `%s`: '
'not a child of parent `%s` category.' % (cat_ident, category_list.alias)
)
min_num = category_list.editor.min_num
def check_min_num(num):
if min_num is not None and num-1 < min_num:
subcats_str = ungettext_lazy('subcategory', 'subcategories', min_num)
error_msg = _(
'Unable to remove "%(target_category)s" category from "%(parent_category)s": '
'parent category requires at least %(num)s %(subcats_str)s.'
) % {
'target_category': category.title,
'parent_category': category_list.get_title(),
'num': min_num,
'subcats_str': subcats_str
}
raise SitecatsValidationError(error_msg)
child_ids = get_cache().get_child_ids(category_list.alias)
check_min_num(len(child_ids))
if category_list.obj is None: # Remove category itself and children.
category.delete()
else: # Remove just a category-to-object tie.
# TODO filter user/status
check_min_num(category_list.obj.get_ties_for_categories_qs(child_ids).count())
category_list.obj.remove_from_category(category)
return True
@classmethod
def action_add(cls, request, category_list):
"""Handles `add` action from CategoryList editor.
Adds an actual category if a target object is not set for the list.
Adds a tie-to-category object if a target object is set for the list.
:param Request request: Django request object
:param CategoryList category_list: CategoryList object to operate upon.
:return: CategoryModel object on success; otherwise an exception from the SitecatsException family is raised.
"""
if not category_list.editor.allow_add:
raise SitecatsSecurityException('`action_add()` is not supported by `%s` category.' % category_list.alias)
titles = request.POST.get('category_title', '').strip()
if not titles:
raise SitecatsSecurityException(
'Unsupported `category_title` value - `%s` - is passed to `action_add()`.' % titles)
if category_list.editor.category_separator is None:
titles = [titles]
else:
titles = [
title.strip() for title in titles.split(category_list.editor.category_separator) if title.strip()
]
def check_max_num(num, max_num, category_title):
if max_num is not None and num+1 > max_num:
subcats_str = ungettext_lazy('subcategory', 'subcategories', max_num)
error_msg = _(
'Unable to add "%(target_category)s" category into "%(parent_category)s": '
'parent category can have at most %(num)s %(subcats_str)s.'
) % {
'target_category': category_title,
'parent_category': category_list.get_title(),
'num': max_num,
'subcats_str': subcats_str
}
raise SitecatsValidationError(error_msg)
target_category = None
for category_title in titles:
exists = get_cache().find_category(category_list.alias, category_title)
if exists and category_list.obj is None: # Already exists.
return exists
if not exists and not category_list.editor.allow_new:
error_msg = _(
'Unable to create a new "%(new_category)s" category inside of "%(parent_category)s": '
'parent category does not support this action.'
) % {
'new_category': category_title,
'parent_category': category_list.get_title()
}
raise SitecatsNewCategoryException(error_msg)
max_num = category_list.editor.max_num
child_ids = get_cache().get_child_ids(category_list.alias)
if not exists: # Add new category.
if category_list.obj is None:
check_max_num(len(child_ids), max_num, category_title)
# TODO status
target_category = get_category_model().add(
category_title, request.user, parent=category_list.get_category_model()
)
else:
target_category = exists # Use existing one for a tie.
if category_list.obj is not None:
# TODO status
check_max_num(category_list.obj.get_ties_for_categories_qs(child_ids).count(), max_num, category_title)
category_list.obj.add_to_category(target_category, request.user)
return target_category
def listen(self):
"""Instructs handler to listen to Django request and handle
CategoryList editor requests (if any).
:return: None on success; otherwise an exception from the SitecatsException family is raised.
"""
requested_action = self._request.POST.get('category_action', False)
if not requested_action:
return None # No action supplied. Pass.
if requested_action not in self.KNOWN_ACTIONS:
raise SitecatsSecurityException('Unknown `category_action` - `%s` - requested.' % requested_action)
category_base_id = self._request.POST.get('category_base_id', False)
if category_base_id == 'None':
category_base_id = None
else:
category_base_id = int(category_base_id)
if category_base_id not in self._lists.keys():
raise SitecatsSecurityException('Unknown `category_base_id` - `%s` - requested.' % category_base_id)
category_list = self._lists[category_base_id]
if category_list.editor is None:
raise SitecatsSecurityException('Editor is disabled for `%s` category.' % category_list.alias)
action_method = getattr(self, 'action_%s' % requested_action)
try:
return action_method(self._request, category_list)
except SitecatsNewCategoryException as e:
messages.error(self._request, e, extra_tags=self.error_messages_extra_tags, fail_silently=True)
return None
except SitecatsValidationError as e:
messages.error(self._request, e.messages[0], extra_tags=self.error_messages_extra_tags, fail_silently=True)
return None
finally:
self._request.POST = {}  # Prevent other forms from failing.
def get_lists(self):
"""Returns a list of previously registered CategoryList objects.
:rtype: list
:return: A list of CategoryList objects.
"""
return list(self._lists.values())
|
idlesign/django-sitecats | sitecats/toolbox.py | CategoryRequestHandler.action_remove | python | def action_remove(cls, request, category_list):
if not category_list.editor.allow_remove:
raise SitecatsSecurityException(
'`action_remove()` is not supported by parent `%s`category.' % category_list.alias)
category_id = int(request.POST.get('category_id', 0))
if not category_id:
raise SitecatsSecurityException(
'Unsupported `category_id` value - `%s` - is passed to `action_remove()`.' % category_id)
category = get_cache().get_category_by_id(category_id)
if not category:
raise SitecatsSecurityException('Unable to get `%s` category in `action_remove()`.' % category_id)
cat_ident = category.alias or category.id
if category.is_locked:
raise SitecatsSecurityException('`action_remove()` is not supported by `%s` category.' % cat_ident)
if category.parent_id != category_list.get_id():
raise SitecatsSecurityException(
'`action_remove()` is unable to remove `%s`: '
'not a child of parent `%s` category.' % (cat_ident, category_list.alias)
)
min_num = category_list.editor.min_num
def check_min_num(num):
if min_num is not None and num-1 < min_num:
subcats_str = ungettext_lazy('subcategory', 'subcategories', min_num)
error_msg = _(
'Unable to remove "%(target_category)s" category from "%(parent_category)s": '
'parent category requires at least %(num)s %(subcats_str)s.'
) % {
'target_category': category.title,
'parent_category': category_list.get_title(),
'num': min_num,
'subcats_str': subcats_str
}
raise SitecatsValidationError(error_msg)
child_ids = get_cache().get_child_ids(category_list.alias)
check_min_num(len(child_ids))
if category_list.obj is None: # Remove category itself and children.
category.delete()
else: # Remove just a category-to-object tie.
# TODO filter user/status
check_min_num(category_list.obj.get_ties_for_categories_qs(child_ids).count())
category_list.obj.remove_from_category(category)
return True | Handles `remove` action from CategoryList editor.
Removes an actual category if a target object is not set for the list.
Removes a tie-to-category object if a target object is set for the list.
:param Request request: Django request object
:param CategoryList category_list: CategoryList object to operate upon.
:return: True on success; otherwise an exception from the SitecatsException family is raised. | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/toolbox.py#L283-L342 | [
"def get_cache():\n \"\"\"Returns global cache object.\n\n :rtype: Cache\n :return: cache object\n \"\"\"\n global _SITECATS_CACHE\n\n if _SITECATS_CACHE is None:\n _SITECATS_CACHE = apps.get_app_config('sitecats').get_categories_cache()\n\n return _SITECATS_CACHE\n",
"def check_min_nu... | class CategoryRequestHandler(object):
"""This one can handle requests issued by CategoryList editors. Can be used in views."""
list_cls = CategoryList # For customization purposes.
KNOWN_ACTIONS = ('add', 'remove')
def __init__(self, request, obj=None, error_messages_extra_tags=None):
"""
:param Request request: Django request object
:param Model obj: `ModelWithCategory` heir to bind CategoryList objects upon.
"""
self._request = request
self._lists = OrderedDict()
self._obj = obj
self.error_messages_extra_tags = error_messages_extra_tags or ''
def register_lists(self, category_lists, lists_init_kwargs=None, editor_init_kwargs=None):
"""Registers CategoryList objects to handle their requests.
:param list category_lists: CategoryList objects
:param dict lists_init_kwargs: Attributes to apply to each of CategoryList objects
"""
lists_init_kwargs = lists_init_kwargs or {}
editor_init_kwargs = editor_init_kwargs or {}
for lst in category_lists:
if isinstance(lst, string_types): # Spawn CategoryList object from base category alias.
lst = self.list_cls(lst, **lists_init_kwargs)
elif not isinstance(lst, CategoryList):
raise SitecatsConfigurationError(
'`CategoryRequestHandler.register_lists()` accepts only '
'`CategoryList` objects or category aliases.'
)
if self._obj:
lst.set_obj(self._obj)
for name, val in lists_init_kwargs.items(): # Setting CategoryList attributes from kwargs.
setattr(lst, name, val)
lst.enable_editor(**editor_init_kwargs)
self._lists[lst.get_id()] = lst
@classmethod
def action_add(cls, request, category_list):
"""Handles `add` action from CategoryList editor.
Adds an actual category if a target object is not set for the list.
Adds a tie-to-category object if a target object is set for the list.
:param Request request: Django request object
:param CategoryList category_list: CategoryList object to operate upon.
:return: CategoryModel object on success; otherwise an exception from the SitecatsException family is raised.
"""
if not category_list.editor.allow_add:
raise SitecatsSecurityException('`action_add()` is not supported by `%s` category.' % category_list.alias)
titles = request.POST.get('category_title', '').strip()
if not titles:
raise SitecatsSecurityException(
'Unsupported `category_title` value - `%s` - is passed to `action_add()`.' % titles)
if category_list.editor.category_separator is None:
titles = [titles]
else:
titles = [
title.strip() for title in titles.split(category_list.editor.category_separator) if title.strip()
]
def check_max_num(num, max_num, category_title):
if max_num is not None and num+1 > max_num:
subcats_str = ungettext_lazy('subcategory', 'subcategories', max_num)
error_msg = _(
'Unable to add "%(target_category)s" category into "%(parent_category)s": '
'parent category can have at most %(num)s %(subcats_str)s.'
) % {
'target_category': category_title,
'parent_category': category_list.get_title(),
'num': max_num,
'subcats_str': subcats_str
}
raise SitecatsValidationError(error_msg)
target_category = None
for category_title in titles:
exists = get_cache().find_category(category_list.alias, category_title)
if exists and category_list.obj is None: # Already exists.
return exists
if not exists and not category_list.editor.allow_new:
error_msg = _(
'Unable to create a new "%(new_category)s" category inside of "%(parent_category)s": '
'parent category does not support this action.'
) % {
'new_category': category_title,
'parent_category': category_list.get_title()
}
raise SitecatsNewCategoryException(error_msg)
max_num = category_list.editor.max_num
child_ids = get_cache().get_child_ids(category_list.alias)
if not exists: # Add new category.
if category_list.obj is None:
check_max_num(len(child_ids), max_num, category_title)
# TODO status
target_category = get_category_model().add(
category_title, request.user, parent=category_list.get_category_model()
)
else:
target_category = exists # Use existing one for a tie.
if category_list.obj is not None:
# TODO status
check_max_num(category_list.obj.get_ties_for_categories_qs(child_ids).count(), max_num, category_title)
category_list.obj.add_to_category(target_category, request.user)
return target_category
def listen(self):
"""Instructs handler to listen to Django request and handle
CategoryList editor requests (if any).
:return: None on success; otherwise an exception from the SitecatsException family is raised.
"""
requested_action = self._request.POST.get('category_action', False)
if not requested_action:
return None # No action supplied. Pass.
if requested_action not in self.KNOWN_ACTIONS:
raise SitecatsSecurityException('Unknown `category_action` - `%s` - requested.' % requested_action)
category_base_id = self._request.POST.get('category_base_id', False)
if category_base_id == 'None':
category_base_id = None
else:
category_base_id = int(category_base_id)
if category_base_id not in self._lists.keys():
raise SitecatsSecurityException('Unknown `category_base_id` - `%s` - requested.' % category_base_id)
category_list = self._lists[category_base_id]
if category_list.editor is None:
raise SitecatsSecurityException('Editor is disabled for `%s` category.' % category_list.alias)
action_method = getattr(self, 'action_%s' % requested_action)
try:
return action_method(self._request, category_list)
except SitecatsNewCategoryException as e:
messages.error(self._request, e, extra_tags=self.error_messages_extra_tags, fail_silently=True)
return None
except SitecatsValidationError as e:
messages.error(self._request, e.messages[0], extra_tags=self.error_messages_extra_tags, fail_silently=True)
return None
finally:
self._request.POST = {} # Prevent other forms from failing.
def get_lists(self):
"""Returns a list of previously registered CategoryList objects.
:rtype: list
:return: A list of CategoryList objects.
"""
return list(self._lists.values())
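The `check_min_num`/`check_max_num` closures in the actions above enforce the editor's subcategory limits before anything is deleted or created. A simplified, standalone sketch of the same guard logic (hypothetical function names, plain `ValueError` in place of `SitecatsValidationError`):

```python
# Simplified sketch of the editor's cardinality guards: removal must not
# drop below min_num children; addition must not exceed max_num.

def check_min_num(current, min_num):
    # a limit of None means "no constraint configured"
    if min_num is not None and current - 1 < min_num:
        raise ValueError('parent requires at least %s subcategories' % min_num)

def check_max_num(current, max_num):
    if max_num is not None and current + 1 > max_num:
        raise ValueError('parent can have at most %s subcategories' % max_num)

check_min_num(3, 2)      # 3 -> 2 children: allowed
check_max_num(1, 2)      # 1 -> 2 children: allowed
check_min_num(5, None)   # unconstrained
try:
    check_min_num(2, 2)  # 2 -> 1 children: refused
except ValueError as e:
    print(e)
```

The real methods additionally count either categories or category-to-object ties depending on whether the list is bound to a target object.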
|
idlesign/django-sitecats | sitecats/toolbox.py | CategoryRequestHandler.action_add | python | def action_add(cls, request, category_list):
if not category_list.editor.allow_add:
raise SitecatsSecurityException('`action_add()` is not supported by `%s` category.' % category_list.alias)
titles = request.POST.get('category_title', '').strip()
if not titles:
raise SitecatsSecurityException(
'Unsupported `category_title` value - `%s` - is passed to `action_add()`.' % titles)
if category_list.editor.category_separator is None:
titles = [titles]
else:
titles = [
title.strip() for title in titles.split(category_list.editor.category_separator) if title.strip()
]
def check_max_num(num, max_num, category_title):
if max_num is not None and num+1 > max_num:
subcats_str = ungettext_lazy('subcategory', 'subcategories', max_num)
error_msg = _(
'Unable to add "%(target_category)s" category into "%(parent_category)s": '
'parent category can have at most %(num)s %(subcats_str)s.'
) % {
'target_category': category_title,
'parent_category': category_list.get_title(),
'num': max_num,
'subcats_str': subcats_str
}
raise SitecatsValidationError(error_msg)
target_category = None
for category_title in titles:
exists = get_cache().find_category(category_list.alias, category_title)
if exists and category_list.obj is None: # Already exists.
return exists
if not exists and not category_list.editor.allow_new:
error_msg = _(
'Unable to create a new "%(new_category)s" category inside of "%(parent_category)s": '
'parent category does not support this action.'
) % {
'new_category': category_title,
'parent_category': category_list.get_title()
}
raise SitecatsNewCategoryException(error_msg)
max_num = category_list.editor.max_num
child_ids = get_cache().get_child_ids(category_list.alias)
if not exists: # Add new category.
if category_list.obj is None:
check_max_num(len(child_ids), max_num, category_title)
# TODO status
target_category = get_category_model().add(
category_title, request.user, parent=category_list.get_category_model()
)
else:
target_category = exists # Use existing one for a tie.
if category_list.obj is not None:
# TODO status
check_max_num(category_list.obj.get_ties_for_categories_qs(child_ids).count(), max_num, category_title)
category_list.obj.add_to_category(target_category, request.user)
return target_category | Handles `add` action from CategoryList editor.
Adds an actual category if a target object is not set for the list.
Adds a tie-to-category object if a target object is set for the list.
:param Request request: Django request object
:param CategoryList category_list: CategoryList object to operate upon.
:return: CategoryModel object on success; otherwise an exception from the SitecatsException family is raised. | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/toolbox.py#L345-L418 | [
"def get_category_model():\n \"\"\"Returns the Category model, set for the project.\"\"\"\n return get_model_class_from_string(MODEL_CATEGORY)\n",
"def get_cache():\n \"\"\"Returns global cache object.\n\n :rtype: Cache\n :return: cache object\n \"\"\"\n global _SITECATS_CACHE\n\n if _SITE... | class CategoryRequestHandler(object):
"""This one can handle requests issued by CategoryList editors. Can be used in views."""
list_cls = CategoryList # For customization purposes.
KNOWN_ACTIONS = ('add', 'remove')
def __init__(self, request, obj=None, error_messages_extra_tags=None):
"""
:param Request request: Django request object
:param Model obj: `ModelWithCategory` heir to bind CategoryList objects upon.
"""
self._request = request
self._lists = OrderedDict()
self._obj = obj
self.error_messages_extra_tags = error_messages_extra_tags or ''
def register_lists(self, category_lists, lists_init_kwargs=None, editor_init_kwargs=None):
"""Registers CategoryList objects to handle their requests.
:param list category_lists: CategoryList objects
:param dict lists_init_kwargs: Attributes to apply to each of CategoryList objects
"""
lists_init_kwargs = lists_init_kwargs or {}
editor_init_kwargs = editor_init_kwargs or {}
for lst in category_lists:
if isinstance(lst, string_types): # Spawn CategoryList object from base category alias.
lst = self.list_cls(lst, **lists_init_kwargs)
elif not isinstance(lst, CategoryList):
raise SitecatsConfigurationError(
'`CategoryRequestHandler.register_lists()` accepts only '
'`CategoryList` objects or category aliases.'
)
if self._obj:
lst.set_obj(self._obj)
for name, val in lists_init_kwargs.items(): # Setting CategoryList attributes from kwargs.
setattr(lst, name, val)
lst.enable_editor(**editor_init_kwargs)
self._lists[lst.get_id()] = lst
@classmethod
def action_remove(cls, request, category_list):
"""Handles `remove` action from CategoryList editor.
Removes an actual category if a target object is not set for the list.
Removes a tie-to-category object if a target object is set for the list.
:param Request request: Django request object
:param CategoryList category_list: CategoryList object to operate upon.
:return: True on success; otherwise an exception from the SitecatsException family is raised.
"""
if not category_list.editor.allow_remove:
raise SitecatsSecurityException(
'`action_remove()` is not supported by parent `%s` category.' % category_list.alias)
category_id = int(request.POST.get('category_id', 0))
if not category_id:
raise SitecatsSecurityException(
'Unsupported `category_id` value - `%s` - is passed to `action_remove()`.' % category_id)
category = get_cache().get_category_by_id(category_id)
if not category:
raise SitecatsSecurityException('Unable to get `%s` category in `action_remove()`.' % category_id)
cat_ident = category.alias or category.id
if category.is_locked:
raise SitecatsSecurityException('`action_remove()` is not supported by `%s` category.' % cat_ident)
if category.parent_id != category_list.get_id():
raise SitecatsSecurityException(
'`action_remove()` is unable to remove `%s`: '
'not a child of parent `%s` category.' % (cat_ident, category_list.alias)
)
min_num = category_list.editor.min_num
def check_min_num(num):
if min_num is not None and num-1 < min_num:
subcats_str = ungettext_lazy('subcategory', 'subcategories', min_num)
error_msg = _(
'Unable to remove "%(target_category)s" category from "%(parent_category)s": '
'parent category requires at least %(num)s %(subcats_str)s.'
) % {
'target_category': category.title,
'parent_category': category_list.get_title(),
'num': min_num,
'subcats_str': subcats_str
}
raise SitecatsValidationError(error_msg)
child_ids = get_cache().get_child_ids(category_list.alias)
check_min_num(len(child_ids))
if category_list.obj is None: # Remove category itself and children.
category.delete()
else: # Remove just a category-to-object tie.
# TODO filter user/status
check_min_num(category_list.obj.get_ties_for_categories_qs(child_ids).count())
category_list.obj.remove_from_category(category)
return True
def listen(self):
"""Instructs handler to listen to Django request and handle
CategoryList editor requests (if any).
:return: None on success; otherwise an exception from the SitecatsException family is raised.
"""
requested_action = self._request.POST.get('category_action', False)
if not requested_action:
return None # No action supplied. Pass.
if requested_action not in self.KNOWN_ACTIONS:
raise SitecatsSecurityException('Unknown `category_action` - `%s` - requested.' % requested_action)
category_base_id = self._request.POST.get('category_base_id', False)
if category_base_id == 'None':
category_base_id = None
else:
category_base_id = int(category_base_id)
if category_base_id not in self._lists.keys():
raise SitecatsSecurityException('Unknown `category_base_id` - `%s` - requested.' % category_base_id)
category_list = self._lists[category_base_id]
if category_list.editor is None:
raise SitecatsSecurityException('Editor is disabled for `%s` category.' % category_list.alias)
action_method = getattr(self, 'action_%s' % requested_action)
try:
return action_method(self._request, category_list)
except SitecatsNewCategoryException as e:
messages.error(self._request, e, extra_tags=self.error_messages_extra_tags, fail_silently=True)
return None
except SitecatsValidationError as e:
messages.error(self._request, e.messages[0], extra_tags=self.error_messages_extra_tags, fail_silently=True)
return None
finally:
self._request.POST = {} # Prevent other forms from failing.
def get_lists(self):
"""Returns a list of previously registered CategoryList objects.
:rtype: list
:return: A list of CategoryList objects.
"""
return list(self._lists.values())
|
idlesign/django-sitecats | sitecats/toolbox.py | CategoryRequestHandler.listen | python | def listen(self):
requested_action = self._request.POST.get('category_action', False)
if not requested_action:
return None # No action supplied. Pass.
if requested_action not in self.KNOWN_ACTIONS:
raise SitecatsSecurityException('Unknown `category_action` - `%s` - requested.' % requested_action)
category_base_id = self._request.POST.get('category_base_id', False)
if category_base_id == 'None':
category_base_id = None
else:
category_base_id = int(category_base_id)
if category_base_id not in self._lists.keys():
raise SitecatsSecurityException('Unknown `category_base_id` - `%s` - requested.' % category_base_id)
category_list = self._lists[category_base_id]
if category_list.editor is None:
raise SitecatsSecurityException('Editor is disabled for `%s` category.' % category_list.alias)
action_method = getattr(self, 'action_%s' % requested_action)
try:
return action_method(self._request, category_list)
except SitecatsNewCategoryException as e:
messages.error(self._request, e, extra_tags=self.error_messages_extra_tags, fail_silently=True)
return None
except SitecatsValidationError as e:
messages.error(self._request, e.messages[0], extra_tags=self.error_messages_extra_tags, fail_silently=True)
return None
finally:
self._request.POST = {} | Instructs handler to listen to Django request and handle
CategoryList editor requests (if any).
:return: None on success; otherwise an exception from the SitecatsException family is raised. | train | https://github.com/idlesign/django-sitecats/blob/9b45e91fc0dcb63a0011780437fe28145e3ecce9/sitecats/toolbox.py#L420-L457 | null | class CategoryRequestHandler(object):
"""This one can handle requests issued by CategoryList editors. Can be used in views."""
list_cls = CategoryList # For customization purposes.
KNOWN_ACTIONS = ('add', 'remove')
def __init__(self, request, obj=None, error_messages_extra_tags=None):
"""
:param Request request: Django request object
:param Model obj: `ModelWithCategory` heir to bind CategoryList objects upon.
"""
self._request = request
self._lists = OrderedDict()
self._obj = obj
self.error_messages_extra_tags = error_messages_extra_tags or ''
def register_lists(self, category_lists, lists_init_kwargs=None, editor_init_kwargs=None):
"""Registers CategoryList objects to handle their requests.
:param list category_lists: CategoryList objects
:param dict lists_init_kwargs: Attributes to apply to each of CategoryList objects
"""
lists_init_kwargs = lists_init_kwargs or {}
editor_init_kwargs = editor_init_kwargs or {}
for lst in category_lists:
if isinstance(lst, string_types): # Spawn CategoryList object from base category alias.
lst = self.list_cls(lst, **lists_init_kwargs)
elif not isinstance(lst, CategoryList):
raise SitecatsConfigurationError(
'`CategoryRequestHandler.register_lists()` accepts only '
'`CategoryList` objects or category aliases.'
)
if self._obj:
lst.set_obj(self._obj)
for name, val in lists_init_kwargs.items(): # Setting CategoryList attributes from kwargs.
setattr(lst, name, val)
lst.enable_editor(**editor_init_kwargs)
self._lists[lst.get_id()] = lst
@classmethod
def action_remove(cls, request, category_list):
"""Handles `remove` action from CategoryList editor.
Removes an actual category if a target object is not set for the list.
Removes a tie-to-category object if a target object is set for the list.
:param Request request: Django request object
:param CategoryList category_list: CategoryList object to operate upon.
:return: True on success; otherwise an exception from the SitecatsException family is raised.
"""
if not category_list.editor.allow_remove:
raise SitecatsSecurityException(
'`action_remove()` is not supported by parent `%s` category.' % category_list.alias)
category_id = int(request.POST.get('category_id', 0))
if not category_id:
raise SitecatsSecurityException(
'Unsupported `category_id` value - `%s` - is passed to `action_remove()`.' % category_id)
category = get_cache().get_category_by_id(category_id)
if not category:
raise SitecatsSecurityException('Unable to get `%s` category in `action_remove()`.' % category_id)
cat_ident = category.alias or category.id
if category.is_locked:
raise SitecatsSecurityException('`action_remove()` is not supported by `%s` category.' % cat_ident)
if category.parent_id != category_list.get_id():
raise SitecatsSecurityException(
'`action_remove()` is unable to remove `%s`: '
'not a child of parent `%s` category.' % (cat_ident, category_list.alias)
)
min_num = category_list.editor.min_num
def check_min_num(num):
if min_num is not None and num-1 < min_num:
subcats_str = ungettext_lazy('subcategory', 'subcategories', min_num)
error_msg = _(
'Unable to remove "%(target_category)s" category from "%(parent_category)s": '
'parent category requires at least %(num)s %(subcats_str)s.'
) % {
'target_category': category.title,
'parent_category': category_list.get_title(),
'num': min_num,
'subcats_str': subcats_str
}
raise SitecatsValidationError(error_msg)
child_ids = get_cache().get_child_ids(category_list.alias)
check_min_num(len(child_ids))
if category_list.obj is None: # Remove category itself and children.
category.delete()
else: # Remove just a category-to-object tie.
# TODO filter user/status
check_min_num(category_list.obj.get_ties_for_categories_qs(child_ids).count())
category_list.obj.remove_from_category(category)
return True
@classmethod
def action_add(cls, request, category_list):
"""Handles `add` action from CategoryList editor.
Adds an actual category if a target object is not set for the list.
Adds a tie-to-category object if a target object is set for the list.
:param Request request: Django request object
:param CategoryList category_list: CategoryList object to operate upon.
:return: CategoryModel object on success; otherwise an exception from the SitecatsException family is raised.
"""
if not category_list.editor.allow_add:
raise SitecatsSecurityException('`action_add()` is not supported by `%s` category.' % category_list.alias)
titles = request.POST.get('category_title', '').strip()
if not titles:
raise SitecatsSecurityException(
'Unsupported `category_title` value - `%s` - is passed to `action_add()`.' % titles)
if category_list.editor.category_separator is None:
titles = [titles]
else:
titles = [
title.strip() for title in titles.split(category_list.editor.category_separator) if title.strip()
]
def check_max_num(num, max_num, category_title):
if max_num is not None and num+1 > max_num:
subcats_str = ungettext_lazy('subcategory', 'subcategories', max_num)
error_msg = _(
'Unable to add "%(target_category)s" category into "%(parent_category)s": '
'parent category can have at most %(num)s %(subcats_str)s.'
) % {
'target_category': category_title,
'parent_category': category_list.get_title(),
'num': max_num,
'subcats_str': subcats_str
}
raise SitecatsValidationError(error_msg)
target_category = None
for category_title in titles:
exists = get_cache().find_category(category_list.alias, category_title)
if exists and category_list.obj is None: # Already exists.
return exists
if not exists and not category_list.editor.allow_new:
error_msg = _(
'Unable to create a new "%(new_category)s" category inside of "%(parent_category)s": '
'parent category does not support this action.'
) % {
'new_category': category_title,
'parent_category': category_list.get_title()
}
raise SitecatsNewCategoryException(error_msg)
max_num = category_list.editor.max_num
child_ids = get_cache().get_child_ids(category_list.alias)
if not exists: # Add new category.
if category_list.obj is None:
check_max_num(len(child_ids), max_num, category_title)
# TODO status
target_category = get_category_model().add(
category_title, request.user, parent=category_list.get_category_model()
)
else:
target_category = exists # Use existing one for a tie.
if category_list.obj is not None:
# TODO status
check_max_num(category_list.obj.get_ties_for_categories_qs(child_ids).count(), max_num, category_title)
category_list.obj.add_to_category(target_category, request.user)
return target_category
def get_lists(self):
"""Returns a list of previously registered CategoryList objects.
:rtype: list
:return: A list of CategoryList objects.
"""
return list(self._lists.values())
|
pjamesjoyce/lcopt | lcopt/settings.py | LcoptSettings.write | python | def write(self):
with open(storage.config_file, 'w') as cfg:
yaml.dump(self.as_dict(), cfg, default_flow_style=False)
storage.refresh() | write the current settings to the config file | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/settings.py#L98-L103 | [
"def refresh(self):\n with open(self.config_file, 'r') as cf:\n self.config = yaml.load(cf)\n",
"def as_dict(self):\n d = {}\n for k in self._sections:\n d[k] = getattr(self, k).as_dict()\n return d\n"
] | class LcoptSettings(object):
def __init__(self, **kwargs):
self.refresh(**kwargs)
self.write() # if floats get converted to strings during setup, it might auto-overwrite with a partial config - this makes sure it doesn't
def as_dict(self):
d = {}
for k in self._sections:
d[k] = getattr(self, k).as_dict()
return d
def __repr__(self):
string = ""
for section, content in self.as_dict().items():
string += "{}:\n".format(section)
for k, v in content.items():
string += "\t{}: {}\n".format(k,v)
return "Lcopt settings: \n\n{}".format(string)
def refresh(self, **kwargs):
self.config = storage.load_config()
self._sections = []
for section, content in self.config.items():
setattr(self, section, SettingsDict(content, self.write, **kwargs))
self._sections.append(section)
def launch_interact(self):
from .settings_gui import FlaskSettingsGUI
s = FlaskSettingsGUI()
s.run()
self.refresh()
self.write()
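`write()` above serialises the nested settings dict with `yaml.dump(..., default_flow_style=False)` so the config file stays in readable block style. A sketch of that round-trip (assumes PyYAML, which lcopt already depends on; the section and keys are made up for illustration):

```python
import io
import yaml  # PyYAML, as used by lcopt's storage module

# hypothetical settings, mirroring the as_dict() section structure
settings = {'model_storage': {'project': 'default', 'single': True}}

buf = io.StringIO()
yaml.dump(settings, buf, default_flow_style=False)  # block style, like the config file
text = buf.getvalue()

reloaded = yaml.safe_load(text)  # refresh() reads the file back the same way
assert reloaded == settings
print(text)
```

Block style keeps each key on its own line, which is what makes the config file hand-editable between runs.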
|
pjamesjoyce/lcopt | lcopt/bw2_import.py | hierarchy_pos | python | def hierarchy_pos(links, root, width=1., vert_gap=0.2, vert_loc=0, xcenter=0.5, pos=None, parent=None, min_dx=0.03):
'''If there is a cycle that is reachable from root, then this will recurse infinitely.
G: the graph
root: the root node of current branch
width: horizontal space allocated for this branch - avoids overlap with other branches
vert_gap: gap between levels of hierarchy
vert_loc: vertical location of root
xcenter: horizontal location of root
pos: a dict saying where all nodes go if they have been assigned
parent: parent of this branch.'''
if pos is None:
pos = {root: (xcenter, vert_loc)}
else:
pos[root] = (xcenter, vert_loc)
neighbors = get_sandbox_neighbours(links, root)
if len(neighbors) != 0:
dx = max(width / len(neighbors), min_dx)
#nextx = xcenter - width / 2 - dx / 2
nextx = pos[root][0] - (len(neighbors) - 1) * dx / 2 - dx
for neighbor in neighbors:
nextx += dx
pos = hierarchy_pos(links, neighbor, width=dx, vert_gap=vert_gap,
vert_loc=vert_loc - vert_gap, xcenter=nextx, pos=pos,
parent=root)
return pos | If there is a cycle that is reachable from root, then this will recurse infinitely.
G: the graph
root: the root node of current branch
width: horizontal space allocated for this branch - avoids overlap with other branches
vert_gap: gap between levels of hierarchy
vert_loc: vertical location of root
xcenter: horizontal location of root
pos: a dict saying where all nodes go if they have been assigned
parent: parent of this branch. | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/bw2_import.py#L60-L87 | [
"def get_sandbox_neighbours(sandbox_links, root):\n neighbours = []\n for x in sandbox_links:\n if x['targetID'] == root:\n neighbours.append(x['sourceID'])\n\n return neighbours \n",
"def hierarchy_pos(links, root, width=1., vert_gap=0.2, vert_loc=0, xcenter=0.5, pos=None, parent=Non... | from bw2io.package import BW2Package
from lcopt.model import LcoptModel, unnormalise_unit
from lcopt.interact import FlaskSandbox
from copy import deepcopy
from collections import OrderedDict
from warnings import warn
#import networkx as nx
def validate_imported_model(model):
db = model.database['items']
ecoinvent_name = model.ecoinventName
ecoinvent_items = [x['items'] for x in model.external_databases if x['name'] == ecoinvent_name][0]
ecoinvent_links = []
for key, item in db.items():
if item.get('ext_link'):
if item['ext_link'][0] == ecoinvent_name:
ecoinvent_links.append(item['ext_link'])
for link in ecoinvent_links:
if not ecoinvent_items.get(link):
warn("{} not found in ecoinvent 3.3 cutoff database".format(link))
return False
return True
def get_sandbox_root(links):
froms = []
tos = []
for l in links:
froms.append(l['sourceID'])
tos.append(l['targetID'])
fset = set(froms)
tset = set(tos)
roots = [x for x in tset if x not in fset]
#print(sorted(fset))
#print(sorted(tset))
if len(roots) == 1:
return roots[0]
else:
print('Multiple roots found!')
return False
def get_sandbox_neighbours(sandbox_links, root):
neighbours = []
for x in sandbox_links:
if x['targetID'] == root:
neighbours.append(x['sourceID'])
return neighbours
def hierarchy_pos(links, root, width=1., vert_gap=0.2, vert_loc=0, xcenter=0.5, pos=None, parent=None, min_dx=0.03):
'''If there is a cycle that is reachable from root, then this will recurse infinitely.
G: the graph
root: the root node of current branch
width: horizontal space allocated for this branch - avoids overlap with other branches
vert_gap: gap between levels of hierarchy
vert_loc: vertical location of root
xcenter: horizontal location of root
pos: a dict saying where all nodes go if they have been assigned
parent: parent of this branch.'''
if pos is None:
pos = {root: (xcenter, vert_loc)}
else:
pos[root] = (xcenter, vert_loc)
neighbors = get_sandbox_neighbours(links, root)
if len(neighbors) != 0:
dx = max(width / len(neighbors), min_dx)
#nextx = xcenter - width / 2 - dx / 2
nextx = pos[root][0] - (len(neighbors) - 1) * dx / 2 - dx
for neighbor in neighbors:
nextx += dx
pos = hierarchy_pos(links, neighbor, width=dx, vert_gap=vert_gap,
vert_loc=vert_loc - vert_gap, xcenter=nextx, pos=pos,
parent=root)
return pos
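To make the layout recursion above concrete, here is a toy run on a two-input tree. The two functions are re-stated inline so the snippet runs on its own (the unused `parent` argument is dropped); the node IDs are made up:

```python
# Toy run of the sandbox tree layout: links point child -> parent via
# 'sourceID'/'targetID', and hierarchy_pos walks down from the root.

def get_sandbox_neighbours(links, root):
    return [l['sourceID'] for l in links if l['targetID'] == root]

def hierarchy_pos(links, root, width=1., vert_gap=0.2, vert_loc=0,
                  xcenter=0.5, pos=None, min_dx=0.03):
    if pos is None:
        pos = {root: (xcenter, vert_loc)}
    else:
        pos[root] = (xcenter, vert_loc)
    neighbors = get_sandbox_neighbours(links, root)
    if neighbors:
        dx = max(width / len(neighbors), min_dx)
        nextx = pos[root][0] - (len(neighbors) - 1) * dx / 2 - dx
        for neighbor in neighbors:
            nextx += dx
            pos = hierarchy_pos(links, neighbor, width=dx, vert_gap=vert_gap,
                                vert_loc=vert_loc - vert_gap,
                                xcenter=nextx, pos=pos)
    return pos

# two inputs 'A' and 'B' feeding a single output process 'ROOT'
links = [{'sourceID': 'A', 'targetID': 'ROOT'},
         {'sourceID': 'B', 'targetID': 'ROOT'}]
pos = hierarchy_pos(links, 'ROOT')
print(pos)  # ROOT at (0.5, 0); A and B spread one level below
```

Each subtree gets an equal share of its parent's horizontal width, with `min_dx` preventing siblings from collapsing onto each other in wide trees.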
def compute_layout(fs):
#nx_nodes = []
#n = deepcopy(nodes)
#for x in n:
# i = x.pop('id')
# nx_nodes.append((i, x))
#nx_links = []
#l = deepcopy(links)
#for x in l:
# from_id = x.pop('sourceID')
# to_id = x.pop('targetID')
# nx_links.append((from_id, to_id, x))
#G = nx.Graph()
#G.add_nodes_from(nx_nodes)
#G.add_edges_from(nx_links)
nodes = fs.nodes
links = fs.links
pos = hierarchy_pos(links, get_sandbox_root(links))
pos90 = {k: (v[1], -v[0]) for k, v in pos.items()}
xs = [v[0] for k, v in pos90.items()]
ys = [v[1] for k, v in pos90.items()]
s_xs = [(x - min(xs))for x in xs]
s_ys = [(y - min(ys))for y in ys]
row = 50
col = 300
max_height = 1000
max_width = 1100
height = min([max_height, len(set(ys)) * row])
width = min([max_width, len(set(xs)) * col])
pad_top = 20
pad_left = 20
pos_scaled = {k: ((v[0] - min(xs)) / max(s_xs) * width + pad_left, (v[1] - min(ys)) / max(s_ys) * height + pad_top) for k, v in pos90.items()}
sandbox = {k: {'x': v[0], 'y': v[1]} for k, v in pos_scaled.items()}
processes = [k for k, v in fs.reverse_process_output_map.items()]
process_fudge_factor = 10 # process boxes are (generally) 20px taller than inputs, so if we shift these up 10 pixels it looks nicer...
for k, v in sandbox.items():
if k in processes:
sandbox[k]['y'] -= process_fudge_factor
return sandbox
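A sketch of the coordinate transform `compute_layout()` applies: rotate the tree positions 90 degrees so the tree flows left-to-right, then min-max scale them onto a padded pixel canvas (toy positions and canvas size, not the real row/column values):

```python
# Toy version of the scaling in compute_layout(): rotate (x, y) -> (y, -x),
# then normalise into padded pixel coordinates.
pos = {'ROOT': (0.5, 0), 'A': (0.25, -0.2), 'B': (0.75, -0.2)}

pos90 = {k: (y, -x) for k, (x, y) in pos.items()}

xs = [x for x, _ in pos90.values()]
ys = [y for _, y in pos90.values()]
width, height, pad = 300, 100, 20  # made-up canvas; the real code caps at 1100x1000

scaled = {
    k: ((x - min(xs)) / (max(xs) - min(xs)) * width + pad,
        (y - min(ys)) / (max(ys) - min(ys)) * height + pad)
    for k, (x, y) in pos90.items()
}
print(scaled)
```

After rotation the root ends up at the right edge and the leaves at the left, matching the left-to-right sandbox canvas the scaled coordinates feed into.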
def create_LcoptModel_from_BW2Package(import_filename, autosetup=True):
import_data = BW2Package.load_file(import_filename)
orig_db = import_data[0]['data']
db_name = import_data[0]['name']
model = LcoptModel(db_name, autosetup=autosetup)
db = deepcopy(orig_db)
temp_param_set = []
temp_production_param_set = []
for k, v in db.items():
exchanges = []
production_amount = v.get('production amount', 1)
if production_amount != 1:
print("NOTE: Production amount for {} is not 1 unit ({})".format(v['name'], production_amount))
temp_production_param_set.append({'of': v['name'], 'amount': production_amount})
"""p_exs = [e for e in v['exchanges'] if e['type'] == 'production']
t_exs = [e for e in v['exchanges'] if e['type'] == 'technosphere']
if len(p_exs) == 0:
print(v['name'] + " has no production exchange")
if len(p_exs) == 0 and len(t_exs) == 1:
temp_tech_exc = deepcopy(t_exs[0])
exc_name = temp_tech_exc.pop('name')
exc_input = temp_tech_exc.pop('input')
exc_unit = unnormalise_unit(temp_tech_exc.pop('unit'))
exc_type = 'production'
this_exc = {
'name': exc_name,
'type': exc_type,
'unit': exc_unit,
'amount': 1,
'lcopt_type': 'intermediate',
}
exchanges.append(this_exc)"""
for e in v['exchanges']:
exc_name = e.pop('name')
exc_input = e.pop('input')
exc_unit = unnormalise_unit(e.pop('unit'))
exc_amount = e.pop('amount')
exc_type = e.pop('type')
temp_param_set.append({'from': exc_name, 'to': v['name'], 'amount': exc_amount})
if e.get('location'):
e.pop('location')
if exc_type == 'production':
this_exc = {
'name': exc_name,
'type': exc_type,
'unit': exc_unit,
'lcopt_type': 'intermediate',
}
this_exc = {**this_exc, **e}
exchanges.append(this_exc)
elif exc_type == 'technosphere':
this_exc = {
'name': exc_name,
'type': exc_type,
'unit': exc_unit,
}
exc_db = exc_input[0]
if exc_db == db_name:
this_exc['lcopt_type'] = 'intermediate'
else:
this_exc['ext_link'] = ('Ecoinvent3_3_cutoff', exc_input[1])
this_exc['lcopt_type'] = 'input'
this_exc = {**this_exc, **e}
exchanges.append(this_exc)
elif exc_type == 'biosphere':
this_exc = {
'name': exc_name,
'type': 'technosphere',
'unit': exc_unit,
}
this_exc['ext_link'] = exc_input
this_exc['lcopt_type'] = 'biosphere'
this_exc = {**this_exc, **e}
exchanges.append(this_exc)
model.create_process(v['name'], exchanges)
param_set = OrderedDict()
#model.parameter_scan()
#print (model.names)
for p in temp_param_set:
exc_from = model.names.index(p['from'])
exc_to = model.names.index(p['to'])
if exc_from != exc_to:
parameter_id = "p_{}_{}".format(exc_from, exc_to)
param_set[parameter_id] = p['amount']
for p in temp_production_param_set:
exc_of = model.names.index(p['of'])
parameter_id = "p_{}_production".format(exc_of)
param_set[parameter_id] = p['amount']
model.parameter_sets[db_name] = param_set
model.parameter_scan()
fs = FlaskSandbox(model)
model.sandbox_positions = compute_layout(fs)
if validate_imported_model(model):
print('\nModel created successfully')
return model
else:
print('\nModel not valid - check the ecoinvent version in brightway2')
return None
|
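The recursive layout logic in `hierarchy_pos` above (centre each node over its children, step down one `vert_gap` per level) can be sketched in a self-contained form. This is a simplified stand-in with hypothetical names (`tree_layout`, a plain `children` dict instead of lcopt's link list), not lcopt's actual implementation; it omits the `min_dx` spacing clamp and the sandbox-neighbour lookup.

```python
# Minimal sketch of the recursive tree-layout idea used by hierarchy_pos:
# each node is placed at (xcenter, vert_loc), and its children split the
# available horizontal width evenly one vert_gap lower.

def tree_layout(children, root, width=1.0, vert_gap=0.2,
                vert_loc=0.0, xcenter=0.5, pos=None):
    """Return {node: (x, y)} for a tree given a parent->children mapping."""
    if pos is None:
        pos = {root: (xcenter, vert_loc)}
    else:
        pos[root] = (xcenter, vert_loc)
    kids = children.get(root, [])
    if kids:
        dx = width / len(kids)
        nextx = xcenter - width / 2 - dx / 2
        for kid in kids:
            nextx += dx
            pos = tree_layout(children, kid, width=dx, vert_gap=vert_gap,
                              vert_loc=vert_loc - vert_gap, xcenter=nextx,
                              pos=pos)
    return pos


tree = {"root": ["a", "b"], "a": ["a1", "a2"]}
layout = tree_layout(tree, "root")
```

`compute_layout` then rotates these positions 90 degrees and rescales them to pixel coordinates for the sandbox canvas.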
pjamesjoyce/lcopt | lcopt/utils.py | lcopt_bw2_setup | python | def lcopt_bw2_setup(ecospold_path, overwrite=False, db_name=None): # pragma: no cover
default_ei_name = "Ecoinvent3_3_cutoff"
if db_name is None:
db_name = DEFAULT_PROJECT_STEM + default_ei_name
if db_name in bw2.projects:
if overwrite:
bw2.projects.delete_project(name=db_name, delete_dir=True)
else:
print('Looks like bw2 is already set up - if you want to overwrite the existing version run lcopt.utils.lcopt_bw2_setup in a python shell using overwrite = True')
return False
bw2.projects.set_current(db_name)
bw2.bw2setup()
ei = bw2.SingleOutputEcospold2Importer(fix_mac_path_escapes(ecospold_path), default_ei_name)
ei.apply_strategies()
ei.statistics()
ei.write_database()
return True | Utility function to set up brightway2 to work correctly with lcopt.
It requires the path to the ecospold files containing the Ecoinvent 3.3 cutoff database.
If you don't have these files, log into `ecoinvent.org <http://www.ecoinvent.org/login-databases.html>`_ and go to the Files tab
Download the file called ``ecoinvent 3.3_cutoff_ecoSpold02.7z``
Extract the file somewhere sensible on your machine; you might need to download `7-zip <http://www.7-zip.org/download.html>`_ to extract the files.
Make a note of the path of the folder that contains the .ecospold files; it's probably ``<path/extracted/to>/datasets/``
Use this path (as a string) as the first parameter in this function
To overwrite an existing version, set overwrite=True | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/utils.py#L38-L77 | [
"def fix_mac_path_escapes(string):\n return string.replace(r\"\\ \", \" \")\n"
] | """
lcopt.utils
-----------
Module containing the utility function to set up brightway2 to work with lcopt
"""
import tempfile
import requests
import zipfile
import io
import os
import eidl
from functools import partial
import yaml
import pickle
import getpass
import socket
try:
import brightway2 as bw2
except: # pragma: no cover
raise ImportError('Please install the brightway2 package first')
from .data_store import storage
from .constants import (DEFAULT_PROJECT_STEM,
DEFAULT_BIOSPHERE_PROJECT,
DEFAULT_DB_NAME,
FORWAST_PROJECT_NAME,
FORWAST_URL,
ASSET_PATH,
DEFAULT_CONFIG,
DEFAULT_SINGLE_PROJECT)
def lcopt_bw2_setup(ecospold_path, overwrite=False, db_name=None): # pragma: no cover
"""
Utility function to set up brightway2 to work correctly with lcopt.
It requires the path to the ecospold files containing the Ecoinvent 3.3 cutoff database.
If you don't have these files, log into `ecoinvent.org <http://www.ecoinvent.org/login-databases.html>`_ and go to the Files tab
Download the file called ``ecoinvent 3.3_cutoff_ecoSpold02.7z``
    Extract the file somewhere sensible on your machine; you might need to download `7-zip <http://www.7-zip.org/download.html>`_ to extract the files.
    Make a note of the path of the folder that contains the .ecospold files; it's probably ``<path/extracted/to>/datasets/``
Use this path (as a string) as the first parameter in this function
To overwrite an existing version, set overwrite=True
"""
default_ei_name = "Ecoinvent3_3_cutoff"
if db_name is None:
db_name = DEFAULT_PROJECT_STEM + default_ei_name
if db_name in bw2.projects:
if overwrite:
bw2.projects.delete_project(name=db_name, delete_dir=True)
else:
print('Looks like bw2 is already set up - if you want to overwrite the existing version run lcopt.utils.lcopt_bw2_setup in a python shell using overwrite = True')
return False
bw2.projects.set_current(db_name)
bw2.bw2setup()
ei = bw2.SingleOutputEcospold2Importer(fix_mac_path_escapes(ecospold_path), default_ei_name)
ei.apply_strategies()
ei.statistics()
ei.write_database()
return True
def bw2_project_exists(project_name):
return project_name in bw2.projects
def upgrade_old_default():
default_ei_name = "Ecoinvent3_3_cutoff"
bw2.projects.set_current(DEFAULT_PROJECT_STEM[:-1])
bw2.projects.copy_project(DEFAULT_PROJECT_STEM + default_ei_name, switch=True)
write_search_index(DEFAULT_PROJECT_STEM + default_ei_name, default_ei_name)
print('Copied old lcopt setup project')
return True
def check_for_config():
config = None
try:
config = storage.config
except:
pass
return config
def write_search_index(project_name, ei_name, overwrite=False):
si_fp = fix_mac_path_escapes(os.path.join(storage.search_index_dir, '{}.pickle'.format(ei_name))) #os.path.join(ASSET_PATH, '{}.pickle'.format(ei_name))
if not os.path.isfile(si_fp) or overwrite:
search_index = create_search_index(project_name, ei_name)
with open(si_fp, 'wb') as handle:
print("Writing {} search index to search folder".format(ei_name))
pickle.dump(search_index, handle)
#else:
# print("{} search index already exists in assets folder".format(ei_name))
def lcopt_biosphere_setup():
print("Running bw2setup for lcopt - this only needs to be done once")
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.bw2setup()
def lcopt_bw2_autosetup(ei_username=None, ei_password=None, write_config=None, ecoinvent_version='3.3', ecoinvent_system_model = "cutoff", overwrite=False):
"""
Utility function to automatically set up brightway2 to work correctly with lcopt.
It requires a valid username and password to login to the ecoinvent website.
These can be entered directly into the function using the keyword arguments `ei_username` and `ei_password` or entered interactively by using no arguments.
`ecoinvent_version` needs to be a string representation of a valid ecoinvent database, at time of writing these are "3.01", "3.1", "3.2", "3.3", "3.4"
`ecoinvent_system_model` needs to be one of "cutoff", "apos", "consequential"
To overwrite an existing version, set overwrite=True
"""
ei_name = "Ecoinvent{}_{}_{}".format(*ecoinvent_version.split('.'), ecoinvent_system_model)
config = check_for_config()
# If, for some reason, there's no config file, write the defaults
if config is None:
config = DEFAULT_CONFIG
with open(storage.config_file, "w") as cfg:
yaml.dump(config, cfg, default_flow_style=False)
store_option = storage.project_type
# Check if there's already a project set up that matches the current configuration
if store_option == 'single':
project_name = storage.single_project_name
if bw2_project_exists(project_name):
bw2.projects.set_current(project_name)
if ei_name in bw2.databases and overwrite == False:
#print ('{} is already set up'.format(ei_name))
return True
else: # default to 'unique'
project_name = DEFAULT_PROJECT_STEM + ei_name
if bw2_project_exists(project_name):
if overwrite:
bw2.projects.delete_project(name=project_name, delete_dir=True)
auto_ecoinvent = partial(eidl.get_ecoinvent,db_name=ei_name, auto_write=True, version=ecoinvent_version, system_model=ecoinvent_system_model)
# check for a config file (lcopt_config.yml)
if config is not None:
if "ecoinvent" in config:
if ei_username is None:
ei_username = config['ecoinvent'].get('username')
if ei_password is None:
ei_password = config['ecoinvent'].get('password')
write_config = False
if ei_username is None:
ei_username = input('ecoinvent username: ')
if ei_password is None:
ei_password = getpass.getpass('ecoinvent password: ')
if write_config is None:
write_config = input('store username and password on this computer? y/[n]') in ['y', 'Y', 'yes', 'YES', 'Yes']
if write_config:
config['ecoinvent'] = {
'username': ei_username,
'password': ei_password
}
with open(storage.config_file, "w") as cfg:
yaml.dump(config, cfg, default_flow_style=False)
# no need to keep running bw2setup - we can just copy a blank project which has been set up before
if store_option == 'single':
if bw2_project_exists(project_name):
bw2.projects.set_current(project_name)
else:
if not bw2_project_exists(DEFAULT_BIOSPHERE_PROJECT):
bw2.projects.set_current(project_name)
bw2.bw2setup()
else:
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.create_core_migrations()
bw2.projects.copy_project(project_name, switch=True)
else: #if store_option == 'unique':
if not bw2_project_exists(DEFAULT_BIOSPHERE_PROJECT):
lcopt_biosphere_setup()
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.create_core_migrations()
bw2.projects.copy_project(project_name, switch=True)
if ei_username is not None and ei_password is not None:
auto_ecoinvent(username=ei_username, password=ei_password)
else:
auto_ecoinvent()
write_search_index(project_name, ei_name, overwrite=overwrite)
return True
def forwast_autosetup(forwast_name = 'forwast'):
config = check_for_config()
# If, for some reason, there's no config file, write the defaults
if config is None:
config = DEFAULT_CONFIG
with open(storage.config_file, "w") as cfg:
yaml.dump(config, cfg, default_flow_style=False)
store_option = storage.project_type
# Check if there's already a project set up that matches the current configuration
if store_option == 'single':
project_name = storage.single_project_name
if bw2_project_exists(project_name):
bw2.projects.set_current(project_name)
if forwast_name in bw2.databases:
return True
else: # default to 'unique'
project_name = FORWAST_PROJECT_NAME
if bw2_project_exists(project_name):
return True
if store_option == 'single':
print('its a single setup')
if bw2_project_exists(project_name):
bw2.projects.set_current(project_name)
else:
if not bw2_project_exists(DEFAULT_BIOSPHERE_PROJECT):
bw2.projects.set_current(project_name)
bw2.bw2setup()
else:
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.create_core_migrations()
bw2.projects.copy_project(project_name, switch=True)
else: #if store_option == 'unique':
if not bw2_project_exists(DEFAULT_BIOSPHERE_PROJECT):
lcopt_biosphere_setup()
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.create_core_migrations()
bw2.projects.copy_project(project_name, switch=True)
print('doing the setup')
forwast_filepath = forwast_autodownload(FORWAST_URL)
bw2.BW2Package.import_file(forwast_filepath)
return True
def forwast_autodownload(FORWAST_URL):
"""
Autodownloader for forwast database package for brightway. Used by `lcopt_bw2_forwast_setup` to get the database data. Not designed to be used on its own
"""
dirpath = tempfile.mkdtemp()
r = requests.get(FORWAST_URL)
z = zipfile.ZipFile(io.BytesIO(r.content))
z.extractall(dirpath)
return os.path.join(dirpath, 'forwast.bw2package')
def lcopt_bw2_forwast_setup(use_autodownload=True, forwast_path=None, db_name=FORWAST_PROJECT_NAME, overwrite=False):
"""
Utility function to set up brightway2 to work correctly with lcopt using the FORWAST database instead of ecoinvent
    By default it'll try to download the forwast database as a .bw2package file from lca-net
If you've downloaded the forwast .bw2package file already you can set use_autodownload=False and forwast_path to point to the downloaded file
To overwrite an existing version, set overwrite=True
"""
if use_autodownload:
forwast_filepath = forwast_autodownload(FORWAST_URL)
elif forwast_path is not None:
forwast_filepath = forwast_path
else:
raise ValueError('Need a path if not using autodownload')
if storage.project_type == 'single':
db_name = storage.single_project_name
if bw2_project_exists(db_name):
bw2.projects.set_current(db_name)
else:
bw2.projects.set_current(db_name)
bw2.bw2setup()
else:
if db_name in bw2.projects:
if overwrite:
bw2.projects.delete_project(name=db_name, delete_dir=True)
else:
print('Looks like bw2 is already set up for the FORWAST database - if you want to overwrite the existing version run lcopt.utils.lcopt_bw2_forwast_setup in a python shell using overwrite = True')
return False
# no need to keep running bw2setup - we can just copy a blank project which has been set up before
if not bw2_project_exists(DEFAULT_BIOSPHERE_PROJECT):
lcopt_biosphere_setup()
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.create_core_migrations()
bw2.projects.copy_project(db_name, switch=True)
bw2.BW2Package.import_file(forwast_filepath)
return True
def create_search_index(project_name, ei_name):
keep = ['database',
'location',
'name',
'reference product',
'unit',
'production amount',
'code',
'activity']
bw2.projects.set_current(project_name)
db = bw2.Database(ei_name)
print("Creating {} search index".format(ei_name))
search_dict = {k: {xk: xv for xk, xv in v.items() if xk in keep} for k, v in db.load().items()}
return search_dict
def find_port():
for port in range(5000, 5100):
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
result = sock.connect_ex(('127.0.0.1', port))
if result != 0:
return port
else:
print('port {} is in use, checking {}'.format(port, port + 1))
def fix_mac_path_escapes(string):
return string.replace(r"\ ", " ")
|
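The core control flow of `lcopt_bw2_setup` (and the other setup functions in this record) is an overwrite guard: if the project already exists, either delete and recreate it when `overwrite=True`, or return `False` and leave it alone. The sketch below isolates that pattern with a plain dict standing in for `bw2.projects`; the project name and registry shape are illustrative, not brightway2's real API.

```python
# Sketch of the overwrite guard used by lcopt_bw2_setup: an existing
# project is either deleted and rebuilt (overwrite=True) or left in
# place, in which case setup reports False. A dict stands in for
# bw2.projects here.

def setup_project(projects, name, overwrite=False):
    if name in projects:
        if overwrite:
            del projects[name]          # bw2.projects.delete_project(...)
        else:
            return False                # already set up; don't clobber it
    projects[name] = {"databases": []}  # bw2setup + importer would run here
    return True


registry = {"Lcopt_Ecoinvent3_3_cutoff": {"databases": ["old"]}}
assert setup_project(registry, "Lcopt_Ecoinvent3_3_cutoff") is False
assert setup_project(registry, "Lcopt_Ecoinvent3_3_cutoff", overwrite=True) is True
```

Returning `False` rather than raising lets callers such as the lcopt CLI print the "already set up" hint and continue.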
pjamesjoyce/lcopt | lcopt/utils.py | lcopt_bw2_autosetup | python | def lcopt_bw2_autosetup(ei_username=None, ei_password=None, write_config=None, ecoinvent_version='3.3', ecoinvent_system_model = "cutoff", overwrite=False):
ei_name = "Ecoinvent{}_{}_{}".format(*ecoinvent_version.split('.'), ecoinvent_system_model)
config = check_for_config()
# If, for some reason, there's no config file, write the defaults
if config is None:
config = DEFAULT_CONFIG
with open(storage.config_file, "w") as cfg:
yaml.dump(config, cfg, default_flow_style=False)
store_option = storage.project_type
# Check if there's already a project set up that matches the current configuration
if store_option == 'single':
project_name = storage.single_project_name
if bw2_project_exists(project_name):
bw2.projects.set_current(project_name)
if ei_name in bw2.databases and overwrite == False:
#print ('{} is already set up'.format(ei_name))
return True
else: # default to 'unique'
project_name = DEFAULT_PROJECT_STEM + ei_name
if bw2_project_exists(project_name):
if overwrite:
bw2.projects.delete_project(name=project_name, delete_dir=True)
auto_ecoinvent = partial(eidl.get_ecoinvent,db_name=ei_name, auto_write=True, version=ecoinvent_version, system_model=ecoinvent_system_model)
# check for a config file (lcopt_config.yml)
if config is not None:
if "ecoinvent" in config:
if ei_username is None:
ei_username = config['ecoinvent'].get('username')
if ei_password is None:
ei_password = config['ecoinvent'].get('password')
write_config = False
if ei_username is None:
ei_username = input('ecoinvent username: ')
if ei_password is None:
ei_password = getpass.getpass('ecoinvent password: ')
if write_config is None:
write_config = input('store username and password on this computer? y/[n]') in ['y', 'Y', 'yes', 'YES', 'Yes']
if write_config:
config['ecoinvent'] = {
'username': ei_username,
'password': ei_password
}
with open(storage.config_file, "w") as cfg:
yaml.dump(config, cfg, default_flow_style=False)
# no need to keep running bw2setup - we can just copy a blank project which has been set up before
if store_option == 'single':
if bw2_project_exists(project_name):
bw2.projects.set_current(project_name)
else:
if not bw2_project_exists(DEFAULT_BIOSPHERE_PROJECT):
bw2.projects.set_current(project_name)
bw2.bw2setup()
else:
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.create_core_migrations()
bw2.projects.copy_project(project_name, switch=True)
else: #if store_option == 'unique':
if not bw2_project_exists(DEFAULT_BIOSPHERE_PROJECT):
lcopt_biosphere_setup()
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.create_core_migrations()
bw2.projects.copy_project(project_name, switch=True)
if ei_username is not None and ei_password is not None:
auto_ecoinvent(username=ei_username, password=ei_password)
else:
auto_ecoinvent()
write_search_index(project_name, ei_name, overwrite=overwrite)
return True | Utility function to automatically set up brightway2 to work correctly with lcopt.
It requires a valid username and password to login to the ecoinvent website.
These can be entered directly into the function using the keyword arguments `ei_username` and `ei_password` or entered interactively by using no arguments.
`ecoinvent_version` needs to be a string representation of a valid ecoinvent database, at time of writing these are "3.01", "3.1", "3.2", "3.3", "3.4"
`ecoinvent_system_model` needs to be one of "cutoff", "apos", "consequential"
To overwrite an existing version, set overwrite=True | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/utils.py#L123-L228 | [
"def bw2_project_exists(project_name):\n return project_name in bw2.projects\n",
"def check_for_config():\n\n config = None\n\n try:\n config = storage.config\n except:\n pass\n\n return config\n",
"def write_search_index(project_name, ei_name, overwrite=False):\n si_fp = fix_mac... | """
lcopt.utils
-----------
Module containing the utility function to set up brightway2 to work with lcopt
"""
import tempfile
import requests
import zipfile
import io
import os
import eidl
from functools import partial
import yaml
import pickle
import getpass
import socket
try:
import brightway2 as bw2
except: # pragma: no cover
raise ImportError('Please install the brightway2 package first')
from .data_store import storage
from .constants import (DEFAULT_PROJECT_STEM,
DEFAULT_BIOSPHERE_PROJECT,
DEFAULT_DB_NAME,
FORWAST_PROJECT_NAME,
FORWAST_URL,
ASSET_PATH,
DEFAULT_CONFIG,
DEFAULT_SINGLE_PROJECT)
def lcopt_bw2_setup(ecospold_path, overwrite=False, db_name=None): # pragma: no cover
"""
Utility function to set up brightway2 to work correctly with lcopt.
It requires the path to the ecospold files containing the Ecoinvent 3.3 cutoff database.
If you don't have these files, log into `ecoinvent.org <http://www.ecoinvent.org/login-databases.html>`_ and go to the Files tab
Download the file called ``ecoinvent 3.3_cutoff_ecoSpold02.7z``
Extract the file somewhere sensible on your machine, you might need to download `7-zip <http://www.7-zip.org/download.html>`_ to extract the files.
Make a note of the path of the folder that contains the .ecospold files, its probably ``<path/extracted/to>/datasets/``
Use this path (as a string) as the first parameter in this function
To overwrite an existing version, set overwrite=True
"""
default_ei_name = "Ecoinvent3_3_cutoff"
if db_name is None:
db_name = DEFAULT_PROJECT_STEM + default_ei_name
if db_name in bw2.projects:
if overwrite:
bw2.projects.delete_project(name=db_name, delete_dir=True)
else:
print('Looks like bw2 is already set up - if you want to overwrite the existing version run lcopt.utils.lcopt_bw2_setup in a python shell using overwrite = True')
return False
bw2.projects.set_current(db_name)
bw2.bw2setup()
ei = bw2.SingleOutputEcospold2Importer(fix_mac_path_escapes(ecospold_path), default_ei_name)
ei.apply_strategies()
ei.statistics()
ei.write_database()
return True
def bw2_project_exists(project_name):
return project_name in bw2.projects
def upgrade_old_default():
default_ei_name = "Ecoinvent3_3_cutoff"
bw2.projects.set_current(DEFAULT_PROJECT_STEM[:-1])
bw2.projects.copy_project(DEFAULT_PROJECT_STEM + default_ei_name, switch=True)
write_search_index(DEFAULT_PROJECT_STEM + default_ei_name, default_ei_name)
print('Copied old lcopt setup project')
return True
def check_for_config():
config = None
try:
config = storage.config
except:
pass
return config
def write_search_index(project_name, ei_name, overwrite=False):
si_fp = fix_mac_path_escapes(os.path.join(storage.search_index_dir, '{}.pickle'.format(ei_name))) #os.path.join(ASSET_PATH, '{}.pickle'.format(ei_name))
if not os.path.isfile(si_fp) or overwrite:
search_index = create_search_index(project_name, ei_name)
with open(si_fp, 'wb') as handle:
print("Writing {} search index to search folder".format(ei_name))
pickle.dump(search_index, handle)
#else:
# print("{} search index already exists in assets folder".format(ei_name))
def lcopt_biosphere_setup():
print("Running bw2setup for lcopt - this only needs to be done once")
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.bw2setup()
def lcopt_bw2_autosetup(ei_username=None, ei_password=None, write_config=None, ecoinvent_version='3.3', ecoinvent_system_model = "cutoff", overwrite=False):
"""
Utility function to automatically set up brightway2 to work correctly with lcopt.
It requires a valid username and password to login to the ecoinvent website.
These can be entered directly into the function using the keyword arguments `ei_username` and `ei_password` or entered interactively by using no arguments.
`ecoinvent_version` needs to be a string representation of a valid ecoinvent database, at time of writing these are "3.01", "3.1", "3.2", "3.3", "3.4"
`ecoinvent_system_model` needs to be one of "cutoff", "apos", "consequential"
To overwrite an existing version, set overwrite=True
"""
ei_name = "Ecoinvent{}_{}_{}".format(*ecoinvent_version.split('.'), ecoinvent_system_model)
config = check_for_config()
# If, for some reason, there's no config file, write the defaults
if config is None:
config = DEFAULT_CONFIG
with open(storage.config_file, "w") as cfg:
yaml.dump(config, cfg, default_flow_style=False)
store_option = storage.project_type
# Check if there's already a project set up that matches the current configuration
if store_option == 'single':
project_name = storage.single_project_name
if bw2_project_exists(project_name):
bw2.projects.set_current(project_name)
if ei_name in bw2.databases and overwrite == False:
#print ('{} is already set up'.format(ei_name))
return True
else: # default to 'unique'
project_name = DEFAULT_PROJECT_STEM + ei_name
if bw2_project_exists(project_name):
if overwrite:
bw2.projects.delete_project(name=project_name, delete_dir=True)
auto_ecoinvent = partial(eidl.get_ecoinvent,db_name=ei_name, auto_write=True, version=ecoinvent_version, system_model=ecoinvent_system_model)
# check for a config file (lcopt_config.yml)
if config is not None:
if "ecoinvent" in config:
if ei_username is None:
ei_username = config['ecoinvent'].get('username')
if ei_password is None:
ei_password = config['ecoinvent'].get('password')
write_config = False
if ei_username is None:
ei_username = input('ecoinvent username: ')
if ei_password is None:
ei_password = getpass.getpass('ecoinvent password: ')
if write_config is None:
write_config = input('store username and password on this computer? y/[n]') in ['y', 'Y', 'yes', 'YES', 'Yes']
if write_config:
config['ecoinvent'] = {
'username': ei_username,
'password': ei_password
}
with open(storage.config_file, "w") as cfg:
yaml.dump(config, cfg, default_flow_style=False)
# no need to keep running bw2setup - we can just copy a blank project which has been set up before
if store_option == 'single':
if bw2_project_exists(project_name):
bw2.projects.set_current(project_name)
else:
if not bw2_project_exists(DEFAULT_BIOSPHERE_PROJECT):
bw2.projects.set_current(project_name)
bw2.bw2setup()
else:
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.create_core_migrations()
bw2.projects.copy_project(project_name, switch=True)
else: #if store_option == 'unique':
if not bw2_project_exists(DEFAULT_BIOSPHERE_PROJECT):
lcopt_biosphere_setup()
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.create_core_migrations()
bw2.projects.copy_project(project_name, switch=True)
if ei_username is not None and ei_password is not None:
auto_ecoinvent(username=ei_username, password=ei_password)
else:
auto_ecoinvent()
write_search_index(project_name, ei_name, overwrite=overwrite)
return True
def forwast_autosetup(forwast_name = 'forwast'):
config = check_for_config()
# If, for some reason, there's no config file, write the defaults
if config is None:
config = DEFAULT_CONFIG
with open(storage.config_file, "w") as cfg:
yaml.dump(config, cfg, default_flow_style=False)
store_option = storage.project_type
# Check if there's already a project set up that matches the current configuration
if store_option == 'single':
project_name = storage.single_project_name
if bw2_project_exists(project_name):
bw2.projects.set_current(project_name)
if forwast_name in bw2.databases:
return True
else: # default to 'unique'
project_name = FORWAST_PROJECT_NAME
if bw2_project_exists(project_name):
return True
if store_option == 'single':
print('its a single setup')
if bw2_project_exists(project_name):
bw2.projects.set_current(project_name)
else:
if not bw2_project_exists(DEFAULT_BIOSPHERE_PROJECT):
bw2.projects.set_current(project_name)
bw2.bw2setup()
else:
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.create_core_migrations()
bw2.projects.copy_project(project_name, switch=True)
else: #if store_option == 'unique':
if not bw2_project_exists(DEFAULT_BIOSPHERE_PROJECT):
lcopt_biosphere_setup()
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.create_core_migrations()
bw2.projects.copy_project(project_name, switch=True)
print('doing the setup')
forwast_filepath = forwast_autodownload(FORWAST_URL)
bw2.BW2Package.import_file(forwast_filepath)
return True
def forwast_autodownload(FORWAST_URL):
"""
Autodownloader for forwast database package for brightway. Used by `lcopt_bw2_forwast_setup` to get the database data. Not designed to be used on its own
"""
dirpath = tempfile.mkdtemp()
r = requests.get(FORWAST_URL)
z = zipfile.ZipFile(io.BytesIO(r.content))
z.extractall(dirpath)
return os.path.join(dirpath, 'forwast.bw2package')
def lcopt_bw2_forwast_setup(use_autodownload=True, forwast_path=None, db_name=FORWAST_PROJECT_NAME, overwrite=False):
"""
Utility function to set up brightway2 to work correctly with lcopt using the FORWAST database instead of ecoinvent
By default it'll try and download the forwast database as a .bw2package file from lca-net
If you've downloaded the forwast .bw2package file already you can set use_autodownload=False and forwast_path to point to the downloaded file
To overwrite an existing version, set overwrite=True
"""
if use_autodownload:
forwast_filepath = forwast_autodownload(FORWAST_URL)
elif forwast_path is not None:
forwast_filepath = forwast_path
else:
raise ValueError('Need a path if not using autodownload')
if storage.project_type == 'single':
db_name = storage.single_project_name
if bw2_project_exists(db_name):
bw2.projects.set_current(db_name)
else:
bw2.projects.set_current(db_name)
bw2.bw2setup()
else:
if db_name in bw2.projects:
if overwrite:
bw2.projects.delete_project(name=db_name, delete_dir=True)
else:
print('Looks like bw2 is already set up for the FORWAST database - if you want to overwrite the existing version run lcopt.utils.lcopt_bw2_forwast_setup in a python shell using overwrite = True')
return False
# no need to keep running bw2setup - we can just copy a blank project which has been set up before
if not bw2_project_exists(DEFAULT_BIOSPHERE_PROJECT):
lcopt_biosphere_setup()
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.create_core_migrations()
bw2.projects.copy_project(db_name, switch=True)
bw2.BW2Package.import_file(forwast_filepath)
return True
def create_search_index(project_name, ei_name):
keep = ['database',
'location',
'name',
'reference product',
'unit',
'production amount',
'code',
'activity']
bw2.projects.set_current(project_name)
db = bw2.Database(ei_name)
print("Creating {} search index".format(ei_name))
search_dict = {k: {xk: xv for xk, xv in v.items() if xk in keep} for k, v in db.load().items()}
return search_dict
def find_port():
for port in range(5000, 5100):
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
result = sock.connect_ex(('127.0.0.1', port))
if result != 0:
return port
else:
print('port {} is in use, checking {}'.format(port, port + 1))
def fix_mac_path_escapes(string):
return string.replace(r"\ ", " ")
|
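`forwast_autodownload` follows a common fetch-unzip-return-path pattern: download bytes, wrap them in `io.BytesIO`, extract with `zipfile` into a temp directory, and hand back the path of the expected member. The sketch below exercises that flow offline by building the "downloaded" archive in memory instead of calling `requests.get(FORWAST_URL)`; the helper name `extract_package` is hypothetical.

```python
# Self-contained sketch of the download-and-extract pattern in
# forwast_autodownload. The real function feeds requests.get(url).content
# into this; here we fabricate the zip bytes so the flow runs offline.
import io
import os
import tempfile
import zipfile


def extract_package(zip_bytes, member="forwast.bw2package"):
    """Extract an in-memory zip to a temp dir and return the member's path."""
    dirpath = tempfile.mkdtemp()
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as z:
        z.extractall(dirpath)
    return os.path.join(dirpath, member)


# Build a fake "downloaded" archive in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("forwast.bw2package", b"fake package payload")

path = extract_package(buf.getvalue())
```

Note that `tempfile.mkdtemp()` leaves cleanup to the caller; the original function relies on the OS to reap the temp directory after `bw2.BW2Package.import_file` has read the file.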
pjamesjoyce/lcopt | lcopt/utils.py | forwast_autodownload | python | def forwast_autodownload(FORWAST_URL):
dirpath = tempfile.mkdtemp()
r = requests.get(FORWAST_URL)
z = zipfile.ZipFile(io.BytesIO(r.content))
z.extractall(dirpath)
return os.path.join(dirpath, 'forwast.bw2package') | Autodownloader for forwast database package for brightway. Used by `lcopt_bw2_forwast_setup` to get the database data. Not designed to be used on its own | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/utils.py#L288-L297 | null | """
lcopt.utils
-----------
Module containing the utility function to set up brightway2 to work with lcopt
"""
import tempfile
import requests
import zipfile
import io
import os
import eidl
from functools import partial
import yaml
import pickle
import getpass
import socket
try:
import brightway2 as bw2
except: # pragma: no cover
raise ImportError('Please install the brightway2 package first')
from .data_store import storage
from .constants import (DEFAULT_PROJECT_STEM,
DEFAULT_BIOSPHERE_PROJECT,
DEFAULT_DB_NAME,
FORWAST_PROJECT_NAME,
FORWAST_URL,
ASSET_PATH,
DEFAULT_CONFIG,
DEFAULT_SINGLE_PROJECT)
def lcopt_bw2_setup(ecospold_path, overwrite=False, db_name=None): # pragma: no cover
"""
Utility function to set up brightway2 to work correctly with lcopt.
It requires the path to the ecospold files containing the Ecoinvent 3.3 cutoff database.
If you don't have these files, log into `ecoinvent.org <http://www.ecoinvent.org/login-databases.html>`_ and go to the Files tab
Download the file called ``ecoinvent 3.3_cutoff_ecoSpold02.7z``
Extract the file somewhere sensible on your machine, you might need to download `7-zip <http://www.7-zip.org/download.html>`_ to extract the files.
Make a note of the path of the folder that contains the .ecospold files, its probably ``<path/extracted/to>/datasets/``
Use this path (as a string) as the first parameter in this function
To overwrite an existing version, set overwrite=True
"""
default_ei_name = "Ecoinvent3_3_cutoff"
if db_name is None:
db_name = DEFAULT_PROJECT_STEM + default_ei_name
if db_name in bw2.projects:
if overwrite:
bw2.projects.delete_project(name=db_name, delete_dir=True)
else:
print('Looks like bw2 is already set up - if you want to overwrite the existing version run lcopt.utils.lcopt_bw2_setup in a python shell using overwrite = True')
return False
bw2.projects.set_current(db_name)
bw2.bw2setup()
ei = bw2.SingleOutputEcospold2Importer(fix_mac_path_escapes(ecospold_path), default_ei_name)
ei.apply_strategies()
ei.statistics()
ei.write_database()
return True
def bw2_project_exists(project_name):
return project_name in bw2.projects
def upgrade_old_default():
default_ei_name = "Ecoinvent3_3_cutoff"
bw2.projects.set_current(DEFAULT_PROJECT_STEM[:-1])
bw2.projects.copy_project(DEFAULT_PROJECT_STEM + default_ei_name, switch=True)
write_search_index(DEFAULT_PROJECT_STEM + default_ei_name, default_ei_name)
print('Copied old lcopt setup project')
return True
def check_for_config():
config = None
try:
config = storage.config
except Exception:
pass
return config
def write_search_index(project_name, ei_name, overwrite=False):
si_fp = fix_mac_path_escapes(os.path.join(storage.search_index_dir, '{}.pickle'.format(ei_name))) #os.path.join(ASSET_PATH, '{}.pickle'.format(ei_name))
if not os.path.isfile(si_fp) or overwrite:
search_index = create_search_index(project_name, ei_name)
with open(si_fp, 'wb') as handle:
print("Writing {} search index to search folder".format(ei_name))
pickle.dump(search_index, handle)
#else:
# print("{} search index already exists in assets folder".format(ei_name))
def lcopt_biosphere_setup():
print("Running bw2setup for lcopt - this only needs to be done once")
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.bw2setup()
def lcopt_bw2_autosetup(ei_username=None, ei_password=None, write_config=None, ecoinvent_version='3.3', ecoinvent_system_model="cutoff", overwrite=False):
"""
Utility function to automatically set up brightway2 to work correctly with lcopt.
It requires a valid username and password to login to the ecoinvent website.
These can be entered directly into the function using the keyword arguments `ei_username` and `ei_password` or entered interactively by using no arguments.
`ecoinvent_version` needs to be a string representation of a valid ecoinvent database, at time of writing these are "3.01", "3.1", "3.2", "3.3", "3.4"
`ecoinvent_system_model` needs to be one of "cutoff", "apos", "consequential"
To overwrite an existing version, set overwrite=True
"""
ei_name = "Ecoinvent{}_{}_{}".format(*ecoinvent_version.split('.'), ecoinvent_system_model)
config = check_for_config()
# If, for some reason, there's no config file, write the defaults
if config is None:
config = DEFAULT_CONFIG
with open(storage.config_file, "w") as cfg:
yaml.dump(config, cfg, default_flow_style=False)
store_option = storage.project_type
# Check if there's already a project set up that matches the current configuration
if store_option == 'single':
project_name = storage.single_project_name
if bw2_project_exists(project_name):
bw2.projects.set_current(project_name)
if ei_name in bw2.databases and not overwrite:
#print ('{} is already set up'.format(ei_name))
return True
else: # default to 'unique'
project_name = DEFAULT_PROJECT_STEM + ei_name
if bw2_project_exists(project_name):
if overwrite:
bw2.projects.delete_project(name=project_name, delete_dir=True)
auto_ecoinvent = partial(eidl.get_ecoinvent, db_name=ei_name, auto_write=True, version=ecoinvent_version, system_model=ecoinvent_system_model)
# check for a config file (lcopt_config.yml)
if config is not None:
if "ecoinvent" in config:
if ei_username is None:
ei_username = config['ecoinvent'].get('username')
if ei_password is None:
ei_password = config['ecoinvent'].get('password')
write_config = False
if ei_username is None:
ei_username = input('ecoinvent username: ')
if ei_password is None:
ei_password = getpass.getpass('ecoinvent password: ')
if write_config is None:
write_config = input('store username and password on this computer? y/[n]') in ['y', 'Y', 'yes', 'YES', 'Yes']
if write_config:
config['ecoinvent'] = {
'username': ei_username,
'password': ei_password
}
with open(storage.config_file, "w") as cfg:
yaml.dump(config, cfg, default_flow_style=False)
# no need to keep running bw2setup - we can just copy a blank project which has been set up before
if store_option == 'single':
if bw2_project_exists(project_name):
bw2.projects.set_current(project_name)
else:
if not bw2_project_exists(DEFAULT_BIOSPHERE_PROJECT):
bw2.projects.set_current(project_name)
bw2.bw2setup()
else:
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.create_core_migrations()
bw2.projects.copy_project(project_name, switch=True)
else: #if store_option == 'unique':
if not bw2_project_exists(DEFAULT_BIOSPHERE_PROJECT):
lcopt_biosphere_setup()
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.create_core_migrations()
bw2.projects.copy_project(project_name, switch=True)
if ei_username is not None and ei_password is not None:
auto_ecoinvent(username=ei_username, password=ei_password)
else:
auto_ecoinvent()
write_search_index(project_name, ei_name, overwrite=overwrite)
return True
def forwast_autosetup(forwast_name = 'forwast'):
config = check_for_config()
# If, for some reason, there's no config file, write the defaults
if config is None:
config = DEFAULT_CONFIG
with open(storage.config_file, "w") as cfg:
yaml.dump(config, cfg, default_flow_style=False)
store_option = storage.project_type
# Check if there's already a project set up that matches the current configuration
if store_option == 'single':
project_name = storage.single_project_name
if bw2_project_exists(project_name):
bw2.projects.set_current(project_name)
if forwast_name in bw2.databases:
return True
else: # default to 'unique'
project_name = FORWAST_PROJECT_NAME
if bw2_project_exists(project_name):
return True
if store_option == 'single':
print("it's a single setup")
if bw2_project_exists(project_name):
bw2.projects.set_current(project_name)
else:
if not bw2_project_exists(DEFAULT_BIOSPHERE_PROJECT):
bw2.projects.set_current(project_name)
bw2.bw2setup()
else:
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.create_core_migrations()
bw2.projects.copy_project(project_name, switch=True)
else: #if store_option == 'unique':
if not bw2_project_exists(DEFAULT_BIOSPHERE_PROJECT):
lcopt_biosphere_setup()
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.create_core_migrations()
bw2.projects.copy_project(project_name, switch=True)
print('doing the setup')
forwast_filepath = forwast_autodownload(FORWAST_URL)
bw2.BW2Package.import_file(forwast_filepath)
return True
def lcopt_bw2_forwast_setup(use_autodownload=True, forwast_path=None, db_name=FORWAST_PROJECT_NAME, overwrite=False):
"""
Utility function to set up brightway2 to work correctly with lcopt using the FORWAST database instead of ecoinvent
By default it will try to download the forwast database as a .bw2package file from lca-net.
If you've already downloaded the forwast .bw2package file, set use_autodownload=False and forwast_path to point to the downloaded file.
To overwrite an existing version, set overwrite=True
"""
if use_autodownload:
forwast_filepath = forwast_autodownload(FORWAST_URL)
elif forwast_path is not None:
forwast_filepath = forwast_path
else:
raise ValueError('Need a path if not using autodownload')
if storage.project_type == 'single':
db_name = storage.single_project_name
if bw2_project_exists(db_name):
bw2.projects.set_current(db_name)
else:
bw2.projects.set_current(db_name)
bw2.bw2setup()
else:
if db_name in bw2.projects:
if overwrite:
bw2.projects.delete_project(name=db_name, delete_dir=True)
else:
print('Looks like bw2 is already set up for the FORWAST database - if you want to overwrite the existing version run lcopt.utils.lcopt_bw2_forwast_setup in a python shell using overwrite = True')
return False
# no need to keep running bw2setup - we can just copy a blank project which has been set up before
if not bw2_project_exists(DEFAULT_BIOSPHERE_PROJECT):
lcopt_biosphere_setup()
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.create_core_migrations()
bw2.projects.copy_project(db_name, switch=True)
bw2.BW2Package.import_file(forwast_filepath)
return True
def create_search_index(project_name, ei_name):
keep = ['database',
'location',
'name',
'reference product',
'unit',
'production amount',
'code',
'activity']
bw2.projects.set_current(project_name)
db = bw2.Database(ei_name)
print("Creating {} search index".format(ei_name))
search_dict = {k: {xk: xv for xk, xv in v.items() if xk in keep} for k, v in db.load().items()}
return search_dict
def find_port():
for port in range(5000, 5100):
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
result = sock.connect_ex(('127.0.0.1', port))
sock.close() # release the probe socket before deciding
if result != 0:
return port
else:
print('port {} is in use, checking {}'.format(port, port + 1))
def fix_mac_path_escapes(string):
return string.replace(r"\ ", " ")
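The temp-dir download-and-extract pattern used by `forwast_autodownload` above can be sketched in a self-contained way. The `extract_package` helper below is hypothetical (not part of lcopt), and an in-memory zip stands in for the HTTP response body so no network access is assumed:

```python
import io
import os
import tempfile
import zipfile

def extract_package(payload, member_name):
    # Hypothetical helper mirroring forwast_autodownload: unpack the zipped
    # payload into a fresh temporary directory and return the member's path.
    dirpath = tempfile.mkdtemp()
    with zipfile.ZipFile(io.BytesIO(payload)) as z:
        z.extractall(dirpath)
    return os.path.join(dirpath, member_name)

# Build a small zip in memory to stand in for the downloaded response body.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("forwast.bw2package", b"dummy contents")

path = extract_package(buf.getvalue(), "forwast.bw2package")
print(os.path.isfile(path))  # True
```

In the real function the payload comes from `requests.get(FORWAST_URL).content`; the extraction step is otherwise the same.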
|
pjamesjoyce/lcopt | lcopt/utils.py | lcopt_bw2_forwast_setup | python | def lcopt_bw2_forwast_setup(use_autodownload=True, forwast_path=None, db_name=FORWAST_PROJECT_NAME, overwrite=False):
if use_autodownload:
forwast_filepath = forwast_autodownload(FORWAST_URL)
elif forwast_path is not None:
forwast_filepath = forwast_path
else:
raise ValueError('Need a path if not using autodownload')
if storage.project_type == 'single':
db_name = storage.single_project_name
if bw2_project_exists(db_name):
bw2.projects.set_current(db_name)
else:
bw2.projects.set_current(db_name)
bw2.bw2setup()
else:
if db_name in bw2.projects:
if overwrite:
bw2.projects.delete_project(name=db_name, delete_dir=True)
else:
print('Looks like bw2 is already set up for the FORWAST database - if you want to overwrite the existing version run lcopt.utils.lcopt_bw2_forwast_setup in a python shell using overwrite = True')
return False
# no need to keep running bw2setup - we can just copy a blank project which has been set up before
if not bw2_project_exists(DEFAULT_BIOSPHERE_PROJECT):
lcopt_biosphere_setup()
bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)
bw2.create_core_migrations()
bw2.projects.copy_project(db_name, switch=True)
bw2.BW2Package.import_file(forwast_filepath)
return True | Utility function to set up brightway2 to work correctly with lcopt using the FORWAST database instead of ecoinvent
By default it will try to download the forwast database as a .bw2package file from lca-net.
If you've already downloaded the forwast .bw2package file, set use_autodownload=False and forwast_path to point to the downloaded file.
To overwrite an existing version, set overwrite=True | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/utils.py#L300-L350 | [
"def bw2_project_exists(project_name):\n return project_name in bw2.projects\n",
"def lcopt_biosphere_setup():\n print(\"Running bw2setup for lcopt - this only needs to be done once\")\n bw2.projects.set_current(DEFAULT_BIOSPHERE_PROJECT)\n bw2.bw2setup()\n",
"def forwast_autodownload(FORWAST_URL): ... | """
|
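The argument handling at the top of `lcopt_bw2_forwast_setup` follows a simple precedence: autodownload wins, then an explicit local path, otherwise it is an error. A minimal sketch of that logic, using the hypothetical helper `resolve_forwast_source` with an injected `downloader` callable standing in for `forwast_autodownload`:

```python
def resolve_forwast_source(use_autodownload=True, forwast_path=None, downloader=None):
    # Mirrors the precedence in lcopt_bw2_forwast_setup:
    # autodownload first, then an explicit local path, else an error.
    if use_autodownload:
        return downloader()
    if forwast_path is not None:
        return forwast_path
    raise ValueError('Need a path if not using autodownload')

print(resolve_forwast_source(downloader=lambda: "/tmp/forwast.bw2package"))
print(resolve_forwast_source(use_autodownload=False, forwast_path="local.bw2package"))
```

Note that because `use_autodownload` defaults to True, a caller who only supplies `forwast_path` must also pass `use_autodownload=False`, exactly as the docstring describes.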
pjamesjoyce/lcopt | lcopt/model.py | LcoptModel.save | python | def save(self):
if self.save_option == 'curdir':
model_path = os.path.join(
os.getcwd(),
'{}.lcopt'.format(self.name)
)
else: # default to appdir
model_path = os.path.join(
storage.model_dir,
'{}.lcopt'.format(self.name)
)
model_path = fix_mac_path_escapes(model_path)
with open(model_path, 'wb') as model_file:
pickle.dump(self, model_file) | save the instance as a .lcopt file | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/model.py#L253-L269 | [
"def fix_mac_path_escapes(string):\n return string.replace(r\"\\ \", \" \")\n"
] | class LcoptModel(object):
"""
This is the base model class.
To create a new model, enter a name e.g. ``model = LcoptModel('My_Model')``
To load an existing model use the ``load`` option e.g. ``model = LcoptModel(load = 'My_Model')``
"""
def __init__(self, name=None, load=None, useForwast=False, ecoinvent_version=None, ecoinvent_system_model=None, ei_username=None, ei_password=None, write_config=None, autosetup=True):
super(LcoptModel, self).__init__()
# name the instance (generate a random hex name per instance, not once at definition time)
self.name = name if name is not None else hex(random.getrandbits(128))[2:-1]
# set up the database, parameter dictionaries, the matrix and the names of the exchanges
self.database = {'items': OrderedDict(), 'name': '{}_Database'.format(self.name)}
self.external_databases = []
self.params = OrderedDict()
self.production_params = OrderedDict()
self.allocation_params = OrderedDict()
self.ext_params = []
self.matrix = None
self.names = None
self.parameter_sets = OrderedDict()
self.model_matrices = OrderedDict()
self.technosphere_matrices = OrderedDict()
self.leontif_matrices = OrderedDict()
self.parameter_map = {}
self.sandbox_positions = {}
# If ecoinvent isn't specified in the setup, look for a default in the config and fall back on default set in constants
if ecoinvent_version is None:
self.ecoinvent_version = str(storage.ecoinvent_version)
else:
self.ecoinvent_version = ecoinvent_version
if ecoinvent_system_model is None:
self.ecoinvent_system_model = storage.ecoinvent_system_model
else:
self.ecoinvent_system_model = ecoinvent_system_model
ei_name = "Ecoinvent{}_{}_{}".format(*self.ecoinvent_version.split("."), self.ecoinvent_system_model) #"Ecoinvent3_3_cutoff"
self.ecoinventName = ei_name # "Ecoinvent3_3_cutoff"
self.biosphereName = "biosphere3"
self.ecoinventFilename = ei_name # "ecoinvent3_3"
self.biosphereFilename = "biosphere3"
self.forwastName = "forwast"
self.forwastFilename = "forwast"
self.useForwast = useForwast
self.technosphere_databases = []
#if self.useForwast:
# self.technosphere_databases = [self.forwastName]
#else:
# self.technosphere_databases = [self.ecoinventName]
#self.biosphere_databases = [self.biosphereName]
self.biosphere_databases = []
# default settings for bw2 analysis
self.analysis_settings = {'amount': 1,
'methods': [('IPCC 2013', 'climate change', 'GWP 100a'), ('USEtox', 'human toxicity', 'total')],
#'top_processes': 10,
#'gt_cutoff': 0.01,
'pie_cutoff': 0.05
}
self.allow_allocation = False
# initialise with a blank result set
self.result_set = None
# set the save option, this defaults to the config value but should be overwritten on load for existing models
self.save_option = storage.save_option
if load is not None:
self.load(load)
# check if lcopt is set up, and if not, set it up
is_setup = self.lcopt_setup(ei_username=ei_username, ei_password=ei_password, write_config=write_config,
ecoinvent_version=self.ecoinvent_version, ecoinvent_system_model = self.ecoinvent_system_model, autosetup=autosetup)
if not is_setup:
warnings.warn('lcopt autosetup did not run')
asset_path = fix_mac_path_escapes(storage.search_index_dir) #os.path.join(os.path.dirname(os.path.realpath(__file__)), 'assets')
ecoinventPath = os.path.join(asset_path, self.ecoinventFilename)
biospherePath = os.path.join(asset_path, self.biosphereFilename)
forwastPath = os.path.join(asset_path, self.forwastFilename)
# Try and initialise the external databases if they're not there already
if self.useForwast:
if self.forwastName not in [x['name'] for x in self.external_databases]:
self.import_external_db(forwastPath, 'technosphere')
else:
if self.ecoinventName not in [x['name'] for x in self.external_databases]:
self.import_external_db(ecoinventPath, 'technosphere')
if self.biosphereName not in [x['name'] for x in self.external_databases]:
self.import_external_db(biospherePath, 'biosphere')
# create partial version of io functions
self.add_to_database = partial(add_to_specified_database, database=self.database)
self.get_exchange = partial(get_exchange_from_database, database=self.database)
self.exists_in_database = partial(exists_in_specific_database, database=self.database)
self.get_name = partial(get_exchange_name_from_database, database=self.database)
self.get_unit = partial(get_exchange_unit_from_database, database=self.database)
self.parameter_scan()
def lcopt_setup(self, ei_username, ei_password, write_config, ecoinvent_version, ecoinvent_system_model, autosetup):
if not autosetup:
return False
if storage.project_type == 'single':
if self.useForwast:
forwast_autosetup()
else:
self.base_project_name = storage.single_project_name
#if bw2_project_exists(self.base_project_name):
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=False)
elif not self.useForwast:
self.base_project_name = DEFAULT_PROJECT_STEM + self.ecoinventName
old_default = DEFAULT_PROJECT_STEM[:-1]
is_default = ecoinvent_version == "3.3" and ecoinvent_system_model == "cutoff"
if bw2_project_exists(self.base_project_name):
# make sure the search index file is there too
write_search_index(self.base_project_name, self.ecoinventName)
elif is_default and bw2_project_exists(old_default):
upgrade_old_default()
else:
print("Lcopt needs to be set up to integrate with brightway2 - this only needs to be done once per version/system model combo")
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=True)
else:
forwast_autosetup()
return True
def rename(self, newname):
"""change the name of the model (i.e. what the .lcopt file will be saved as)"""
self.name = newname
#def saveAs(self, filename):
# """save the instance as a pickle"""
# pickle.dump(self, open("{}.pickle".format(filename), "wb"))
def save(self):
"""save the instance as a .lcopt file"""
if self.save_option == 'curdir':
model_path = os.path.join(
os.getcwd(),
'{}.lcopt'.format(self.name)
)
else: # default to appdir
model_path = os.path.join(
storage.model_dir,
'{}.lcopt'.format(self.name)
)
model_path = fix_mac_path_escapes(model_path)
with open(model_path, 'wb') as model_file:
pickle.dump(self, model_file)
def load(self, filename):
"""load data from a saved .lcopt file"""
if filename[-6:] != ".lcopt":
filename += ".lcopt"
try:
savedInstance = pickle.load(open("{}".format(filename), "rb"))
except FileNotFoundError:
savedInstance = pickle.load(open(fix_mac_path_escapes(os.path.join(storage.model_dir, "{}".format(filename))), "rb"))
attributes = ['name',
'database',
'params',
'production_params',
'allocation_params',
'ext_params',
'matrix',
'names',
'parameter_sets',
'model_matrices',
'technosphere_matrices',
'leontif_matrices',
'external_databases',
'parameter_map',
'sandbox_positions',
'ecoinventName',
'biosphereName',
'forwastName',
'analysis_settings',
'technosphere_databases',
'biosphere_databases',
'result_set',
'evaluated_parameter_sets',
'useForwast',
'base_project_name',
'save_option',
'allow_allocation',
'ecoinvent_version',
'ecoinvent_system_model',
]
for attr in attributes:
if hasattr(savedInstance, attr):
setattr(self, attr, getattr(savedInstance, attr))
else:
pass
#print ("can't set {}".format(attr))
# use legacy save option if this is missing from the model
if not hasattr(savedInstance, 'save_option'):
setattr(self, 'save_option', LEGACY_SAVE_OPTION)
# figure out ecoinvent version and system model if these are missing from the model
if not hasattr(savedInstance, 'ecoinvent_version') or not hasattr(savedInstance, 'ecoinvent_system_model'):
parts = savedInstance.ecoinventName.split("_")
main_version = parts[0][-1]
sub_version = parts[1]
system_model = parts[2]
#print(parts)
setattr(self, 'ecoinvent_version', '{}.{}'.format(main_version, sub_version))
setattr(self, 'ecoinvent_system_model', system_model)
def create_product(self, name, location='GLO', unit='kg', **kwargs):
"""
Create a new product in the model database
"""
new_product = item_factory(name=name, location=location, unit=unit, type='product', **kwargs)
if not self.exists_in_database(new_product['code']):
self.add_to_database(new_product)
#print ('{} added to database'.format(name))
return self.get_exchange(name)
else:
#print('{} already exists in this database'.format(name))
return False
def create_process(self, name, exchanges, location='GLO', unit='kg'):
"""
Create a new process, including all new exchanges (in brightway2's exchange format) in the model database.
Exchanges must have at least a name, type and unit field
"""
found_exchanges = []
for e in exchanges:
exc_name = e.pop('name', None)
exc_type = e.pop('type', None)
this_exchange = self.get_exchange(exc_name)
if this_exchange is False:
my_unit = e.pop('unit', unit)
this_exchange = self.create_product(exc_name, location=location, unit=my_unit, **e)
found_exchanges.append(exchange_factory(this_exchange, exc_type, 1, 1, '{} exchange of {}'.format(exc_type, exc_name)))
new_process = item_factory(name=name, location=location, unit=unit, type='process', exchanges=found_exchanges)
self.add_to_database(new_process)
self.parameter_scan()
return True
def check_param_function_use(self, param_id):
current_functions = {k: x['function'] for k, x in self.params.items() if x['function'] is not None}
problem_list = []
for k, f in current_functions.items():
if param_id in f:
problem_list.append((k, f))
return problem_list
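check_param_function_use relies on a plain substring test (`param_id in f`). A small self-contained sketch of what it returns, using hypothetical parameter ids rather than a real model:

```python
# Stand-in for self.params: only the 'function' values matter here.
params = {
    'p_0_2': {'function': 'p_1_2 * 2'},
    'p_1_2': {'function': None},
    'p_3_2': {'function': 'e_price + 1'},
}

# Mirror of check_param_function_use('p_1_2'): collect every function
# string that mentions the parameter id via a substring test.
current_functions = {k: x['function'] for k, x in params.items()
                     if x['function'] is not None}
problem_list = [(k, f) for k, f in current_functions.items() if 'p_1_2' in f]
print(problem_list)  # [('p_0_2', 'p_1_2 * 2')]
```

Note that a substring test would also match ids that merely contain `p_1_2` (e.g. a hypothetical `p_1_20`), so it assumes parameter ids are not prefixes of one another.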
def remove_input_link(self, process_code, input_code):
"""
Remove an input (technosphere or biosphere exchange) from a process, resolving all parameter issues
"""
# 1. find correct process
# 2. find correct exchange
# 3. remove that exchange
# 4. check for parameter conflicts?
# 4. run parameter scan to rebuild matrices?
#print(process_code, input_code)
process = self.database['items'][process_code]
exchanges = process['exchanges']
initial_count = len(exchanges)
new_exchanges = [e for e in exchanges if e['input'] != input_code]
product_code = [e['input'] for e in exchanges if e['type'] == 'production'][0]
#print(product_code)
param_id = [k for k, v in self.params.items() if (v['from'] == input_code[1] and v['to'] == product_code[1])][0]
#print (param_id)
problem_functions = self.check_param_function_use(param_id)
if len(problem_functions) != 0:
#print('the following functions have been removed:')
for p in problem_functions:
self.params[p[0]]['function'] = None
#print(p)
process['exchanges'] = new_exchanges
del self.params[param_id]
self.parameter_scan()
return initial_count - len(new_exchanges)
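The core of remove_input_link is a filter on each exchange's 'input' key; a minimal sketch of that step and the return value, using made-up exchange data in brightway2's exchange shape:

```python
# Hypothetical exchange list for one process.
exchanges = [
    {'input': ('db', 'x'), 'type': 'technosphere', 'amount': 2.0},
    {'input': ('db', 'y'), 'type': 'production', 'amount': 1.0},
]
input_code = ('db', 'x')

# Drop every exchange pointing at input_code, as remove_input_link does.
new_exchanges = [e for e in exchanges if e['input'] != input_code]
removed = len(exchanges) - len(new_exchanges)
print(removed)  # 1
```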
def unlink_intermediate(self, sourceId, targetId):
"""
Remove a link between two processes
"""
source = self.database['items'][(self.database.get('name'), sourceId)]
target = self.database['items'][(self.database.get('name'), targetId)]
production_exchange = [x['input'] for x in source['exchanges'] if x['type'] == 'production'][0]
new_exchanges = [x for x in target['exchanges'] if x['input'] != production_exchange]
target['exchanges'] = new_exchanges
self.parameter_scan()
return True
def parameter_scan(self):
"""
Scan the database of the model instance to generate and expose parameters.
This is called by other functions when items are added/removed from the model, but can be run by itself if you like
"""
#self.parameter_map = {}
#self.params = OrderedDict()
cr_list = []
items = self.database['items']
#print(items)
for key in items.keys():
i = items[key]
#print(i['name'], i['type'])
if i['type'] == 'product':
cr_list.append(i['code'])
no_products = len(cr_list)
self.names = [self.get_name(x) for x in cr_list]
self.matrix = np.zeros((no_products, no_products))
for key in items.keys():
i = items[key]
if i['type'] == 'process':
inputs = []
#print(i['name'])
#print([(e['comment'], e['type']) for e in i['exchanges']])
for e in i['exchanges']:
if e['type'] == 'production':
col_code = cr_list.index(e['input'][1])
if not 'p_{}_production'.format(col_code) in self.production_params:
self.production_params['p_{}_production'.format(col_code)] = {
'function': None,
'description': 'Production parameter for {}'.format(self.get_name(e['input'][1])),
'unit': self.get_unit(e['input'][1]),
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'production',
}
if not 'p_{}_allocation'.format(col_code) in self.allocation_params:
self.allocation_params['p_{}_allocation'.format(col_code)] = {
'function': None,
'description': 'Allocation parameter for {}'.format(self.get_name(e['input'][1])),
'unit': "% (as decimal)",
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'allocation',
}
elif e['type'] == 'technosphere':
#print(cr_list)
row_code = cr_list.index(e['input'][1])
inputs.append((row_code, e['amount']))
for ip in inputs:
self.matrix[(ip[0], col_code)] = ip[1]
param_check_list = []
for c, column in enumerate(self.matrix.T):
for r, i in enumerate(column):
if i > 0:
p_from = cr_list[r]
p_to = cr_list[c]
coords = (r, c)
from_item_type = self.database['items'][(self.database['name'], p_from)]['lcopt_type']
#print('{}\t| {} --> {}'.format(coords, self.get_name(p_from), self.get_name(p_to)))
param_check_list.append('p_{}_{}'.format(coords[0], coords[1]))
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.params:
self.params['p_{}_{}'.format(coords[0], coords[1])] = {
'function': None,
'normalisation_parameter': '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1]),
'description': 'Input of {} to create {}'.format(self.get_name(p_from), self.get_name(p_to)),
'coords': coords,
'unit': self.get_unit(p_from),
'from': p_from,
'from_name': self.get_name(p_from),
'to': p_to,
'to_name': self.get_name(p_to),
'type': from_item_type,
}
#elif 'normalisation_parameter' not in self.params['p_{}_{}'.format(coords[0], coords[1])].keys():
#print("Adding normalisation_parameter to {}".format('p_{}_{}'.format(coords[0], coords[1])))
#self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
#print('p_{}_{} already exists'.format(coords[0],coords[1]))
else:
pass # print("SOMETHING WRONG HERE\n{}\n".format(self.params['p_{}_{}'.format(coords[0], coords[1])]))
# make sure the parameter is being normalised and allocated properly
self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.parameter_map:
self.parameter_map[(p_from, p_to)] = 'p_{}_{}'.format(coords[0], coords[1])
kill_list = []
for k in self.params.keys():
if k not in param_check_list:
#print("{} may be obsolete".format(k))
kill_list.append(k)
for p in kill_list:
#print("deleting parameter {}".format(p))
del self.params[p]
return True
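The `p_<row>_<col>` naming used throughout parameter_scan comes straight from the matrix coordinates. A stdlib-only sketch (plain lists instead of numpy, and a hypothetical 3-product matrix) of how the ids and normalisation parameters are derived:

```python
# 3x3 technosphere matrix: matrix[r][c] = amount of product r consumed
# to make one unit of product c (here, products 0 and 1 feed product 2).
matrix = [
    [0.0, 0.0, 1.5],
    [0.0, 0.0, 0.25],
    [0.0, 0.0, 0.0],
]

params = {}
for c, column in enumerate(zip(*matrix)):      # walk columns (outputs)
    for r, amount in enumerate(column):        # walk rows (inputs)
        if amount > 0:
            params['p_{}_{}'.format(r, c)] = {
                'coords': (r, c),
                'normalisation_parameter':
                    '(p_{0}_production / p_{0}_allocation)'.format(c),
            }

print(sorted(params))  # ['p_0_2', 'p_1_2']
```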
def generate_parameter_set_excel_file(self):
"""
        Generate an Excel file containing the parameter sets in a format you can import into SimaPro Developer.
The file will be called "ParameterSet_<ModelName>_input_file.xlsx"
"""
parameter_sets = self.parameter_sets
p_set = []
filename = "ParameterSet_{}_input_file.xlsx".format(self.name)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
p_set_name = os.path.join(base_dir, filename)
p = self.params
for k in p.keys():
if p[k]['function'] is None:
base_dict = {'id': k, 'name': p[k]['description'], 'unit': p[k]['unit']}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][k]
p_set.append(base_dict)
else:
pass
#print("{} is determined by a function".format(p[k]['description']))
for e in self.ext_params:
base_dict = {'id': '{}'.format(e['name']), 'type': 'external', 'name': e['description'], 'unit': ''}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][e['name']]
p_set.append(base_dict)
df = pd.DataFrame(p_set)
with pd.ExcelWriter(p_set_name, engine='xlsxwriter') as writer:
ps_columns = [k for k in parameter_sets.keys()]
#print (ps_columns)
my_columns = ['name', 'unit', 'id']
my_columns.extend(ps_columns)
#print (my_columns)
#print(df)
df.to_excel(writer, sheet_name=self.name, columns=my_columns, index=False, merge_cells=False)
return p_set_name
def add_parameter(self, param_name, description=None, default=0, unit=None):
"""
Add a global parameter to the database that can be accessed by functions
"""
if description is None:
description = "Parameter called {}".format(param_name)
if unit is None:
unit = "-"
name_check = lambda x: x['name'] == param_name
name_check_list = list(filter(name_check, self.ext_params))
if len(name_check_list) == 0:
self.ext_params.append({'name': param_name, 'description': description, 'default': default, 'unit': unit})
else:
print('{} already exists - choose a different name'.format(param_name))
def list_parameters_as_df(self):
"""
        Only really useful when running from a Jupyter notebook.
Lists the parameters in the model in a pandas dataframe
Columns: id, matrix coordinates, description, function
"""
to_df = []
for i, e in enumerate(self.ext_params):
row = {}
row['id'] = e['name']
row['coords'] = "n/a"
row['description'] = e['description']
row['function'] = "n/a"
to_df.append(row)
for pk in self.params:
p = self.params[pk]
row = {}
row['id'] = pk
row['coords'] = p['coords']
row['description'] = p['description']
row['function'] = p['function']
to_df.append(row)
df = pd.DataFrame(to_df)
return df
def import_external_db(self, db_file, db_type=None):
"""
Import an external database for use in lcopt
db_type must be one of ``technosphere`` or ``biosphere``
The best way to 'obtain' an external database is to 'export' it from brightway as a pickle file
e.g.::
import brightway2 as bw
bw.projects.set_current('MyModel')
db = bw.Database('MyDatabase')
db_as_dict = db.load()
import pickle
with open('MyExport.pickle', 'wb') as f:
pickle.dump(db_as_dict, f)
NOTE: The Ecoinvent cutoff 3.3 database and the full biosphere database are included in the lcopt model as standard - no need to import those
This can be useful if you have your own methods which require new biosphere flows that you want to analyse using lcopt
"""
db = pickle.load(open("{}.pickle".format(db_file), "rb"))
name = list(db.keys())[0][0]
new_db = {'items': db, 'name': name}
self.external_databases.append(new_db)
if db_type is None: # Assume its a technosphere database
db_type = 'technosphere'
if db_type == 'technosphere':
self.technosphere_databases.append(name)
elif db_type == 'biosphere':
self.biosphere_databases.append(name)
        else:
            raise Exception("Database type must be 'technosphere' or 'biosphere'")
def search_databases(self, search_term, location=None, markets_only=False, databases_to_search=None, allow_internal=False):
"""
Search external databases linked to your lcopt model.
        To restrict the search to particular databases (e.g. technosphere or biosphere only) pass a list of database names as the ``databases_to_search`` argument
"""
dict_list = []
if allow_internal:
internal_dict = {}
for k, v in self.database['items'].items():
if v.get('lcopt_type') == 'intermediate':
internal_dict[k] = v
dict_list.append(internal_dict)
if databases_to_search is None:
#Search all of the databases available
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases])
dict_list += [x['items'] for x in self.external_databases]
else:
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
dict_list += [x['items'] for x in self.external_databases if x['name'] in databases_to_search]
data = Dictionaries(*dict_list)
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
query = Query()
if markets_only:
market_filter = Filter("name", "has", "market for")
query.add(market_filter)
if location is not None:
location_filter = Filter("location", "is", location)
query.add(location_filter)
query.add(Filter("name", "ihas", search_term))
result = query(data)
return result
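search_databases delegates matching to bw2data's Query/Filter machinery. This stand-in (not the real bw2data API, just an illustration) shows the case-insensitive "ihas" and exact "is" behaviour being relied on:

```python
# Hypothetical items keyed by (database, code), as in the external databases.
items = {
    ('db', 'a'): {'name': 'market for electricity', 'location': 'GLO'},
    ('db', 'b'): {'name': 'steel production', 'location': 'DE'},
}

def ihas(field, term):
    # case-insensitive substring match, like Filter(field, "ihas", term)
    return lambda item: term.lower() in item.get(field, '').lower()

def isexact(field, value):
    # exact match, like Filter(field, "is", value)
    return lambda item: item.get(field) == value

def search(data, *filters):
    # keep only the items that satisfy every filter
    return {k: v for k, v in data.items() if all(f(v) for f in filters)}

hits = search(items, ihas('name', 'Electricity'), isexact('location', 'GLO'))
print(list(hits))  # [('db', 'a')]
```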
def database_to_SimaPro_csv(self):
"""
Export the lcopt model as a SimaPro csv file.
The file will be called "<ModelName>_database_export.csv"
"""
self.parameter_scan()
csv_args = {}
csv_args['processes'] = []
db = self.database['items']
product_filter = lambda x: db[x]['type'] == 'product'
process_filter = lambda x: db[x]['type'] == 'process'
processes = list(filter(process_filter, db))
products = list(filter(product_filter, db))
created_exchanges = []
project_input_params = []
project_calc_params = []
for k in processes:
item = db[k]
current = {}
current['name'] = item['name']
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = item['unit']
current['exchanges'] = []
process_params = []
production_filter = lambda x: x['type'] == 'production'
output_code = list(filter(production_filter, item['exchanges']))[0]['input'][1]
for e in item['exchanges']:
if e['type'] == 'technosphere':
this_exchange = {}
this_code = e['input'][1]
formatted_name = self.get_name(this_code)
this_exchange['formatted_name'] = formatted_name
param_key = (this_code, output_code)
#print(param_key)
#param_check = (formatted_name, item['name'])
this_param = self.parameter_map[param_key]
process_params.append(this_param)
this_exchange['amount'] = this_param
this_exchange['unit'] = self.get_unit(this_code)
current['exchanges'].append(this_exchange)
elif e['type'] == 'production':
this_code = e['input'][1]
name = self.get_name(this_code)
current['output_name'] = name
created_exchanges.append(name)
# process parameters
for p in process_params:
if self.params[p]['function'] is None:
project_input_params.append({'name': p, 'comment': self.params[p]['description']})
else:
project_calc_params.append({'name': p, 'comment': self.params[p]['description'], 'formula': self.params[p]['function']})
csv_args['processes'].append(current)
for k in products:
this_item = db[k]
this_name = this_item['name']
if this_item['name'] in created_exchanges:
#print ('{} already created'.format(this_name))
pass
else:
#print ('Need to create {}'.format(this_name))
current = {}
current['name'] = this_name
current['output_name'] = this_name
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = this_item['unit']
#current['exchanges'] = []
if 'ext_link' in this_item.keys():
ext_link = this_item['ext_link']
if ext_link[0] != self.database['name']:
db_filter = lambda x: x['name'] == ext_link[0]
extdb = list(filter(db_filter, self.external_databases))[0]['items']
ext_item = extdb[ext_link]
if ext_link[0] != self.biosphereName:
ref_prod = ext_item['reference product']
name = ext_item['name'].replace(" " + ref_prod, "")
location = ext_item['location']
system_model = "Alloc Def"
process_type = "U"
unit = unnormalise_unit(ext_item['unit'])
simaPro_name = "{} {{{}}}| {} | {}, {}".format(ref_prod.capitalize(), location, name, system_model, process_type)
#print ('{} has an external link to {}'.format(this_name, simaPro_name))
current['exchanges'] = [{'formatted_name': simaPro_name, 'unit': unit, 'amount': 1}]
else:
#print('{} has a biosphere exchange - need to sort this out'.format(this_name))
#print(ext_item)
unit = unnormalise_unit(ext_item['unit'])
formatted_name = ext_item['name']
if 'air' in ext_item['categories']:
current['air_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to air'.format(formatted_name)}]
elif 'water' in ext_item['categories']:
current['water_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to water'.format(formatted_name)}]
elif 'soil' in ext_item['categories']:
current['soil_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to soil'.format(formatted_name)}]
else:
                                print("{} has a biosphere exchange that isn't to air, water or soil".format(this_name))
print(ext_item)
else:
warnings.warn('{} has NO internal or external link - it is burden free'.format(this_name))
csv_args['processes'].append(current)
created_exchanges.append(this_name)
#print(csv_args)
#print(created_exchanges)
csv_args['project'] = {}
#NOTE - currently external parameters can only be constants
csv_args['project']['calculated_parameters'] = project_calc_params
#add the external parameters to the input parameter list
for p in self.ext_params:
project_input_params.append({'name': p['name'], 'comment': p['description'], 'default': p['default']})
csv_args['project']['input_parameters'] = project_input_params
#print (csv_args)
env = Environment(
loader=PackageLoader('lcopt', 'templates'),
)
filename = "{}_database_export.csv".format(self.name.replace(" ", "_"))
csv_template = env.get_template('export.csv')
output = csv_template.render(**csv_args)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
efn = os.path.join(base_dir, filename)
with open(efn, "w") as f:
f.write(output)
return efn
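The SimaPro process names assembled above follow the pattern `<Reference product> {<location>}| <activity name> | <system model>, <type>`. A quick check of the format string with made-up ecoinvent-style values:

```python
# Values mimic an ecoinvent activity after the reference product has been
# stripped from its name (see the .replace() call in database_to_SimaPro_csv).
ref_prod = 'electricity, low voltage'
location = 'GLO'
name = 'market for electricity, low voltage'
system_model = 'Alloc Def'
process_type = 'U'

simaPro_name = "{} {{{}}}| {} | {}, {}".format(
    ref_prod.capitalize(), location, name, system_model, process_type)
print(simaPro_name)
# Electricity, low voltage {GLO}| market for electricity, low voltage | Alloc Def, U
```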
# << Flask >> #
def launch_interact(self): # pragma: no cover
"""
This is probably the most important method in the model - you use it to launch the GUI
"""
my_flask = FlaskSandbox(self)
my_flask.run()
# << Brightway2 >> #
def export_to_bw2(self):
"""
Export the lcopt model in the native brightway 2 format
returns name, database
to use it to export, then import to brightway::
name, db = model.export_to_bw2()
import brightway2 as bw
bw.projects.set_current('MyProject')
new_db = bw.Database(name)
new_db.write(db)
new_db.process()
"""
my_exporter = Bw2Exporter(self)
name, bw2db = my_exporter.export_to_bw2()
return name, bw2db
def analyse(self, demand_item, demand_item_code):
""" Run the analyis of the model
Doesn't return anything, but creates a new item ``LcoptModel.result_set`` containing the results
"""
my_analysis = Bw2Analysis(self)
self.result_set = my_analysis.run_analyses(demand_item, demand_item_code, **self.analysis_settings)
return True
# << Disclosures >> #
def export_disclosure(self, parameter_set=None, folder_path=None):
return export_disclosure(self, parameter_set, folder_path)
|
pjamesjoyce/lcopt | lcopt/model.py | LcoptModel.load | python | def load(self, filename):
if filename[-6:] != ".lcopt":
filename += ".lcopt"
try:
savedInstance = pickle.load(open("{}".format(filename), "rb"))
except FileNotFoundError:
savedInstance = pickle.load(open(fix_mac_path_escapes(os.path.join(storage.model_dir, "{}".format(filename))), "rb"))
attributes = ['name',
'database',
'params',
'production_params',
'allocation_params',
'ext_params',
'matrix',
'names',
'parameter_sets',
'model_matrices',
'technosphere_matrices',
'leontif_matrices',
'external_databases',
'parameter_map',
'sandbox_positions',
'ecoinventName',
'biosphereName',
'forwastName',
'analysis_settings',
'technosphere_databases',
'biosphere_databases',
'result_set',
'evaluated_parameter_sets',
'useForwast',
'base_project_name',
'save_option',
'allow_allocation',
'ecoinvent_version',
'ecoinvent_system_model',
]
for attr in attributes:
if hasattr(savedInstance, attr):
setattr(self, attr, getattr(savedInstance, attr))
else:
pass
#print ("can't set {}".format(attr))
# use legacy save option if this is missing from the model
if not hasattr(savedInstance, 'save_option'):
setattr(self, 'save_option', LEGACY_SAVE_OPTION)
# figure out ecoinvent version and system model if these are missing from the model
if not hasattr(savedInstance, 'ecoinvent_version') or not hasattr(savedInstance, 'ecoinvent_system_model'):
parts = savedInstance.ecoinventName.split("_")
main_version = parts[0][-1]
sub_version = parts[1]
system_model = parts[2]
#print(parts)
setattr(self, 'ecoinvent_version', '{}.{}'.format(main_version, sub_version))
setattr(self, 'ecoinvent_system_model', system_model) | load data from a saved .lcopt file | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/model.py#L271-L334 | [
"def fix_mac_path_escapes(string):\n return string.replace(r\"\\ \", \" \")\n"
] | class LcoptModel(object):
"""
This is the base model class.
To create a new model, enter a name e.g. ``model = LcoptModel('My_Model')``
To load an existing model use the ``load`` option e.g. ``model = LcoptModel(load = 'My_Model')``
"""
def __init__(self, name=hex(random.getrandbits(128))[2:-1], load=None, useForwast=False, ecoinvent_version=None, ecoinvent_system_model=None, ei_username = None, ei_password = None, write_config=None, autosetup=True):
super(LcoptModel, self).__init__()
# name the instance
self.name = name
# set up the database, parameter dictionaries, the matrix and the names of the exchanges
self.database = {'items': OrderedDict(), 'name': '{}_Database'.format(self.name)}
self.external_databases = []
self.params = OrderedDict()
self.production_params = OrderedDict()
self.allocation_params = OrderedDict()
self.ext_params = []
self.matrix = None
self.names = None
self.parameter_sets = OrderedDict()
self.model_matrices = OrderedDict()
self.technosphere_matrices = OrderedDict()
self.leontif_matrices = OrderedDict()
self.parameter_map = {}
self.sandbox_positions = {}
# If ecoinvent isn't specified in the setup, look for a default in the config and fall back on default set in constants
if ecoinvent_version is None:
self.ecoinvent_version = str(storage.ecoinvent_version)
else:
self.ecoinvent_version = ecoinvent_version
if ecoinvent_system_model is None:
self.ecoinvent_system_model = storage.ecoinvent_system_model
else:
self.ecoinvent_system_model = ecoinvent_system_model
ei_name = "Ecoinvent{}_{}_{}".format(*self.ecoinvent_version.split("."), self.ecoinvent_system_model) #"Ecoinvent3_3_cutoff"
self.ecoinventName = ei_name # "Ecoinvent3_3_cutoff"
self.biosphereName = "biosphere3"
self.ecoinventFilename = ei_name # "ecoinvent3_3"
self.biosphereFilename = "biosphere3"
self.forwastName = "forwast"
self.forwastFilename = "forwast"
self.useForwast = useForwast
self.technosphere_databases = []
#if self.useForwast:
# self.technosphere_databases = [self.forwastName]
#else:
# self.technosphere_databases = [self.ecoinventName]
#self.biosphere_databases = [self.biosphereName]
self.biosphere_databases = []
# default settings for bw2 analysis
self.analysis_settings = {'amount': 1,
'methods': [('IPCC 2013', 'climate change', 'GWP 100a'), ('USEtox', 'human toxicity', 'total')],
#'top_processes': 10,
#'gt_cutoff': 0.01,
'pie_cutoff': 0.05
}
self.allow_allocation = False
# initialise with a blank result set
self.result_set = None
# set the save option, this defaults to the config value but should be overwritten on load for existing models
self.save_option = storage.save_option
if load is not None:
self.load(load)
# check if lcopt is set up, and if not, set it up
is_setup = self.lcopt_setup(ei_username=ei_username, ei_password=ei_password, write_config=write_config,
ecoinvent_version=self.ecoinvent_version, ecoinvent_system_model = self.ecoinvent_system_model, autosetup=autosetup)
if not is_setup:
warnings.warn('lcopt autosetup did not run')
asset_path = fix_mac_path_escapes(storage.search_index_dir) #os.path.join(os.path.dirname(os.path.realpath(__file__)), 'assets')
ecoinventPath = os.path.join(asset_path, self.ecoinventFilename)
biospherePath = os.path.join(asset_path, self.biosphereFilename)
forwastPath = os.path.join(asset_path, self.forwastFilename)
# Try and initialise the external databases if they're not there already
if self.useForwast:
if self.forwastName not in [x['name'] for x in self.external_databases]:
self.import_external_db(forwastPath, 'technosphere')
else:
if self.ecoinventName not in [x['name'] for x in self.external_databases]:
self.import_external_db(ecoinventPath, 'technosphere')
if self.biosphereName not in [x['name'] for x in self.external_databases]:
self.import_external_db(biospherePath, 'biosphere')
# create partial version of io functions
self.add_to_database = partial(add_to_specified_database, database=self.database)
self.get_exchange = partial(get_exchange_from_database, database=self.database)
self.exists_in_database = partial(exists_in_specific_database, database=self.database)
self.get_name = partial(get_exchange_name_from_database, database=self.database)
self.get_unit = partial(get_exchange_unit_from_database, database=self.database)
self.parameter_scan()
def lcopt_setup(self, ei_username, ei_password, write_config, ecoinvent_version, ecoinvent_system_model, autosetup):
if not autosetup:
return False
if storage.project_type == 'single':
if self.useForwast:
forwast_autosetup()
else:
self.base_project_name = storage.single_project_name
#if bw2_project_exists(self.base_project_name):
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=False)
elif not self.useForwast:
self.base_project_name = DEFAULT_PROJECT_STEM + self.ecoinventName
old_default = DEFAULT_PROJECT_STEM[:-1]
is_default = ecoinvent_version == "3.3" and ecoinvent_system_model == "cutoff"
if bw2_project_exists(self.base_project_name):
# make sure the search index file is there too
write_search_index(self.base_project_name, self.ecoinventName)
elif is_default and bw2_project_exists(old_default):
upgrade_old_default()
else:
print("Lcopt needs to be set up to integrate with brightway2 - this only needs to be done once per version/system model combo")
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=True)
else:
forwast_autosetup()
return True
def rename(self, newname):
"""change the name of the model (i.e. what the .lcopt file will be saved as)"""
self.name = newname
#def saveAs(self, filename):
# """save the instance as a pickle"""
# pickle.dump(self, open("{}.pickle".format(filename), "wb"))
def save(self):
"""save the instance as a .lcopt file"""
if self.save_option == 'curdir':
model_path = os.path.join(
os.getcwd(),
'{}.lcopt'.format(self.name)
)
else: # default to appdir
model_path = os.path.join(
storage.model_dir,
'{}.lcopt'.format(self.name)
)
model_path = fix_mac_path_escapes(model_path)
with open(model_path, 'wb') as model_file:
pickle.dump(self, model_file)
def load(self, filename):
"""load data from a saved .lcopt file"""
if filename[-6:] != ".lcopt":
filename += ".lcopt"
try:
savedInstance = pickle.load(open("{}".format(filename), "rb"))
except FileNotFoundError:
savedInstance = pickle.load(open(fix_mac_path_escapes(os.path.join(storage.model_dir, "{}".format(filename))), "rb"))
attributes = ['name',
'database',
'params',
'production_params',
'allocation_params',
'ext_params',
'matrix',
'names',
'parameter_sets',
'model_matrices',
'technosphere_matrices',
'leontif_matrices',
'external_databases',
'parameter_map',
'sandbox_positions',
'ecoinventName',
'biosphereName',
'forwastName',
'analysis_settings',
'technosphere_databases',
'biosphere_databases',
'result_set',
'evaluated_parameter_sets',
'useForwast',
'base_project_name',
'save_option',
'allow_allocation',
'ecoinvent_version',
'ecoinvent_system_model',
]
for attr in attributes:
if hasattr(savedInstance, attr):
setattr(self, attr, getattr(savedInstance, attr))
else:
pass
#print ("can't set {}".format(attr))
# use legacy save option if this is missing from the model
if not hasattr(savedInstance, 'save_option'):
setattr(self, 'save_option', LEGACY_SAVE_OPTION)
# figure out ecoinvent version and system model if these are missing from the model
if not hasattr(savedInstance, 'ecoinvent_version') or not hasattr(savedInstance, 'ecoinvent_system_model'):
parts = savedInstance.ecoinventName.split("_")
main_version = parts[0][-1]
sub_version = parts[1]
system_model = parts[2]
#print(parts)
setattr(self, 'ecoinvent_version', '{}.{}'.format(main_version, sub_version))
setattr(self, 'ecoinvent_system_model', system_model)
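The legacy-name fallback above assumes database names of the form `Ecoinvent<major>_<minor>_<model>`; a minimal check of that parsing, using the default name as the example:

```python
# Parse the same way load() does for a model saved before the
# ecoinvent_version / ecoinvent_system_model attributes existed.
parts = "Ecoinvent3_3_cutoff".split("_")
main_version = parts[0][-1]           # last character of "Ecoinvent3" -> "3"
sub_version = parts[1]                # "3"
system_model = parts[2]               # "cutoff"
ecoinvent_version = '{}.{}'.format(main_version, sub_version)
print(ecoinvent_version, system_model)  # 3.3 cutoff
```

Note that `parts[0][-1]` keeps only the final character of "Ecoinvent3", so this fallback only recovers single-digit major versions.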
    def create_product(self, name, location='GLO', unit='kg', **kwargs):
"""
Create a new product in the model database
"""
new_product = item_factory(name=name, location=location, unit=unit, type='product', **kwargs)
if not self.exists_in_database(new_product['code']):
self.add_to_database(new_product)
#print ('{} added to database'.format(name))
return self.get_exchange(name)
else:
#print('{} already exists in this database'.format(name))
return False
def create_process(self, name, exchanges, location='GLO', unit='kg'):
"""
Create a new process, including all new exchanges (in brightway2's exchange format) in the model database.
Exchanges must have at least a name, type and unit field
"""
found_exchanges = []
for e in exchanges:
exc_name = e.pop('name', None)
exc_type = e.pop('type', None)
this_exchange = self.get_exchange(exc_name)
if this_exchange is False:
my_unit = e.pop('unit', unit)
this_exchange = self.create_product(exc_name, location=location, unit=my_unit, **e)
found_exchanges.append(exchange_factory(this_exchange, exc_type, 1, 1, '{} exchange of {}'.format(exc_type, exc_name)))
new_process = item_factory(name=name, location=location, unit=unit, type='process', exchanges=found_exchanges)
self.add_to_database(new_process)
self.parameter_scan()
return True
def check_param_function_use(self, param_id):
current_functions = {k: x['function'] for k, x in self.params.items() if x['function'] is not None}
problem_list = []
for k, f in current_functions.items():
if param_id in f:
problem_list.append((k, f))
return problem_list
def remove_input_link(self, process_code, input_code):
"""
Remove an input (technosphere or biosphere exchange) from a process, resolving all parameter issues
"""
# 1. find correct process
# 2. find correct exchange
# 3. remove that exchange
# 4. check for parameter conflicts?
# 4. run parameter scan to rebuild matrices?
#print(process_code, input_code)
process = self.database['items'][process_code]
exchanges = process['exchanges']
initial_count = len(exchanges)
new_exchanges = [e for e in exchanges if e['input'] != input_code]
product_code = [e['input'] for e in exchanges if e['type'] == 'production'][0]
#print(product_code)
param_id = [k for k, v in self.params.items() if (v['from'] == input_code[1] and v['to'] == product_code[1])][0]
#print (param_id)
problem_functions = self.check_param_function_use(param_id)
if len(problem_functions) != 0:
#print('the following functions have been removed:')
for p in problem_functions:
self.params[p[0]]['function'] = None
#print(p)
process['exchanges'] = new_exchanges
del self.params[param_id]
self.parameter_scan()
return initial_count - len(new_exchanges)
def unlink_intermediate(self, sourceId, targetId):
"""
Remove a link between two processes
"""
source = self.database['items'][(self.database.get('name'), sourceId)]
target = self.database['items'][(self.database.get('name'), targetId)]
production_exchange = [x['input'] for x in source['exchanges'] if x['type'] == 'production'][0]
new_exchanges = [x for x in target['exchanges'] if x['input'] != production_exchange]
target['exchanges'] = new_exchanges
self.parameter_scan()
return True
def parameter_scan(self):
"""
Scan the database of the model instance to generate and expose parameters.
This is called by other functions when items are added/removed from the model, but can be run by itself if you like
"""
#self.parameter_map = {}
#self.params = OrderedDict()
cr_list = []
items = self.database['items']
#print(items)
for key in items.keys():
i = items[key]
#print(i['name'], i['type'])
if i['type'] == 'product':
cr_list.append(i['code'])
no_products = len(cr_list)
self.names = [self.get_name(x) for x in cr_list]
self.matrix = np.zeros((no_products, no_products))
for key in items.keys():
i = items[key]
if i['type'] == 'process':
inputs = []
#print(i['name'])
#print([(e['comment'], e['type']) for e in i['exchanges']])
for e in i['exchanges']:
if e['type'] == 'production':
col_code = cr_list.index(e['input'][1])
if not 'p_{}_production'.format(col_code) in self.production_params:
self.production_params['p_{}_production'.format(col_code)] = {
'function': None,
'description': 'Production parameter for {}'.format(self.get_name(e['input'][1])),
'unit': self.get_unit(e['input'][1]),
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'production',
}
if not 'p_{}_allocation'.format(col_code) in self.allocation_params:
self.allocation_params['p_{}_allocation'.format(col_code)] = {
'function': None,
'description': 'Allocation parameter for {}'.format(self.get_name(e['input'][1])),
'unit': "% (as decimal)",
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'allocation',
}
elif e['type'] == 'technosphere':
#print(cr_list)
row_code = cr_list.index(e['input'][1])
inputs.append((row_code, e['amount']))
for ip in inputs:
self.matrix[(ip[0], col_code)] = ip[1]
param_check_list = []
for c, column in enumerate(self.matrix.T):
for r, i in enumerate(column):
if i > 0:
p_from = cr_list[r]
p_to = cr_list[c]
coords = (r, c)
from_item_type = self.database['items'][(self.database['name'], p_from)]['lcopt_type']
#print('{}\t| {} --> {}'.format(coords, self.get_name(p_from), self.get_name(p_to)))
param_check_list.append('p_{}_{}'.format(coords[0], coords[1]))
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.params:
self.params['p_{}_{}'.format(coords[0], coords[1])] = {
'function': None,
'normalisation_parameter': '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1]),
'description': 'Input of {} to create {}'.format(self.get_name(p_from), self.get_name(p_to)),
'coords': coords,
'unit': self.get_unit(p_from),
'from': p_from,
'from_name': self.get_name(p_from),
'to': p_to,
'to_name': self.get_name(p_to),
'type': from_item_type,
}
#elif 'normalisation_parameter' not in self.params['p_{}_{}'.format(coords[0], coords[1])].keys():
#print("Adding normalisation_parameter to {}".format('p_{}_{}'.format(coords[0], coords[1])))
#self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
#print('p_{}_{} already exists'.format(coords[0],coords[1]))
else:
pass # print("SOMETHING WRONG HERE\n{}\n".format(self.params['p_{}_{}'.format(coords[0], coords[1])]))
# make sure the parameter is being normalised and allocated properly
self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.parameter_map:
self.parameter_map[(p_from, p_to)] = 'p_{}_{}'.format(coords[0], coords[1])
kill_list = []
for k in self.params.keys():
if k not in param_check_list:
#print("{} may be obsolete".format(k))
kill_list.append(k)
for p in kill_list:
#print("deleting parameter {}".format(p))
del self.params[p]
return True
def generate_parameter_set_excel_file(self):
"""
Generate an excel file containing the parameter sets in a format you can import into SimaPro Developer.
The file will be called "ParameterSet_<ModelName>_input_file.xlsx"
"""
parameter_sets = self.parameter_sets
p_set = []
filename = "ParameterSet_{}_input_file.xlsx".format(self.name)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
p_set_name = os.path.join(base_dir, filename)
p = self.params
for k in p.keys():
if p[k]['function'] is None:
base_dict = {'id': k, 'name': p[k]['description'], 'unit': p[k]['unit']}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][k]
p_set.append(base_dict)
else:
pass
#print("{} is determined by a function".format(p[k]['description']))
for e in self.ext_params:
base_dict = {'id': '{}'.format(e['name']), 'type': 'external', 'name': e['description'], 'unit': ''}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][e['name']]
p_set.append(base_dict)
df = pd.DataFrame(p_set)
with pd.ExcelWriter(p_set_name, engine='xlsxwriter') as writer:
ps_columns = [k for k in parameter_sets.keys()]
#print (ps_columns)
my_columns = ['name', 'unit', 'id']
my_columns.extend(ps_columns)
#print (my_columns)
#print(df)
df.to_excel(writer, sheet_name=self.name, columns=my_columns, index=False, merge_cells=False)
return p_set_name
def add_parameter(self, param_name, description=None, default=0, unit=None):
"""
Add a global parameter to the database that can be accessed by functions
"""
if description is None:
description = "Parameter called {}".format(param_name)
if unit is None:
unit = "-"
name_check = lambda x: x['name'] == param_name
name_check_list = list(filter(name_check, self.ext_params))
if len(name_check_list) == 0:
self.ext_params.append({'name': param_name, 'description': description, 'default': default, 'unit': unit})
else:
print('{} already exists - choose a different name'.format(param_name))
def list_parameters_as_df(self):
"""
Only really useful when running from a Jupyter notebook.
Lists the parameters in the model in a pandas dataframe
Columns: id, matrix coordinates, description, function
"""
to_df = []
for i, e in enumerate(self.ext_params):
row = {}
row['id'] = e['name']
row['coords'] = "n/a"
row['description'] = e['description']
row['function'] = "n/a"
to_df.append(row)
for pk in self.params:
p = self.params[pk]
row = {}
row['id'] = pk
row['coords'] = p['coords']
row['description'] = p['description']
row['function'] = p['function']
to_df.append(row)
df = pd.DataFrame(to_df)
return df
def import_external_db(self, db_file, db_type=None):
"""
Import an external database for use in lcopt
db_type must be one of ``technosphere`` or ``biosphere``
The best way to 'obtain' an external database is to 'export' it from brightway as a pickle file
e.g.::
import brightway2 as bw
bw.projects.set_current('MyModel')
db = bw.Database('MyDatabase')
db_as_dict = db.load()
import pickle
with open('MyExport.pickle', 'wb') as f:
pickle.dump(db_as_dict, f)
NOTE: The Ecoinvent cutoff 3.3 database and the full biosphere database are included in the lcopt model as standard - no need to import those
This can be useful if you have your own methods which require new biosphere flows that you want to analyse using lcopt
"""
db = pickle.load(open("{}.pickle".format(db_file), "rb"))
name = list(db.keys())[0][0]
new_db = {'items': db, 'name': name}
self.external_databases.append(new_db)
if db_type is None: # Assume it's a technosphere database
db_type = 'technosphere'
if db_type == 'technosphere':
self.technosphere_databases.append(name)
elif db_type == 'biosphere':
self.biosphere_databases.append(name)
else:
raise Exception("Database type must be 'technosphere' or 'biosphere'")
def search_databases(self, search_term, location=None, markets_only=False, databases_to_search=None, allow_internal=False):
"""
Search external databases linked to your lcopt model.
To restrict the search to particular databases (e.g. technosphere or biosphere only) pass a list of database names in the ``databases_to_search`` argument
"""
dict_list = []
if allow_internal:
internal_dict = {}
for k, v in self.database['items'].items():
if v.get('lcopt_type') == 'intermediate':
internal_dict[k] = v
dict_list.append(internal_dict)
if databases_to_search is None:
#Search all of the databases available
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases])
dict_list += [x['items'] for x in self.external_databases]
else:
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
dict_list += [x['items'] for x in self.external_databases if x['name'] in databases_to_search]
data = Dictionaries(*dict_list)
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
query = Query()
if markets_only:
market_filter = Filter("name", "has", "market for")
query.add(market_filter)
if location is not None:
location_filter = Filter("location", "is", location)
query.add(location_filter)
query.add(Filter("name", "ihas", search_term))
result = query(data)
return result
def database_to_SimaPro_csv(self):
"""
Export the lcopt model as a SimaPro csv file.
The file will be called "<ModelName>_database_export.csv"
"""
self.parameter_scan()
csv_args = {}
csv_args['processes'] = []
db = self.database['items']
product_filter = lambda x: db[x]['type'] == 'product'
process_filter = lambda x: db[x]['type'] == 'process'
processes = list(filter(process_filter, db))
products = list(filter(product_filter, db))
created_exchanges = []
project_input_params = []
project_calc_params = []
for k in processes:
item = db[k]
current = {}
current['name'] = item['name']
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = item['unit']
current['exchanges'] = []
process_params = []
production_filter = lambda x: x['type'] == 'production'
output_code = list(filter(production_filter, item['exchanges']))[0]['input'][1]
for e in item['exchanges']:
if e['type'] == 'technosphere':
this_exchange = {}
this_code = e['input'][1]
formatted_name = self.get_name(this_code)
this_exchange['formatted_name'] = formatted_name
param_key = (this_code, output_code)
#print(param_key)
#param_check = (formatted_name, item['name'])
this_param = self.parameter_map[param_key]
process_params.append(this_param)
this_exchange['amount'] = this_param
this_exchange['unit'] = self.get_unit(this_code)
current['exchanges'].append(this_exchange)
elif e['type'] == 'production':
this_code = e['input'][1]
name = self.get_name(this_code)
current['output_name'] = name
created_exchanges.append(name)
# process parameters
for p in process_params:
if self.params[p]['function'] is None:
project_input_params.append({'name': p, 'comment': self.params[p]['description']})
else:
project_calc_params.append({'name': p, 'comment': self.params[p]['description'], 'formula': self.params[p]['function']})
csv_args['processes'].append(current)
for k in products:
this_item = db[k]
this_name = this_item['name']
if this_item['name'] in created_exchanges:
#print ('{} already created'.format(this_name))
pass
else:
#print ('Need to create {}'.format(this_name))
current = {}
current['name'] = this_name
current['output_name'] = this_name
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = this_item['unit']
#current['exchanges'] = []
if 'ext_link' in this_item.keys():
ext_link = this_item['ext_link']
if ext_link[0] != self.database['name']:
db_filter = lambda x: x['name'] == ext_link[0]
extdb = list(filter(db_filter, self.external_databases))[0]['items']
ext_item = extdb[ext_link]
if ext_link[0] != self.biosphereName:
ref_prod = ext_item['reference product']
name = ext_item['name'].replace(" " + ref_prod, "")
location = ext_item['location']
system_model = "Alloc Def"
process_type = "U"
unit = unnormalise_unit(ext_item['unit'])
simaPro_name = "{} {{{}}}| {} | {}, {}".format(ref_prod.capitalize(), location, name, system_model, process_type)
#print ('{} has an external link to {}'.format(this_name, simaPro_name))
current['exchanges'] = [{'formatted_name': simaPro_name, 'unit': unit, 'amount': 1}]
else:
#print('{} has a biosphere exchange - need to sort this out'.format(this_name))
#print(ext_item)
unit = unnormalise_unit(ext_item['unit'])
formatted_name = ext_item['name']
if 'air' in ext_item['categories']:
current['air_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to air'.format(formatted_name)}]
elif 'water' in ext_item['categories']:
current['water_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to water'.format(formatted_name)}]
elif 'soil' in ext_item['categories']:
current['soil_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to soil'.format(formatted_name)}]
else:
print("{} has a biosphere exchange that isn't to air, water or soil".format(this_name))
print(ext_item)
else:
warnings.warn('{} has NO internal or external link - it is burden free'.format(this_name))
csv_args['processes'].append(current)
created_exchanges.append(this_name)
#print(csv_args)
#print(created_exchanges)
csv_args['project'] = {}
#NOTE - currently external parameters can only be constants
csv_args['project']['calculated_parameters'] = project_calc_params
#add the external parameters to the input parameter list
for p in self.ext_params:
project_input_params.append({'name': p['name'], 'comment': p['description'], 'default': p['default']})
csv_args['project']['input_parameters'] = project_input_params
#print (csv_args)
env = Environment(
loader=PackageLoader('lcopt', 'templates'),
)
filename = "{}_database_export.csv".format(self.name.replace(" ", "_"))
csv_template = env.get_template('export.csv')
output = csv_template.render(**csv_args)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
efn = os.path.join(base_dir, filename)
with open(efn, "w") as f:
f.write(output)
return efn
# << Flask >> #
def launch_interact(self): # pragma: no cover
"""
This is probably the most important method in the model - you use it to launch the GUI
"""
my_flask = FlaskSandbox(self)
my_flask.run()
# << Brightway2 >> #
def export_to_bw2(self):
"""
Export the lcopt model in the native brightway 2 format
returns name, database
to use it to export, then import to brightway::
name, db = model.export_to_bw2()
import brightway2 as bw
bw.projects.set_current('MyProject')
new_db = bw.Database(name)
new_db.write(db)
new_db.process()
"""
my_exporter = Bw2Exporter(self)
name, bw2db = my_exporter.export_to_bw2()
return name, bw2db
def analyse(self, demand_item, demand_item_code):
""" Run the analyis of the model
Doesn't return anything, but creates a new item ``LcoptModel.result_set`` containing the results
"""
my_analysis = Bw2Analysis(self)
self.result_set = my_analysis.run_analyses(demand_item, demand_item_code, **self.analysis_settings)
return True
# << Disclosures >> #
def export_disclosure(self, parameter_set=None, folder_path=None):
return export_disclosure(self, parameter_set, folder_path)
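The parameter-naming scheme used by `parameter_scan` above can be reduced to a small self-contained sketch: every positive entry at `(row, col)` of the technosphere matrix gets a parameter id `p_<row>_<col>`, normalised by `(p_<col>_production / p_<col>_allocation)`. Plain nested lists stand in for the numpy matrix here, and the product codes and amounts are invented purely for illustration:

```python
# Sketch of parameter_scan's id generation (hypothetical data, no numpy).
# Columns index the producing process, rows index its inputs.

def scan_matrix(matrix, codes):
    params = {}
    for c in range(len(matrix)):          # producing process (column)
        for r in range(len(matrix)):      # input product (row)
            if matrix[r][c] > 0:
                pid = 'p_{}_{}'.format(r, c)
                params[pid] = {
                    'coords': (r, c),
                    'from': codes[r],
                    'to': codes[c],
                    'normalisation_parameter':
                        '(p_{}_production / p_{}_allocation)'.format(c, c),
                }
    return params

# 2 kg of product 0 goes into making product 1 -> one parameter, p_0_1
demo = scan_matrix([[0, 2.0], [0, 0]], ['input_code', 'output_code'])
```

Running the sketch yields a single parameter keyed `p_0_1`, mirroring how `parameter_scan` later prunes any ids no longer backed by a matrix entry.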
|
pjamesjoyce/lcopt | lcopt/model.py | LcoptModel.create_product | python | def create_product (self, name, location='GLO', unit='kg', **kwargs):
new_product = item_factory(name=name, location=location, unit=unit, type='product', **kwargs)
if not self.exists_in_database(new_product['code']):
self.add_to_database(new_product)
#print ('{} added to database'.format(name))
return self.get_exchange(name)
else:
#print('{} already exists in this database'.format(name))
return False | Create a new product in the model database | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/model.py#L338-L352 | [
"def item_factory(name, type, unit='kg', exchanges=None, location='GLO', categories=None, **kwargs):\n\n if exchanges is None:\n exchanges = []\n\n if categories is None:\n categories = []\n\n to_hash = name + type + unit + location\n code = hashlib.md5(to_hash.encode('utf-8')).hexdigest()... | class LcoptModel(object):
"""
This is the base model class.
To create a new model, enter a name e.g. ``model = LcoptModel('My_Model')``
To load an existing model use the ``load`` option e.g. ``model = LcoptModel(load = 'My_Model')``
"""
def __init__(self, name=hex(random.getrandbits(128))[2:-1], load=None, useForwast=False, ecoinvent_version=None, ecoinvent_system_model=None, ei_username = None, ei_password = None, write_config=None, autosetup=True):
super(LcoptModel, self).__init__()
# name the instance
self.name = name
# set up the database, parameter dictionaries, the matrix and the names of the exchanges
self.database = {'items': OrderedDict(), 'name': '{}_Database'.format(self.name)}
self.external_databases = []
self.params = OrderedDict()
self.production_params = OrderedDict()
self.allocation_params = OrderedDict()
self.ext_params = []
self.matrix = None
self.names = None
self.parameter_sets = OrderedDict()
self.model_matrices = OrderedDict()
self.technosphere_matrices = OrderedDict()
self.leontif_matrices = OrderedDict()
self.parameter_map = {}
self.sandbox_positions = {}
# If ecoinvent isn't specified in the setup, look for a default in the config and fall back on default set in constants
if ecoinvent_version is None:
self.ecoinvent_version = str(storage.ecoinvent_version)
else:
self.ecoinvent_version = ecoinvent_version
if ecoinvent_system_model is None:
self.ecoinvent_system_model = storage.ecoinvent_system_model
else:
self.ecoinvent_system_model = ecoinvent_system_model
ei_name = "Ecoinvent{}_{}_{}".format(*self.ecoinvent_version.split("."), self.ecoinvent_system_model) #"Ecoinvent3_3_cutoff"
self.ecoinventName = ei_name # "Ecoinvent3_3_cutoff"
self.biosphereName = "biosphere3"
self.ecoinventFilename = ei_name # "ecoinvent3_3"
self.biosphereFilename = "biosphere3"
self.forwastName = "forwast"
self.forwastFilename = "forwast"
self.useForwast = useForwast
self.technosphere_databases = []
#if self.useForwast:
# self.technosphere_databases = [self.forwastName]
#else:
# self.technosphere_databases = [self.ecoinventName]
#self.biosphere_databases = [self.biosphereName]
self.biosphere_databases = []
# default settings for bw2 analysis
self.analysis_settings = {'amount': 1,
'methods': [('IPCC 2013', 'climate change', 'GWP 100a'), ('USEtox', 'human toxicity', 'total')],
#'top_processes': 10,
#'gt_cutoff': 0.01,
'pie_cutoff': 0.05
}
self.allow_allocation = False
# initialise with a blank result set
self.result_set = None
# set the save option, this defaults to the config value but should be overwritten on load for existing models
self.save_option = storage.save_option
if load is not None:
self.load(load)
# check if lcopt is set up, and if not, set it up
is_setup = self.lcopt_setup(ei_username=ei_username, ei_password=ei_password, write_config=write_config,
ecoinvent_version=self.ecoinvent_version, ecoinvent_system_model = self.ecoinvent_system_model, autosetup=autosetup)
if not is_setup:
warnings.warn('lcopt autosetup did not run')
asset_path = fix_mac_path_escapes(storage.search_index_dir) #os.path.join(os.path.dirname(os.path.realpath(__file__)), 'assets')
ecoinventPath = os.path.join(asset_path, self.ecoinventFilename)
biospherePath = os.path.join(asset_path, self.biosphereFilename)
forwastPath = os.path.join(asset_path, self.forwastFilename)
# Try and initialise the external databases if they're not there already
if self.useForwast:
if self.forwastName not in [x['name'] for x in self.external_databases]:
self.import_external_db(forwastPath, 'technosphere')
else:
if self.ecoinventName not in [x['name'] for x in self.external_databases]:
self.import_external_db(ecoinventPath, 'technosphere')
if self.biosphereName not in [x['name'] for x in self.external_databases]:
self.import_external_db(biospherePath, 'biosphere')
# create partial version of io functions
self.add_to_database = partial(add_to_specified_database, database=self.database)
self.get_exchange = partial(get_exchange_from_database, database=self.database)
self.exists_in_database = partial(exists_in_specific_database, database=self.database)
self.get_name = partial(get_exchange_name_from_database, database=self.database)
self.get_unit = partial(get_exchange_unit_from_database, database=self.database)
self.parameter_scan()
def lcopt_setup(self, ei_username, ei_password, write_config, ecoinvent_version, ecoinvent_system_model, autosetup):
if not autosetup:
return False
if storage.project_type == 'single':
if self.useForwast:
forwast_autosetup()
else:
self.base_project_name = storage.single_project_name
#if bw2_project_exists(self.base_project_name):
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=False)
elif not self.useForwast:
self.base_project_name = DEFAULT_PROJECT_STEM + self.ecoinventName
old_default = DEFAULT_PROJECT_STEM[:-1]
is_default = ecoinvent_version == "3.3" and ecoinvent_system_model == "cutoff"
if bw2_project_exists(self.base_project_name):
# make sure the search index file is there too
write_search_index(self.base_project_name, self.ecoinventName)
elif is_default and bw2_project_exists(old_default):
upgrade_old_default()
else:
print("Lcopt needs to be set up to integrate with brightway2 - this only needs to be done once per version/system model combo")
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=True)
else:
forwast_autosetup()
return True
def rename(self, newname):
"""change the name of the model (i.e. what the .lcopt file will be saved as)"""
self.name = newname
#def saveAs(self, filename):
# """save the instance as a pickle"""
# pickle.dump(self, open("{}.pickle".format(filename), "wb"))
def save(self):
"""save the instance as a .lcopt file"""
if self.save_option == 'curdir':
model_path = os.path.join(
os.getcwd(),
'{}.lcopt'.format(self.name)
)
else: # default to appdir
model_path = os.path.join(
storage.model_dir,
'{}.lcopt'.format(self.name)
)
model_path = fix_mac_path_escapes(model_path)
with open(model_path, 'wb') as model_file:
pickle.dump(self, model_file)
def load(self, filename):
"""load data from a saved .lcopt file"""
if filename[-6:] != ".lcopt":
filename += ".lcopt"
try:
savedInstance = pickle.load(open("{}".format(filename), "rb"))
except FileNotFoundError:
savedInstance = pickle.load(open(fix_mac_path_escapes(os.path.join(storage.model_dir, "{}".format(filename))), "rb"))
attributes = ['name',
'database',
'params',
'production_params',
'allocation_params',
'ext_params',
'matrix',
'names',
'parameter_sets',
'model_matrices',
'technosphere_matrices',
'leontif_matrices',
'external_databases',
'parameter_map',
'sandbox_positions',
'ecoinventName',
'biosphereName',
'forwastName',
'analysis_settings',
'technosphere_databases',
'biosphere_databases',
'result_set',
'evaluated_parameter_sets',
'useForwast',
'base_project_name',
'save_option',
'allow_allocation',
'ecoinvent_version',
'ecoinvent_system_model',
]
for attr in attributes:
if hasattr(savedInstance, attr):
setattr(self, attr, getattr(savedInstance, attr))
else:
pass
#print ("can't set {}".format(attr))
# use legacy save option if this is missing from the model
if not hasattr(savedInstance, 'save_option'):
setattr(self, 'save_option', LEGACY_SAVE_OPTION)
# figure out ecoinvent version and system model if these are missing from the model
if not hasattr(savedInstance, 'ecoinvent_version') or not hasattr(savedInstance, 'ecoinvent_system_model'):
parts = savedInstance.ecoinventName.split("_")
main_version = parts[0][-1]
sub_version = parts[1]
system_model = parts[2]
#print(parts)
setattr(self, 'ecoinvent_version', '{}.{}'.format(main_version, sub_version))
setattr(self, 'ecoinvent_system_model', system_model)
def create_process(self, name, exchanges, location='GLO', unit='kg'):
"""
Create a new process, including all new exchanges (in brightway2's exchange format) in the model database.
Exchanges must have at least a name, type and unit field
"""
found_exchanges = []
for e in exchanges:
exc_name = e.pop('name', None)
exc_type = e.pop('type', None)
this_exchange = self.get_exchange(exc_name)
if this_exchange is False:
my_unit = e.pop('unit', unit)
this_exchange = self.create_product(exc_name, location=location, unit=my_unit, **e)
found_exchanges.append(exchange_factory(this_exchange, exc_type, 1, 1, '{} exchange of {}'.format(exc_type, exc_name)))
new_process = item_factory(name=name, location=location, unit=unit, type='process', exchanges=found_exchanges)
self.add_to_database(new_process)
self.parameter_scan()
return True
def check_param_function_use(self, param_id):
current_functions = {k: x['function'] for k, x in self.params.items() if x['function'] is not None}
problem_list = []
for k, f in current_functions.items():
if param_id in f:
problem_list.append((k, f))
return problem_list
def remove_input_link(self, process_code, input_code):
"""
Remove an input (technosphere or biosphere exchange) from a process, resolving all parameter issues
"""
# 1. find correct process
# 2. find correct exchange
# 3. remove that exchange
# 4. check for parameter conflicts?
# 4. run parameter scan to rebuild matrices?
#print(process_code, input_code)
process = self.database['items'][process_code]
exchanges = process['exchanges']
initial_count = len(exchanges)
new_exchanges = [e for e in exchanges if e['input'] != input_code]
product_code = [e['input'] for e in exchanges if e['type'] == 'production'][0]
#print(product_code)
param_id = [k for k, v in self.params.items() if (v['from'] == input_code[1] and v['to'] == product_code[1])][0]
#print (param_id)
problem_functions = self.check_param_function_use(param_id)
if len(problem_functions) != 0:
#print('the following functions have been removed:')
for p in problem_functions:
self.params[p[0]]['function'] = None
#print(p)
process['exchanges'] = new_exchanges
del self.params[param_id]
self.parameter_scan()
return initial_count - len(new_exchanges)
def unlink_intermediate(self, sourceId, targetId):
"""
Remove a link between two processes
"""
source = self.database['items'][(self.database.get('name'), sourceId)]
target = self.database['items'][(self.database.get('name'), targetId)]
production_exchange = [x['input'] for x in source['exchanges'] if x['type'] == 'production'][0]
new_exchanges = [x for x in target['exchanges'] if x['input'] != production_exchange]
target['exchanges'] = new_exchanges
self.parameter_scan()
return True
def parameter_scan(self):
"""
Scan the database of the model instance to generate and expose parameters.
This is called by other functions when items are added/removed from the model, but can be run by itself if you like
"""
#self.parameter_map = {}
#self.params = OrderedDict()
cr_list = []
items = self.database['items']
#print(items)
for key in items.keys():
i = items[key]
#print(i['name'], i['type'])
if i['type'] == 'product':
cr_list.append(i['code'])
no_products = len(cr_list)
self.names = [self.get_name(x) for x in cr_list]
self.matrix = np.zeros((no_products, no_products))
for key in items.keys():
i = items[key]
if i['type'] == 'process':
inputs = []
#print(i['name'])
#print([(e['comment'], e['type']) for e in i['exchanges']])
for e in i['exchanges']:
if e['type'] == 'production':
col_code = cr_list.index(e['input'][1])
if not 'p_{}_production'.format(col_code) in self.production_params:
self.production_params['p_{}_production'.format(col_code)] = {
'function': None,
'description': 'Production parameter for {}'.format(self.get_name(e['input'][1])),
'unit': self.get_unit(e['input'][1]),
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'production',
}
if not 'p_{}_allocation'.format(col_code) in self.allocation_params:
self.allocation_params['p_{}_allocation'.format(col_code)] = {
'function': None,
'description': 'Allocation parameter for {}'.format(self.get_name(e['input'][1])),
'unit': "% (as decimal)",
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'allocation',
}
elif e['type'] == 'technosphere':
#print(cr_list)
row_code = cr_list.index(e['input'][1])
inputs.append((row_code, e['amount']))
for ip in inputs:
self.matrix[(ip[0], col_code)] = ip[1]
param_check_list = []
for c, column in enumerate(self.matrix.T):
for r, i in enumerate(column):
if i > 0:
p_from = cr_list[r]
p_to = cr_list[c]
coords = (r, c)
from_item_type = self.database['items'][(self.database['name'], p_from)]['lcopt_type']
#print('{}\t| {} --> {}'.format(coords, self.get_name(p_from), self.get_name(p_to)))
param_check_list.append('p_{}_{}'.format(coords[0], coords[1]))
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.params:
self.params['p_{}_{}'.format(coords[0], coords[1])] = {
'function': None,
'normalisation_parameter': '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1]),
'description': 'Input of {} to create {}'.format(self.get_name(p_from), self.get_name(p_to)),
'coords': coords,
'unit': self.get_unit(p_from),
'from': p_from,
'from_name': self.get_name(p_from),
'to': p_to,
'to_name': self.get_name(p_to),
'type': from_item_type,
}
#elif 'normalisation_parameter' not in self.params['p_{}_{}'.format(coords[0], coords[1])].keys():
#print("Adding normalisation_parameter to {}".format('p_{}_{}'.format(coords[0], coords[1])))
#self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
#print('p_{}_{} already exists'.format(coords[0],coords[1]))
else:
pass # print("SOMETHING WRONG HERE\n{}\n".format(self.params['p_{}_{}'.format(coords[0], coords[1])]))
# make sure the parameter is being normalised and allocated properly
self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.parameter_map:
self.parameter_map[(p_from, p_to)] = 'p_{}_{}'.format(coords[0], coords[1])
kill_list = []
for k in self.params.keys():
if k not in param_check_list:
#print("{} may be obsolete".format(k))
kill_list.append(k)
for p in kill_list:
#print("deleting parameter {}".format(p))
del self.params[p]
return True
def generate_parameter_set_excel_file(self):
"""
Generate an excel file containing the parameter sets in a format you can import into SimaPro Developer.
The file will be called "ParameterSet_<ModelName>_input_file.xlsx"
"""
parameter_sets = self.parameter_sets
p_set = []
filename = "ParameterSet_{}_input_file.xlsx".format(self.name)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
p_set_name = os.path.join(base_dir, filename)
p = self.params
for k in p.keys():
if p[k]['function'] is None:
base_dict = {'id': k, 'name': p[k]['description'], 'unit': p[k]['unit']}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][k]
p_set.append(base_dict)
else:
pass
#print("{} is determined by a function".format(p[k]['description']))
for e in self.ext_params:
base_dict = {'id': '{}'.format(e['name']), 'type': 'external', 'name': e['description'], 'unit': ''}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][e['name']]
p_set.append(base_dict)
df = pd.DataFrame(p_set)
with pd.ExcelWriter(p_set_name, engine='xlsxwriter') as writer:
ps_columns = [k for k in parameter_sets.keys()]
#print (ps_columns)
my_columns = ['name', 'unit', 'id']
my_columns.extend(ps_columns)
#print (my_columns)
#print(df)
df.to_excel(writer, sheet_name=self.name, columns=my_columns, index=False, merge_cells=False)
return p_set_name
def add_parameter(self, param_name, description=None, default=0, unit=None):
"""
Add a global parameter to the database that can be accessed by functions
"""
if description is None:
description = "Parameter called {}".format(param_name)
if unit is None:
unit = "-"
name_check = lambda x: x['name'] == param_name
name_check_list = list(filter(name_check, self.ext_params))
if len(name_check_list) == 0:
self.ext_params.append({'name': param_name, 'description': description, 'default': default, 'unit': unit})
else:
print('{} already exists - choose a different name'.format(param_name))
def list_parameters_as_df(self):
"""
Only really useful when running from a Jupyter notebook.
Lists the parameters in the model in a pandas dataframe
Columns: id, matrix coordinates, description, function
"""
to_df = []
for i, e in enumerate(self.ext_params):
row = {}
row['id'] = e['name']
row['coords'] = "n/a"
row['description'] = e['description']
row['function'] = "n/a"
to_df.append(row)
for pk in self.params:
p = self.params[pk]
row = {}
row['id'] = pk
row['coords'] = p['coords']
row['description'] = p['description']
row['function'] = p['function']
to_df.append(row)
df = pd.DataFrame(to_df)
return df
def import_external_db(self, db_file, db_type=None):
"""
Import an external database for use in lcopt
db_type must be one of ``technosphere`` or ``biosphere``
The best way to 'obtain' an external database is to 'export' it from brightway as a pickle file
e.g.::
import brightway2 as bw
bw.projects.set_current('MyModel')
db = bw.Database('MyDatabase')
db_as_dict = db.load()
import pickle
with open('MyExport.pickle', 'wb') as f:
pickle.dump(db_as_dict, f)
NOTE: The Ecoinvent cutoff 3.3 database and the full biosphere database are included in the lcopt model as standard - no need to import those
This can be useful if you have your own methods which require new biosphere flows that you want to analyse using lcopt
"""
db = pickle.load(open("{}.pickle".format(db_file), "rb"))
name = list(db.keys())[0][0]
new_db = {'items': db, 'name': name}
self.external_databases.append(new_db)
if db_type is None: # Assume it's a technosphere database
db_type = 'technosphere'
if db_type == 'technosphere':
self.technosphere_databases.append(name)
elif db_type == 'biosphere':
self.biosphere_databases.append(name)
else:
raise Exception("Database type must be 'technosphere' or 'biosphere'")
def search_databases(self, search_term, location=None, markets_only=False, databases_to_search=None, allow_internal=False):
"""
Search external databases linked to your lcopt model.
To restrict the search to particular databases (e.g. technosphere or biosphere only) use a list of database names in the ``database_to_search`` variable
"""
dict_list = []
if allow_internal:
internal_dict = {}
for k, v in self.database['items'].items():
if v.get('lcopt_type') == 'intermediate':
internal_dict[k] = v
dict_list.append(internal_dict)
if databases_to_search is None:
#Search all of the databases available
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases])
dict_list += [x['items'] for x in self.external_databases]
else:
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
dict_list += [x['items'] for x in self.external_databases if x['name'] in databases_to_search]
data = Dictionaries(*dict_list)
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
query = Query()
if markets_only:
market_filter = Filter("name", "has", "market for")
query.add(market_filter)
if location is not None:
location_filter = Filter("location", "is", location)
query.add(location_filter)
query.add(Filter("name", "ihas", search_term))
result = query(data)
return result
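The `Dictionaries`/`Query`/`Filter` machinery used above comes from bw2data; a minimal stdlib stand-in shows the same behaviour, assuming each database is a plain dict of item dicts (the function name `search` and the sample data are illustrative, not part of lcopt):

```python
from collections import ChainMap

def search(databases, term, location=None, markets_only=False):
    """Case-insensitive substring search across several item dicts."""
    merged = ChainMap(*databases)  # view over all databases without copying
    hits = {}
    for key, item in merged.items():
        name = item.get("name", "")
        if term.lower() not in name.lower():
            continue  # Filter("name", "ihas", term)
        if markets_only and "market for" not in name:
            continue  # Filter("name", "has", "market for")
        if location is not None and item.get("location") != location:
            continue  # Filter("location", "is", location)
        hits[key] = item
    return hits
```

`ChainMap` plays the role of `Dictionaries`: keys from earlier databases shadow later ones, which matches giving the internal database priority.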
def database_to_SimaPro_csv(self):
"""
Export the lcopt model as a SimaPro csv file.
The file will be called "<ModelName>_database_export.csv"
"""
self.parameter_scan()
csv_args = {}
csv_args['processes'] = []
db = self.database['items']
product_filter = lambda x: db[x]['type'] == 'product'
process_filter = lambda x: db[x]['type'] == 'process'
processes = list(filter(process_filter, db))
products = list(filter(product_filter, db))
created_exchanges = []
project_input_params = []
project_calc_params = []
for k in processes:
item = db[k]
current = {}
current['name'] = item['name']
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = item['unit']
current['exchanges'] = []
process_params = []
production_filter = lambda x: x['type'] == 'production'
output_code = list(filter(production_filter, item['exchanges']))[0]['input'][1]
for e in item['exchanges']:
if e['type'] == 'technosphere':
this_exchange = {}
this_code = e['input'][1]
formatted_name = self.get_name(this_code)
this_exchange['formatted_name'] = formatted_name
param_key = (this_code, output_code)
#print(param_key)
#param_check = (formatted_name, item['name'])
this_param = self.parameter_map[param_key]
process_params.append(this_param)
this_exchange['amount'] = this_param
this_exchange['unit'] = self.get_unit(this_code)
current['exchanges'].append(this_exchange)
elif e['type'] == 'production':
this_code = e['input'][1]
name = self.get_name(this_code)
current['output_name'] = name
created_exchanges.append(name)
# process parameters
for p in process_params:
if self.params[p]['function'] is None:
project_input_params.append({'name': p, 'comment': self.params[p]['description']})
else:
project_calc_params.append({'name': p, 'comment': self.params[p]['description'], 'formula': self.params[p]['function']})
csv_args['processes'].append(current)
for k in products:
this_item = db[k]
this_name = this_item['name']
if this_item['name'] in created_exchanges:
#print ('{} already created'.format(this_name))
pass
else:
#print ('Need to create {}'.format(this_name))
current = {}
current['name'] = this_name
current['output_name'] = this_name
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = this_item['unit']
#current['exchanges'] = []
if 'ext_link' in this_item.keys():
ext_link = this_item['ext_link']
if ext_link[0] != self.database['name']:
db_filter = lambda x: x['name'] == ext_link[0]
extdb = list(filter(db_filter, self.external_databases))[0]['items']
ext_item = extdb[ext_link]
if ext_link[0] != self.biosphereName:
ref_prod = ext_item['reference product']
name = ext_item['name'].replace(" " + ref_prod, "")
location = ext_item['location']
system_model = "Alloc Def"
process_type = "U"
unit = unnormalise_unit(ext_item['unit'])
simaPro_name = "{} {{{}}}| {} | {}, {}".format(ref_prod.capitalize(), location, name, system_model, process_type)
#print ('{} has an external link to {}'.format(this_name, simaPro_name))
current['exchanges'] = [{'formatted_name': simaPro_name, 'unit': unit, 'amount': 1}]
else:
#print('{} has a biosphere exchange - need to sort this out'.format(this_name))
#print(ext_item)
unit = unnormalise_unit(ext_item['unit'])
formatted_name = ext_item['name']
if 'air' in ext_item['categories']:
current['air_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to air'.format(formatted_name)}]
elif 'water' in ext_item['categories']:
current['water_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to water'.format(formatted_name)}]
elif 'soil' in ext_item['categories']:
current['soil_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to soil'.format(formatted_name)}]
else:
print("{} has a biosphere exchange that isn't to air, water or soil".format(this_name))
print(ext_item)
else:
warnings.warn('{} has NO internal or external link - it is burden free'.format(this_name))
csv_args['processes'].append(current)
created_exchanges.append(this_name)
#print(csv_args)
#print(created_exchanges)
csv_args['project'] = {}
#NOTE - currently external parameters can only be constants
csv_args['project']['calculated_parameters'] = project_calc_params
#add the external parameters to the input parameter list
for p in self.ext_params:
project_input_params.append({'name': p['name'], 'comment': p['description'], 'default': p['default']})
csv_args['project']['input_parameters'] = project_input_params
#print (csv_args)
env = Environment(
loader=PackageLoader('lcopt', 'templates'),
)
filename = "{}_database_export.csv".format(self.name.replace(" ", "_"))
csv_template = env.get_template('export.csv')
output = csv_template.render(**csv_args)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
efn = os.path.join(base_dir, filename)
with open(efn, "w") as f:
f.write(output)
return efn
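The process IDs built in `database_to_SimaPro_csv` use a padding idiom: the first 8 characters come from the model name (right-padded with "X"), the last 11 from a zero-padded random integer. A standalone sketch of that construction (the function name `simapro_id` and the `rand` override are added here for testability, not part of lcopt):

```python
from random import randint

def simapro_id(model_name, rand=None):
    """8-char name stem + 11-digit zero-padded number, 19 chars total."""
    if rand is None:
        rand = randint(1, 99999999999)
    stem = (model_name.replace(" ", "") + "XXXXXXXX")[:8]   # pad then truncate
    number = ("00000000000" + str(rand))[-11:]              # left-pad with zeros
    return stem + number
```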
# << Flask >> #
def launch_interact(self): # pragma: no cover
"""
This is probably the most important method in the model - you use it to launch the GUI
"""
my_flask = FlaskSandbox(self)
my_flask.run()
# << Brightway2 >> #
def export_to_bw2(self):
"""
Export the lcopt model in the native brightway 2 format
returns name, database
to use it to export, then import to brightway::
name, db = model.export_to_bw2()
import brightway2 as bw
bw.projects.set_current('MyProject')
new_db = bw.Database(name)
new_db.write(db)
new_db.process()
"""
my_exporter = Bw2Exporter(self)
name, bw2db = my_exporter.export_to_bw2()
return name, bw2db
def analyse(self, demand_item, demand_item_code):
""" Run the analyis of the model
Doesn't return anything, but creates a new item ``LcoptModel.result_set`` containing the results
"""
my_analysis = Bw2Analysis(self)
self.result_set = my_analysis.run_analyses(demand_item, demand_item_code, **self.analysis_settings)
return True
# << Disclosures >> #
def export_disclosure(self, parameter_set=None, folder_path=None):
return export_disclosure(self, parameter_set, folder_path)
|
pjamesjoyce/lcopt | lcopt/model.py | LcoptModel.create_process | python | def create_process(self, name, exchanges, location='GLO', unit='kg'):
found_exchanges = []
for e in exchanges:
exc_name = e.pop('name', None)
exc_type = e.pop('type', None)
this_exchange = self.get_exchange(exc_name)
if this_exchange is False:
my_unit = e.pop('unit', unit)
this_exchange = self.create_product(exc_name, location=location, unit=my_unit, **e)
found_exchanges.append(exchange_factory(this_exchange, exc_type, 1, 1, '{} exchange of {}'.format(exc_type, exc_name)))
new_process = item_factory(name=name, location=location, unit=unit, type='process', exchanges=found_exchanges)
self.add_to_database(new_process)
self.parameter_scan()
return True | Create a new process, including all new exchanges (in brightway2's exchange format) in the model database.
Exchanges must have at least a name, type and unit field | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/model.py#L354-L382 | [
"def exchange_factory(input, type, amount, uncertainty, comment, **kwargs):\n data_structure = {\n 'input': input,\n 'type': type,\n 'amount': amount,\n 'uncertainty type': uncertainty,\n 'comment': comment,\n }\n\n for kw in kwargs:\n data_structure[kw] = kwargs[k... | class LcoptModel(object):
"""
This is the base model class.
To create a new model, enter a name e.g. ``model = LcoptModel('My_Model')``
To load an existing model use the ``load`` option e.g. ``model = LcoptModel(load = 'My_Model')``
"""
def __init__(self, name=hex(random.getrandbits(128))[2:-1], load=None, useForwast=False, ecoinvent_version=None, ecoinvent_system_model=None, ei_username = None, ei_password = None, write_config=None, autosetup=True):
super(LcoptModel, self).__init__()
# name the instance
self.name = name
# set up the database, parameter dictionaries, the matrix and the names of the exchanges
self.database = {'items': OrderedDict(), 'name': '{}_Database'.format(self.name)}
self.external_databases = []
self.params = OrderedDict()
self.production_params = OrderedDict()
self.allocation_params = OrderedDict()
self.ext_params = []
self.matrix = None
self.names = None
self.parameter_sets = OrderedDict()
self.model_matrices = OrderedDict()
self.technosphere_matrices = OrderedDict()
self.leontif_matrices = OrderedDict()
self.parameter_map = {}
self.sandbox_positions = {}
# If ecoinvent isn't specified in the setup, look for a default in the config and fall back on default set in constants
if ecoinvent_version is None:
self.ecoinvent_version = str(storage.ecoinvent_version)
else:
self.ecoinvent_version = ecoinvent_version
if ecoinvent_system_model is None:
self.ecoinvent_system_model = storage.ecoinvent_system_model
else:
self.ecoinvent_system_model = ecoinvent_system_model
ei_name = "Ecoinvent{}_{}_{}".format(*self.ecoinvent_version.split("."), self.ecoinvent_system_model) #"Ecoinvent3_3_cutoff"
self.ecoinventName = ei_name # "Ecoinvent3_3_cutoff"
self.biosphereName = "biosphere3"
self.ecoinventFilename = ei_name # "ecoinvent3_3"
self.biosphereFilename = "biosphere3"
self.forwastName = "forwast"
self.forwastFilename = "forwast"
self.useForwast = useForwast
self.technosphere_databases = []
#if self.useForwast:
# self.technosphere_databases = [self.forwastName]
#else:
# self.technosphere_databases = [self.ecoinventName]
#self.biosphere_databases = [self.biosphereName]
self.biosphere_databases = []
# default settings for bw2 analysis
self.analysis_settings = {'amount': 1,
'methods': [('IPCC 2013', 'climate change', 'GWP 100a'), ('USEtox', 'human toxicity', 'total')],
#'top_processes': 10,
#'gt_cutoff': 0.01,
'pie_cutoff': 0.05
}
self.allow_allocation = False
# initialise with a blank result set
self.result_set = None
# set the save option, this defaults to the config value but should be overwritten on load for existing models
self.save_option = storage.save_option
if load is not None:
self.load(load)
# check if lcopt is set up, and if not, set it up
is_setup = self.lcopt_setup(ei_username=ei_username, ei_password=ei_password, write_config=write_config,
ecoinvent_version=self.ecoinvent_version, ecoinvent_system_model = self.ecoinvent_system_model, autosetup=autosetup)
if not is_setup:
warnings.warn('lcopt autosetup did not run')
asset_path = fix_mac_path_escapes(storage.search_index_dir) #os.path.join(os.path.dirname(os.path.realpath(__file__)), 'assets')
ecoinventPath = os.path.join(asset_path, self.ecoinventFilename)
biospherePath = os.path.join(asset_path, self.biosphereFilename)
forwastPath = os.path.join(asset_path, self.forwastFilename)
# Try and initialise the external databases if they're not there already
if self.useForwast:
if self.forwastName not in [x['name'] for x in self.external_databases]:
self.import_external_db(forwastPath, 'technosphere')
else:
if self.ecoinventName not in [x['name'] for x in self.external_databases]:
self.import_external_db(ecoinventPath, 'technosphere')
if self.biosphereName not in [x['name'] for x in self.external_databases]:
self.import_external_db(biospherePath, 'biosphere')
# create partial version of io functions
self.add_to_database = partial(add_to_specified_database, database=self.database)
self.get_exchange = partial(get_exchange_from_database, database=self.database)
self.exists_in_database = partial(exists_in_specific_database, database=self.database)
self.get_name = partial(get_exchange_name_from_database, database=self.database)
self.get_unit = partial(get_exchange_unit_from_database, database=self.database)
self.parameter_scan()
def lcopt_setup(self, ei_username, ei_password, write_config, ecoinvent_version, ecoinvent_system_model, autosetup):
if not autosetup:
return False
if storage.project_type == 'single':
if self.useForwast:
forwast_autosetup()
else:
self.base_project_name = storage.single_project_name
#if bw2_project_exists(self.base_project_name):
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=False)
elif not self.useForwast:
self.base_project_name = DEFAULT_PROJECT_STEM + self.ecoinventName
old_default = DEFAULT_PROJECT_STEM[:-1]
is_default = ecoinvent_version == "3.3" and ecoinvent_system_model == "cutoff"
if bw2_project_exists(self.base_project_name):
# make sure the search index file is there too
write_search_index(self.base_project_name, self.ecoinventName)
elif is_default and bw2_project_exists(old_default):
upgrade_old_default()
else:
print("Lcopt needs to be set up to integrate with brightway2 - this only needs to be done once per version/system model combo")
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=True)
else:
forwast_autosetup()
return True
def rename(self, newname):
"""change the name of the model (i.e. what the .lcopt file will be saved as)"""
self.name = newname
#def saveAs(self, filename):
# """save the instance as a pickle"""
# pickle.dump(self, open("{}.pickle".format(filename), "wb"))
def save(self):
"""save the instance as a .lcopt file"""
if self.save_option == 'curdir':
model_path = os.path.join(
os.getcwd(),
'{}.lcopt'.format(self.name)
)
else: # default to appdir
model_path = os.path.join(
storage.model_dir,
'{}.lcopt'.format(self.name)
)
model_path = fix_mac_path_escapes(model_path)
with open(model_path, 'wb') as model_file:
pickle.dump(self, model_file)
def load(self, filename):
"""load data from a saved .lcopt file"""
if filename[-6:] != ".lcopt":
filename += ".lcopt"
try:
savedInstance = pickle.load(open("{}".format(filename), "rb"))
except FileNotFoundError:
savedInstance = pickle.load(open(fix_mac_path_escapes(os.path.join(storage.model_dir, "{}".format(filename))), "rb"))
attributes = ['name',
'database',
'params',
'production_params',
'allocation_params',
'ext_params',
'matrix',
'names',
'parameter_sets',
'model_matrices',
'technosphere_matrices',
'leontif_matrices',
'external_databases',
'parameter_map',
'sandbox_positions',
'ecoinventName',
'biosphereName',
'forwastName',
'analysis_settings',
'technosphere_databases',
'biosphere_databases',
'result_set',
'evaluated_parameter_sets',
'useForwast',
'base_project_name',
'save_option',
'allow_allocation',
'ecoinvent_version',
'ecoinvent_system_model',
]
for attr in attributes:
if hasattr(savedInstance, attr):
setattr(self, attr, getattr(savedInstance, attr))
else:
pass
#print ("can't set {}".format(attr))
# use legacy save option if this is missing from the model
if not hasattr(savedInstance, 'save_option'):
setattr(self, 'save_option', LEGACY_SAVE_OPTION)
# figure out ecoinvent version and system model if these are missing from the model
if not hasattr(savedInstance, 'ecoinvent_version') or not hasattr(savedInstance, 'ecoinvent_system_model'):
parts = savedInstance.ecoinventName.split("_")
main_version = parts[0][-1]
sub_version = parts[1]
system_model = parts[2]
#print(parts)
setattr(self, 'ecoinvent_version', '{}.{}'.format(main_version, sub_version))
setattr(self, 'ecoinvent_system_model', system_model)
def create_product (self, name, location='GLO', unit='kg', **kwargs):
"""
Create a new product in the model database
"""
new_product = item_factory(name=name, location=location, unit=unit, type='product', **kwargs)
if not self.exists_in_database(new_product['code']):
self.add_to_database(new_product)
#print ('{} added to database'.format(name))
return self.get_exchange(name)
else:
#print('{} already exists in this database'.format(name))
return False
def create_process(self, name, exchanges, location='GLO', unit='kg'):
"""
Create a new process, including all new exchanges (in brightway2's exchange format) in the model database.
Exchanges must have at least a name, type and unit field
"""
found_exchanges = []
for e in exchanges:
exc_name = e.pop('name', None)
exc_type = e.pop('type', None)
this_exchange = self.get_exchange(exc_name)
if this_exchange is False:
my_unit = e.pop('unit', unit)
this_exchange = self.create_product(exc_name, location=location, unit=my_unit, **e)
found_exchanges.append(exchange_factory(this_exchange, exc_type, 1, 1, '{} exchange of {}'.format(exc_type, exc_name)))
new_process = item_factory(name=name, location=location, unit=unit, type='process', exchanges=found_exchanges)
self.add_to_database(new_process)
self.parameter_scan()
return True
def check_param_function_use(self, param_id):
current_functions = {k: x['function'] for k, x in self.params.items() if x['function'] is not None}
problem_list = []
for k, f in current_functions.items():
if param_id in f:
problem_list.append((k, f))
return problem_list
def remove_input_link(self, process_code, input_code):
"""
Remove an input (technosphere or biosphere exchange) from a process, resolving all parameter issues
"""
# 1. find correct process
# 2. find correct exchange
# 3. remove that exchange
# 4. check for parameter conflicts?
# 4. run parameter scan to rebuild matrices?
#print(process_code, input_code)
process = self.database['items'][process_code]
exchanges = process['exchanges']
initial_count = len(exchanges)
new_exchanges = [e for e in exchanges if e['input'] != input_code]
product_code = [e['input'] for e in exchanges if e['type'] == 'production'][0]
#print(product_code)
param_id = [k for k, v in self.params.items() if (v['from'] == input_code[1] and v['to'] == product_code[1])][0]
#print (param_id)
problem_functions = self.check_param_function_use(param_id)
if len(problem_functions) != 0:
#print('the following functions have been removed:')
for p in problem_functions:
self.params[p[0]]['function'] = None
#print(p)
process['exchanges'] = new_exchanges
del self.params[param_id]
self.parameter_scan()
return initial_count - len(new_exchanges)
def unlink_intermediate(self, sourceId, targetId):
"""
Remove a link between two processes
"""
source = self.database['items'][(self.database.get('name'), sourceId)]
target = self.database['items'][(self.database.get('name'), targetId)]
production_exchange = [x['input'] for x in source['exchanges'] if x['type'] == 'production'][0]
new_exchanges = [x for x in target['exchanges'] if x['input'] != production_exchange]
target['exchanges'] = new_exchanges
self.parameter_scan()
return True
def parameter_scan(self):
"""
Scan the database of the model instance to generate and expose parameters.
This is called by other functions when items are added/removed from the model, but can be run by itself if you like
"""
#self.parameter_map = {}
#self.params = OrderedDict()
cr_list = []
items = self.database['items']
#print(items)
for key in items.keys():
i = items[key]
#print(i['name'], i['type'])
if i['type'] == 'product':
cr_list.append(i['code'])
no_products = len(cr_list)
self.names = [self.get_name(x) for x in cr_list]
self.matrix = np.zeros((no_products, no_products))
for key in items.keys():
i = items[key]
if i['type'] == 'process':
inputs = []
#print(i['name'])
#print([(e['comment'], e['type']) for e in i['exchanges']])
for e in i['exchanges']:
if e['type'] == 'production':
col_code = cr_list.index(e['input'][1])
if not 'p_{}_production'.format(col_code) in self.production_params:
self.production_params['p_{}_production'.format(col_code)] = {
'function': None,
'description': 'Production parameter for {}'.format(self.get_name(e['input'][1])),
'unit': self.get_unit(e['input'][1]),
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'production',
}
if not 'p_{}_allocation'.format(col_code) in self.allocation_params:
self.allocation_params['p_{}_allocation'.format(col_code)] = {
'function': None,
'description': 'Allocation parameter for {}'.format(self.get_name(e['input'][1])),
'unit': "% (as decimal)",
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'allocation',
}
elif e['type'] == 'technosphere':
#print(cr_list)
row_code = cr_list.index(e['input'][1])
inputs.append((row_code, e['amount']))
for ip in inputs:
self.matrix[(ip[0], col_code)] = ip[1]
param_check_list = []
for c, column in enumerate(self.matrix.T):
for r, i in enumerate(column):
if i > 0:
p_from = cr_list[r]
p_to = cr_list[c]
coords = (r, c)
from_item_type = self.database['items'][(self.database['name'], p_from)]['lcopt_type']
#print('{}\t| {} --> {}'.format(coords, self.get_name(p_from), self.get_name(p_to)))
param_check_list.append('p_{}_{}'.format(coords[0], coords[1]))
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.params:
self.params['p_{}_{}'.format(coords[0], coords[1])] = {
'function': None,
'normalisation_parameter': '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1]),
'description': 'Input of {} to create {}'.format(self.get_name(p_from), self.get_name(p_to)),
'coords': coords,
'unit': self.get_unit(p_from),
'from': p_from,
'from_name': self.get_name(p_from),
'to': p_to,
'to_name': self.get_name(p_to),
'type': from_item_type,
}
#elif 'normalisation_parameter' not in self.params['p_{}_{}'.format(coords[0], coords[1])].keys():
#print("Adding normalisation_parameter to {}".format('p_{}_{}'.format(coords[0], coords[1])))
#self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
#print('p_{}_{} already exists'.format(coords[0],coords[1]))
else:
pass # print("SOMETHING WRONG HERE\n{}\n".format(self.params['p_{}_{}'.format(coords[0], coords[1])]))
# make sure the parameter is being normalised and allocated properly
self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.parameter_map:
self.parameter_map[(p_from, p_to)] = 'p_{}_{}'.format(coords[0], coords[1])
kill_list = []
for k in self.params.keys():
if k not in param_check_list:
#print("{} may be obsolete".format(k))
kill_list.append(k)
for p in kill_list:
#print("deleting parameter {}".format(p))
del self.params[p]
return True
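The matrix-building step inside `parameter_scan` — index the products, then write each process's technosphere inputs into the column of its production output — can be sketched with plain lists. This is a simplified stand-in (hypothetical minimal item dicts, string codes instead of `(db, code)` tuples, no parameter bookkeeping):

```python
def build_matrix(items):
    """Square technosphere matrix: rows = inputs, columns = products."""
    products = [i["code"] for i in items if i["type"] == "product"]
    n = len(products)
    matrix = [[0.0] * n for _ in range(n)]
    for item in items:
        if item["type"] != "process":
            continue
        col = None
        inputs = []
        for e in item["exchanges"]:
            if e["type"] == "production":
                col = products.index(e["input"])      # this process's output column
            elif e["type"] == "technosphere":
                inputs.append((products.index(e["input"]), e["amount"]))
        for row, amount in inputs:
            matrix[row][col] = amount
    return products, matrix
```

The real method does the same with a NumPy array (`self.matrix`) and additionally creates production, allocation, and exchange parameters for each non-zero cell.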
def generate_parameter_set_excel_file(self):
"""
Generate an excel file containing the parameter sets in a format you can import into SimaPro Developer.
The file will be called "ParameterSet_<ModelName>_input_file.xlsx"
"""
parameter_sets = self.parameter_sets
p_set = []
filename = "ParameterSet_{}_input_file.xlsx".format(self.name)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
p_set_name = os.path.join(base_dir, filename)
p = self.params
for k in p.keys():
if p[k]['function'] is None:
base_dict = {'id': k, 'name': p[k]['description'], 'unit': p[k]['unit']}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][k]
p_set.append(base_dict)
else:
pass
#print("{} is determined by a function".format(p[k]['description']))
for e in self.ext_params:
base_dict = {'id': '{}'.format(e['name']), 'type': 'external', 'name': e['description'], 'unit': ''}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][e['name']]
p_set.append(base_dict)
df = pd.DataFrame(p_set)
with pd.ExcelWriter(p_set_name, engine='xlsxwriter') as writer:
ps_columns = [k for k in parameter_sets.keys()]
#print (ps_columns)
my_columns = ['name', 'unit', 'id']
my_columns.extend(ps_columns)
#print (my_columns)
#print(df)
df.to_excel(writer, sheet_name=self.name, columns=my_columns, index=False, merge_cells=False)
return p_set_name
def add_parameter(self, param_name, description=None, default=0, unit=None):
"""
Add a global parameter to the database that can be accessed by functions
"""
if description is None:
description = "Parameter called {}".format(param_name)
if unit is None:
unit = "-"
name_check = lambda x: x['name'] == param_name
name_check_list = list(filter(name_check, self.ext_params))
if len(name_check_list) == 0:
self.ext_params.append({'name': param_name, 'description': description, 'default': default, 'unit': unit})
else:
print('{} already exists - choose a different name'.format(param_name))
def list_parameters_as_df(self):
"""
Only really useful when running from a Jupyter notebook.
Lists the parameters in the model in a pandas dataframe
Columns: id, matrix coordinates, description, function
"""
to_df = []
for i, e in enumerate(self.ext_params):
row = {}
row['id'] = e['name']
row['coords'] = "n/a"
row['description'] = e['description']
row['function'] = "n/a"
to_df.append(row)
for pk in self.params:
p = self.params[pk]
row = {}
row['id'] = pk
row['coords'] = p['coords']
row['description'] = p['description']
row['function'] = p['function']
to_df.append(row)
df = pd.DataFrame(to_df)
return df
def import_external_db(self, db_file, db_type=None):
"""
Import an external database for use in lcopt
db_type must be one of ``technosphere`` or ``biosphere``
The best way to 'obtain' an external database is to 'export' it from brightway as a pickle file
e.g.::
import brightway2 as bw
bw.projects.set_current('MyModel')
db = bw.Database('MyDatabase')
db_as_dict = db.load()
import pickle
with open('MyExport.pickle', 'wb') as f:
pickle.dump(db_as_dict, f)
NOTE: The Ecoinvent cutoff 3.3 database and the full biosphere database are included in the lcopt model as standard - no need to import those
This can be useful if you have your own methods which require new biosphere flows that you want to analyse using lcopt
"""
db = pickle.load(open("{}.pickle".format(db_file), "rb"))
name = list(db.keys())[0][0]
new_db = {'items': db, 'name': name}
self.external_databases.append(new_db)
if db_type is None: # Assume it's a technosphere database
db_type = 'technosphere'
if db_type == 'technosphere':
self.technosphere_databases.append(name)
elif db_type == 'biosphere':
self.biosphere_databases.append(name)
else:
raise Exception("Database type must be 'technosphere' or 'biosphere'")
def search_databases(self, search_term, location=None, markets_only=False, databases_to_search=None, allow_internal=False):
"""
Search external databases linked to your lcopt model.
To restrict the search to particular databases (e.g. technosphere or biosphere only) use a list of database names in the ``database_to_search`` variable
"""
dict_list = []
if allow_internal:
internal_dict = {}
for k, v in self.database['items'].items():
if v.get('lcopt_type') == 'intermediate':
internal_dict[k] = v
dict_list.append(internal_dict)
if databases_to_search is None:
#Search all of the databases available
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases])
dict_list += [x['items'] for x in self.external_databases]
else:
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
dict_list += [x['items'] for x in self.external_databases if x['name'] in databases_to_search]
data = Dictionaries(*dict_list)
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
query = Query()
if markets_only:
market_filter = Filter("name", "has", "market for")
query.add(market_filter)
if location is not None:
location_filter = Filter("location", "is", location)
query.add(location_filter)
query.add(Filter("name", "ihas", search_term))
result = query(data)
return result
def database_to_SimaPro_csv(self):
"""
Export the lcopt model as a SimaPro csv file.
The file will be called "<ModelName>_database_export.csv"
"""
self.parameter_scan()
csv_args = {}
csv_args['processes'] = []
db = self.database['items']
product_filter = lambda x: db[x]['type'] == 'product'
process_filter = lambda x: db[x]['type'] == 'process'
processes = list(filter(process_filter, db))
products = list(filter(product_filter, db))
created_exchanges = []
project_input_params = []
project_calc_params = []
for k in processes:
item = db[k]
current = {}
current['name'] = item['name']
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = item['unit']
current['exchanges'] = []
process_params = []
production_filter = lambda x: x['type'] == 'production'
output_code = list(filter(production_filter, item['exchanges']))[0]['input'][1]
for e in item['exchanges']:
if e['type'] == 'technosphere':
this_exchange = {}
this_code = e['input'][1]
formatted_name = self.get_name(this_code)
this_exchange['formatted_name'] = formatted_name
param_key = (this_code, output_code)
#print(param_key)
#param_check = (formatted_name, item['name'])
this_param = self.parameter_map[param_key]
process_params.append(this_param)
this_exchange['amount'] = this_param
this_exchange['unit'] = self.get_unit(this_code)
current['exchanges'].append(this_exchange)
elif e['type'] == 'production':
this_code = e['input'][1]
name = self.get_name(this_code)
current['output_name'] = name
created_exchanges.append(name)
# process parameters
for p in process_params:
if self.params[p]['function'] is None:
project_input_params.append({'name': p, 'comment': self.params[p]['description']})
else:
project_calc_params.append({'name': p, 'comment': self.params[p]['description'], 'formula': self.params[p]['function']})
csv_args['processes'].append(current)
for k in products:
this_item = db[k]
this_name = this_item['name']
if this_item['name'] in created_exchanges:
#print ('{} already created'.format(this_name))
pass
else:
#print ('Need to create {}'.format(this_name))
current = {}
current['name'] = this_name
current['output_name'] = this_name
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = this_item['unit']
#current['exchanges'] = []
if 'ext_link' in this_item.keys():
ext_link = this_item['ext_link']
if ext_link[0] != self.database['name']:
db_filter = lambda x: x['name'] == ext_link[0]
extdb = list(filter(db_filter, self.external_databases))[0]['items']
ext_item = extdb[ext_link]
if ext_link[0] != self.biosphereName:
ref_prod = ext_item['reference product']
name = ext_item['name'].replace(" " + ref_prod, "")
location = ext_item['location']
system_model = "Alloc Def"
process_type = "U"
unit = unnormalise_unit(ext_item['unit'])
simaPro_name = "{} {{{}}}| {} | {}, {}".format(ref_prod.capitalize(), location, name, system_model, process_type)
#print ('{} has an external link to {}'.format(this_name, simaPro_name))
current['exchanges'] = [{'formatted_name': simaPro_name, 'unit': unit, 'amount': 1}]
else:
#print('{} has a biosphere exchange - need to sort this out'.format(this_name))
#print(ext_item)
unit = unnormalise_unit(ext_item['unit'])
formatted_name = ext_item['name']
if 'air' in ext_item['categories']:
current['air_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to air'.format(formatted_name)}]
elif 'water' in ext_item['categories']:
current['water_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to water'.format(formatted_name)}]
elif 'soil' in ext_item['categories']:
current['soil_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to soil'.format(formatted_name)}]
else:
print("{} has a biosphere exchange that isn't to air, water or soil".format(this_name))
print(ext_item)
else:
warnings.warn('{} has NO internal or external link - it is burden free'.format(this_name))
csv_args['processes'].append(current)
created_exchanges.append(this_name)
#print(csv_args)
#print(created_exchanges)
csv_args['project'] = {}
#NOTE - currently external parameters can only be constants
csv_args['project']['calculated_parameters'] = project_calc_params
#add the external parameters to the input parameter list
for p in self.ext_params:
project_input_params.append({'name': p['name'], 'comment': p['description'], 'default': p['default']})
csv_args['project']['input_parameters'] = project_input_params
#print (csv_args)
env = Environment(
loader=PackageLoader('lcopt', 'templates'),
)
filename = "{}_database_export.csv".format(self.name.replace(" ", "_"))
csv_template = env.get_template('export.csv')
output = csv_template.render(**csv_args)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
efn = os.path.join(base_dir, filename)
with open(efn, "w") as f:
f.write(output)
return efn
# << Flask >> #
def launch_interact(self): # pragma: no cover
"""
This is probably the most important method in the model - you use it to launch the GUI
"""
my_flask = FlaskSandbox(self)
my_flask.run()
# << Brightway2 >> #
def export_to_bw2(self):
"""
Export the lcopt model in the native brightway 2 format
returns name, database
To export the model and then import it into brightway::
name, db = model.export_to_bw2()
import brightway2 as bw
bw.projects.set_current('MyProject')
new_db = bw.Database(name)
new_db.write(db)
new_db.process()
"""
my_exporter = Bw2Exporter(self)
name, bw2db = my_exporter.export_to_bw2()
return name, bw2db
def analyse(self, demand_item, demand_item_code):
""" Run the analyis of the model
Doesn't return anything, but creates a new item ``LcoptModel.result_set`` containing the results
"""
my_analysis = Bw2Analysis(self)
self.result_set = my_analysis.run_analyses(demand_item, demand_item_code, **self.analysis_settings)
return True
# << Disclosures >> #
def export_disclosure(self, parameter_set=None, folder_path=None):
return export_disclosure(self, parameter_set, folder_path)
|
pjamesjoyce/lcopt | lcopt/model.py | LcoptModel.remove_input_link | python | def remove_input_link(self, process_code, input_code):
# 1. find correct process
# 2. find correct exchange
# 3. remove that exchange
# 4. check for parameter conflicts?
# 5. run parameter scan to rebuild matrices?
#print(process_code, input_code)
process = self.database['items'][process_code]
exchanges = process['exchanges']
initial_count = len(exchanges)
new_exchanges = [e for e in exchanges if e['input'] != input_code]
product_code = [e['input'] for e in exchanges if e['type'] == 'production'][0]
#print(product_code)
param_id = [k for k, v in self.params.items() if (v['from'] == input_code[1] and v['to'] == product_code[1])][0]
#print (param_id)
problem_functions = self.check_param_function_use(param_id)
if len(problem_functions) != 0:
#print('the following functions have been removed:')
for p in problem_functions:
self.params[p[0]]['function'] = None
#print(p)
process['exchanges'] = new_exchanges
del self.params[param_id]
self.parameter_scan()
return initial_count - len(new_exchanges) | Remove an input (technosphere or biosphere exchange) from a process, resolving all parameter issues | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/model.py#L396-L438 | [
"def check_param_function_use(self, param_id):\n\n current_functions = {k: x['function'] for k, x in self.params.items() if x['function'] is not None}\n\n problem_list = []\n\n for k, f in current_functions.items():\n if param_id in f:\n problem_list.append((k, f))\n\n return problem_l... | class LcoptModel(object):
"""
This is the base model class.
To create a new model, enter a name e.g. ``model = LcoptModel('My_Model')``
To load an existing model use the ``load`` option e.g. ``model = LcoptModel(load = 'My_Model')``
"""
def __init__(self, name=hex(random.getrandbits(128))[2:-1], load=None, useForwast=False, ecoinvent_version=None, ecoinvent_system_model=None, ei_username = None, ei_password = None, write_config=None, autosetup=True):
super(LcoptModel, self).__init__()
# name the instance
self.name = name
# set up the database, parameter dictionaries, the matrix and the names of the exchanges
self.database = {'items': OrderedDict(), 'name': '{}_Database'.format(self.name)}
self.external_databases = []
self.params = OrderedDict()
self.production_params = OrderedDict()
self.allocation_params = OrderedDict()
self.ext_params = []
self.matrix = None
self.names = None
self.parameter_sets = OrderedDict()
self.model_matrices = OrderedDict()
self.technosphere_matrices = OrderedDict()
self.leontif_matrices = OrderedDict()
self.parameter_map = {}
self.sandbox_positions = {}
# If ecoinvent isn't specified in the setup, look for a default in the config and fall back on default set in constants
if ecoinvent_version is None:
self.ecoinvent_version = str(storage.ecoinvent_version)
else:
self.ecoinvent_version = ecoinvent_version
if ecoinvent_system_model is None:
self.ecoinvent_system_model = storage.ecoinvent_system_model
else:
self.ecoinvent_system_model = ecoinvent_system_model
ei_name = "Ecoinvent{}_{}_{}".format(*self.ecoinvent_version.split("."), self.ecoinvent_system_model) #"Ecoinvent3_3_cutoff"
self.ecoinventName = ei_name # "Ecoinvent3_3_cutoff"
self.biosphereName = "biosphere3"
self.ecoinventFilename = ei_name # "ecoinvent3_3"
self.biosphereFilename = "biosphere3"
self.forwastName = "forwast"
self.forwastFilename = "forwast"
self.useForwast = useForwast
self.technosphere_databases = []
#if self.useForwast:
# self.technosphere_databases = [self.forwastName]
#else:
# self.technosphere_databases = [self.ecoinventName]
#self.biosphere_databases = [self.biosphereName]
self.biosphere_databases = []
# default settings for bw2 analysis
self.analysis_settings = {'amount': 1,
'methods': [('IPCC 2013', 'climate change', 'GWP 100a'), ('USEtox', 'human toxicity', 'total')],
#'top_processes': 10,
#'gt_cutoff': 0.01,
'pie_cutoff': 0.05
}
self.allow_allocation = False
# initialise with a blank result set
self.result_set = None
# set the save option, this defaults to the config value but should be overwritten on load for existing models
self.save_option = storage.save_option
if load is not None:
self.load(load)
# check if lcopt is set up, and if not, set it up
is_setup = self.lcopt_setup(ei_username=ei_username, ei_password=ei_password, write_config=write_config,
ecoinvent_version=self.ecoinvent_version, ecoinvent_system_model = self.ecoinvent_system_model, autosetup=autosetup)
if not is_setup:
warnings.warn('lcopt autosetup did not run')
asset_path = fix_mac_path_escapes(storage.search_index_dir) #os.path.join(os.path.dirname(os.path.realpath(__file__)), 'assets')
ecoinventPath = os.path.join(asset_path, self.ecoinventFilename)
biospherePath = os.path.join(asset_path, self.biosphereFilename)
forwastPath = os.path.join(asset_path, self.forwastFilename)
# Try and initialise the external databases if they're not there already
if self.useForwast:
if self.forwastName not in [x['name'] for x in self.external_databases]:
self.import_external_db(forwastPath, 'technosphere')
else:
if self.ecoinventName not in [x['name'] for x in self.external_databases]:
self.import_external_db(ecoinventPath, 'technosphere')
if self.biosphereName not in [x['name'] for x in self.external_databases]:
self.import_external_db(biospherePath, 'biosphere')
# create partial version of io functions
self.add_to_database = partial(add_to_specified_database, database=self.database)
self.get_exchange = partial(get_exchange_from_database, database=self.database)
self.exists_in_database = partial(exists_in_specific_database, database=self.database)
self.get_name = partial(get_exchange_name_from_database, database=self.database)
self.get_unit = partial(get_exchange_unit_from_database, database=self.database)
self.parameter_scan()
def lcopt_setup(self, ei_username, ei_password, write_config, ecoinvent_version, ecoinvent_system_model, autosetup):
if not autosetup:
return False
if storage.project_type == 'single':
if self.useForwast:
forwast_autosetup()
else:
self.base_project_name = storage.single_project_name
#if bw2_project_exists(self.base_project_name):
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=False)
elif not self.useForwast:
self.base_project_name = DEFAULT_PROJECT_STEM + self.ecoinventName
old_default = DEFAULT_PROJECT_STEM[:-1]
is_default = ecoinvent_version == "3.3" and ecoinvent_system_model == "cutoff"
if bw2_project_exists(self.base_project_name):
# make sure the search index file is there too
write_search_index(self.base_project_name, self.ecoinventName)
elif is_default and bw2_project_exists(old_default):
upgrade_old_default()
else:
print("Lcopt needs to be set up to integrate with brightway2 - this only needs to be done once per version/system model combo")
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=True)
else:
forwast_autosetup()
return True
def rename(self, newname):
"""change the name of the model (i.e. what the .lcopt file will be saved as)"""
self.name = newname
#def saveAs(self, filename):
# """save the instance as a pickle"""
# pickle.dump(self, open("{}.pickle".format(filename), "wb"))
def save(self):
"""save the instance as a .lcopt file"""
if self.save_option == 'curdir':
model_path = os.path.join(
os.getcwd(),
'{}.lcopt'.format(self.name)
)
else: # default to appdir
model_path = os.path.join(
storage.model_dir,
'{}.lcopt'.format(self.name)
)
model_path = fix_mac_path_escapes(model_path)
with open(model_path, 'wb') as model_file:
pickle.dump(self, model_file)
def load(self, filename):
"""load data from a saved .lcopt file"""
if filename[-6:] != ".lcopt":
filename += ".lcopt"
try:
savedInstance = pickle.load(open("{}".format(filename), "rb"))
except FileNotFoundError:
savedInstance = pickle.load(open(fix_mac_path_escapes(os.path.join(storage.model_dir, "{}".format(filename))), "rb"))
attributes = ['name',
'database',
'params',
'production_params',
'allocation_params',
'ext_params',
'matrix',
'names',
'parameter_sets',
'model_matrices',
'technosphere_matrices',
'leontif_matrices',
'external_databases',
'parameter_map',
'sandbox_positions',
'ecoinventName',
'biosphereName',
'forwastName',
'analysis_settings',
'technosphere_databases',
'biosphere_databases',
'result_set',
'evaluated_parameter_sets',
'useForwast',
'base_project_name',
'save_option',
'allow_allocation',
'ecoinvent_version',
'ecoinvent_system_model',
]
for attr in attributes:
if hasattr(savedInstance, attr):
setattr(self, attr, getattr(savedInstance, attr))
else:
pass
#print ("can't set {}".format(attr))
# use legacy save option if this is missing from the model
if not hasattr(savedInstance, 'save_option'):
setattr(self, 'save_option', LEGACY_SAVE_OPTION)
# figure out ecoinvent version and system model if these are missing from the model
if not hasattr(savedInstance, 'ecoinvent_version') or not hasattr(savedInstance, 'ecoinvent_system_model'):
parts = savedInstance.ecoinventName.split("_")
main_version = parts[0][-1]
sub_version = parts[1]
system_model = parts[2]
#print(parts)
setattr(self, 'ecoinvent_version', '{}.{}'.format(main_version, sub_version))
setattr(self, 'ecoinvent_system_model', system_model)
def create_product (self, name, location='GLO', unit='kg', **kwargs):
"""
Create a new product in the model database
"""
new_product = item_factory(name=name, location=location, unit=unit, type='product', **kwargs)
if not self.exists_in_database(new_product['code']):
self.add_to_database(new_product)
#print ('{} added to database'.format(name))
return self.get_exchange(name)
else:
#print('{} already exists in this database'.format(name))
return False
def create_process(self, name, exchanges, location='GLO', unit='kg'):
"""
Create a new process, including all new exchanges (in brightway2's exchange format) in the model database.
Exchanges must have at least a name, type and unit field
"""
found_exchanges = []
for e in exchanges:
exc_name = e.pop('name', None)
exc_type = e.pop('type', None)
this_exchange = self.get_exchange(exc_name)
if this_exchange is False:
my_unit = e.pop('unit', unit)
this_exchange = self.create_product(exc_name, location=location, unit=my_unit, **e)
found_exchanges.append(exchange_factory(this_exchange, exc_type, 1, 1, '{} exchange of {}'.format(exc_type, exc_name)))
new_process = item_factory(name=name, location=location, unit=unit, type='process', exchanges=found_exchanges)
self.add_to_database(new_process)
self.parameter_scan()
return True
def check_param_function_use(self, param_id):
current_functions = {k: x['function'] for k, x in self.params.items() if x['function'] is not None}
problem_list = []
for k, f in current_functions.items():
if param_id in f:
problem_list.append((k, f))
return problem_list
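The dependency check above reduces to a substring scan over every defined parameter function. A minimal stand-alone sketch of that logic, with a hand-built ``params`` dict standing in for the model's (the parameter ids and formulas here are illustrative):

```python
# Stand-alone sketch of check_param_function_use: find every parameter
# whose function string references a given parameter id.
params = {
    'p_0_1': {'function': None},
    'p_1_2': {'function': 'p_0_1 * 2'},
    'p_2_3': {'function': 'external_factor + 3'},
}

def check_param_function_use(params, param_id):
    # only parameters that actually have a formula can depend on param_id
    current_functions = {k: v['function'] for k, v in params.items()
                         if v['function'] is not None}
    return [(k, f) for k, f in current_functions.items() if param_id in f]

problems = check_param_function_use(params, 'p_0_1')
```

Note the substring match: an id like ``p_0_1`` would also match a formula containing ``p_0_10``, which is an inherent limitation of this approach.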
def remove_input_link(self, process_code, input_code):
"""
Remove an input (technosphere or biosphere exchange) from a process, resolving all parameter issues
"""
# 1. find correct process
# 2. find correct exchange
# 3. remove that exchange
# 4. check for parameter conflicts?
# 5. run parameter scan to rebuild matrices?
#print(process_code, input_code)
process = self.database['items'][process_code]
exchanges = process['exchanges']
initial_count = len(exchanges)
new_exchanges = [e for e in exchanges if e['input'] != input_code]
product_code = [e['input'] for e in exchanges if e['type'] == 'production'][0]
#print(product_code)
param_id = [k for k, v in self.params.items() if (v['from'] == input_code[1] and v['to'] == product_code[1])][0]
#print (param_id)
problem_functions = self.check_param_function_use(param_id)
if len(problem_functions) != 0:
#print('the following functions have been removed:')
for p in problem_functions:
self.params[p[0]]['function'] = None
#print(p)
process['exchanges'] = new_exchanges
del self.params[param_id]
self.parameter_scan()
return initial_count - len(new_exchanges)
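The core of ``remove_input_link`` is a filter over the process's exchange list plus a lookup of the production exchange. A self-contained sketch with plain dicts in place of the model database (the codes and amounts are made up):

```python
# Stand-alone sketch of the exchange-filtering step in remove_input_link.
exchanges = [
    {'input': ('db', 'water'),  'type': 'technosphere', 'amount': 2.0},
    {'input': ('db', 'power'),  'type': 'technosphere', 'amount': 0.5},
    {'input': ('db', 'widget'), 'type': 'production',   'amount': 1.0},
]
input_code = ('db', 'power')
initial_count = len(exchanges)
# drop the exchange being unlinked, keep everything else
new_exchanges = [e for e in exchanges if e['input'] != input_code]
# the production exchange identifies which product this process makes
product_code = [e['input'] for e in exchanges if e['type'] == 'production'][0]
removed = initial_count - len(new_exchanges)
```

The real method additionally looks up the matching parameter via ``from``/``to`` codes and clears any functions that referenced it before deleting it.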
def unlink_intermediate(self, sourceId, targetId):
"""
Remove a link between two processes
"""
source = self.database['items'][(self.database.get('name'), sourceId)]
target = self.database['items'][(self.database.get('name'), targetId)]
production_exchange = [x['input'] for x in source['exchanges'] if x['type'] == 'production'][0]
new_exchanges = [x for x in target['exchanges'] if x['input'] != production_exchange]
target['exchanges'] = new_exchanges
self.parameter_scan()
return True
def parameter_scan(self):
"""
Scan the database of the model instance to generate and expose parameters.
This is called by other functions when items are added/removed from the model, but can be run by itself if you like
"""
#self.parameter_map = {}
#self.params = OrderedDict()
cr_list = []
items = self.database['items']
#print(items)
for key in items.keys():
i = items[key]
#print(i['name'], i['type'])
if i['type'] == 'product':
cr_list.append(i['code'])
no_products = len(cr_list)
self.names = [self.get_name(x) for x in cr_list]
self.matrix = np.zeros((no_products, no_products))
for key in items.keys():
i = items[key]
if i['type'] == 'process':
inputs = []
#print(i['name'])
#print([(e['comment'], e['type']) for e in i['exchanges']])
for e in i['exchanges']:
if e['type'] == 'production':
col_code = cr_list.index(e['input'][1])
if not 'p_{}_production'.format(col_code) in self.production_params:
self.production_params['p_{}_production'.format(col_code)] = {
'function': None,
'description': 'Production parameter for {}'.format(self.get_name(e['input'][1])),
'unit': self.get_unit(e['input'][1]),
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'production',
}
if not 'p_{}_allocation'.format(col_code) in self.allocation_params:
self.allocation_params['p_{}_allocation'.format(col_code)] = {
'function': None,
'description': 'Allocation parameter for {}'.format(self.get_name(e['input'][1])),
'unit': "% (as decimal)",
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'allocation',
}
elif e['type'] == 'technosphere':
#print(cr_list)
row_code = cr_list.index(e['input'][1])
inputs.append((row_code, e['amount']))
for ip in inputs:
self.matrix[(ip[0], col_code)] = ip[1]
param_check_list = []
for c, column in enumerate(self.matrix.T):
for r, i in enumerate(column):
if i > 0:
p_from = cr_list[r]
p_to = cr_list[c]
coords = (r, c)
from_item_type = self.database['items'][(self.database['name'], p_from)]['lcopt_type']
#print('{}\t| {} --> {}'.format(coords, self.get_name(p_from), self.get_name(p_to)))
param_check_list.append('p_{}_{}'.format(coords[0], coords[1]))
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.params:
self.params['p_{}_{}'.format(coords[0], coords[1])] = {
'function': None,
'normalisation_parameter': '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1]),
'description': 'Input of {} to create {}'.format(self.get_name(p_from), self.get_name(p_to)),
'coords': coords,
'unit': self.get_unit(p_from),
'from': p_from,
'from_name': self.get_name(p_from),
'to': p_to,
'to_name': self.get_name(p_to),
'type': from_item_type,
}
#elif 'normalisation_parameter' not in self.params['p_{}_{}'.format(coords[0], coords[1])].keys():
#print("Adding normalisation_parameter to {}".format('p_{}_{}'.format(coords[0], coords[1])))
#self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
#print('p_{}_{} already exists'.format(coords[0],coords[1]))
else:
pass # print("SOMETHING WRONG HERE\n{}\n".format(self.params['p_{}_{}'.format(coords[0], coords[1])]))
# make sure the parameter is being normalised and allocated properly
self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.parameter_map:
self.parameter_map[(p_from, p_to)] = 'p_{}_{}'.format(coords[0], coords[1])
kill_list = []
for k in self.params.keys():
if k not in param_check_list:
#print("{} may be obsolete".format(k))
kill_list.append(k)
for p in kill_list:
#print("deleting parameter {}".format(p))
del self.params[p]
return True
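``parameter_scan`` builds a square matrix with one row and column per product, fills cell ``(input, output)`` with each technosphere amount, and names a parameter ``p_<row>_<col>`` for every non-zero cell. A minimal sketch of that indexing with plain lists (product names and amounts invented):

```python
# Stand-alone sketch of the matrix/parameter naming used by parameter_scan.
cr_list = ['water', 'power', 'widget']          # product codes, in order
process = {
    'production': 'widget',
    'technosphere': [('water', 2.0), ('power', 0.5)],
}
n = len(cr_list)
matrix = [[0.0] * n for _ in range(n)]
col = cr_list.index(process['production'])      # output product -> column
for name, amount in process['technosphere']:
    matrix[cr_list.index(name)][col] = amount   # input product -> row
param_ids = ['p_{}_{}'.format(r, c)
             for c in range(n) for r in range(n) if matrix[r][c] > 0]
```

In the real method each ``p_<row>_<col>`` entry also carries a normalisation parameter ``(p_<col>_production / p_<col>_allocation)`` tied to the output column.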
def generate_parameter_set_excel_file(self):
"""
Generate an excel file containing the parameter sets in a format you can import into SimaPro Developer.
The file will be called "ParameterSet_<ModelName>_input_file.xlsx"
"""
parameter_sets = self.parameter_sets
p_set = []
filename = "ParameterSet_{}_input_file.xlsx".format(self.name)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
p_set_name = os.path.join(base_dir, filename)
p = self.params
for k in p.keys():
if p[k]['function'] is None:
base_dict = {'id': k, 'name': p[k]['description'], 'unit': p[k]['unit']}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][k]
p_set.append(base_dict)
else:
pass
#print("{} is determined by a function".format(p[k]['description']))
for e in self.ext_params:
base_dict = {'id': '{}'.format(e['name']), 'type': 'external', 'name': e['description'], 'unit': ''}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][e['name']]
p_set.append(base_dict)
df = pd.DataFrame(p_set)
with pd.ExcelWriter(p_set_name, engine='xlsxwriter') as writer:
ps_columns = [k for k in parameter_sets.keys()]
#print (ps_columns)
my_columns = ['name', 'unit', 'id']
my_columns.extend(ps_columns)
#print (my_columns)
#print(df)
df.to_excel(writer, sheet_name=self.name, columns=my_columns, index=False, merge_cells=False)
return p_set_name
def add_parameter(self, param_name, description=None, default=0, unit=None):
"""
Add a global parameter to the database that can be accessed by functions
"""
if description is None:
description = "Parameter called {}".format(param_name)
if unit is None:
unit = "-"
name_check = lambda x: x['name'] == param_name
name_check_list = list(filter(name_check, self.ext_params))
if len(name_check_list) == 0:
self.ext_params.append({'name': param_name, 'description': description, 'default': default, 'unit': unit})
else:
print('{} already exists - choose a different name'.format(param_name))
def list_parameters_as_df(self):
"""
Only really useful when running from a jupyter notebook.
Lists the parameters in the model in a pandas dataframe
Columns: id, matrix coordinates, description, function
"""
to_df = []
for i, e in enumerate(self.ext_params):
row = {}
row['id'] = e['name']
row['coords'] = "n/a"
row['description'] = e['description']
row['function'] = "n/a"
to_df.append(row)
for pk in self.params:
p = self.params[pk]
row = {}
row['id'] = pk
row['coords'] = p['coords']
row['description'] = p['description']
row['function'] = p['function']
to_df.append(row)
df = pd.DataFrame(to_df)
return df
def import_external_db(self, db_file, db_type=None):
"""
Import an external database for use in lcopt
db_type must be one of ``technosphere`` or ``biosphere``
The best way to 'obtain' an external database is to 'export' it from brightway as a pickle file
e.g.::
import brightway2 as bw
bw.projects.set_current('MyModel')
db = bw.Database('MyDatabase')
db_as_dict = db.load()
import pickle
with open('MyExport.pickle', 'wb') as f:
pickle.dump(db_as_dict, f)
NOTE: The Ecoinvent cutoff 3.3 database and the full biosphere database are included in the lcopt model as standard - no need to import those
This can be useful if you have your own methods which require new biosphere flows that you want to analyse using lcopt
"""
db = pickle.load(open("{}.pickle".format(db_file), "rb"))
name = list(db.keys())[0][0]
new_db = {'items': db, 'name': name}
self.external_databases.append(new_db)
if db_type is None: # Assume its a technosphere database
db_type = 'technosphere'
if db_type == 'technosphere':
self.technosphere_databases.append(name)
elif db_type == 'biosphere':
self.biosphere_databases.append(name)
else:
raise ValueError("Database type must be 'technosphere' or 'biosphere'")
def search_databases(self, search_term, location=None, markets_only=False, databases_to_search=None, allow_internal=False):
"""
Search external databases linked to your lcopt model.
To restrict the search to particular databases (e.g. technosphere or biosphere only) use a list of database names in the ``databases_to_search`` variable
"""
dict_list = []
if allow_internal:
internal_dict = {}
for k, v in self.database['items'].items():
if v.get('lcopt_type') == 'intermediate':
internal_dict[k] = v
dict_list.append(internal_dict)
if databases_to_search is None:
#Search all of the databases available
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases])
dict_list += [x['items'] for x in self.external_databases]
else:
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
dict_list += [x['items'] for x in self.external_databases if x['name'] in databases_to_search]
data = Dictionaries(*dict_list)
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
query = Query()
if markets_only:
market_filter = Filter("name", "has", "market for")
query.add(market_filter)
if location is not None:
location_filter = Filter("location", "is", location)
query.add(location_filter)
query.add(Filter("name", "ihas", search_term))
result = query(data)
return result
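``search_databases`` composes bw2data ``Query``/``Filter`` objects; the same matching behaviour can be sketched with plain dict comprehensions. The database contents below are invented for illustration:

```python
# Plain-dict sketch of the name/location/market filters that
# search_databases builds with bw2data Query/Filter objects.
items = {
    ('db', 'a1'): {'name': 'market for electricity', 'location': 'GB'},
    ('db', 'a2'): {'name': 'electricity production', 'location': 'GB'},
    ('db', 'a3'): {'name': 'market for steel',       'location': 'DE'},
}

def search(items, term, location=None, markets_only=False):
    hits = {}
    for key, item in items.items():
        if term.lower() not in item['name'].lower():
            continue                          # "ihas" name filter
        if markets_only and 'market for' not in item['name']:
            continue                          # "has market for" filter
        if location is not None and item['location'] != location:
            continue                          # exact location filter
        hits[key] = item
    return hits

result = search(items, 'electricity', location='GB', markets_only=True)
```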
def database_to_SimaPro_csv(self):
"""
Export the lcopt model as a SimaPro csv file.
The file will be called "<ModelName>_database_export.csv"
"""
self.parameter_scan()
csv_args = {}
csv_args['processes'] = []
db = self.database['items']
product_filter = lambda x: db[x]['type'] == 'product'
process_filter = lambda x: db[x]['type'] == 'process'
processes = list(filter(process_filter, db))
products = list(filter(product_filter, db))
created_exchanges = []
project_input_params = []
project_calc_params = []
for k in processes:
item = db[k]
current = {}
current['name'] = item['name']
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = item['unit']
current['exchanges'] = []
process_params = []
production_filter = lambda x: x['type'] == 'production'
output_code = list(filter(production_filter, item['exchanges']))[0]['input'][1]
for e in item['exchanges']:
if e['type'] == 'technosphere':
this_exchange = {}
this_code = e['input'][1]
formatted_name = self.get_name(this_code)
this_exchange['formatted_name'] = formatted_name
param_key = (this_code, output_code)
#print(param_key)
#param_check = (formatted_name, item['name'])
this_param = self.parameter_map[param_key]
process_params.append(this_param)
this_exchange['amount'] = this_param
this_exchange['unit'] = self.get_unit(this_code)
current['exchanges'].append(this_exchange)
elif e['type'] == 'production':
this_code = e['input'][1]
name = self.get_name(this_code)
current['output_name'] = name
created_exchanges.append(name)
# process parameters
for p in process_params:
if self.params[p]['function'] is None:
project_input_params.append({'name': p, 'comment': self.params[p]['description']})
else:
project_calc_params.append({'name': p, 'comment': self.params[p]['description'], 'formula': self.params[p]['function']})
csv_args['processes'].append(current)
for k in products:
this_item = db[k]
this_name = this_item['name']
if this_item['name'] in created_exchanges:
#print ('{} already created'.format(this_name))
pass
else:
#print ('Need to create {}'.format(this_name))
current = {}
current['name'] = this_name
current['output_name'] = this_name
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = this_item['unit']
#current['exchanges'] = []
if 'ext_link' in this_item.keys():
ext_link = this_item['ext_link']
if ext_link[0] != self.database['name']:
db_filter = lambda x: x['name'] == ext_link[0]
extdb = list(filter(db_filter, self.external_databases))[0]['items']
ext_item = extdb[ext_link]
if ext_link[0] != self.biosphereName:
ref_prod = ext_item['reference product']
name = ext_item['name'].replace(" " + ref_prod, "")
location = ext_item['location']
system_model = "Alloc Def"
process_type = "U"
unit = unnormalise_unit(ext_item['unit'])
simaPro_name = "{} {{{}}}| {} | {}, {}".format(ref_prod.capitalize(), location, name, system_model, process_type)
#print ('{} has an external link to {}'.format(this_name, simaPro_name))
current['exchanges'] = [{'formatted_name': simaPro_name, 'unit': unit, 'amount': 1}]
else:
#print('{} has a biosphere exchange - need to sort this out'.format(this_name))
#print(ext_item)
unit = unnormalise_unit(ext_item['unit'])
formatted_name = ext_item['name']
if 'air' in ext_item['categories']:
current['air_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to air'.format(formatted_name)}]
elif 'water' in ext_item['categories']:
current['water_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to water'.format(formatted_name)}]
elif 'soil' in ext_item['categories']:
current['soil_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to soil'.format(formatted_name)}]
else:
print("{} has a biosphere exchange that isn't to air, water or soil".format(this_name))
print(ext_item)
else:
warnings.warn('{} has NO internal or external link - it is burden free'.format(this_name))
csv_args['processes'].append(current)
created_exchanges.append(this_name)
#print(csv_args)
#print(created_exchanges)
csv_args['project'] = {}
#NOTE - currently external parameters can only be constants
csv_args['project']['calculated_parameters'] = project_calc_params
#add the external parameters to the input parameter list
for p in self.ext_params:
project_input_params.append({'name': p['name'], 'comment': p['description'], 'default': p['default']})
csv_args['project']['input_parameters'] = project_input_params
#print (csv_args)
env = Environment(
loader=PackageLoader('lcopt', 'templates'),
)
filename = "{}_database_export.csv".format(self.name.replace(" ", "_"))
csv_template = env.get_template('export.csv')
output = csv_template.render(**csv_args)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
efn = os.path.join(base_dir, filename)
with open(efn, "w") as f:
f.write(output)
return efn
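The process ids written by this export follow a fixed 19-character pattern: the model name stripped of spaces, padded with ``X`` and truncated to 8 characters, followed by an 11-digit zero-padded random number. A deterministic sketch of that padding (the ``rand`` override is added here so the result is reproducible):

```python
# Sketch of the SimaPro process-id padding used in database_to_SimaPro_csv.
from random import randint

def simapro_id(model_name, rand=None):
    # 8-char name stub (space-stripped, 'X'-padded) + 11-digit number
    if rand is None:
        rand = randint(1, 99999999999)
    return (model_name.replace(" ", "") + "XXXXXXXX")[:8] \
        + ('00000000000' + str(rand))[-11:]

example = simapro_id("My Model", rand=42)
```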
# << Flask >> #
def launch_interact(self): # pragma: no cover
"""
This is probably the most important method in the model - you use it to launch the GUI
"""
my_flask = FlaskSandbox(self)
my_flask.run()
# << Brightway2 >> #
def export_to_bw2(self):
"""
Export the lcopt model in the native brightway 2 format
returns name, database
To export the model and then import it into brightway::
name, db = model.export_to_bw2()
import brightway2 as bw
bw.projects.set_current('MyProject')
new_db = bw.Database(name)
new_db.write(db)
new_db.process()
"""
my_exporter = Bw2Exporter(self)
name, bw2db = my_exporter.export_to_bw2()
return name, bw2db
def analyse(self, demand_item, demand_item_code):
""" Run the analyis of the model
Doesn't return anything, but creates a new item ``LcoptModel.result_set`` containing the results
"""
my_analysis = Bw2Analysis(self)
self.result_set = my_analysis.run_analyses(demand_item, demand_item_code, **self.analysis_settings)
return True
# << Disclosures >> #
def export_disclosure(self, parameter_set=None, folder_path=None):
return export_disclosure(self, parameter_set, folder_path)
|
pjamesjoyce/lcopt | lcopt/model.py | LcoptModel.unlink_intermediate | python | def unlink_intermediate(self, sourceId, targetId):
source = self.database['items'][(self.database.get('name'), sourceId)]
target = self.database['items'][(self.database.get('name'), targetId)]
production_exchange = [x['input'] for x in source['exchanges'] if x['type'] == 'production'][0]
new_exchanges = [x for x in target['exchanges'] if x['input'] != production_exchange]
target['exchanges'] = new_exchanges
self.parameter_scan()
return True | Remove a link between two processes | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/model.py#L440-L456 | [
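The unlink step above boils down to filtering the target's exchange list by the source's production output. A minimal standalone sketch with made-up item codes and plain dicts in place of the real model database:

```python
# Illustrative data only -- the real model stores brightway-style items
# keyed by (database_name, code); "src"/"tgt" are hypothetical codes.
db_name = "Example_Database"
items = {
    (db_name, "src"): {"exchanges": [
        {"input": (db_name, "src_product"), "type": "production"},
    ]},
    (db_name, "tgt"): {"exchanges": [
        {"input": (db_name, "src_product"), "type": "technosphere"},
        {"input": (db_name, "other"), "type": "technosphere"},
    ]},
}

source = items[(db_name, "src")]
target = items[(db_name, "tgt")]

# The production exchange of the source identifies the link to cut
production_exchange = [x["input"] for x in source["exchanges"]
                       if x["type"] == "production"][0]

# Drop every exchange in the target that consumes that product
target["exchanges"] = [x for x in target["exchanges"]
                       if x["input"] != production_exchange]
```

After this, only the exchange from "other" remains on the target; the real method then calls `parameter_scan()` to rebuild the parameter set.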
"def parameter_scan(self):\n \"\"\"\n Scan the database of the model instance to generate and expose parameters.\n\n This is called by other functions when items are added/removed from the model, but can be run by itself if you like\n \"\"\"\n\n #self.parameter_map = {}\n #self.params = OrderedDic... | class LcoptModel(object):
"""
This is the base model class.
To create a new model, enter a name e.g. ``model = LcoptModel('My_Model')``
To load an existing model use the ``load`` option e.g. ``model = LcoptModel(load = 'My_Model')``
"""
def __init__(self, name=hex(random.getrandbits(128))[2:-1], load=None, useForwast=False, ecoinvent_version=None, ecoinvent_system_model=None, ei_username = None, ei_password = None, write_config=None, autosetup=True):
super(LcoptModel, self).__init__()
# name the instance
self.name = name
# set up the database, parameter dictionaries, the matrix and the names of the exchanges
self.database = {'items': OrderedDict(), 'name': '{}_Database'.format(self.name)}
self.external_databases = []
self.params = OrderedDict()
self.production_params = OrderedDict()
self.allocation_params = OrderedDict()
self.ext_params = []
self.matrix = None
self.names = None
self.parameter_sets = OrderedDict()
self.model_matrices = OrderedDict()
self.technosphere_matrices = OrderedDict()
self.leontif_matrices = OrderedDict()
self.parameter_map = {}
self.sandbox_positions = {}
# If ecoinvent isn't specified in the setup, look for a default in the config and fall back on default set in constants
if ecoinvent_version is None:
self.ecoinvent_version = str(storage.ecoinvent_version)
else:
self.ecoinvent_version = ecoinvent_version
if ecoinvent_system_model is None:
self.ecoinvent_system_model = storage.ecoinvent_system_model
else:
self.ecoinvent_system_model = ecoinvent_system_model
ei_name = "Ecoinvent{}_{}_{}".format(*self.ecoinvent_version.split("."), self.ecoinvent_system_model) #"Ecoinvent3_3_cutoff"
self.ecoinventName = ei_name # "Ecoinvent3_3_cutoff"
self.biosphereName = "biosphere3"
self.ecoinventFilename = ei_name # "ecoinvent3_3"
self.biosphereFilename = "biosphere3"
self.forwastName = "forwast"
self.forwastFilename = "forwast"
self.useForwast = useForwast
self.technosphere_databases = []
#if self.useForwast:
# self.technosphere_databases = [self.forwastName]
#else:
# self.technosphere_databases = [self.ecoinventName]
#self.biosphere_databases = [self.biosphereName]
self.biosphere_databases = []
# default settings for bw2 analysis
self.analysis_settings = {'amount': 1,
'methods': [('IPCC 2013', 'climate change', 'GWP 100a'), ('USEtox', 'human toxicity', 'total')],
#'top_processes': 10,
#'gt_cutoff': 0.01,
'pie_cutoff': 0.05
}
self.allow_allocation = False
# initialise with a blank result set
self.result_set = None
# set the save option, this defaults to the config value but should be overwritten on load for existing models
self.save_option = storage.save_option
if load is not None:
self.load(load)
# check if lcopt is set up, and if not, set it up
is_setup = self.lcopt_setup(ei_username=ei_username, ei_password=ei_password, write_config=write_config,
ecoinvent_version=self.ecoinvent_version, ecoinvent_system_model = self.ecoinvent_system_model, autosetup=autosetup)
if not is_setup:
warnings.warn('lcopt autosetup did not run')
asset_path = fix_mac_path_escapes(storage.search_index_dir) #os.path.join(os.path.dirname(os.path.realpath(__file__)), 'assets')
ecoinventPath = os.path.join(asset_path, self.ecoinventFilename)
biospherePath = os.path.join(asset_path, self.biosphereFilename)
forwastPath = os.path.join(asset_path, self.forwastFilename)
# Try and initialise the external databases if they're not there already
if self.useForwast:
if self.forwastName not in [x['name'] for x in self.external_databases]:
self.import_external_db(forwastPath, 'technosphere')
else:
if self.ecoinventName not in [x['name'] for x in self.external_databases]:
self.import_external_db(ecoinventPath, 'technosphere')
if self.biosphereName not in [x['name'] for x in self.external_databases]:
self.import_external_db(biospherePath, 'biosphere')
# create partial version of io functions
self.add_to_database = partial(add_to_specified_database, database=self.database)
self.get_exchange = partial(get_exchange_from_database, database=self.database)
self.exists_in_database = partial(exists_in_specific_database, database=self.database)
self.get_name = partial(get_exchange_name_from_database, database=self.database)
self.get_unit = partial(get_exchange_unit_from_database, database=self.database)
self.parameter_scan()
def lcopt_setup(self, ei_username, ei_password, write_config, ecoinvent_version, ecoinvent_system_model, autosetup):
if not autosetup:
return False
if storage.project_type == 'single':
if self.useForwast:
forwast_autosetup()
else:
self.base_project_name = storage.single_project_name
#if bw2_project_exists(self.base_project_name):
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=False)
elif not self.useForwast:
self.base_project_name = DEFAULT_PROJECT_STEM + self.ecoinventName
old_default = DEFAULT_PROJECT_STEM[:-1]
is_default = ecoinvent_version == "3.3" and ecoinvent_system_model == "cutoff"
if bw2_project_exists(self.base_project_name):
# make sure the search index file is there too
write_search_index(self.base_project_name, self.ecoinventName)
elif is_default and bw2_project_exists(old_default):
upgrade_old_default()
else:
print("Lcopt needs to be set up to integrate with brightway2 - this only needs to be done once per version/system model combo")
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=True)
else:
forwast_autosetup()
return True
def rename(self, newname):
"""change the name of the model (i.e. what the .lcopt file will be saved as)"""
self.name = newname
#def saveAs(self, filename):
# """save the instance as a pickle"""
# pickle.dump(self, open("{}.pickle".format(filename), "wb"))
def save(self):
"""save the instance as a .lcopt file"""
if self.save_option == 'curdir':
model_path = os.path.join(
os.getcwd(),
'{}.lcopt'.format(self.name)
)
else: # default to appdir
model_path = os.path.join(
storage.model_dir,
'{}.lcopt'.format(self.name)
)
model_path = fix_mac_path_escapes(model_path)
with open(model_path, 'wb') as model_file:
pickle.dump(self, model_file)
def load(self, filename):
"""load data from a saved .lcopt file"""
if filename[-6:] != ".lcopt":
filename += ".lcopt"
try:
savedInstance = pickle.load(open("{}".format(filename), "rb"))
except FileNotFoundError:
savedInstance = pickle.load(open(fix_mac_path_escapes(os.path.join(storage.model_dir, "{}".format(filename))), "rb"))
attributes = ['name',
'database',
'params',
'production_params',
'allocation_params',
'ext_params',
'matrix',
'names',
'parameter_sets',
'model_matrices',
'technosphere_matrices',
'leontif_matrices',
'external_databases',
'parameter_map',
'sandbox_positions',
'ecoinventName',
'biosphereName',
'forwastName',
'analysis_settings',
'technosphere_databases',
'biosphere_databases',
'result_set',
'evaluated_parameter_sets',
'useForwast',
'base_project_name',
'save_option',
'allow_allocation',
'ecoinvent_version',
'ecoinvent_system_model',
]
for attr in attributes:
if hasattr(savedInstance, attr):
setattr(self, attr, getattr(savedInstance, attr))
else:
pass
#print ("can't set {}".format(attr))
# use legacy save option if this is missing from the model
if not hasattr(savedInstance, 'save_option'):
setattr(self, 'save_option', LEGACY_SAVE_OPTION)
# figure out ecoinvent version and system model if these are missing from the model
if not hasattr(savedInstance, 'ecoinvent_version') or not hasattr(savedInstance, 'ecoinvent_system_model'):
parts = savedInstance.ecoinventName.split("_")
main_version = parts[0][-1]
sub_version = parts[1]
system_model = parts[2]
#print(parts)
setattr(self, 'ecoinvent_version', '{}.{}'.format(main_version, sub_version))
setattr(self, 'ecoinvent_system_model', system_model)
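The legacy fallback above recovers the version and system model by splitting the saved `ecoinventName` on underscores; sketched standalone:

```python
# Recover version and system model from a name like "Ecoinvent3_3_cutoff"
# (same splitting logic as the load() fallback above).
ecoinventName = "Ecoinvent3_3_cutoff"
parts = ecoinventName.split("_")   # ["Ecoinvent3", "3", "cutoff"]
main_version = parts[0][-1]        # last character of "Ecoinvent3" -> "3"
sub_version = parts[1]             # "3"
system_model = parts[2]            # "cutoff"
ecoinvent_version = "{}.{}".format(main_version, sub_version)
```

Note this only works for single-digit major versions, since it takes the last character of the first segment.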
def create_product (self, name, location='GLO', unit='kg', **kwargs):
"""
Create a new product in the model database
"""
new_product = item_factory(name=name, location=location, unit=unit, type='product', **kwargs)
if not self.exists_in_database(new_product['code']):
self.add_to_database(new_product)
#print ('{} added to database'.format(name))
return self.get_exchange(name)
else:
#print('{} already exists in this database'.format(name))
return False
def create_process(self, name, exchanges, location='GLO', unit='kg'):
"""
Create a new process, including all new exchanges (in brightway2's exchange format) in the model database.
Exchanges must have at least a name, type and unit field
"""
found_exchanges = []
for e in exchanges:
exc_name = e.pop('name', None)
exc_type = e.pop('type', None)
this_exchange = self.get_exchange(exc_name)
if this_exchange is False:
my_unit = e.pop('unit', unit)
this_exchange = self.create_product(exc_name, location=location, unit=my_unit, **e)
found_exchanges.append(exchange_factory(this_exchange, exc_type, 1, 1, '{} exchange of {}'.format(exc_type, exc_name)))
new_process = item_factory(name=name, location=location, unit=unit, type='process', exchanges=found_exchanges)
self.add_to_database(new_process)
self.parameter_scan()
return True
def check_param_function_use(self, param_id):
current_functions = {k: x['function'] for k, x in self.params.items() if x['function'] is not None}
problem_list = []
for k, f in current_functions.items():
if param_id in f:
problem_list.append((k, f))
return problem_list
def remove_input_link(self, process_code, input_code):
"""
Remove an input (technosphere or biosphere exchange) from a process, resolving all parameter issues
"""
# 1. find correct process
# 2. find correct exchange
# 3. remove that exchange
# 4. check for parameter conflicts?
# 4. run parameter scan to rebuild matrices?
#print(process_code, input_code)
process = self.database['items'][process_code]
exchanges = process['exchanges']
initial_count = len(exchanges)
new_exchanges = [e for e in exchanges if e['input'] != input_code]
product_code = [e['input'] for e in exchanges if e['type'] == 'production'][0]
#print(product_code)
param_id = [k for k, v in self.params.items() if (v['from'] == input_code[1] and v['to'] == product_code[1])][0]
#print (param_id)
problem_functions = self.check_param_function_use(param_id)
if len(problem_functions) != 0:
#print('the following functions have been removed:')
for p in problem_functions:
self.params[p[0]]['function'] = None
#print(p)
process['exchanges'] = new_exchanges
del self.params[param_id]
self.parameter_scan()
return initial_count - len(new_exchanges)
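The parameter bookkeeping in `remove_input_link` (find the parameter for the severed link, clear any functions that reference it, then delete it) can be sketched on its own with hypothetical parameter ids:

```python
# Made-up parameter table; "in_code"/"prod_code" stand in for real
# exchange codes from the model database.
params = {
    "p_0_1": {"from": "in_code", "to": "prod_code", "function": None},
    "p_2_1": {"from": "other", "to": "prod_code", "function": "p_0_1 * 2"},
}

# The parameter modelling the link being removed
param_id = [k for k, v in params.items()
            if v["from"] == "in_code" and v["to"] == "prod_code"][0]

# Any function mentioning that parameter would break, so it is cleared
problem_list = [(k, v["function"]) for k, v in params.items()
                if v["function"] is not None and param_id in v["function"]]
for k, _ in problem_list:
    params[k]["function"] = None

del params[param_id]
```

The substring test (`param_id in f`) mirrors the `check_param_function_use` helper above; it is deliberately coarse and would also match longer ids sharing the same prefix.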
def unlink_intermediate(self, sourceId, targetId):
"""
Remove a link between two processes
"""
source = self.database['items'][(self.database.get('name'), sourceId)]
target = self.database['items'][(self.database.get('name'), targetId)]
production_exchange = [x['input'] for x in source['exchanges'] if x['type'] == 'production'][0]
new_exchanges = [x for x in target['exchanges'] if x['input'] != production_exchange]
target['exchanges'] = new_exchanges
self.parameter_scan()
return True
def parameter_scan(self):
"""
Scan the database of the model instance to generate and expose parameters.
This is called by other functions when items are added/removed from the model, but can be run by itself if you like
"""
#self.parameter_map = {}
#self.params = OrderedDict()
cr_list = []
items = self.database['items']
#print(items)
for key in items.keys():
i = items[key]
#print(i['name'], i['type'])
if i['type'] == 'product':
cr_list.append(i['code'])
no_products = len(cr_list)
self.names = [self.get_name(x) for x in cr_list]
self.matrix = np.zeros((no_products, no_products))
for key in items.keys():
i = items[key]
if i['type'] == 'process':
inputs = []
#print(i['name'])
#print([(e['comment'], e['type']) for e in i['exchanges']])
for e in i['exchanges']:
if e['type'] == 'production':
col_code = cr_list.index(e['input'][1])
if not 'p_{}_production'.format(col_code) in self.production_params:
self.production_params['p_{}_production'.format(col_code)] = {
'function': None,
'description': 'Production parameter for {}'.format(self.get_name(e['input'][1])),
'unit': self.get_unit(e['input'][1]),
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'production',
}
if not 'p_{}_allocation'.format(col_code) in self.allocation_params:
self.allocation_params['p_{}_allocation'.format(col_code)] = {
'function': None,
'description': 'Allocation parameter for {}'.format(self.get_name(e['input'][1])),
'unit': "% (as decimal)",
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'allocation',
}
elif e['type'] == 'technosphere':
#print(cr_list)
row_code = cr_list.index(e['input'][1])
inputs.append((row_code, e['amount']))
for ip in inputs:
self.matrix[(ip[0], col_code)] = ip[1]
param_check_list = []
for c, column in enumerate(self.matrix.T):
for r, i in enumerate(column):
if i > 0:
p_from = cr_list[r]
p_to = cr_list[c]
coords = (r, c)
from_item_type = self.database['items'][(self.database['name'], p_from)]['lcopt_type']
#print('{}\t| {} --> {}'.format(coords, self.get_name(p_from), self.get_name(p_to)))
param_check_list.append('p_{}_{}'.format(coords[0], coords[1]))
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.params:
self.params['p_{}_{}'.format(coords[0], coords[1])] = {
'function': None,
'normalisation_parameter': '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1]),
'description': 'Input of {} to create {}'.format(self.get_name(p_from), self.get_name(p_to)),
'coords': coords,
'unit': self.get_unit(p_from),
'from': p_from,
'from_name': self.get_name(p_from),
'to': p_to,
'to_name': self.get_name(p_to),
'type': from_item_type,
}
#elif 'normalisation_parameter' not in self.params['p_{}_{}'.format(coords[0], coords[1])].keys():
#print("Adding normalisation_parameter to {}".format('p_{}_{}'.format(coords[0], coords[1])))
#self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
#print('p_{}_{} already exists'.format(coords[0],coords[1]))
else:
pass # print("SOMETHING WRONG HERE\n{}\n".format(self.params['p_{}_{}'.format(coords[0], coords[1])]))
# make sure the parameter is being normalised and allocated properly
self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.parameter_map:
self.parameter_map[(p_from, p_to)] = 'p_{}_{}'.format(coords[0], coords[1])
kill_list = []
for k in self.params.keys():
if k not in param_check_list:
#print("{} may be obsolete".format(k))
kill_list.append(k)
for p in kill_list:
#print("deleting parameter {}".format(p))
del self.params[p]
return True
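The naming scheme used by `parameter_scan` is worth spelling out: every non-zero technosphere entry at matrix position (row, col) becomes a parameter `p_<row>_<col>`, normalised by the consuming column's production and allocation parameters. A plain-list sketch of that scan (no numpy, illustrative matrix):

```python
# 3x3 technosphere amounts; columns are consuming processes,
# rows are the products they take as inputs.
matrix = [
    [0, 2, 0],
    [0, 0, 1],
    [0, 0, 0],
]
param_ids = []
for c in range(len(matrix)):          # iterate columns (as matrix.T does)
    for r in range(len(matrix)):
        if matrix[r][c] > 0:
            pid = "p_{}_{}".format(r, c)
            norm = "(p_{}_production / p_{}_allocation)".format(c, c)
            param_ids.append((pid, norm))
```

So an input of product 0 into process 1 yields `p_0_1`, normalised by `p_1_production` and `p_1_allocation`.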
def generate_parameter_set_excel_file(self):
"""
Generate an excel file containing the parameter sets in a format you can import into SimaPro Developer.
The file will be called "ParameterSet_<ModelName>_input_file.xlsx"
"""
parameter_sets = self.parameter_sets
p_set = []
filename = "ParameterSet_{}_input_file.xlsx".format(self.name)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
p_set_name = os.path.join(base_dir, filename)
p = self.params
for k in p.keys():
if p[k]['function'] is None:
base_dict = {'id': k, 'name': p[k]['description'], 'unit': p[k]['unit']}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][k]
p_set.append(base_dict)
else:
pass
#print("{} is determined by a function".format(p[k]['description']))
for e in self.ext_params:
base_dict = {'id': '{}'.format(e['name']), 'type': 'external', 'name': e['description'], 'unit': ''}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][e['name']]
p_set.append(base_dict)
df = pd.DataFrame(p_set)
with pd.ExcelWriter(p_set_name, engine='xlsxwriter') as writer:
ps_columns = [k for k in parameter_sets.keys()]
#print (ps_columns)
my_columns = ['name', 'unit', 'id']
my_columns.extend(ps_columns)
#print (my_columns)
#print(df)
df.to_excel(writer, sheet_name=self.name, columns=my_columns, index=False, merge_cells=False)
return p_set_name
def add_parameter(self, param_name, description=None, default=0, unit=None):
"""
Add a global parameter to the database that can be accessed by functions
"""
if description is None:
description = "Parameter called {}".format(param_name)
if unit is None:
unit = "-"
name_check = lambda x: x['name'] == param_name
name_check_list = list(filter(name_check, self.ext_params))
if len(name_check_list) == 0:
self.ext_params.append({'name': param_name, 'description': description, 'default': default, 'unit': unit})
else:
print('{} already exists - choose a different name'.format(param_name))
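The duplicate-name guard in `add_parameter` can be sketched standalone; note the real method prints and returns `None` on a clash, whereas this illustrative version returns `True`/`False` so the outcome is testable:

```python
# Hypothetical external-parameter list, as stored on the model.
ext_params = [{"name": "efficiency", "description": "Conversion efficiency",
               "default": 0.9, "unit": "-"}]

def add_parameter(param_name, description=None, default=0, unit="-"):
    if description is None:
        description = "Parameter called {}".format(param_name)
    # only append if no existing parameter shares the name
    if not any(p["name"] == param_name for p in ext_params):
        ext_params.append({"name": param_name, "description": description,
                           "default": default, "unit": unit})
        return True
    return False
```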
def list_parameters_as_df(self):
"""
Only really useful when running from a Jupyter notebook.
Lists the parameters in the model as a pandas DataFrame
Columns: id, matrix coordinates, description, function
"""
to_df = []
for i, e in enumerate(self.ext_params):
row = {}
row['id'] = e['name']
row['coords'] = "n/a"
row['description'] = e['description']
row['function'] = "n/a"
to_df.append(row)
for pk in self.params:
p = self.params[pk]
row = {}
row['id'] = pk
row['coords'] = p['coords']
row['description'] = p['description']
row['function'] = p['function']
to_df.append(row)
df = pd.DataFrame(to_df)
return df
def import_external_db(self, db_file, db_type=None):
"""
Import an external database for use in lcopt
db_type must be one of ``technosphere`` or ``biosphere``
The best way to 'obtain' an external database is to 'export' it from brightway as a pickle file
e.g.::
import brightway2 as bw
bw.projects.set_current('MyModel')
db = bw.Database('MyDatabase')
db_as_dict = db.load()
import pickle
with open('MyExport.pickle', 'wb') as f:
pickle.dump(db_as_dict, f)
NOTE: The Ecoinvent cutoff 3.3 database and the full biosphere database are included in the lcopt model as standard - no need to import those
This can be useful if you have your own methods which require new biosphere flows that you want to analyse using lcopt
"""
db = pickle.load(open("{}.pickle".format(db_file), "rb"))
name = list(db.keys())[0][0]
new_db = {'items': db, 'name': name}
self.external_databases.append(new_db)
if db_type is None: # Assume its a technosphere database
db_type = 'technosphere'
if db_type == 'technosphere':
self.technosphere_databases.append(name)
elif db_type == 'biosphere':
self.biosphere_databases.append(name)
else:
raise ValueError("db_type must be 'technosphere' or 'biosphere'")
def search_databases(self, search_term, location=None, markets_only=False, databases_to_search=None, allow_internal=False):
"""
Search external databases linked to your lcopt model.
To restrict the search to particular databases (e.g. technosphere or biosphere only) use a list of database names in the ``database_to_search`` variable
"""
dict_list = []
if allow_internal:
internal_dict = {}
for k, v in self.database['items'].items():
if v.get('lcopt_type') == 'intermediate':
internal_dict[k] = v
dict_list.append(internal_dict)
if databases_to_search is None:
#Search all of the databases available
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases])
dict_list += [x['items'] for x in self.external_databases]
else:
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
dict_list += [x['items'] for x in self.external_databases if x['name'] in databases_to_search]
data = Dictionaries(*dict_list)
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
query = Query()
if markets_only:
market_filter = Filter("name", "has", "market for")
query.add(market_filter)
if location is not None:
location_filter = Filter("location", "is", location)
query.add(location_filter)
query.add(Filter("name", "ihas", search_term))
result = query(data)
return result
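The query built above chains three filters: a case-insensitive substring match on the name (`ihas`), an optional "market for" restriction, and an optional exact location match. A stripped-down equivalent over made-up records, without the bw2 query machinery:

```python
# Hypothetical database records standing in for ecoinvent items.
records = [
    {"name": "market for electricity", "location": "GLO"},
    {"name": "Electricity, hard coal", "location": "DE"},
    {"name": "steel production", "location": "GLO"},
]

def search(term, location=None, markets_only=False):
    # "ihas": case-insensitive substring match on the name
    hits = [r for r in records if term.lower() in r["name"].lower()]
    if markets_only:
        hits = [r for r in hits if "market for" in r["name"]]
    if location is not None:
        hits = [r for r in hits if r["location"] == location]
    return hits
```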
def database_to_SimaPro_csv(self):
"""
Export the lcopt model as a SimaPro csv file.
The file will be called "<ModelName>_database_export.csv"
"""
self.parameter_scan()
csv_args = {}
csv_args['processes'] = []
db = self.database['items']
product_filter = lambda x: db[x]['type'] == 'product'
process_filter = lambda x: db[x]['type'] == 'process'
processes = list(filter(process_filter, db))
products = list(filter(product_filter, db))
created_exchanges = []
project_input_params = []
project_calc_params = []
for k in processes:
item = db[k]
current = {}
current['name'] = item['name']
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = item['unit']
current['exchanges'] = []
process_params = []
production_filter = lambda x: x['type'] == 'production'
output_code = list(filter(production_filter, item['exchanges']))[0]['input'][1]
for e in item['exchanges']:
if e['type'] == 'technosphere':
this_exchange = {}
this_code = e['input'][1]
formatted_name = self.get_name(this_code)
this_exchange['formatted_name'] = formatted_name
param_key = (this_code, output_code)
#print(param_key)
#param_check = (formatted_name, item['name'])
this_param = self.parameter_map[param_key]
process_params.append(this_param)
this_exchange['amount'] = this_param
this_exchange['unit'] = self.get_unit(this_code)
current['exchanges'].append(this_exchange)
elif e['type'] == 'production':
this_code = e['input'][1]
name = self.get_name(this_code)
current['output_name'] = name
created_exchanges.append(name)
# process parameters
for p in process_params:
if self.params[p]['function'] is None:
project_input_params.append({'name': p, 'comment': self.params[p]['description']})
else:
project_calc_params.append({'name': p, 'comment': self.params[p]['description'], 'formula': self.params[p]['function']})
csv_args['processes'].append(current)
for k in products:
this_item = db[k]
this_name = this_item['name']
if this_item['name'] in created_exchanges:
#print ('{} already created'.format(this_name))
pass
else:
#print ('Need to create {}'.format(this_name))
current = {}
current['name'] = this_name
current['output_name'] = this_name
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = this_item['unit']
#current['exchanges'] = []
if 'ext_link' in this_item.keys():
ext_link = this_item['ext_link']
if ext_link[0] != self.database['name']:
db_filter = lambda x: x['name'] == ext_link[0]
extdb = list(filter(db_filter, self.external_databases))[0]['items']
ext_item = extdb[ext_link]
if ext_link[0] != self.biosphereName:
ref_prod = ext_item['reference product']
name = ext_item['name'].replace(" " + ref_prod, "")
location = ext_item['location']
system_model = "Alloc Def"
process_type = "U"
unit = unnormalise_unit(ext_item['unit'])
simaPro_name = "{} {{{}}}| {} | {}, {}".format(ref_prod.capitalize(), location, name, system_model, process_type)
#print ('{} has an external link to {}'.format(this_name, simaPro_name))
current['exchanges'] = [{'formatted_name': simaPro_name, 'unit': unit, 'amount': 1}]
else:
#print('{} has a biosphere exchange - need to sort this out'.format(this_name))
#print(ext_item)
unit = unnormalise_unit(ext_item['unit'])
formatted_name = ext_item['name']
if 'air' in ext_item['categories']:
current['air_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to air'.format(formatted_name)}]
elif 'water' in ext_item['categories']:
current['water_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to water'.format(formatted_name)}]
elif 'soil' in ext_item['categories']:
current['soil_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to soil'.format(formatted_name)}]
else:
print("{} has a biosphere exchange that isn't to air, water or soil".format(this_name))
print(ext_item)
else:
warnings.warn('{} has NO internal or external link - it is burden free'.format(this_name))
csv_args['processes'].append(current)
created_exchanges.append(this_name)
#print(csv_args)
#print(created_exchanges)
csv_args['project'] = {}
#NOTE - currently external parameters can only be constants
csv_args['project']['calculated_parameters'] = project_calc_params
#add the external parameters to the input parameter list
for p in self.ext_params:
project_input_params.append({'name': p['name'], 'comment': p['description'], 'default': p['default']})
csv_args['project']['input_parameters'] = project_input_params
#print (csv_args)
env = Environment(
loader=PackageLoader('lcopt', 'templates'),
)
filename = "{}_database_export.csv".format(self.name.replace(" ", "_"))
csv_template = env.get_template('export.csv')
output = csv_template.render(**csv_args)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
efn = os.path.join(base_dir, filename)
with open(efn, "w") as f:
f.write(output)
return efn
# << Flask >> #
def launch_interact(self): # pragma: no cover
"""
This is probably the most important method in the model - you use it to launch the GUI
"""
my_flask = FlaskSandbox(self)
my_flask.run()
# << Brightway2 >> #
def export_to_bw2(self):
"""
Export the lcopt model in the native brightway 2 format
returns name, database
to use it to export, then import to brightway::
name, db = model.export_to_bw2()
import brightway2 as bw
bw.projects.set_current('MyProject')
new_db = bw.Database(name)
new_db.write(db)
new_db.process()
"""
my_exporter = Bw2Exporter(self)
name, bw2db = my_exporter.export_to_bw2()
return name, bw2db
def analyse(self, demand_item, demand_item_code):
""" Run the analyis of the model
Doesn't return anything, but creates a new item ``LcoptModel.result_set`` containing the results
"""
my_analysis = Bw2Analysis(self)
self.result_set = my_analysis.run_analyses(demand_item, demand_item_code, **self.analysis_settings)
return True
# << Disclosures >> #
def export_disclosure(self, parameter_set=None, folder_path=None):
return export_disclosure(self, parameter_set, folder_path)
|
pjamesjoyce/lcopt | lcopt/model.py | LcoptModel.parameter_scan | python | def parameter_scan(self):
#self.parameter_map = {}
#self.params = OrderedDict()
cr_list = []
items = self.database['items']
#print(items)
for key in items.keys():
i = items[key]
#print(i['name'], i['type'])
if i['type'] == 'product':
cr_list.append(i['code'])
no_products = len(cr_list)
self.names = [self.get_name(x) for x in cr_list]
self.matrix = np.zeros((no_products, no_products))
for key in items.keys():
i = items[key]
if i['type'] == 'process':
inputs = []
#print(i['name'])
#print([(e['comment'], e['type']) for e in i['exchanges']])
for e in i['exchanges']:
if e['type'] == 'production':
col_code = cr_list.index(e['input'][1])
if not 'p_{}_production'.format(col_code) in self.production_params:
self.production_params['p_{}_production'.format(col_code)] = {
'function': None,
'description': 'Production parameter for {}'.format(self.get_name(e['input'][1])),
'unit': self.get_unit(e['input'][1]),
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'production',
}
if not 'p_{}_allocation'.format(col_code) in self.allocation_params:
self.allocation_params['p_{}_allocation'.format(col_code)] = {
'function': None,
'description': 'Allocation parameter for {}'.format(self.get_name(e['input'][1])),
'unit': "% (as decimal)",
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'allocation',
}
elif e['type'] == 'technosphere':
#print(cr_list)
row_code = cr_list.index(e['input'][1])
inputs.append((row_code, e['amount']))
for ip in inputs:
self.matrix[(ip[0], col_code)] = ip[1]
param_check_list = []
for c, column in enumerate(self.matrix.T):
for r, i in enumerate(column):
if i > 0:
p_from = cr_list[r]
p_to = cr_list[c]
coords = (r, c)
from_item_type = self.database['items'][(self.database['name'], p_from)]['lcopt_type']
#print('{}\t| {} --> {}'.format(coords, self.get_name(p_from), self.get_name(p_to)))
param_check_list.append('p_{}_{}'.format(coords[0], coords[1]))
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.params:
self.params['p_{}_{}'.format(coords[0], coords[1])] = {
'function': None,
'normalisation_parameter': '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1]),
'description': 'Input of {} to create {}'.format(self.get_name(p_from), self.get_name(p_to)),
'coords': coords,
'unit': self.get_unit(p_from),
'from': p_from,
'from_name': self.get_name(p_from),
'to': p_to,
'to_name': self.get_name(p_to),
'type': from_item_type,
}
#elif 'normalisation_parameter' not in self.params['p_{}_{}'.format(coords[0], coords[1])].keys():
#print("Adding normalisation_parameter to {}".format('p_{}_{}'.format(coords[0], coords[1])))
#self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
#print('p_{}_{} already exists'.format(coords[0],coords[1]))
else:
pass # print("SOMETHING WRONG HERE\n{}\n".format(self.params['p_{}_{}'.format(coords[0], coords[1])]))
# make sure the parameter is being normalised and allocated properly
self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.parameter_map:
self.parameter_map[(p_from, p_to)] = 'p_{}_{}'.format(coords[0], coords[1])
kill_list = []
for k in self.params.keys():
if k not in param_check_list:
#print("{} may be obsolete".format(k))
kill_list.append(k)
for p in kill_list:
#print("deleting parameter {}".format(p))
del self.params[p]
return True | Scan the database of the model instance to generate and expose parameters.
This is called by other functions when items are added/removed from the model, but can be run by itself if you like | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/model.py#L458-L573 | null | class LcoptModel(object):
"""
This is the base model class.
To create a new model, enter a name e.g. ``model = LcoptModel('My_Model')``
To load an existing model use the ``load`` option e.g. ``model = LcoptModel(load = 'My_Model')``
"""
def __init__(self, name=hex(random.getrandbits(128))[2:-1], load=None, useForwast=False, ecoinvent_version=None, ecoinvent_system_model=None, ei_username = None, ei_password = None, write_config=None, autosetup=True):
super(LcoptModel, self).__init__()
# name the instance
self.name = name
# set up the database, parameter dictionaries, the matrix and the names of the exchanges
self.database = {'items': OrderedDict(), 'name': '{}_Database'.format(self.name)}
self.external_databases = []
self.params = OrderedDict()
self.production_params = OrderedDict()
self.allocation_params = OrderedDict()
self.ext_params = []
self.matrix = None
self.names = None
self.parameter_sets = OrderedDict()
self.model_matrices = OrderedDict()
self.technosphere_matrices = OrderedDict()
self.leontif_matrices = OrderedDict()
self.parameter_map = {}
self.sandbox_positions = {}
# If ecoinvent isn't specified in the setup, look for a default in the config and fall back on default set in constants
if ecoinvent_version is None:
self.ecoinvent_version = str(storage.ecoinvent_version)
else:
self.ecoinvent_version = ecoinvent_version
if ecoinvent_system_model is None:
self.ecoinvent_system_model = storage.ecoinvent_system_model
else:
self.ecoinvent_system_model = ecoinvent_system_model
ei_name = "Ecoinvent{}_{}_{}".format(*self.ecoinvent_version.split("."), self.ecoinvent_system_model) #"Ecoinvent3_3_cutoff"
self.ecoinventName = ei_name # "Ecoinvent3_3_cutoff"
self.biosphereName = "biosphere3"
self.ecoinventFilename = ei_name # "ecoinvent3_3"
self.biosphereFilename = "biosphere3"
self.forwastName = "forwast"
self.forwastFilename = "forwast"
self.useForwast = useForwast
self.technosphere_databases = []
#if self.useForwast:
# self.technosphere_databases = [self.forwastName]
#else:
# self.technosphere_databases = [self.ecoinventName]
#self.biosphere_databases = [self.biosphereName]
self.biosphere_databases = []
# default settings for bw2 analysis
self.analysis_settings = {'amount': 1,
'methods': [('IPCC 2013', 'climate change', 'GWP 100a'), ('USEtox', 'human toxicity', 'total')],
#'top_processes': 10,
#'gt_cutoff': 0.01,
'pie_cutoff': 0.05
}
self.allow_allocation = False
# initialise with a blank result set
self.result_set = None
# set the save option, this defaults to the config value but should be overwritten on load for existing models
self.save_option = storage.save_option
if load is not None:
self.load(load)
# check if lcopt is set up, and if not, set it up
is_setup = self.lcopt_setup(ei_username=ei_username, ei_password=ei_password, write_config=write_config,
ecoinvent_version=self.ecoinvent_version, ecoinvent_system_model = self.ecoinvent_system_model, autosetup=autosetup)
if not is_setup:
warnings.warn('lcopt autosetup did not run')
asset_path = fix_mac_path_escapes(storage.search_index_dir) #os.path.join(os.path.dirname(os.path.realpath(__file__)), 'assets')
ecoinventPath = os.path.join(asset_path, self.ecoinventFilename)
biospherePath = os.path.join(asset_path, self.biosphereFilename)
forwastPath = os.path.join(asset_path, self.forwastFilename)
# Try and initialise the external databases if they're not there already
if self.useForwast:
if self.forwastName not in [x['name'] for x in self.external_databases]:
self.import_external_db(forwastPath, 'technosphere')
else:
if self.ecoinventName not in [x['name'] for x in self.external_databases]:
self.import_external_db(ecoinventPath, 'technosphere')
if self.biosphereName not in [x['name'] for x in self.external_databases]:
self.import_external_db(biospherePath, 'biosphere')
# create partial version of io functions
self.add_to_database = partial(add_to_specified_database, database=self.database)
self.get_exchange = partial(get_exchange_from_database, database=self.database)
self.exists_in_database = partial(exists_in_specific_database, database=self.database)
self.get_name = partial(get_exchange_name_from_database, database=self.database)
self.get_unit = partial(get_exchange_unit_from_database, database=self.database)
self.parameter_scan()
def lcopt_setup(self, ei_username, ei_password, write_config, ecoinvent_version, ecoinvent_system_model, autosetup):
if not autosetup:
return False
if storage.project_type == 'single':
if self.useForwast:
forwast_autosetup()
else:
self.base_project_name = storage.single_project_name
#if bw2_project_exists(self.base_project_name):
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=False)
elif not self.useForwast:
self.base_project_name = DEFAULT_PROJECT_STEM + self.ecoinventName
old_default = DEFAULT_PROJECT_STEM[:-1]
is_default = ecoinvent_version == "3.3" and ecoinvent_system_model == "cutoff"
if bw2_project_exists(self.base_project_name):
# make sure the search index file is there too
write_search_index(self.base_project_name, self.ecoinventName)
elif is_default and bw2_project_exists(old_default):
upgrade_old_default()
else:
print("Lcopt needs to be set up to integrate with brightway2 - this only needs to be done once per version/system model combo")
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=True)
else:
forwast_autosetup()
return True
def rename(self, newname):
"""change the name of the model (i.e. what the .lcopt file will be saved as)"""
self.name = newname
#def saveAs(self, filename):
# """save the instance as a pickle"""
# pickle.dump(self, open("{}.pickle".format(filename), "wb"))
def save(self):
"""save the instance as a .lcopt file"""
if self.save_option == 'curdir':
model_path = os.path.join(
os.getcwd(),
'{}.lcopt'.format(self.name)
)
else: # default to appdir
model_path = os.path.join(
storage.model_dir,
'{}.lcopt'.format(self.name)
)
model_path = fix_mac_path_escapes(model_path)
with open(model_path, 'wb') as model_file:
pickle.dump(self, model_file)
def load(self, filename):
"""load data from a saved .lcopt file"""
if filename[-6:] != ".lcopt":
filename += ".lcopt"
try:
savedInstance = pickle.load(open("{}".format(filename), "rb"))
except FileNotFoundError:
savedInstance = pickle.load(open(fix_mac_path_escapes(os.path.join(storage.model_dir, "{}".format(filename))), "rb"))
attributes = ['name',
'database',
'params',
'production_params',
'allocation_params',
'ext_params',
'matrix',
'names',
'parameter_sets',
'model_matrices',
'technosphere_matrices',
'leontif_matrices',
'external_databases',
'parameter_map',
'sandbox_positions',
'ecoinventName',
'biosphereName',
'forwastName',
'analysis_settings',
'technosphere_databases',
'biosphere_databases',
'result_set',
'evaluated_parameter_sets',
'useForwast',
'base_project_name',
'save_option',
'allow_allocation',
'ecoinvent_version',
'ecoinvent_system_model',
]
for attr in attributes:
if hasattr(savedInstance, attr):
setattr(self, attr, getattr(savedInstance, attr))
else:
pass
#print ("can't set {}".format(attr))
# use legacy save option if this is missing from the model
if not hasattr(savedInstance, 'save_option'):
setattr(self, 'save_option', LEGACY_SAVE_OPTION)
# figure out ecoinvent version and system model if these are missing from the model
if not hasattr(savedInstance, 'ecoinvent_version') or not hasattr(savedInstance, 'ecoinvent_system_model'):
parts = savedInstance.ecoinventName.split("_")
main_version = parts[0][-1]
sub_version = parts[1]
system_model = parts[2]
#print(parts)
setattr(self, 'ecoinvent_version', '{}.{}'.format(main_version, sub_version))
setattr(self, 'ecoinvent_system_model', system_model)
def create_product (self, name, location='GLO', unit='kg', **kwargs):
"""
Create a new product in the model database
"""
new_product = item_factory(name=name, location=location, unit=unit, type='product', **kwargs)
if not self.exists_in_database(new_product['code']):
self.add_to_database(new_product)
#print ('{} added to database'.format(name))
return self.get_exchange(name)
else:
#print('{} already exists in this database'.format(name))
return False
def create_process(self, name, exchanges, location='GLO', unit='kg'):
"""
Create a new process, including all new exchanges (in brightway2's exchange format) in the model database.
Each exchange dict must include at least ``name`` and ``type`` fields; ``unit`` is optional and defaults to the process unit
"""
found_exchanges = []
for e in exchanges:
exc_name = e.pop('name', None)
exc_type = e.pop('type', None)
this_exchange = self.get_exchange(exc_name)
if this_exchange is False:
my_unit = e.pop('unit', unit)
this_exchange = self.create_product(exc_name, location=location, unit=my_unit, **e)
found_exchanges.append(exchange_factory(this_exchange, exc_type, 1, 1, '{} exchange of {}'.format(exc_type, exc_name)))
new_process = item_factory(name=name, location=location, unit=unit, type='process', exchanges=found_exchanges)
self.add_to_database(new_process)
self.parameter_scan()
return True
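The exchange format consumed by `create_process` can be sketched in isolation. Note that the parsing loop above pops `name` and `type` (and optionally `unit`) out of each dict, so the input dicts are mutated in place; pass copies if you need to reuse them. The product names below are invented for illustration.

```python
# Sketch of the exchange dicts create_process() expects, and of the
# pop-based parsing it performs. Popping mutates the input dicts.
exchanges = [
    {'name': 'Widget', 'type': 'production', 'unit': 'kg'},
    {'name': 'Steel input', 'type': 'technosphere'},  # unit falls back to the process unit
]

process_unit = 'kg'
parsed = []
for e in exchanges:
    exc_name = e.pop('name', None)
    exc_type = e.pop('type', None)
    exc_unit = e.pop('unit', process_unit)  # optional field
    parsed.append((exc_name, exc_type, exc_unit))
```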
def check_param_function_use(self, param_id):
current_functions = {k: x['function'] for k, x in self.params.items() if x['function'] is not None}
problem_list = []
for k, f in current_functions.items():
if param_id in f:
problem_list.append((k, f))
return problem_list
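The conflict check above is a plain substring scan over the stored function strings, which this self-contained sketch mirrors (the parameter ids and formulas are made up). One caveat of substring matching: an id like `p_1_2` would also match inside a longer id such as `p_1_23`.

```python
# Minimal stand-in for check_param_function_use(): flag every stored
# function string that contains the parameter id as a substring.
current_functions = {
    'p_0_2': 'p_1_2 * 0.5',
    'p_3_2': '2 + external_param',
}

param_id = 'p_1_2'
problem_list = [(k, f) for k, f in current_functions.items() if param_id in f]
```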
def remove_input_link(self, process_code, input_code):
"""
Remove an input (technosphere or biosphere exchange) from a process, resolving all parameter issues
"""
# 1. find correct process
# 2. find correct exchange
# 3. remove that exchange
# 4. check for parameter conflicts?
# 4. run parameter scan to rebuild matrices?
#print(process_code, input_code)
process = self.database['items'][process_code]
exchanges = process['exchanges']
initial_count = len(exchanges)
new_exchanges = [e for e in exchanges if e['input'] != input_code]
product_code = [e['input'] for e in exchanges if e['type'] == 'production'][0]
#print(product_code)
param_id = [k for k, v in self.params.items() if (v['from'] == input_code[1] and v['to'] == product_code[1])][0]
#print (param_id)
problem_functions = self.check_param_function_use(param_id)
if len(problem_functions) != 0:
#print('the following functions have been removed:')
for p in problem_functions:
self.params[p[0]]['function'] = None
#print(p)
process['exchanges'] = new_exchanges
del self.params[param_id]
self.parameter_scan()
return initial_count - len(new_exchanges)
def unlink_intermediate(self, sourceId, targetId):
"""
Remove a link between two processes
"""
source = self.database['items'][(self.database.get('name'), sourceId)]
target = self.database['items'][(self.database.get('name'), targetId)]
production_exchange = [x['input'] for x in source['exchanges'] if x['type'] == 'production'][0]
new_exchanges = [x for x in target['exchanges'] if x['input'] != production_exchange]
target['exchanges'] = new_exchanges
self.parameter_scan()
return True
def parameter_scan(self):
"""
Scan the database of the model instance to generate and expose parameters.
This is called automatically by other functions when items are added to or removed from the model, but it can also be run on its own
"""
#self.parameter_map = {}
#self.params = OrderedDict()
cr_list = []
items = self.database['items']
#print(items)
for key in items.keys():
i = items[key]
#print(i['name'], i['type'])
if i['type'] == 'product':
cr_list.append(i['code'])
no_products = len(cr_list)
self.names = [self.get_name(x) for x in cr_list]
self.matrix = np.zeros((no_products, no_products))
for key in items.keys():
i = items[key]
if i['type'] == 'process':
inputs = []
#print(i['name'])
#print([(e['comment'], e['type']) for e in i['exchanges']])
for e in i['exchanges']:
if e['type'] == 'production':
col_code = cr_list.index(e['input'][1])
if not 'p_{}_production'.format(col_code) in self.production_params:
self.production_params['p_{}_production'.format(col_code)] = {
'function': None,
'description': 'Production parameter for {}'.format(self.get_name(e['input'][1])),
'unit': self.get_unit(e['input'][1]),
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'production',
}
if not 'p_{}_allocation'.format(col_code) in self.allocation_params:
self.allocation_params['p_{}_allocation'.format(col_code)] = {
'function': None,
'description': 'Allocation parameter for {}'.format(self.get_name(e['input'][1])),
'unit': "% (as decimal)",
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'allocation',
}
elif e['type'] == 'technosphere':
#print(cr_list)
row_code = cr_list.index(e['input'][1])
inputs.append((row_code, e['amount']))
for ip in inputs:
self.matrix[(ip[0], col_code)] = ip[1]
param_check_list = []
for c, column in enumerate(self.matrix.T):
for r, i in enumerate(column):
if i > 0:
p_from = cr_list[r]
p_to = cr_list[c]
coords = (r, c)
from_item_type = self.database['items'][(self.database['name'], p_from)]['lcopt_type']
#print('{}\t| {} --> {}'.format(coords, self.get_name(p_from), self.get_name(p_to)))
param_check_list.append('p_{}_{}'.format(coords[0], coords[1]))
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.params:
self.params['p_{}_{}'.format(coords[0], coords[1])] = {
'function': None,
'normalisation_parameter': '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1]),
'description': 'Input of {} to create {}'.format(self.get_name(p_from), self.get_name(p_to)),
'coords': coords,
'unit': self.get_unit(p_from),
'from': p_from,
'from_name': self.get_name(p_from),
'to': p_to,
'to_name': self.get_name(p_to),
'type': from_item_type,
}
#elif 'normalisation_parameter' not in self.params['p_{}_{}'.format(coords[0], coords[1])].keys():
#print("Adding normalisation_parameter to {}".format('p_{}_{}'.format(coords[0], coords[1])))
#self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
#print('p_{}_{} already exists'.format(coords[0],coords[1]))
else:
pass # print("SOMETHING WRONG HERE\n{}\n".format(self.params['p_{}_{}'.format(coords[0], coords[1])]))
# make sure the parameter is being normalised and allocated properly
self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.parameter_map:
self.parameter_map[(p_from, p_to)] = 'p_{}_{}'.format(coords[0], coords[1])
kill_list = []
for k in self.params.keys():
if k not in param_check_list:
#print("{} may be obsolete".format(k))
kill_list.append(k)
for p in kill_list:
#print("deleting parameter {}".format(p))
del self.params[p]
return True
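The naming scheme behind the generated parameters can be illustrated with a toy matrix. A nonzero entry at `(row, col)` of the technosphere matrix means the product at index `row` in `cr_list` is an input to the process producing the product at index `col`, and it gets the id `p_{row}_{col}`. The product codes here are invented.

```python
import numpy as np

# Toy illustration of how parameter_scan() derives parameter ids from
# matrix coordinates, mirroring the column-wise scan above.
cr_list = ['steel', 'energy', 'widget']
matrix = np.zeros((3, 3))
matrix[0, 2] = 1.5   # steel  -> widget
matrix[1, 2] = 0.2   # energy -> widget

param_ids = []
for c, column in enumerate(matrix.T):
    for r, amount in enumerate(column):
        if amount > 0:
            param_ids.append('p_{}_{}'.format(r, c))
```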
def generate_parameter_set_excel_file(self):
"""
Generate an Excel file containing the parameter sets in a format you can import into SimaPro Developer.
The file will be called "ParameterSet_<ModelName>_input_file.xlsx"
"""
parameter_sets = self.parameter_sets
p_set = []
filename = "ParameterSet_{}_input_file.xlsx".format(self.name)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
p_set_name = os.path.join(base_dir, filename)
p = self.params
for k in p.keys():
if p[k]['function'] is None:
base_dict = {'id': k, 'name': p[k]['description'], 'unit': p[k]['unit']}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][k]
p_set.append(base_dict)
else:
pass
#print("{} is determined by a function".format(p[k]['description']))
for e in self.ext_params:
base_dict = {'id': '{}'.format(e['name']), 'type': 'external', 'name': e['description'], 'unit': ''}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][e['name']]
p_set.append(base_dict)
df = pd.DataFrame(p_set)
with pd.ExcelWriter(p_set_name, engine='xlsxwriter') as writer:
ps_columns = [k for k in parameter_sets.keys()]
#print (ps_columns)
my_columns = ['name', 'unit', 'id']
my_columns.extend(ps_columns)
#print (my_columns)
#print(df)
df.to_excel(writer, sheet_name=self.name, columns=my_columns, index=False, merge_cells=False)
return p_set_name
def add_parameter(self, param_name, description=None, default=0, unit=None):
"""
Add a global parameter to the database that can be accessed by functions
"""
if description is None:
description = "Parameter called {}".format(param_name)
if unit is None:
unit = "-"
name_check = lambda x: x['name'] == param_name
name_check_list = list(filter(name_check, self.ext_params))
if len(name_check_list) == 0:
self.ext_params.append({'name': param_name, 'description': description, 'default': default, 'unit': unit})
else:
print('{} already exists - choose a different name'.format(param_name))
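The duplicate-name guard in `add_parameter` can be reduced to a small self-contained function, sketched here with hypothetical parameter names (the real method stores a richer dict with description and unit):

```python
# Sketch of the duplicate-name guard: only append a new external
# parameter if no existing entry has the same name.
ext_params = [{'name': 'efficiency', 'default': 0.9}]

def add_parameter(ext_params, param_name, default=0):
    if not any(p['name'] == param_name for p in ext_params):
        ext_params.append({'name': param_name, 'default': default})
        return True
    return False  # name already taken
```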
def list_parameters_as_df(self):
"""
Only really useful when running from a jupyter notebook.
Lists the parameters in the model in a pandas dataframe
Columns: id, matrix coordinates, description, function
"""
to_df = []
for i, e in enumerate(self.ext_params):
row = {}
row['id'] = e['name']
row['coords'] = "n/a"
row['description'] = e['description']
row['function'] = "n/a"
to_df.append(row)
for pk in self.params:
p = self.params[pk]
row = {}
row['id'] = pk
row['coords'] = p['coords']
row['description'] = p['description']
row['function'] = p['function']
to_df.append(row)
df = pd.DataFrame(to_df)
return df
def import_external_db(self, db_file, db_type=None):
"""
Import an external database for use in lcopt
db_type must be one of ``technosphere`` or ``biosphere``
The best way to 'obtain' an external database is to 'export' it from brightway as a pickle file
e.g.::
import brightway2 as bw
bw.projects.set_current('MyModel')
db = bw.Database('MyDatabase')
db_as_dict = db.load()
import pickle
with open('MyExport.pickle', 'wb') as f:
pickle.dump(db_as_dict, f)
NOTE: The Ecoinvent cutoff 3.3 database and the full biosphere database are included in the lcopt model as standard - no need to import those
This can be useful if you have your own methods which require new biosphere flows that you want to analyse using lcopt
"""
db = pickle.load(open("{}.pickle".format(db_file), "rb"))
name = list(db.keys())[0][0]
new_db = {'items': db, 'name': name}
self.external_databases.append(new_db)
if db_type is None: # Assume its a technosphere database
db_type = 'technosphere'
if db_type == 'technosphere':
self.technosphere_databases.append(name)
elif db_type == 'biosphere':
self.biosphere_databases.append(name)
else:
raise Exception("Database type must be 'technosphere' or 'biosphere'")
def search_databases(self, search_term, location=None, markets_only=False, databases_to_search=None, allow_internal=False):
"""
Search external databases linked to your lcopt model.
To restrict the search to particular databases (e.g. technosphere or biosphere only), pass a list of database names in the ``databases_to_search`` argument
"""
dict_list = []
if allow_internal:
internal_dict = {}
for k, v in self.database['items'].items():
if v.get('lcopt_type') == 'intermediate':
internal_dict[k] = v
dict_list.append(internal_dict)
if databases_to_search is None:
#Search all of the databases available
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases])
dict_list += [x['items'] for x in self.external_databases]
else:
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
dict_list += [x['items'] for x in self.external_databases if x['name'] in databases_to_search]
data = Dictionaries(*dict_list)
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
query = Query()
if markets_only:
market_filter = Filter("name", "has", "market for")
query.add(market_filter)
if location is not None:
location_filter = Filter("location", "is", location)
query.add(location_filter)
query.add(Filter("name", "ihas", search_term))
result = query(data)
return result
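The `Query`/`Filter` chain used above combines its conditions conjunctively: an item must contain the search term in its name, and, if given, also match the location and the "market for" restriction. A rough stand-in with plain predicates, over invented item data:

```python
# Rough stand-in for the Query/Filter chain in search_databases():
# every active filter must match (logical AND).
items = {
    ('db', 'a'): {'name': 'market for steel', 'location': 'GLO'},
    ('db', 'b'): {'name': 'steel production', 'location': 'DE'},
    ('db', 'c'): {'name': 'market for electricity', 'location': 'GLO'},
}

def search(items, term, location=None, markets_only=False):
    hits = {}
    for key, item in items.items():
        if term.lower() not in item['name'].lower():
            continue  # case-insensitive "name has term"
        if markets_only and 'market for' not in item['name']:
            continue
        if location is not None and item['location'] != location:
            continue
        hits[key] = item
    return hits
```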
def database_to_SimaPro_csv(self):
"""
Export the lcopt model as a SimaPro csv file.
The file will be called "<ModelName>_database_export.csv"
"""
self.parameter_scan()
csv_args = {}
csv_args['processes'] = []
db = self.database['items']
product_filter = lambda x: db[x]['type'] == 'product'
process_filter = lambda x: db[x]['type'] == 'process'
processes = list(filter(process_filter, db))
products = list(filter(product_filter, db))
created_exchanges = []
project_input_params = []
project_calc_params = []
for k in processes:
item = db[k]
current = {}
current['name'] = item['name']
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = item['unit']
current['exchanges'] = []
process_params = []
production_filter = lambda x: x['type'] == 'production'
output_code = list(filter(production_filter, item['exchanges']))[0]['input'][1]
for e in item['exchanges']:
if e['type'] == 'technosphere':
this_exchange = {}
this_code = e['input'][1]
formatted_name = self.get_name(this_code)
this_exchange['formatted_name'] = formatted_name
param_key = (this_code, output_code)
#print(param_key)
#param_check = (formatted_name, item['name'])
this_param = self.parameter_map[param_key]
process_params.append(this_param)
this_exchange['amount'] = this_param
this_exchange['unit'] = self.get_unit(this_code)
current['exchanges'].append(this_exchange)
elif e['type'] == 'production':
this_code = e['input'][1]
name = self.get_name(this_code)
current['output_name'] = name
created_exchanges.append(name)
# process parameters
for p in process_params:
if self.params[p]['function'] is None:
project_input_params.append({'name': p, 'comment': self.params[p]['description']})
else:
project_calc_params.append({'name': p, 'comment': self.params[p]['description'], 'formula': self.params[p]['function']})
csv_args['processes'].append(current)
for k in products:
this_item = db[k]
this_name = this_item['name']
if this_item['name'] in created_exchanges:
#print ('{} already created'.format(this_name))
pass
else:
#print ('Need to create {}'.format(this_name))
current = {}
current['name'] = this_name
current['output_name'] = this_name
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = this_item['unit']
#current['exchanges'] = []
if 'ext_link' in this_item.keys():
ext_link = this_item['ext_link']
if ext_link[0] != self.database['name']:
db_filter = lambda x: x['name'] == ext_link[0]
extdb = list(filter(db_filter, self.external_databases))[0]['items']
ext_item = extdb[ext_link]
if ext_link[0] != self.biosphereName:
ref_prod = ext_item['reference product']
name = ext_item['name'].replace(" " + ref_prod, "")
location = ext_item['location']
system_model = "Alloc Def"
process_type = "U"
unit = unnormalise_unit(ext_item['unit'])
simaPro_name = "{} {{{}}}| {} | {}, {}".format(ref_prod.capitalize(), location, name, system_model, process_type)
#print ('{} has an external link to {}'.format(this_name, simaPro_name))
current['exchanges'] = [{'formatted_name': simaPro_name, 'unit': unit, 'amount': 1}]
else:
#print('{} has a biosphere exchange - need to sort this out'.format(this_name))
#print(ext_item)
unit = unnormalise_unit(ext_item['unit'])
formatted_name = ext_item['name']
if 'air' in ext_item['categories']:
current['air_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to air'.format(formatted_name)}]
elif 'water' in ext_item['categories']:
current['water_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to water'.format(formatted_name)}]
elif 'soil' in ext_item['categories']:
current['soil_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to soil'.format(formatted_name)}]
else:
print("{} has a biosphere exchange that isn't to air, water or soil".format(this_name))
print(ext_item)
else:
warnings.warn('{} has NO internal or external link - it is burden free'.format(this_name))
csv_args['processes'].append(current)
created_exchanges.append(this_name)
#print(csv_args)
#print(created_exchanges)
csv_args['project'] = {}
#NOTE - currently external parameters can only be constants
csv_args['project']['calculated_parameters'] = project_calc_params
#add the external parameters to the input parameter list
for p in self.ext_params:
project_input_params.append({'name': p['name'], 'comment': p['description'], 'default': p['default']})
csv_args['project']['input_parameters'] = project_input_params
#print (csv_args)
env = Environment(
loader=PackageLoader('lcopt', 'templates'),
)
filename = "{}_database_export.csv".format(self.name.replace(" ", "_"))
csv_template = env.get_template('export.csv')
output = csv_template.render(**csv_args)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
efn = os.path.join(base_dir, filename)
with open(efn, "w") as f:
f.write(output)
return efn
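The process-id scheme used twice in the method above always produces a 19-character id: eight characters taken from the model name (padded with `X`), followed by an 11-digit zero-padded random number. Factored out as a small helper for illustration:

```python
from random import randint

# Sketch of the SimaPro process-id scheme used in
# database_to_SimaPro_csv(): 8 name-derived characters + 11 digits.
def simapro_id(model_name):
    head = (model_name.replace(" ", "") + "XXXXXXXX")[:8]
    tail = ('00000000000' + str(randint(1, 99999999999)))[-11:]
    return head + tail

pid = simapro_id("My Model")
```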
# << Flask >> #
def launch_interact(self): # pragma: no cover
"""
This is probably the most important method in the model - you use it to launch the GUI
"""
my_flask = FlaskSandbox(self)
my_flask.run()
# << Brightway2 >> #
def export_to_bw2(self):
"""
Export the lcopt model in the native brightway 2 format
returns name, database
to use it to export, then import to brightway::
name, db = model.export_to_bw2()
import brightway2 as bw
bw.projects.set_current('MyProject')
new_db = bw.Database(name)
new_db.write(db)
new_db.process()
"""
my_exporter = Bw2Exporter(self)
name, bw2db = my_exporter.export_to_bw2()
return name, bw2db
def analyse(self, demand_item, demand_item_code):
""" Run the analyis of the model
Doesn't return anything, but creates a new item ``LcoptModel.result_set`` containing the results
"""
my_analysis = Bw2Analysis(self)
self.result_set = my_analysis.run_analyses(demand_item, demand_item_code, **self.analysis_settings)
return True
# << Disclosures >> #
def export_disclosure(self, parameter_set=None, folder_path=None):
return export_disclosure(self, parameter_set, folder_path)
|
pjamesjoyce/lcopt | lcopt/model.py | LcoptModel.generate_parameter_set_excel_file | python | def generate_parameter_set_excel_file(self):
parameter_sets = self.parameter_sets
p_set = []
filename = "ParameterSet_{}_input_file.xlsx".format(self.name)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
p_set_name = os.path.join(base_dir, filename)
p = self.params
for k in p.keys():
if p[k]['function'] is None:
base_dict = {'id': k, 'name': p[k]['description'], 'unit': p[k]['unit']}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][k]
p_set.append(base_dict)
else:
pass
#print("{} is determined by a function".format(p[k]['description']))
for e in self.ext_params:
base_dict = {'id': '{}'.format(e['name']), 'type': 'external', 'name': e['description'], 'unit': ''}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][e['name']]
p_set.append(base_dict)
df = pd.DataFrame(p_set)
with pd.ExcelWriter(p_set_name, engine='xlsxwriter') as writer:
ps_columns = [k for k in parameter_sets.keys()]
#print (ps_columns)
my_columns = ['name', 'unit', 'id']
my_columns.extend(ps_columns)
#print (my_columns)
#print(df)
df.to_excel(writer, sheet_name=self.name, columns=my_columns, index=False, merge_cells=False)
return p_set_name | Generate an Excel file containing the parameter sets in a format you can import into SimaPro Developer.
The file will be called "ParameterSet_<ModelName>_input_file.xlsx" | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/model.py#L575-L633 | null | class LcoptModel(object):
"""
This is the base model class.
To create a new model, enter a name e.g. ``model = LcoptModel('My_Model')``
To load an existing model use the ``load`` option e.g. ``model = LcoptModel(load = 'My_Model')``
"""
def __init__(self, name=hex(random.getrandbits(128))[2:-1], load=None, useForwast=False, ecoinvent_version=None, ecoinvent_system_model=None, ei_username = None, ei_password = None, write_config=None, autosetup=True):
super(LcoptModel, self).__init__()
# name the instance
self.name = name
# set up the database, parameter dictionaries, the matrix and the names of the exchanges
self.database = {'items': OrderedDict(), 'name': '{}_Database'.format(self.name)}
self.external_databases = []
self.params = OrderedDict()
self.production_params = OrderedDict()
self.allocation_params = OrderedDict()
self.ext_params = []
self.matrix = None
self.names = None
self.parameter_sets = OrderedDict()
self.model_matrices = OrderedDict()
self.technosphere_matrices = OrderedDict()
self.leontif_matrices = OrderedDict()
self.parameter_map = {}
self.sandbox_positions = {}
# If ecoinvent isn't specified in the setup, look for a default in the config and fall back on default set in constants
if ecoinvent_version is None:
self.ecoinvent_version = str(storage.ecoinvent_version)
else:
self.ecoinvent_version = ecoinvent_version
if ecoinvent_system_model is None:
self.ecoinvent_system_model = storage.ecoinvent_system_model
else:
self.ecoinvent_system_model = ecoinvent_system_model
ei_name = "Ecoinvent{}_{}_{}".format(*self.ecoinvent_version.split("."), self.ecoinvent_system_model) #"Ecoinvent3_3_cutoff"
self.ecoinventName = ei_name # "Ecoinvent3_3_cutoff"
self.biosphereName = "biosphere3"
self.ecoinventFilename = ei_name # "ecoinvent3_3"
self.biosphereFilename = "biosphere3"
self.forwastName = "forwast"
self.forwastFilename = "forwast"
self.useForwast = useForwast
self.technosphere_databases = []
#if self.useForwast:
# self.technosphere_databases = [self.forwastName]
#else:
# self.technosphere_databases = [self.ecoinventName]
#self.biosphere_databases = [self.biosphereName]
self.biosphere_databases = []
# default settings for bw2 analysis
self.analysis_settings = {'amount': 1,
'methods': [('IPCC 2013', 'climate change', 'GWP 100a'), ('USEtox', 'human toxicity', 'total')],
#'top_processes': 10,
#'gt_cutoff': 0.01,
'pie_cutoff': 0.05
}
self.allow_allocation = False
# initialise with a blank result set
self.result_set = None
# set the save option, this defaults to the config value but should be overwritten on load for existing models
self.save_option = storage.save_option
if load is not None:
self.load(load)
# check if lcopt is set up, and if not, set it up
is_setup = self.lcopt_setup(ei_username=ei_username, ei_password=ei_password, write_config=write_config,
ecoinvent_version=self.ecoinvent_version, ecoinvent_system_model = self.ecoinvent_system_model, autosetup=autosetup)
if not is_setup:
warnings.warn('lcopt autosetup did not run')
asset_path = fix_mac_path_escapes(storage.search_index_dir) #os.path.join(os.path.dirname(os.path.realpath(__file__)), 'assets')
ecoinventPath = os.path.join(asset_path, self.ecoinventFilename)
biospherePath = os.path.join(asset_path, self.biosphereFilename)
forwastPath = os.path.join(asset_path, self.forwastFilename)
# Try and initialise the external databases if they're not there already
if self.useForwast:
if self.forwastName not in [x['name'] for x in self.external_databases]:
self.import_external_db(forwastPath, 'technosphere')
else:
if self.ecoinventName not in [x['name'] for x in self.external_databases]:
self.import_external_db(ecoinventPath, 'technosphere')
if self.biosphereName not in [x['name'] for x in self.external_databases]:
self.import_external_db(biospherePath, 'biosphere')
# create partial version of io functions
self.add_to_database = partial(add_to_specified_database, database=self.database)
self.get_exchange = partial(get_exchange_from_database, database=self.database)
self.exists_in_database = partial(exists_in_specific_database, database=self.database)
self.get_name = partial(get_exchange_name_from_database, database=self.database)
self.get_unit = partial(get_exchange_unit_from_database, database=self.database)
self.parameter_scan()
def lcopt_setup(self, ei_username, ei_password, write_config, ecoinvent_version, ecoinvent_system_model, autosetup):
if not autosetup:
return False
if storage.project_type == 'single':
if self.useForwast:
forwast_autosetup()
else:
self.base_project_name = storage.single_project_name
#if bw2_project_exists(self.base_project_name):
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=False)
elif not self.useForwast:
self.base_project_name = DEFAULT_PROJECT_STEM + self.ecoinventName
old_default = DEFAULT_PROJECT_STEM[:-1]
is_default = ecoinvent_version == "3.3" and ecoinvent_system_model == "cutoff"
if bw2_project_exists(self.base_project_name):
# make sure the search index file is there too
write_search_index(self.base_project_name, self.ecoinventName)
elif is_default and bw2_project_exists(old_default):
upgrade_old_default()
else:
print("Lcopt needs to be set up to integrate with brightway2 - this only needs to be done once per version/system model combo")
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=True)
else:
forwast_autosetup()
return True
def rename(self, newname):
"""change the name of the model (i.e. what the .lcopt file will be saved as)"""
self.name = newname
#def saveAs(self, filename):
# """save the instance as a pickle"""
# pickle.dump(self, open("{}.pickle".format(filename), "wb"))
def save(self):
"""save the instance as a .lcopt file"""
if self.save_option == 'curdir':
model_path = os.path.join(
os.getcwd(),
'{}.lcopt'.format(self.name)
)
else: # default to appdir
model_path = os.path.join(
storage.model_dir,
'{}.lcopt'.format(self.name)
)
model_path = fix_mac_path_escapes(model_path)
with open(model_path, 'wb') as model_file:
pickle.dump(self, model_file)
def load(self, filename):
"""load data from a saved .lcopt file"""
if filename[-6:] != ".lcopt":
filename += ".lcopt"
try:
savedInstance = pickle.load(open("{}".format(filename), "rb"))
except FileNotFoundError:
savedInstance = pickle.load(open(fix_mac_path_escapes(os.path.join(storage.model_dir, "{}".format(filename))), "rb"))
attributes = ['name',
'database',
'params',
'production_params',
'allocation_params',
'ext_params',
'matrix',
'names',
'parameter_sets',
'model_matrices',
'technosphere_matrices',
'leontif_matrices',
'external_databases',
'parameter_map',
'sandbox_positions',
'ecoinventName',
'biosphereName',
'forwastName',
'analysis_settings',
'technosphere_databases',
'biosphere_databases',
'result_set',
'evaluated_parameter_sets',
'useForwast',
'base_project_name',
'save_option',
'allow_allocation',
'ecoinvent_version',
'ecoinvent_system_model',
]
for attr in attributes:
if hasattr(savedInstance, attr):
setattr(self, attr, getattr(savedInstance, attr))
else:
pass
#print ("can't set {}".format(attr))
# use legacy save option if this is missing from the model
if not hasattr(savedInstance, 'save_option'):
setattr(self, 'save_option', LEGACY_SAVE_OPTION)
# figure out ecoinvent version and system model if these are missing from the model
if not hasattr(savedInstance, 'ecoinvent_version') or not hasattr(savedInstance, 'ecoinvent_system_model'):
parts = savedInstance.ecoinventName.split("_")
main_version = parts[0][-1]
sub_version = parts[1]
system_model = parts[2]
#print(parts)
setattr(self, 'ecoinvent_version', '{}.{}'.format(main_version, sub_version))
setattr(self, 'ecoinvent_system_model', system_model)
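The fallback above recovers the ecoinvent version and system model from a database name such as `"Ecoinvent3_3_cutoff"`. A minimal, standalone sketch of that parsing (the `parse_ei_name` helper is illustrative, not part of lcopt; like the original it assumes a single-digit major version):

```python
def parse_ei_name(name):
    """Split e.g. 'Ecoinvent3_3_cutoff' into ('3.3', 'cutoff')."""
    parts = name.split("_")
    # last character of 'Ecoinvent3' is the major version digit
    return "{}.{}".format(parts[0][-1], parts[1]), parts[2]

# parse_ei_name("Ecoinvent3_3_cutoff") -> ("3.3", "cutoff")
```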
    def create_product(self, name, location='GLO', unit='kg', **kwargs):
"""
Create a new product in the model database
"""
new_product = item_factory(name=name, location=location, unit=unit, type='product', **kwargs)
if not self.exists_in_database(new_product['code']):
self.add_to_database(new_product)
#print ('{} added to database'.format(name))
return self.get_exchange(name)
else:
#print('{} already exists in this database'.format(name))
return False
def create_process(self, name, exchanges, location='GLO', unit='kg'):
"""
Create a new process, including all new exchanges (in brightway2's exchange format) in the model database.
Exchanges must have at least a name, type and unit field
"""
found_exchanges = []
for e in exchanges:
exc_name = e.pop('name', None)
exc_type = e.pop('type', None)
this_exchange = self.get_exchange(exc_name)
if this_exchange is False:
my_unit = e.pop('unit', unit)
this_exchange = self.create_product(exc_name, location=location, unit=my_unit, **e)
found_exchanges.append(exchange_factory(this_exchange, exc_type, 1, 1, '{} exchange of {}'.format(exc_type, exc_name)))
new_process = item_factory(name=name, location=location, unit=unit, type='process', exchanges=found_exchanges)
self.add_to_database(new_process)
self.parameter_scan()
return True
def check_param_function_use(self, param_id):
current_functions = {k: x['function'] for k, x in self.params.items() if x['function'] is not None}
problem_list = []
for k, f in current_functions.items():
if param_id in f:
problem_list.append((k, f))
return problem_list
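`check_param_function_use` relies on plain substring matching against each parameter's function string. A dependency-free sketch of that check on a plain dict (`find_function_uses` is a hypothetical stand-in, and note that substring matching can give false positives, e.g. `"p_1"` also matches inside `"p_10"`):

```python
def find_function_uses(params, param_id):
    """Return (key, function) pairs whose function string mentions param_id."""
    return [
        (k, v["function"])
        for k, v in params.items()
        if v.get("function") is not None and param_id in v["function"]
    ]

params = {
    "p_0_1": {"function": None},
    "p_1_2": {"function": "p_0_1 * 2"},
    "p_2_3": {"function": "p_1_2 + 1"},
}
# find_function_uses(params, "p_0_1") -> [("p_1_2", "p_0_1 * 2")]
```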
def remove_input_link(self, process_code, input_code):
"""
Remove an input (technosphere or biosphere exchange) from a process, resolving all parameter issues
"""
# 1. find correct process
# 2. find correct exchange
# 3. remove that exchange
# 4. check for parameter conflicts?
# 4. run parameter scan to rebuild matrices?
#print(process_code, input_code)
process = self.database['items'][process_code]
exchanges = process['exchanges']
initial_count = len(exchanges)
new_exchanges = [e for e in exchanges if e['input'] != input_code]
product_code = [e['input'] for e in exchanges if e['type'] == 'production'][0]
#print(product_code)
param_id = [k for k, v in self.params.items() if (v['from'] == input_code[1] and v['to'] == product_code[1])][0]
#print (param_id)
problem_functions = self.check_param_function_use(param_id)
if len(problem_functions) != 0:
#print('the following functions have been removed:')
for p in problem_functions:
self.params[p[0]]['function'] = None
#print(p)
process['exchanges'] = new_exchanges
del self.params[param_id]
self.parameter_scan()
return initial_count - len(new_exchanges)
def unlink_intermediate(self, sourceId, targetId):
"""
Remove a link between two processes
"""
source = self.database['items'][(self.database.get('name'), sourceId)]
target = self.database['items'][(self.database.get('name'), targetId)]
production_exchange = [x['input'] for x in source['exchanges'] if x['type'] == 'production'][0]
new_exchanges = [x for x in target['exchanges'] if x['input'] != production_exchange]
target['exchanges'] = new_exchanges
self.parameter_scan()
return True
def parameter_scan(self):
"""
Scan the database of the model instance to generate and expose parameters.
This is called by other functions when items are added/removed from the model, but can be run by itself if you like
"""
#self.parameter_map = {}
#self.params = OrderedDict()
cr_list = []
items = self.database['items']
#print(items)
for key in items.keys():
i = items[key]
#print(i['name'], i['type'])
if i['type'] == 'product':
cr_list.append(i['code'])
no_products = len(cr_list)
self.names = [self.get_name(x) for x in cr_list]
self.matrix = np.zeros((no_products, no_products))
for key in items.keys():
i = items[key]
if i['type'] == 'process':
inputs = []
#print(i['name'])
#print([(e['comment'], e['type']) for e in i['exchanges']])
for e in i['exchanges']:
if e['type'] == 'production':
col_code = cr_list.index(e['input'][1])
if not 'p_{}_production'.format(col_code) in self.production_params:
self.production_params['p_{}_production'.format(col_code)] = {
'function': None,
'description': 'Production parameter for {}'.format(self.get_name(e['input'][1])),
'unit': self.get_unit(e['input'][1]),
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'production',
}
if not 'p_{}_allocation'.format(col_code) in self.allocation_params:
self.allocation_params['p_{}_allocation'.format(col_code)] = {
'function': None,
'description': 'Allocation parameter for {}'.format(self.get_name(e['input'][1])),
'unit': "% (as decimal)",
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'allocation',
}
elif e['type'] == 'technosphere':
#print(cr_list)
row_code = cr_list.index(e['input'][1])
inputs.append((row_code, e['amount']))
for ip in inputs:
self.matrix[(ip[0], col_code)] = ip[1]
param_check_list = []
for c, column in enumerate(self.matrix.T):
for r, i in enumerate(column):
if i > 0:
p_from = cr_list[r]
p_to = cr_list[c]
coords = (r, c)
from_item_type = self.database['items'][(self.database['name'], p_from)]['lcopt_type']
#print('{}\t| {} --> {}'.format(coords, self.get_name(p_from), self.get_name(p_to)))
param_check_list.append('p_{}_{}'.format(coords[0], coords[1]))
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.params:
self.params['p_{}_{}'.format(coords[0], coords[1])] = {
'function': None,
'normalisation_parameter': '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1]),
'description': 'Input of {} to create {}'.format(self.get_name(p_from), self.get_name(p_to)),
'coords': coords,
'unit': self.get_unit(p_from),
'from': p_from,
'from_name': self.get_name(p_from),
'to': p_to,
'to_name': self.get_name(p_to),
'type': from_item_type,
}
#elif 'normalisation_parameter' not in self.params['p_{}_{}'.format(coords[0], coords[1])].keys():
#print("Adding normalisation_parameter to {}".format('p_{}_{}'.format(coords[0], coords[1])))
#self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
#print('p_{}_{} already exists'.format(coords[0],coords[1]))
else:
pass # print("SOMETHING WRONG HERE\n{}\n".format(self.params['p_{}_{}'.format(coords[0], coords[1])]))
# make sure the parameter is being normalised and allocated properly
self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.parameter_map:
self.parameter_map[(p_from, p_to)] = 'p_{}_{}'.format(coords[0], coords[1])
kill_list = []
for k in self.params.keys():
if k not in param_check_list:
#print("{} may be obsolete".format(k))
kill_list.append(k)
for p in kill_list:
#print("deleting parameter {}".format(p))
del self.params[p]
return True
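The matrix construction above can be sketched without numpy or the lcopt database: products index the rows and columns of a square matrix, and each process writes its technosphere input amounts into the column of its production output (`build_matrix` and the dict shapes here are illustrative, not lcopt's actual data model):

```python
def build_matrix(products, processes):
    """Square product-by-product input matrix from a list of processes."""
    index = {code: i for i, code in enumerate(products)}
    n = len(products)
    matrix = [[0.0] * n for _ in range(n)]
    for proc in processes:
        col = index[proc["production"]]          # column of the output product
        for code, amount in proc["inputs"]:
            matrix[index[code]][col] = amount    # row of each input product
    return matrix

products = ["steel", "car"]
processes = [{"production": "car", "inputs": [("steel", 0.8)]}]
# build_matrix(products, processes) -> [[0.0, 0.8], [0.0, 0.0]]
```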
def generate_parameter_set_excel_file(self):
"""
Generate an excel file containing the parameter sets in a format you can import into SimaPro Developer.
The file will be called "ParameterSet_<ModelName>_input_file.xlsx"
"""
parameter_sets = self.parameter_sets
p_set = []
filename = "ParameterSet_{}_input_file.xlsx".format(self.name)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
p_set_name = os.path.join(base_dir, filename)
p = self.params
for k in p.keys():
if p[k]['function'] is None:
base_dict = {'id': k, 'name': p[k]['description'], 'unit': p[k]['unit']}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][k]
p_set.append(base_dict)
else:
pass
#print("{} is determined by a function".format(p[k]['description']))
for e in self.ext_params:
base_dict = {'id': '{}'.format(e['name']), 'type': 'external', 'name': e['description'], 'unit': ''}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][e['name']]
p_set.append(base_dict)
df = pd.DataFrame(p_set)
with pd.ExcelWriter(p_set_name, engine='xlsxwriter') as writer:
ps_columns = [k for k in parameter_sets.keys()]
#print (ps_columns)
my_columns = ['name', 'unit', 'id']
my_columns.extend(ps_columns)
#print (my_columns)
#print(df)
df.to_excel(writer, sheet_name=self.name, columns=my_columns, index=False, merge_cells=False)
return p_set_name
def add_parameter(self, param_name, description=None, default=0, unit=None):
"""
Add a global parameter to the database that can be accessed by functions
"""
if description is None:
description = "Parameter called {}".format(param_name)
if unit is None:
unit = "-"
name_check = lambda x: x['name'] == param_name
name_check_list = list(filter(name_check, self.ext_params))
if len(name_check_list) == 0:
self.ext_params.append({'name': param_name, 'description': description, 'default': default, 'unit': unit})
else:
print('{} already exists - choose a different name'.format(param_name))
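The duplicate-name guard in `add_parameter` can be shown on a plain list of parameter dicts (`add_global_param` is a simplified stand-in that returns a flag instead of printing):

```python
def add_global_param(ext_params, name, description=None, default=0, unit="-"):
    """Append a parameter dict unless the name is already taken."""
    if any(p["name"] == name for p in ext_params):
        return False  # name already exists
    ext_params.append({
        "name": name,
        "description": description or "Parameter called {}".format(name),
        "default": default,
        "unit": unit,
    })
    return True

ext_params = []
add_global_param(ext_params, "efficiency", default=0.9)  # appended
add_global_param(ext_params, "efficiency")               # rejected, duplicate
```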
def list_parameters_as_df(self):
"""
Only really useful when running from a jupyter notebook.
Lists the parameters in the model in a pandas dataframe
Columns: id, matrix coordinates, description, function
"""
to_df = []
for i, e in enumerate(self.ext_params):
row = {}
row['id'] = e['name']
row['coords'] = "n/a"
row['description'] = e['description']
row['function'] = "n/a"
to_df.append(row)
for pk in self.params:
p = self.params[pk]
row = {}
row['id'] = pk
row['coords'] = p['coords']
row['description'] = p['description']
row['function'] = p['function']
to_df.append(row)
df = pd.DataFrame(to_df)
return df
def import_external_db(self, db_file, db_type=None):
"""
Import an external database for use in lcopt
db_type must be one of ``technosphere`` or ``biosphere``
The best way to 'obtain' an external database is to 'export' it from brightway as a pickle file
e.g.::
import brightway2 as bw
bw.projects.set_current('MyModel')
db = bw.Database('MyDatabase')
db_as_dict = db.load()
import pickle
with open('MyExport.pickle', 'wb') as f:
pickle.dump(db_as_dict, f)
NOTE: The Ecoinvent cutoff 3.3 database and the full biosphere database are included in the lcopt model as standard - no need to import those
This can be useful if you have your own methods which require new biosphere flows that you want to analyse using lcopt
"""
        with open("{}.pickle".format(db_file), "rb") as f:
            db = pickle.load(f)
name = list(db.keys())[0][0]
new_db = {'items': db, 'name': name}
self.external_databases.append(new_db)
if db_type is None: # Assume its a technosphere database
db_type = 'technosphere'
if db_type == 'technosphere':
self.technosphere_databases.append(name)
elif db_type == 'biosphere':
self.biosphere_databases.append(name)
        else:
            raise ValueError("Database type must be 'technosphere' or 'biosphere'")
def search_databases(self, search_term, location=None, markets_only=False, databases_to_search=None, allow_internal=False):
"""
Search external databases linked to your lcopt model.
To restrict the search to particular databases (e.g. technosphere or biosphere only) use a list of database names in the ``database_to_search`` variable
"""
dict_list = []
if allow_internal:
internal_dict = {}
for k, v in self.database['items'].items():
if v.get('lcopt_type') == 'intermediate':
internal_dict[k] = v
dict_list.append(internal_dict)
if databases_to_search is None:
#Search all of the databases available
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases])
dict_list += [x['items'] for x in self.external_databases]
else:
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
dict_list += [x['items'] for x in self.external_databases if x['name'] in databases_to_search]
data = Dictionaries(*dict_list)
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
query = Query()
if markets_only:
market_filter = Filter("name", "has", "market for")
query.add(market_filter)
if location is not None:
location_filter = Filter("location", "is", location)
query.add(location_filter)
query.add(Filter("name", "ihas", search_term))
result = query(data)
return result
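The `Dictionaries`/`Query`/`Filter` machinery used above can be approximated in a few lines: merge several item dicts, then filter by a case-insensitive name match plus an optional location (`search_items` is a simplified stand-in, not lcopt's search implementation):

```python
def search_items(dicts, term, location=None):
    """Case-insensitive name search over several item dictionaries."""
    merged = {}
    for d in dicts:
        merged.update(d)
    return {
        k: v for k, v in merged.items()
        if term.lower() in v.get("name", "").lower()
        and (location is None or v.get("location") == location)
    }

db_a = {("a", "1"): {"name": "market for steel", "location": "GLO"}}
db_b = {("b", "2"): {"name": "Steel production", "location": "DE"}}
# search_items([db_a, db_b], "steel") matches both entries;
# adding location="DE" narrows it to the second.
```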
def database_to_SimaPro_csv(self):
"""
Export the lcopt model as a SimaPro csv file.
The file will be called "<ModelName>_database_export.csv"
"""
self.parameter_scan()
csv_args = {}
csv_args['processes'] = []
db = self.database['items']
product_filter = lambda x: db[x]['type'] == 'product'
process_filter = lambda x: db[x]['type'] == 'process'
processes = list(filter(process_filter, db))
products = list(filter(product_filter, db))
created_exchanges = []
project_input_params = []
project_calc_params = []
for k in processes:
item = db[k]
current = {}
current['name'] = item['name']
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = item['unit']
current['exchanges'] = []
process_params = []
production_filter = lambda x: x['type'] == 'production'
output_code = list(filter(production_filter, item['exchanges']))[0]['input'][1]
for e in item['exchanges']:
if e['type'] == 'technosphere':
this_exchange = {}
this_code = e['input'][1]
formatted_name = self.get_name(this_code)
this_exchange['formatted_name'] = formatted_name
param_key = (this_code, output_code)
#print(param_key)
#param_check = (formatted_name, item['name'])
this_param = self.parameter_map[param_key]
process_params.append(this_param)
this_exchange['amount'] = this_param
this_exchange['unit'] = self.get_unit(this_code)
current['exchanges'].append(this_exchange)
elif e['type'] == 'production':
this_code = e['input'][1]
name = self.get_name(this_code)
current['output_name'] = name
created_exchanges.append(name)
# process parameters
for p in process_params:
if self.params[p]['function'] is None:
project_input_params.append({'name': p, 'comment': self.params[p]['description']})
else:
project_calc_params.append({'name': p, 'comment': self.params[p]['description'], 'formula': self.params[p]['function']})
csv_args['processes'].append(current)
for k in products:
this_item = db[k]
this_name = this_item['name']
if this_item['name'] in created_exchanges:
#print ('{} already created'.format(this_name))
pass
else:
#print ('Need to create {}'.format(this_name))
current = {}
current['name'] = this_name
current['output_name'] = this_name
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = this_item['unit']
#current['exchanges'] = []
if 'ext_link' in this_item.keys():
ext_link = this_item['ext_link']
if ext_link[0] != self.database['name']:
db_filter = lambda x: x['name'] == ext_link[0]
extdb = list(filter(db_filter, self.external_databases))[0]['items']
ext_item = extdb[ext_link]
if ext_link[0] != self.biosphereName:
ref_prod = ext_item['reference product']
name = ext_item['name'].replace(" " + ref_prod, "")
location = ext_item['location']
system_model = "Alloc Def"
process_type = "U"
unit = unnormalise_unit(ext_item['unit'])
simaPro_name = "{} {{{}}}| {} | {}, {}".format(ref_prod.capitalize(), location, name, system_model, process_type)
#print ('{} has an external link to {}'.format(this_name, simaPro_name))
current['exchanges'] = [{'formatted_name': simaPro_name, 'unit': unit, 'amount': 1}]
else:
#print('{} has a biosphere exchange - need to sort this out'.format(this_name))
#print(ext_item)
unit = unnormalise_unit(ext_item['unit'])
formatted_name = ext_item['name']
if 'air' in ext_item['categories']:
current['air_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to air'.format(formatted_name)}]
elif 'water' in ext_item['categories']:
current['water_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to water'.format(formatted_name)}]
elif 'soil' in ext_item['categories']:
current['soil_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to soil'.format(formatted_name)}]
else:
                            print("{} has a biosphere exchange that isn't to air, water or soil".format(this_name))
print(ext_item)
else:
warnings.warn('{} has NO internal or external link - it is burden free'.format(this_name))
csv_args['processes'].append(current)
created_exchanges.append(this_name)
#print(csv_args)
#print(created_exchanges)
csv_args['project'] = {}
#NOTE - currently external parameters can only be constants
csv_args['project']['calculated_parameters'] = project_calc_params
#add the external parameters to the input parameter list
for p in self.ext_params:
project_input_params.append({'name': p['name'], 'comment': p['description'], 'default': p['default']})
csv_args['project']['input_parameters'] = project_input_params
#print (csv_args)
env = Environment(
loader=PackageLoader('lcopt', 'templates'),
)
filename = "{}_database_export.csv".format(self.name.replace(" ", "_"))
csv_template = env.get_template('export.csv')
output = csv_template.render(**csv_args)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
efn = os.path.join(base_dir, filename)
with open(efn, "w") as f:
f.write(output)
return efn
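The process ids generated in `database_to_SimaPro_csv` follow a fixed 8+11 character scheme: the first 8 characters come from the padded model name, the last 11 from a zero-padded random integer. A sketch of that scheme (`simapro_id` is an illustrative helper, not part of lcopt):

```python
from random import randint

def simapro_id(model_name):
    """8 chars of padded model name + 11 zero-padded random digits."""
    head = (model_name.replace(" ", "") + "XXXXXXXX")[:8]
    tail = ("00000000000" + str(randint(1, 99999999999)))[-11:]
    return head + tail

# len(simapro_id("My Model")) is always 19; the head here is "MyModelX"
```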
# << Flask >> #
def launch_interact(self): # pragma: no cover
"""
This is probably the most important method in the model - you use it to launch the GUI
"""
my_flask = FlaskSandbox(self)
my_flask.run()
# << Brightway2 >> #
def export_to_bw2(self):
"""
Export the lcopt model in the native brightway 2 format
returns name, database
to use it to export, then import to brightway::
name, db = model.export_to_bw2()
import brightway2 as bw
bw.projects.set_current('MyProject')
new_db = bw.Database(name)
new_db.write(db)
new_db.process()
"""
my_exporter = Bw2Exporter(self)
name, bw2db = my_exporter.export_to_bw2()
return name, bw2db
def analyse(self, demand_item, demand_item_code):
""" Run the analyis of the model
Doesn't return anything, but creates a new item ``LcoptModel.result_set`` containing the results
"""
my_analysis = Bw2Analysis(self)
self.result_set = my_analysis.run_analyses(demand_item, demand_item_code, **self.analysis_settings)
return True
# << Disclosures >> #
def export_disclosure(self, parameter_set=None, folder_path=None):
return export_disclosure(self, parameter_set, folder_path)
# pjamesjoyce/lcopt | lcopt/model.py | LcoptModel.add_parameter | python
def add_parameter(self, param_name, description=None, default=0, unit=None):
if description is None:
description = "Parameter called {}".format(param_name)
if unit is None:
unit = "-"
name_check = lambda x: x['name'] == param_name
name_check_list = list(filter(name_check, self.ext_params))
if len(name_check_list) == 0:
self.ext_params.append({'name': param_name, 'description': description, 'default': default, 'unit': unit})
else:
print('{} already exists - choose a different name'.format(param_name)) | Add a global parameter to the database that can be accessed by functions | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/model.py#L635-L651 | null | class LcoptModel(object):
"""
This is the base model class.
To create a new model, enter a name e.g. ``model = LcoptModel('My_Model')``
To load an existing model use the ``load`` option e.g. ``model = LcoptModel(load = 'My_Model')``
"""
def __init__(self, name=hex(random.getrandbits(128))[2:-1], load=None, useForwast=False, ecoinvent_version=None, ecoinvent_system_model=None, ei_username = None, ei_password = None, write_config=None, autosetup=True):
super(LcoptModel, self).__init__()
# name the instance
self.name = name
# set up the database, parameter dictionaries, the matrix and the names of the exchanges
self.database = {'items': OrderedDict(), 'name': '{}_Database'.format(self.name)}
self.external_databases = []
self.params = OrderedDict()
self.production_params = OrderedDict()
self.allocation_params = OrderedDict()
self.ext_params = []
self.matrix = None
self.names = None
self.parameter_sets = OrderedDict()
self.model_matrices = OrderedDict()
self.technosphere_matrices = OrderedDict()
self.leontif_matrices = OrderedDict()
self.parameter_map = {}
self.sandbox_positions = {}
# If ecoinvent isn't specified in the setup, look for a default in the config and fall back on default set in constants
if ecoinvent_version is None:
self.ecoinvent_version = str(storage.ecoinvent_version)
else:
self.ecoinvent_version = ecoinvent_version
if ecoinvent_system_model is None:
self.ecoinvent_system_model = storage.ecoinvent_system_model
else:
self.ecoinvent_system_model = ecoinvent_system_model
ei_name = "Ecoinvent{}_{}_{}".format(*self.ecoinvent_version.split("."), self.ecoinvent_system_model) #"Ecoinvent3_3_cutoff"
self.ecoinventName = ei_name # "Ecoinvent3_3_cutoff"
self.biosphereName = "biosphere3"
self.ecoinventFilename = ei_name # "ecoinvent3_3"
self.biosphereFilename = "biosphere3"
self.forwastName = "forwast"
self.forwastFilename = "forwast"
self.useForwast = useForwast
self.technosphere_databases = []
#if self.useForwast:
# self.technosphere_databases = [self.forwastName]
#else:
# self.technosphere_databases = [self.ecoinventName]
#self.biosphere_databases = [self.biosphereName]
self.biosphere_databases = []
# default settings for bw2 analysis
self.analysis_settings = {'amount': 1,
'methods': [('IPCC 2013', 'climate change', 'GWP 100a'), ('USEtox', 'human toxicity', 'total')],
#'top_processes': 10,
#'gt_cutoff': 0.01,
'pie_cutoff': 0.05
}
self.allow_allocation = False
# initialise with a blank result set
self.result_set = None
# set the save option, this defaults to the config value but should be overwritten on load for existing models
self.save_option = storage.save_option
if load is not None:
self.load(load)
# check if lcopt is set up, and if not, set it up
is_setup = self.lcopt_setup(ei_username=ei_username, ei_password=ei_password, write_config=write_config,
ecoinvent_version=self.ecoinvent_version, ecoinvent_system_model = self.ecoinvent_system_model, autosetup=autosetup)
if not is_setup:
warnings.warn('lcopt autosetup did not run')
asset_path = fix_mac_path_escapes(storage.search_index_dir) #os.path.join(os.path.dirname(os.path.realpath(__file__)), 'assets')
ecoinventPath = os.path.join(asset_path, self.ecoinventFilename)
biospherePath = os.path.join(asset_path, self.biosphereFilename)
forwastPath = os.path.join(asset_path, self.forwastFilename)
# Try and initialise the external databases if they're not there already
if self.useForwast:
if self.forwastName not in [x['name'] for x in self.external_databases]:
self.import_external_db(forwastPath, 'technosphere')
else:
if self.ecoinventName not in [x['name'] for x in self.external_databases]:
self.import_external_db(ecoinventPath, 'technosphere')
if self.biosphereName not in [x['name'] for x in self.external_databases]:
self.import_external_db(biospherePath, 'biosphere')
# create partial version of io functions
self.add_to_database = partial(add_to_specified_database, database=self.database)
self.get_exchange = partial(get_exchange_from_database, database=self.database)
self.exists_in_database = partial(exists_in_specific_database, database=self.database)
self.get_name = partial(get_exchange_name_from_database, database=self.database)
self.get_unit = partial(get_exchange_unit_from_database, database=self.database)
self.parameter_scan()
def lcopt_setup(self, ei_username, ei_password, write_config, ecoinvent_version, ecoinvent_system_model, autosetup):
if not autosetup:
return False
if storage.project_type == 'single':
if self.useForwast:
forwast_autosetup()
else:
self.base_project_name = storage.single_project_name
#if bw2_project_exists(self.base_project_name):
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=False)
elif not self.useForwast:
self.base_project_name = DEFAULT_PROJECT_STEM + self.ecoinventName
old_default = DEFAULT_PROJECT_STEM[:-1]
is_default = ecoinvent_version == "3.3" and ecoinvent_system_model == "cutoff"
if bw2_project_exists(self.base_project_name):
# make sure the search index file is there too
write_search_index(self.base_project_name, self.ecoinventName)
elif is_default and bw2_project_exists(old_default):
upgrade_old_default()
else:
print("Lcopt needs to be set up to integrate with brightway2 - this only needs to be done once per version/system model combo")
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=True)
else:
forwast_autosetup()
return True
def rename(self, newname):
"""change the name of the model (i.e. what the .lcopt file will be saved as)"""
self.name = newname
#def saveAs(self, filename):
# """save the instance as a pickle"""
# pickle.dump(self, open("{}.pickle".format(filename), "wb"))
def save(self):
"""save the instance as a .lcopt file"""
if self.save_option == 'curdir':
model_path = os.path.join(
os.getcwd(),
'{}.lcopt'.format(self.name)
)
else: # default to appdir
model_path = os.path.join(
storage.model_dir,
'{}.lcopt'.format(self.name)
)
model_path = fix_mac_path_escapes(model_path)
with open(model_path, 'wb') as model_file:
pickle.dump(self, model_file)
def load(self, filename):
"""load data from a saved .lcopt file"""
if filename[-6:] != ".lcopt":
filename += ".lcopt"
try:
savedInstance = pickle.load(open("{}".format(filename), "rb"))
except FileNotFoundError:
savedInstance = pickle.load(open(fix_mac_path_escapes(os.path.join(storage.model_dir, "{}".format(filename))), "rb"))
attributes = ['name',
'database',
'params',
'production_params',
'allocation_params',
'ext_params',
'matrix',
'names',
'parameter_sets',
'model_matrices',
'technosphere_matrices',
'leontif_matrices',
'external_databases',
'parameter_map',
'sandbox_positions',
'ecoinventName',
'biosphereName',
'forwastName',
'analysis_settings',
'technosphere_databases',
'biosphere_databases',
'result_set',
'evaluated_parameter_sets',
'useForwast',
'base_project_name',
'save_option',
'allow_allocation',
'ecoinvent_version',
'ecoinvent_system_model',
]
for attr in attributes:
if hasattr(savedInstance, attr):
setattr(self, attr, getattr(savedInstance, attr))
else:
pass
#print ("can't set {}".format(attr))
# use legacy save option if this is missing from the model
if not hasattr(savedInstance, 'save_option'):
setattr(self, 'save_option', LEGACY_SAVE_OPTION)
# figure out ecoinvent version and system model if these are missing from the model
if not hasattr(savedInstance, 'ecoinvent_version') or not hasattr(savedInstance, 'ecoinvent_system_model'):
parts = savedInstance.ecoinventName.split("_")
main_version = parts[0][-1]
sub_version = parts[1]
system_model = parts[2]
#print(parts)
setattr(self, 'ecoinvent_version', '{}.{}'.format(main_version, sub_version))
setattr(self, 'ecoinvent_system_model', system_model)
def create_product (self, name, location='GLO', unit='kg', **kwargs):
"""
Create a new product in the model database
"""
new_product = item_factory(name=name, location=location, unit=unit, type='product', **kwargs)
if not self.exists_in_database(new_product['code']):
self.add_to_database(new_product)
#print ('{} added to database'.format(name))
return self.get_exchange(name)
else:
#print('{} already exists in this database'.format(name))
return False
def create_process(self, name, exchanges, location='GLO', unit='kg'):
"""
Create a new process, including all new exchanges (in brightway2's exchange format) in the model database.
Exchanges must have at least a name, type and unit field
"""
found_exchanges = []
for e in exchanges:
exc_name = e.pop('name', None)
exc_type = e.pop('type', None)
this_exchange = self.get_exchange(exc_name)
if this_exchange is False:
my_unit = e.pop('unit', unit)
this_exchange = self.create_product(exc_name, location=location, unit=my_unit, **e)
found_exchanges.append(exchange_factory(this_exchange, exc_type, 1, 1, '{} exchange of {}'.format(exc_type, exc_name)))
new_process = item_factory(name=name, location=location, unit=unit, type='process', exchanges=found_exchanges)
self.add_to_database(new_process)
self.parameter_scan()
return True
def check_param_function_use(self, param_id):
current_functions = {k: x['function'] for k, x in self.params.items() if x['function'] is not None}
problem_list = []
for k, f in current_functions.items():
if param_id in f:
problem_list.append((k, f))
return problem_list
def remove_input_link(self, process_code, input_code):
"""
Remove an input (technosphere or biosphere exchange) from a process, resolving all parameter issues
"""
# 1. find correct process
# 2. find correct exchange
# 3. remove that exchange
# 4. check for parameter conflicts?
# 4. run parameter scan to rebuild matrices?
#print(process_code, input_code)
process = self.database['items'][process_code]
exchanges = process['exchanges']
initial_count = len(exchanges)
new_exchanges = [e for e in exchanges if e['input'] != input_code]
product_code = [e['input'] for e in exchanges if e['type'] == 'production'][0]
#print(product_code)
param_id = [k for k, v in self.params.items() if (v['from'] == input_code[1] and v['to'] == product_code[1])][0]
#print (param_id)
problem_functions = self.check_param_function_use(param_id)
if len(problem_functions) != 0:
#print('the following functions have been removed:')
for p in problem_functions:
self.params[p[0]]['function'] = None
#print(p)
process['exchanges'] = new_exchanges
del self.params[param_id]
self.parameter_scan()
return initial_count - len(new_exchanges)
def unlink_intermediate(self, sourceId, targetId):
"""
Remove a link between two processes
"""
source = self.database['items'][(self.database.get('name'), sourceId)]
target = self.database['items'][(self.database.get('name'), targetId)]
production_exchange = [x['input'] for x in source['exchanges'] if x['type'] == 'production'][0]
new_exchanges = [x for x in target['exchanges'] if x['input'] != production_exchange]
target['exchanges'] = new_exchanges
self.parameter_scan()
return True
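A standalone sketch (illustrative data, not the lcopt API) of how the unlink works: find the source's production exchange, then filter it out of the target's exchange list.

```python
# The source process produces ('db', 'steel'); the target consumes it plus power.
source = {'exchanges': [{'input': ('db', 'steel'), 'type': 'production'}]}
target = {'exchanges': [
    {'input': ('db', 'steel'), 'type': 'technosphere', 'amount': 2.0},
    {'input': ('db', 'power'), 'type': 'technosphere', 'amount': 0.5},
]}

# Identify the production exchange of the source, then drop every exchange
# in the target that consumes it.
production = [x['input'] for x in source['exchanges'] if x['type'] == 'production'][0]
target['exchanges'] = [x for x in target['exchanges'] if x['input'] != production]
# only the power exchange remains
```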
def parameter_scan(self):
"""
Scan the database of the model instance to generate and expose parameters.
This is called by other functions when items are added/removed from the model, but can be run by itself if you like
"""
#self.parameter_map = {}
#self.params = OrderedDict()
cr_list = []
items = self.database['items']
#print(items)
for key in items.keys():
i = items[key]
#print(i['name'], i['type'])
if i['type'] == 'product':
cr_list.append(i['code'])
no_products = len(cr_list)
self.names = [self.get_name(x) for x in cr_list]
self.matrix = np.zeros((no_products, no_products))
for key in items.keys():
i = items[key]
if i['type'] == 'process':
inputs = []
#print(i['name'])
#print([(e['comment'], e['type']) for e in i['exchanges']])
for e in i['exchanges']:
if e['type'] == 'production':
col_code = cr_list.index(e['input'][1])
if not 'p_{}_production'.format(col_code) in self.production_params:
self.production_params['p_{}_production'.format(col_code)] = {
'function': None,
'description': 'Production parameter for {}'.format(self.get_name(e['input'][1])),
'unit': self.get_unit(e['input'][1]),
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'production',
}
if not 'p_{}_allocation'.format(col_code) in self.allocation_params:
self.allocation_params['p_{}_allocation'.format(col_code)] = {
'function': None,
'description': 'Allocation parameter for {}'.format(self.get_name(e['input'][1])),
'unit': "% (as decimal)",
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'allocation',
}
elif e['type'] == 'technosphere':
#print(cr_list)
row_code = cr_list.index(e['input'][1])
inputs.append((row_code, e['amount']))
for ip in inputs:
self.matrix[(ip[0], col_code)] = ip[1]
param_check_list = []
for c, column in enumerate(self.matrix.T):
for r, i in enumerate(column):
if i > 0:
p_from = cr_list[r]
p_to = cr_list[c]
coords = (r, c)
from_item_type = self.database['items'][(self.database['name'], p_from)]['lcopt_type']
#print('{}\t| {} --> {}'.format(coords, self.get_name(p_from), self.get_name(p_to)))
param_check_list.append('p_{}_{}'.format(coords[0], coords[1]))
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.params:
self.params['p_{}_{}'.format(coords[0], coords[1])] = {
'function': None,
'normalisation_parameter': '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1]),
'description': 'Input of {} to create {}'.format(self.get_name(p_from), self.get_name(p_to)),
'coords': coords,
'unit': self.get_unit(p_from),
'from': p_from,
'from_name': self.get_name(p_from),
'to': p_to,
'to_name': self.get_name(p_to),
'type': from_item_type,
}
#elif 'normalisation_parameter' not in self.params['p_{}_{}'.format(coords[0], coords[1])].keys():
#print("Adding normalisation_parameter to {}".format('p_{}_{}'.format(coords[0], coords[1])))
#self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
#print('p_{}_{} already exists'.format(coords[0],coords[1]))
else:
pass # print("SOMETHING WRONG HERE\n{}\n".format(self.params['p_{}_{}'.format(coords[0], coords[1])]))
# make sure the parameter is being normalised and allocated properly
self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.parameter_map:
self.parameter_map[(p_from, p_to)] = 'p_{}_{}'.format(coords[0], coords[1])
kill_list = []
for k in self.params.keys():
if k not in param_check_list:
#print("{} may be obsolete".format(k))
kill_list.append(k)
for p in kill_list:
#print("deleting parameter {}".format(p))
del self.params[p]
return True
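An illustrative, standalone sketch (not the lcopt API) of the core of ``parameter_scan``: the production exchange picks the column, each technosphere exchange picks a row, and every filled cell gets a ``p_<row>_<col>`` parameter id.

```python
# products stands in for cr_list; list index = matrix coordinate.
products = ['sugar', 'water', 'syrup']
processes = [
    {'production': 'syrup', 'technosphere': [('sugar', 0.4), ('water', 0.6)]},
]

n = len(products)
matrix = [[0.0] * n for _ in range(n)]   # square product-by-product matrix
param_ids = []
for proc in processes:
    col = products.index(proc['production'])        # column from the output
    for name, amount in proc['technosphere']:
        row = products.index(name)                  # row from each input
        matrix[row][col] = amount
        param_ids.append('p_{}_{}'.format(row, col))
```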
def generate_parameter_set_excel_file(self):
"""
Generate an Excel file containing the parameter sets in a format you can import into SimaPro Developer.
The file will be called "ParameterSet_<ModelName>_input_file.xlsx"
"""
parameter_sets = self.parameter_sets
p_set = []
filename = "ParameterSet_{}_input_file.xlsx".format(self.name)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
p_set_name = os.path.join(base_dir, filename)
p = self.params
for k in p.keys():
if p[k]['function'] is None:
base_dict = {'id': k, 'name': p[k]['description'], 'unit': p[k]['unit']}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][k]
p_set.append(base_dict)
else:
pass
#print("{} is determined by a function".format(p[k]['description']))
for e in self.ext_params:
base_dict = {'id': '{}'.format(e['name']), 'type': 'external', 'name': e['description'], 'unit': ''}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][e['name']]
p_set.append(base_dict)
df = pd.DataFrame(p_set)
with pd.ExcelWriter(p_set_name, engine='xlsxwriter') as writer:
ps_columns = [k for k in parameter_sets.keys()]
#print (ps_columns)
my_columns = ['name', 'unit', 'id']
my_columns.extend(ps_columns)
#print (my_columns)
#print(df)
df.to_excel(writer, sheet_name=self.name, columns=my_columns, index=False, merge_cells=False)
return p_set_name
def add_parameter(self, param_name, description=None, default=0, unit=None):
"""
Add a global parameter to the database that can be accessed by functions
"""
if description is None:
description = "Parameter called {}".format(param_name)
if unit is None:
unit = "-"
name_check = lambda x: x['name'] == param_name
name_check_list = list(filter(name_check, self.ext_params))
if len(name_check_list) == 0:
self.ext_params.append({'name': param_name, 'description': description, 'default': default, 'unit': unit})
else:
print('{} already exists - choose a different name'.format(param_name))
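A minimal sketch of the duplicate-name guard in ``add_parameter``; the parameter names and the ``add_param`` helper are illustrative, not part of the lcopt API.

```python
# Reject a new global parameter if the name is already taken,
# otherwise append it to the list of external parameters.
ext_params = [{'name': 'density', 'default': 1.0}]

def add_param(params, name, default=0):
    if any(p['name'] == name for p in params):
        return False            # name already taken
    params.append({'name': name, 'default': default})
    return True

add_param(ext_params, 'density')   # rejected: duplicate name
add_param(ext_params, 'yield')     # added
```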
def list_parameters_as_df(self):
"""
Most useful when running from a Jupyter notebook.
Lists the parameters in the model in a pandas dataframe
Columns: id, matrix coordinates, description, function
"""
to_df = []
for i, e in enumerate(self.ext_params):
row = {}
row['id'] = e['name']
row['coords'] = "n/a"
row['description'] = e['description']
row['function'] = "n/a"
to_df.append(row)
for pk in self.params:
p = self.params[pk]
row = {}
row['id'] = pk
row['coords'] = p['coords']
row['description'] = p['description']
row['function'] = p['function']
to_df.append(row)
df = pd.DataFrame(to_df)
return df
def import_external_db(self, db_file, db_type=None):
"""
Import an external database for use in lcopt
db_type must be one of ``technosphere`` or ``biosphere``
The best way to obtain an external database is to export it from brightway2 as a pickle file
e.g.::
import brightway2 as bw
bw.projects.set_current('MyModel')
db = bw.Database('MyDatabase')
db_as_dict = db.load()
import pickle
with open('MyExport.pickle', 'wb') as f:
pickle.dump(db_as_dict, f)
NOTE: The Ecoinvent cutoff 3.3 database and the full biosphere database are included in the lcopt model as standard - no need to import those
This can be useful if you have your own methods which require new biosphere flows that you want to analyse using lcopt
"""
db = pickle.load(open("{}.pickle".format(db_file), "rb"))
name = list(db.keys())[0][0]
new_db = {'items': db, 'name': name}
self.external_databases.append(new_db)
if db_type is None: # Assume its a technosphere database
db_type = 'technosphere'
if db_type == 'technosphere':
self.technosphere_databases.append(name)
elif db_type == 'biosphere':
self.biosphere_databases.append(name)
else:
raise ValueError("db_type must be 'technosphere' or 'biosphere'")
def search_databases(self, search_term, location=None, markets_only=False, databases_to_search=None, allow_internal=False):
"""
Search external databases linked to your lcopt model.
To restrict the search to particular databases (e.g. technosphere or biosphere only) use a list of database names in the ``databases_to_search`` variable
"""
dict_list = []
if allow_internal:
internal_dict = {}
for k, v in self.database['items'].items():
if v.get('lcopt_type') == 'intermediate':
internal_dict[k] = v
dict_list.append(internal_dict)
if databases_to_search is None:
#Search all of the databases available
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases])
dict_list += [x['items'] for x in self.external_databases]
else:
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
dict_list += [x['items'] for x in self.external_databases if x['name'] in databases_to_search]
data = Dictionaries(*dict_list)
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
query = Query()
if markets_only:
market_filter = Filter("name", "has", "market for")
query.add(market_filter)
if location is not None:
location_filter = Filter("location", "is", location)
query.add(location_filter)
query.add(Filter("name", "ihas", search_term))
result = query(data)
return result
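A hypothetical, standalone sketch of the kind of filtering ``search_databases`` delegates to bw2data's ``Query``/``Filter``; here it is reduced to plain dict iteration with illustrative item data.

```python
# Keep items whose name contains the search term (case-insensitive),
# optionally restricted to markets and/or a single location.
items = {
    ('db', 'a'): {'name': 'market for electricity', 'location': 'GB'},
    ('db', 'b'): {'name': 'electricity production', 'location': 'GB'},
    ('db', 'c'): {'name': 'market for steel', 'location': 'DE'},
}

def search(data, term, location=None, markets_only=False):
    out = {}
    for key, item in data.items():
        if term.lower() not in item['name'].lower():
            continue
        if markets_only and 'market for' not in item['name']:
            continue
        if location is not None and item['location'] != location:
            continue
        out[key] = item
    return out

result = search(items, 'electricity', location='GB', markets_only=True)
```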
def database_to_SimaPro_csv(self):
"""
Export the lcopt model as a SimaPro csv file.
The file will be called "<ModelName>_database_export.csv"
"""
self.parameter_scan()
csv_args = {}
csv_args['processes'] = []
db = self.database['items']
product_filter = lambda x: db[x]['type'] == 'product'
process_filter = lambda x: db[x]['type'] == 'process'
processes = list(filter(process_filter, db))
products = list(filter(product_filter, db))
created_exchanges = []
project_input_params = []
project_calc_params = []
for k in processes:
item = db[k]
current = {}
current['name'] = item['name']
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = item['unit']
current['exchanges'] = []
process_params = []
production_filter = lambda x: x['type'] == 'production'
output_code = list(filter(production_filter, item['exchanges']))[0]['input'][1]
for e in item['exchanges']:
if e['type'] == 'technosphere':
this_exchange = {}
this_code = e['input'][1]
formatted_name = self.get_name(this_code)
this_exchange['formatted_name'] = formatted_name
param_key = (this_code, output_code)
#print(param_key)
#param_check = (formatted_name, item['name'])
this_param = self.parameter_map[param_key]
process_params.append(this_param)
this_exchange['amount'] = this_param
this_exchange['unit'] = self.get_unit(this_code)
current['exchanges'].append(this_exchange)
elif e['type'] == 'production':
this_code = e['input'][1]
name = self.get_name(this_code)
current['output_name'] = name
created_exchanges.append(name)
# process parameters
for p in process_params:
if self.params[p]['function'] is None:
project_input_params.append({'name': p, 'comment': self.params[p]['description']})
else:
project_calc_params.append({'name': p, 'comment': self.params[p]['description'], 'formula': self.params[p]['function']})
csv_args['processes'].append(current)
for k in products:
this_item = db[k]
this_name = this_item['name']
if this_item['name'] in created_exchanges:
#print ('{} already created'.format(this_name))
pass
else:
#print ('Need to create {}'.format(this_name))
current = {}
current['name'] = this_name
current['output_name'] = this_name
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = this_item['unit']
#current['exchanges'] = []
if 'ext_link' in this_item.keys():
ext_link = this_item['ext_link']
if ext_link[0] != self.database['name']:
db_filter = lambda x: x['name'] == ext_link[0]
extdb = list(filter(db_filter, self.external_databases))[0]['items']
ext_item = extdb[ext_link]
if ext_link[0] != self.biosphereName:
ref_prod = ext_item['reference product']
name = ext_item['name'].replace(" " + ref_prod, "")
location = ext_item['location']
system_model = "Alloc Def"
process_type = "U"
unit = unnormalise_unit(ext_item['unit'])
simaPro_name = "{} {{{}}}| {} | {}, {}".format(ref_prod.capitalize(), location, name, system_model, process_type)
#print ('{} has an external link to {}'.format(this_name, simaPro_name))
current['exchanges'] = [{'formatted_name': simaPro_name, 'unit': unit, 'amount': 1}]
else:
#print('{} has a biosphere exchange - need to sort this out'.format(this_name))
#print(ext_item)
unit = unnormalise_unit(ext_item['unit'])
formatted_name = ext_item['name']
if 'air' in ext_item['categories']:
current['air_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to air'.format(formatted_name)}]
elif 'water' in ext_item['categories']:
current['water_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to water'.format(formatted_name)}]
elif 'soil' in ext_item['categories']:
current['soil_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to soil'.format(formatted_name)}]
else:
print("{} has a biosphere exchange that isn't to air, water or soil".format(this_name))
print(ext_item)
else:
warnings.warn('{} has NO internal or external link - it is burden free'.format(this_name))
csv_args['processes'].append(current)
created_exchanges.append(this_name)
#print(csv_args)
#print(created_exchanges)
csv_args['project'] = {}
#NOTE - currently external parameters can only be constants
csv_args['project']['calculated_parameters'] = project_calc_params
#add the external parameters to the input parameter list
for p in self.ext_params:
project_input_params.append({'name': p['name'], 'comment': p['description'], 'default': p['default']})
csv_args['project']['input_parameters'] = project_input_params
#print (csv_args)
env = Environment(
loader=PackageLoader('lcopt', 'templates'),
)
filename = "{}_database_export.csv".format(self.name.replace(" ", "_"))
csv_template = env.get_template('export.csv')
output = csv_template.render(**csv_args)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
efn = os.path.join(base_dir, filename)
with open(efn, "w") as f:
f.write(output)
return efn
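A sketch of the 19-character id built inside ``database_to_SimaPro_csv``: 8 characters derived from the model name, padded with ``X``, plus an 11-digit zero-padded number (fixed here for reproducibility instead of ``randint``). The ``simapro_id`` helper is illustrative, not part of the lcopt API.

```python
# Rebuild the id format used for SimaPro processes: name head + numeric tail.
def simapro_id(model_name, number):
    head = (model_name.replace(" ", "") + "XXXXXXXX")[:8]   # pad short names with X
    tail = ('00000000000' + str(number))[-11:]              # zero-pad to 11 digits
    return head + tail

sid = simapro_id("My Model", 42)
# sid == 'MyModelX00000000042'
```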
# << Flask >> #
def launch_interact(self): # pragma: no cover
"""
This is probably the most important method in the model - you use it to launch the GUI
"""
my_flask = FlaskSandbox(self)
my_flask.run()
# << Brightway2 >> #
def export_to_bw2(self):
"""
Export the lcopt model in the native brightway 2 format
returns name, database
to use it to export, then import to brightway::
name, db = model.export_to_bw2()
import brightway2 as bw
bw.projects.set_current('MyProject')
new_db = bw.Database(name)
new_db.write(db)
new_db.process()
"""
my_exporter = Bw2Exporter(self)
name, bw2db = my_exporter.export_to_bw2()
return name, bw2db
def analyse(self, demand_item, demand_item_code):
""" Run the analyis of the model
Doesn't return anything, but creates a new item ``LcoptModel.result_set`` containing the results
"""
my_analysis = Bw2Analysis(self)
self.result_set = my_analysis.run_analyses(demand_item, demand_item_code, **self.analysis_settings)
return True
# << Disclosures >> #
def export_disclosure(self, parameter_set=None, folder_path=None):
return export_disclosure(self, parameter_set, folder_path)
# Source: pjamesjoyce/lcopt, lcopt/model.py, LcoptModel.list_parameters_as_df
# https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/model.py#L653-L684
"""
This is the base model class.
To create a new model, enter a name e.g. ``model = LcoptModel('My_Model')``
To load an existing model use the ``load`` option e.g. ``model = LcoptModel(load = 'My_Model')``
"""
def __init__(self, name=None, load=None, useForwast=False, ecoinvent_version=None, ecoinvent_system_model=None, ei_username=None, ei_password=None, write_config=None, autosetup=True):
super(LcoptModel, self).__init__()
# name the instance (generate the random default per call; a def-time
# default would be evaluated once and shared by every unnamed model)
if name is None:
name = hex(random.getrandbits(128))[2:-1]
self.name = name
# set up the database, parameter dictionaries, the matrix and the names of the exchanges
self.database = {'items': OrderedDict(), 'name': '{}_Database'.format(self.name)}
self.external_databases = []
self.params = OrderedDict()
self.production_params = OrderedDict()
self.allocation_params = OrderedDict()
self.ext_params = []
self.matrix = None
self.names = None
self.parameter_sets = OrderedDict()
self.model_matrices = OrderedDict()
self.technosphere_matrices = OrderedDict()
self.leontif_matrices = OrderedDict()
self.parameter_map = {}
self.sandbox_positions = {}
# If ecoinvent isn't specified in the setup, look for a default in the config and fall back on default set in constants
if ecoinvent_version is None:
self.ecoinvent_version = str(storage.ecoinvent_version)
else:
self.ecoinvent_version = ecoinvent_version
if ecoinvent_system_model is None:
self.ecoinvent_system_model = storage.ecoinvent_system_model
else:
self.ecoinvent_system_model = ecoinvent_system_model
ei_name = "Ecoinvent{}_{}_{}".format(*self.ecoinvent_version.split("."), self.ecoinvent_system_model) #"Ecoinvent3_3_cutoff"
self.ecoinventName = ei_name # "Ecoinvent3_3_cutoff"
self.biosphereName = "biosphere3"
self.ecoinventFilename = ei_name # "ecoinvent3_3"
self.biosphereFilename = "biosphere3"
self.forwastName = "forwast"
self.forwastFilename = "forwast"
self.useForwast = useForwast
self.technosphere_databases = []
#if self.useForwast:
# self.technosphere_databases = [self.forwastName]
#else:
# self.technosphere_databases = [self.ecoinventName]
#self.biosphere_databases = [self.biosphereName]
self.biosphere_databases = []
# default settings for bw2 analysis
self.analysis_settings = {'amount': 1,
'methods': [('IPCC 2013', 'climate change', 'GWP 100a'), ('USEtox', 'human toxicity', 'total')],
#'top_processes': 10,
#'gt_cutoff': 0.01,
'pie_cutoff': 0.05
}
self.allow_allocation = False
# initialise with a blank result set
self.result_set = None
# set the save option, this defaults to the config value but should be overwritten on load for existing models
self.save_option = storage.save_option
if load is not None:
self.load(load)
# check if lcopt is set up, and if not, set it up
is_setup = self.lcopt_setup(ei_username=ei_username, ei_password=ei_password, write_config=write_config,
ecoinvent_version=self.ecoinvent_version, ecoinvent_system_model = self.ecoinvent_system_model, autosetup=autosetup)
if not is_setup:
warnings.warn('lcopt autosetup did not run')
asset_path = fix_mac_path_escapes(storage.search_index_dir) #os.path.join(os.path.dirname(os.path.realpath(__file__)), 'assets')
ecoinventPath = os.path.join(asset_path, self.ecoinventFilename)
biospherePath = os.path.join(asset_path, self.biosphereFilename)
forwastPath = os.path.join(asset_path, self.forwastFilename)
# Try and initialise the external databases if they're not there already
if self.useForwast:
if self.forwastName not in [x['name'] for x in self.external_databases]:
self.import_external_db(forwastPath, 'technosphere')
else:
if self.ecoinventName not in [x['name'] for x in self.external_databases]:
self.import_external_db(ecoinventPath, 'technosphere')
if self.biosphereName not in [x['name'] for x in self.external_databases]:
self.import_external_db(biospherePath, 'biosphere')
# create partial version of io functions
self.add_to_database = partial(add_to_specified_database, database=self.database)
self.get_exchange = partial(get_exchange_from_database, database=self.database)
self.exists_in_database = partial(exists_in_specific_database, database=self.database)
self.get_name = partial(get_exchange_name_from_database, database=self.database)
self.get_unit = partial(get_exchange_unit_from_database, database=self.database)
self.parameter_scan()
def lcopt_setup(self, ei_username, ei_password, write_config, ecoinvent_version, ecoinvent_system_model, autosetup):
if not autosetup:
return False
if storage.project_type == 'single':
if self.useForwast:
forwast_autosetup()
else:
self.base_project_name = storage.single_project_name
#if bw2_project_exists(self.base_project_name):
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=False)
elif not self.useForwast:
self.base_project_name = DEFAULT_PROJECT_STEM + self.ecoinventName
old_default = DEFAULT_PROJECT_STEM[:-1]
is_default = ecoinvent_version == "3.3" and ecoinvent_system_model == "cutoff"
if bw2_project_exists(self.base_project_name):
# make sure the search index file is there too
write_search_index(self.base_project_name, self.ecoinventName)
elif is_default and bw2_project_exists(old_default):
upgrade_old_default()
else:
print("Lcopt needs to be set up to integrate with brightway2 - this only needs to be done once per version/system model combo")
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=True)
else:
forwast_autosetup()
return True
def rename(self, newname):
"""change the name of the model (i.e. what the .lcopt file will be saved as)"""
self.name = newname
#def saveAs(self, filename):
# """save the instance as a pickle"""
# pickle.dump(self, open("{}.pickle".format(filename), "wb"))
def save(self):
"""save the instance as a .lcopt file"""
if self.save_option == 'curdir':
model_path = os.path.join(
os.getcwd(),
'{}.lcopt'.format(self.name)
)
else: # default to appdir
model_path = os.path.join(
storage.model_dir,
'{}.lcopt'.format(self.name)
)
model_path = fix_mac_path_escapes(model_path)
with open(model_path, 'wb') as model_file:
pickle.dump(self, model_file)
def load(self, filename):
"""load data from a saved .lcopt file"""
if filename[-6:] != ".lcopt":
filename += ".lcopt"
try:
savedInstance = pickle.load(open("{}".format(filename), "rb"))
except FileNotFoundError:
savedInstance = pickle.load(open(fix_mac_path_escapes(os.path.join(storage.model_dir, "{}".format(filename))), "rb"))
attributes = ['name',
'database',
'params',
'production_params',
'allocation_params',
'ext_params',
'matrix',
'names',
'parameter_sets',
'model_matrices',
'technosphere_matrices',
'leontif_matrices',
'external_databases',
'parameter_map',
'sandbox_positions',
'ecoinventName',
'biosphereName',
'forwastName',
'analysis_settings',
'technosphere_databases',
'biosphere_databases',
'result_set',
'evaluated_parameter_sets',
'useForwast',
'base_project_name',
'save_option',
'allow_allocation',
'ecoinvent_version',
'ecoinvent_system_model',
]
for attr in attributes:
if hasattr(savedInstance, attr):
setattr(self, attr, getattr(savedInstance, attr))
else:
pass
#print ("can't set {}".format(attr))
# use legacy save option if this is missing from the model
if not hasattr(savedInstance, 'save_option'):
setattr(self, 'save_option', LEGACY_SAVE_OPTION)
# figure out ecoinvent version and system model if these are missing from the model
if not hasattr(savedInstance, 'ecoinvent_version') or not hasattr(savedInstance, 'ecoinvent_system_model'):
parts = savedInstance.ecoinventName.split("_")
main_version = parts[0][-1]
sub_version = parts[1]
system_model = parts[2]
#print(parts)
setattr(self, 'ecoinvent_version', '{}.{}'.format(main_version, sub_version))
setattr(self, 'ecoinvent_system_model', system_model)
def create_product (self, name, location='GLO', unit='kg', **kwargs):
"""
Create a new product in the model database
"""
new_product = item_factory(name=name, location=location, unit=unit, type='product', **kwargs)
if not self.exists_in_database(new_product['code']):
self.add_to_database(new_product)
#print ('{} added to database'.format(name))
return self.get_exchange(name)
else:
#print('{} already exists in this database'.format(name))
return False
def create_process(self, name, exchanges, location='GLO', unit='kg'):
"""
Create a new process, including all new exchanges (in brightway2's exchange format) in the model database.
Exchanges must have at least a name, type and unit field
"""
found_exchanges = []
for e in exchanges:
exc_name = e.pop('name', None)
exc_type = e.pop('type', None)
this_exchange = self.get_exchange(exc_name)
if this_exchange is False:
my_unit = e.pop('unit', unit)
this_exchange = self.create_product(exc_name, location=location, unit=my_unit, **e)
found_exchanges.append(exchange_factory(this_exchange, exc_type, 1, 1, '{} exchange of {}'.format(exc_type, exc_name)))
new_process = item_factory(name=name, location=location, unit=unit, type='process', exchanges=found_exchanges)
self.add_to_database(new_process)
self.parameter_scan()
return True
def check_param_function_use(self, param_id):
current_functions = {k: x['function'] for k, x in self.params.items() if x['function'] is not None}
problem_list = []
for k, f in current_functions.items():
if param_id in f:
problem_list.append((k, f))
return problem_list
def remove_input_link(self, process_code, input_code):
"""
Remove an input (technosphere or biosphere exchange) from a process, resolving all parameter issues
"""
# 1. find correct process
# 2. find correct exchange
# 3. remove that exchange
# 4. check for parameter conflicts?
# 4. run parameter scan to rebuild matrices?
#print(process_code, input_code)
process = self.database['items'][process_code]
exchanges = process['exchanges']
initial_count = len(exchanges)
new_exchanges = [e for e in exchanges if e['input'] != input_code]
product_code = [e['input'] for e in exchanges if e['type'] == 'production'][0]
#print(product_code)
param_id = [k for k, v in self.params.items() if (v['from'] == input_code[1] and v['to'] == product_code[1])][0]
#print (param_id)
problem_functions = self.check_param_function_use(param_id)
if len(problem_functions) != 0:
#print('the following functions have been removed:')
for p in problem_functions:
self.params[p[0]]['function'] = None
#print(p)
process['exchanges'] = new_exchanges
del self.params[param_id]
self.parameter_scan()
return initial_count - len(new_exchanges)
def unlink_intermediate(self, sourceId, targetId):
"""
Remove a link between two processes
"""
source = self.database['items'][(self.database.get('name'), sourceId)]
target = self.database['items'][(self.database.get('name'), targetId)]
production_exchange = [x['input'] for x in source['exchanges'] if x['type'] == 'production'][0]
new_exchanges = [x for x in target['exchanges'] if x['input'] != production_exchange]
target['exchanges'] = new_exchanges
self.parameter_scan()
return True
def parameter_scan(self):
"""
Scan the database of the model instance to generate and expose parameters.
This is called by other functions when items are added/removed from the model, but can be run by itself if you like
"""
#self.parameter_map = {}
#self.params = OrderedDict()
cr_list = []
items = self.database['items']
#print(items)
for key in items.keys():
i = items[key]
#print(i['name'], i['type'])
if i['type'] == 'product':
cr_list.append(i['code'])
no_products = len(cr_list)
self.names = [self.get_name(x) for x in cr_list]
self.matrix = np.zeros((no_products, no_products))
for key in items.keys():
i = items[key]
if i['type'] == 'process':
inputs = []
#print(i['name'])
#print([(e['comment'], e['type']) for e in i['exchanges']])
for e in i['exchanges']:
if e['type'] == 'production':
col_code = cr_list.index(e['input'][1])
if not 'p_{}_production'.format(col_code) in self.production_params:
self.production_params['p_{}_production'.format(col_code)] = {
'function': None,
'description': 'Production parameter for {}'.format(self.get_name(e['input'][1])),
'unit': self.get_unit(e['input'][1]),
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'production',
}
if not 'p_{}_allocation'.format(col_code) in self.allocation_params:
self.allocation_params['p_{}_allocation'.format(col_code)] = {
'function': None,
'description': 'Allocation parameter for {}'.format(self.get_name(e['input'][1])),
'unit': "% (as decimal)",
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'allocation',
}
elif e['type'] == 'technosphere':
#print(cr_list)
row_code = cr_list.index(e['input'][1])
inputs.append((row_code, e['amount']))
for ip in inputs:
self.matrix[(ip[0], col_code)] = ip[1]
param_check_list = []
for c, column in enumerate(self.matrix.T):
for r, i in enumerate(column):
if i > 0:
p_from = cr_list[r]
p_to = cr_list[c]
coords = (r, c)
from_item_type = self.database['items'][(self.database['name'], p_from)]['lcopt_type']
#print('{}\t| {} --> {}'.format(coords, self.get_name(p_from), self.get_name(p_to)))
param_check_list.append('p_{}_{}'.format(coords[0], coords[1]))
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.params:
self.params['p_{}_{}'.format(coords[0], coords[1])] = {
'function': None,
'normalisation_parameter': '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1]),
'description': 'Input of {} to create {}'.format(self.get_name(p_from), self.get_name(p_to)),
'coords': coords,
'unit': self.get_unit(p_from),
'from': p_from,
'from_name': self.get_name(p_from),
'to': p_to,
'to_name': self.get_name(p_to),
'type': from_item_type,
}
#elif 'normalisation_parameter' not in self.params['p_{}_{}'.format(coords[0], coords[1])].keys():
#print("Adding normalisation_parameter to {}".format('p_{}_{}'.format(coords[0], coords[1])))
#self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
#print('p_{}_{} already exists'.format(coords[0],coords[1]))
else:
pass # print("SOMETHING WRONG HERE\n{}\n".format(self.params['p_{}_{}'.format(coords[0], coords[1])]))
# make sure the parameter is being normalised and allocated properly
self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.parameter_map:
self.parameter_map[(p_from, p_to)] = 'p_{}_{}'.format(coords[0], coords[1])
kill_list = []
for k in self.params.keys():
if k not in param_check_list:
#print("{} may be obsolete".format(k))
kill_list.append(k)
for p in kill_list:
#print("deleting parameter {}".format(p))
del self.params[p]
return True
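The parameter naming used above — `p_<row>_<col>` for every non-zero technosphere coefficient, normalised by the output column's production and allocation parameters — can be sketched standalone (plain lists stand in for the numpy matrix; all names here are illustrative):

```python
# Standalone sketch of parameter_scan's naming scheme: every non-zero
# technosphere coefficient at (row, col) becomes a parameter id p_<row>_<col>.
matrix = [
    [0.0, 2.5],  # product 0 feeds 2.5 units into product 1
    [0.0, 0.0],
]

params = {}
for c, column in enumerate(zip(*matrix)):  # zip(*m) iterates column-wise, like matrix.T
    for r, amount in enumerate(column):
        if amount > 0:
            pid = 'p_{}_{}'.format(r, c)
            params[pid] = {
                'coords': (r, c),
                # normalised by the output column's production and allocation parameters
                'normalisation_parameter': '(p_{}_production / p_{}_allocation)'.format(c, c),
            }
```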
def generate_parameter_set_excel_file(self):
"""
Generate an Excel file containing the parameter sets in a format you can import into SimaPro Developer.
The file will be called "ParameterSet_<ModelName>_input_file.xlsx"
"""
parameter_sets = self.parameter_sets
p_set = []
filename = "ParameterSet_{}_input_file.xlsx".format(self.name)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
p_set_name = os.path.join(base_dir, filename)
p = self.params
for k in p.keys():
if p[k]['function'] is None:
base_dict = {'id': k, 'name': p[k]['description'], 'unit': p[k]['unit']}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][k]
p_set.append(base_dict)
else:
pass
#print("{} is determined by a function".format(p[k]['description']))
for e in self.ext_params:
base_dict = {'id': '{}'.format(e['name']), 'type': 'external', 'name': e['description'], 'unit': ''}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][e['name']]
p_set.append(base_dict)
df = pd.DataFrame(p_set)
with pd.ExcelWriter(p_set_name, engine='xlsxwriter') as writer:
ps_columns = [k for k in parameter_sets.keys()]
#print (ps_columns)
my_columns = ['name', 'unit', 'id']
my_columns.extend(ps_columns)
#print (my_columns)
#print(df)
df.to_excel(writer, sheet_name=self.name, columns=my_columns, index=False, merge_cells=False)
return p_set_name
def add_parameter(self, param_name, description=None, default=0, unit=None):
"""
Add a global parameter to the database that can be accessed by functions
"""
if description is None:
description = "Parameter called {}".format(param_name)
if unit is None:
unit = "-"
name_check = lambda x: x['name'] == param_name
name_check_list = list(filter(name_check, self.ext_params))
if len(name_check_list) == 0:
self.ext_params.append({'name': param_name, 'description': description, 'default': default, 'unit': unit})
else:
print('{} already exists - choose a different name'.format(param_name))
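The duplicate-name guard in `add_parameter` boils down to a membership test over `ext_params`; a minimal standalone sketch (names are illustrative):

```python
# mirrors LcoptModel.add_parameter: append only if the name is not already taken
ext_params = [{'name': 'efficiency', 'description': 'Line efficiency', 'default': 0.9, 'unit': '-'}]

def add_parameter(ext_params, param_name, description=None, default=0, unit=None):
    if description is None:
        description = "Parameter called {}".format(param_name)
    if unit is None:
        unit = "-"
    if not any(p['name'] == param_name for p in ext_params):
        ext_params.append({'name': param_name, 'description': description,
                           'default': default, 'unit': unit})
        return True
    return False  # name clash: nothing appended

added_new = add_parameter(ext_params, 'transport_km', default=50)
added_dup = add_parameter(ext_params, 'efficiency')
```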
def list_parameters_as_df(self):
"""
Only really useful when running from a Jupyter notebook.
Lists the parameters in the model in a pandas dataframe
Columns: id, matrix coordinates, description, function
"""
to_df = []
for i, e in enumerate(self.ext_params):
row = {}
row['id'] = e['name']
row['coords'] = "n/a"
row['description'] = e['description']
row['function'] = "n/a"
to_df.append(row)
for pk in self.params:
p = self.params[pk]
row = {}
row['id'] = pk
row['coords'] = p['coords']
row['description'] = p['description']
row['function'] = p['function']
to_df.append(row)
df = pd.DataFrame(to_df)
return df
def import_external_db(self, db_file, db_type=None):
"""
Import an external database for use in lcopt
db_type must be one of ``technosphere`` or ``biosphere``
The best way to 'obtain' an external database is to 'export' it from brightway as a pickle file
e.g.::
import brightway2 as bw
bw.projects.set_current('MyModel')
db = bw.Database('MyDatabase')
db_as_dict = db.load()
import pickle
with open('MyExport.pickle', 'wb') as f:
pickle.dump(db_as_dict, f)
NOTE: The Ecoinvent cutoff 3.3 database and the full biosphere database are included in the lcopt model as standard - no need to import those
This can be useful if you have your own methods which require new biosphere flows that you want to analyse using lcopt
"""
db = pickle.load(open("{}.pickle".format(db_file), "rb"))
name = list(db.keys())[0][0]
new_db = {'items': db, 'name': name}
self.external_databases.append(new_db)
if db_type is None: # Assume it's a technosphere database
db_type = 'technosphere'
if db_type == 'technosphere':
self.technosphere_databases.append(name)
elif db_type == 'biosphere':
self.biosphere_databases.append(name)
else:
raise ValueError("Database type must be 'technosphere' or 'biosphere'")
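The name extraction on the first line of `import_external_db` relies on brightway databases being dicts keyed by `(database_name, code)` tuples; a minimal sketch with dummy data:

```python
# Brightway database dumps are keyed by (database_name, code) tuples,
# so the database name can be read off the first key.
db = {
    ('MyDatabase', 'abc123'): {'name': 'steel production', 'unit': 'kg'},
    ('MyDatabase', 'def456'): {'name': 'electricity', 'unit': 'kWh'},
}

name = list(db.keys())[0][0]
new_db = {'items': db, 'name': name}
```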
def search_databases(self, search_term, location=None, markets_only=False, databases_to_search=None, allow_internal=False):
"""
Search external databases linked to your lcopt model.
To restrict the search to particular databases (e.g. technosphere or biosphere only) pass a list of database names in the ``databases_to_search`` argument
"""
dict_list = []
if allow_internal:
internal_dict = {}
for k, v in self.database['items'].items():
if v.get('lcopt_type') == 'intermediate':
internal_dict[k] = v
dict_list.append(internal_dict)
if databases_to_search is None:
#Search all of the databases available
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases])
dict_list += [x['items'] for x in self.external_databases]
else:
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
dict_list += [x['items'] for x in self.external_databases if x['name'] in databases_to_search]
data = Dictionaries(*dict_list)
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
query = Query()
if markets_only:
market_filter = Filter("name", "has", "market for")
query.add(market_filter)
if location is not None:
location_filter = Filter("location", "is", location)
query.add(location_filter)
query.add(Filter("name", "ihas", search_term))
result = query(data)
return result
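The bw2data `Query`/`Filter` chain used here can be approximated with plain predicates; a hypothetical stand-in showing the `ihas` (case-insensitive substring), `markets_only` and `location` behaviour:

```python
# hypothetical stand-in for bw2data's Query/Filter machinery
techno = {('db', '1'): {'name': 'market for steel', 'location': 'GLO'},
          ('db', '2'): {'name': 'Steel rolling', 'location': 'RER'}}

def search(data, term, location=None, markets_only=False):
    hits = {}
    for key, item in data.items():
        if term.lower() not in item.get('name', '').lower():
            continue  # "ihas": case-insensitive substring match
        if markets_only and 'market for' not in item.get('name', ''):
            continue  # Filter("name", "has", "market for")
        if location is not None and item.get('location') != location:
            continue  # Filter("location", "is", location)
        hits[key] = item
    return hits

all_steel = search(techno, 'steel')
markets = search(techno, 'steel', markets_only=True)
```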
def database_to_SimaPro_csv(self):
"""
Export the lcopt model as a SimaPro csv file.
The file will be called "<ModelName>_database_export.csv"
"""
self.parameter_scan()
csv_args = {}
csv_args['processes'] = []
db = self.database['items']
product_filter = lambda x: db[x]['type'] == 'product'
process_filter = lambda x: db[x]['type'] == 'process'
processes = list(filter(process_filter, db))
products = list(filter(product_filter, db))
created_exchanges = []
project_input_params = []
project_calc_params = []
for k in processes:
item = db[k]
current = {}
current['name'] = item['name']
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = item['unit']
current['exchanges'] = []
process_params = []
production_filter = lambda x: x['type'] == 'production'
output_code = list(filter(production_filter, item['exchanges']))[0]['input'][1]
for e in item['exchanges']:
if e['type'] == 'technosphere':
this_exchange = {}
this_code = e['input'][1]
formatted_name = self.get_name(this_code)
this_exchange['formatted_name'] = formatted_name
param_key = (this_code, output_code)
#print(param_key)
#param_check = (formatted_name, item['name'])
this_param = self.parameter_map[param_key]
process_params.append(this_param)
this_exchange['amount'] = this_param
this_exchange['unit'] = self.get_unit(this_code)
current['exchanges'].append(this_exchange)
elif e['type'] == 'production':
this_code = e['input'][1]
name = self.get_name(this_code)
current['output_name'] = name
created_exchanges.append(name)
# process parameters
for p in process_params:
if self.params[p]['function'] is None:
project_input_params.append({'name': p, 'comment': self.params[p]['description']})
else:
project_calc_params.append({'name': p, 'comment': self.params[p]['description'], 'formula': self.params[p]['function']})
csv_args['processes'].append(current)
for k in products:
this_item = db[k]
this_name = this_item['name']
if this_item['name'] in created_exchanges:
#print ('{} already created'.format(this_name))
pass
else:
#print ('Need to create {}'.format(this_name))
current = {}
current['name'] = this_name
current['output_name'] = this_name
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = this_item['unit']
#current['exchanges'] = []
if 'ext_link' in this_item.keys():
ext_link = this_item['ext_link']
if ext_link[0] != self.database['name']:
db_filter = lambda x: x['name'] == ext_link[0]
extdb = list(filter(db_filter, self.external_databases))[0]['items']
ext_item = extdb[ext_link]
if ext_link[0] != self.biosphereName:
ref_prod = ext_item['reference product']
name = ext_item['name'].replace(" " + ref_prod, "")
location = ext_item['location']
system_model = "Alloc Def"
process_type = "U"
unit = unnormalise_unit(ext_item['unit'])
simaPro_name = "{} {{{}}}| {} | {}, {}".format(ref_prod.capitalize(), location, name, system_model, process_type)
#print ('{} has an external link to {}'.format(this_name, simaPro_name))
current['exchanges'] = [{'formatted_name': simaPro_name, 'unit': unit, 'amount': 1}]
else:
#print('{} has a biosphere exchange - need to sort this out'.format(this_name))
#print(ext_item)
unit = unnormalise_unit(ext_item['unit'])
formatted_name = ext_item['name']
if 'air' in ext_item['categories']:
current['air_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to air'.format(formatted_name)}]
elif 'water' in ext_item['categories']:
current['water_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to water'.format(formatted_name)}]
elif 'soil' in ext_item['categories']:
current['soil_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to soil'.format(formatted_name)}]
else:
print('{} has a biosphere exchange that is not to air, water or soil'.format(this_name))
print(ext_item)
else:
warnings.warn('{} has NO internal or external link - it is burden free'.format(this_name))
csv_args['processes'].append(current)
created_exchanges.append(this_name)
#print(csv_args)
#print(created_exchanges)
csv_args['project'] = {}
#NOTE - currently external parameters can only be constants
csv_args['project']['calculated_parameters'] = project_calc_params
#add the external parameters to the input parameter list
for p in self.ext_params:
project_input_params.append({'name': p['name'], 'comment': p['description'], 'default': p['default']})
csv_args['project']['input_parameters'] = project_input_params
#print (csv_args)
env = Environment(
loader=PackageLoader('lcopt', 'templates'),
)
filename = "{}_database_export.csv".format(self.name.replace(" ", "_"))
csv_template = env.get_template('export.csv')
output = csv_template.render(**csv_args)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
efn = os.path.join(base_dir, filename)
with open(efn, "w") as f:
f.write(output)
return efn
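The 19-character SimaPro-style ids generated in `database_to_SimaPro_csv` (8 characters of padded model name plus 11 random digits) can be reproduced in isolation:

```python
from random import randint

# 8 characters of the space-stripped, X-padded model name
# + 11 zero-padded random digits = 19-character id
name = 'My Model'
prefix = (name.replace(' ', '') + 'XXXXXXXX')[:8]
suffix = ('00000000000' + str(randint(1, 99999999999)))[-11:]
simapro_id = prefix + suffix
```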
# << Flask >> #
def launch_interact(self): # pragma: no cover
"""
This is probably the most important method in the model - you use it to launch the GUI
"""
my_flask = FlaskSandbox(self)
my_flask.run()
# << Brightway2 >> #
def export_to_bw2(self):
"""
Export the lcopt model in the native brightway 2 format
returns name, database
to use it to export, then import to brightway::
name, db = model.export_to_bw2()
import brightway2 as bw
bw.projects.set_current('MyProject')
new_db = bw.Database(name)
new_db.write(db)
new_db.process()
"""
my_exporter = Bw2Exporter(self)
name, bw2db = my_exporter.export_to_bw2()
return name, bw2db
def analyse(self, demand_item, demand_item_code):
""" Run the analyis of the model
Doesn't return anything, but creates a new item ``LcoptModel.result_set`` containing the results
"""
my_analysis = Bw2Analysis(self)
self.result_set = my_analysis.run_analyses(demand_item, demand_item_code, **self.analysis_settings)
return True
# << Disclosures >> #
def export_disclosure(self, parameter_set=None, folder_path=None):
return export_disclosure(self, parameter_set, folder_path)
|
pjamesjoyce/lcopt | lcopt/model.py | LcoptModel.import_external_db | python | def import_external_db(self, db_file, db_type=None):
db = pickle.load(open("{}.pickle".format(db_file), "rb"))
name = list(db.keys())[0][0]
new_db = {'items': db, 'name': name}
self.external_databases.append(new_db)
if db_type is None: # Assume it's a technosphere database
db_type = 'technosphere'
if db_type == 'technosphere':
self.technosphere_databases.append(name)
elif db_type == 'biosphere':
self.biosphere_databases.append(name)
else:
raise ValueError("Database type must be 'technosphere' or 'biosphere'") | Import an external database for use in lcopt
db_type must be one of ``technosphere`` or ``biosphere``
The best way to 'obtain' an external database is to 'export' it from brightway as a pickle file
e.g.::
import brightway2 as bw
bw.projects.set_current('MyModel')
db = bw.Database('MyDatabase')
db_as_dict = db.load()
import pickle
with open('MyExport.pickle', 'wb') as f:
pickle.dump(db_as_dict, f)
NOTE: The Ecoinvent cutoff 3.3 database and the full biosphere database are included in the lcopt model as standard - no need to import those
This can be useful if you have your own methods which require new biosphere flows that you want to analyse using lcopt | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/model.py#L686-L725 | null | class LcoptModel(object):
"""
This is the base model class.
To create a new model, enter a name e.g. ``model = LcoptModel('My_Model')``
To load an existing model use the ``load`` option e.g. ``model = LcoptModel(load = 'My_Model')``
"""
def __init__(self, name=None, load=None, useForwast=False, ecoinvent_version=None, ecoinvent_system_model=None, ei_username=None, ei_password=None, write_config=None, autosetup=True):
super(LcoptModel, self).__init__()
# name the instance; the random hex name is generated here so each unnamed instance gets a fresh one (a default argument would be evaluated only once)
self.name = name if name is not None else hex(random.getrandbits(128))[2:]
# set up the database, parameter dictionaries, the matrix and the names of the exchanges
self.database = {'items': OrderedDict(), 'name': '{}_Database'.format(self.name)}
self.external_databases = []
self.params = OrderedDict()
self.production_params = OrderedDict()
self.allocation_params = OrderedDict()
self.ext_params = []
self.matrix = None
self.names = None
self.parameter_sets = OrderedDict()
self.model_matrices = OrderedDict()
self.technosphere_matrices = OrderedDict()
self.leontif_matrices = OrderedDict()
self.parameter_map = {}
self.sandbox_positions = {}
# If ecoinvent isn't specified in the setup, look for a default in the config and fall back on default set in constants
if ecoinvent_version is None:
self.ecoinvent_version = str(storage.ecoinvent_version)
else:
self.ecoinvent_version = ecoinvent_version
if ecoinvent_system_model is None:
self.ecoinvent_system_model = storage.ecoinvent_system_model
else:
self.ecoinvent_system_model = ecoinvent_system_model
ei_name = "Ecoinvent{}_{}_{}".format(*self.ecoinvent_version.split("."), self.ecoinvent_system_model) #"Ecoinvent3_3_cutoff"
self.ecoinventName = ei_name # "Ecoinvent3_3_cutoff"
self.biosphereName = "biosphere3"
self.ecoinventFilename = ei_name # "ecoinvent3_3"
self.biosphereFilename = "biosphere3"
self.forwastName = "forwast"
self.forwastFilename = "forwast"
self.useForwast = useForwast
self.technosphere_databases = []
#if self.useForwast:
# self.technosphere_databases = [self.forwastName]
#else:
# self.technosphere_databases = [self.ecoinventName]
#self.biosphere_databases = [self.biosphereName]
self.biosphere_databases = []
# default settings for bw2 analysis
self.analysis_settings = {'amount': 1,
'methods': [('IPCC 2013', 'climate change', 'GWP 100a'), ('USEtox', 'human toxicity', 'total')],
#'top_processes': 10,
#'gt_cutoff': 0.01,
'pie_cutoff': 0.05
}
self.allow_allocation = False
# initialise with a blank result set
self.result_set = None
# set the save option, this defaults to the config value but should be overwritten on load for existing models
self.save_option = storage.save_option
if load is not None:
self.load(load)
# check if lcopt is set up, and if not, set it up
is_setup = self.lcopt_setup(ei_username=ei_username, ei_password=ei_password, write_config=write_config,
ecoinvent_version=self.ecoinvent_version, ecoinvent_system_model = self.ecoinvent_system_model, autosetup=autosetup)
if not is_setup:
warnings.warn('lcopt autosetup did not run')
asset_path = fix_mac_path_escapes(storage.search_index_dir) #os.path.join(os.path.dirname(os.path.realpath(__file__)), 'assets')
ecoinventPath = os.path.join(asset_path, self.ecoinventFilename)
biospherePath = os.path.join(asset_path, self.biosphereFilename)
forwastPath = os.path.join(asset_path, self.forwastFilename)
# Try and initialise the external databases if they're not there already
if self.useForwast:
if self.forwastName not in [x['name'] for x in self.external_databases]:
self.import_external_db(forwastPath, 'technosphere')
else:
if self.ecoinventName not in [x['name'] for x in self.external_databases]:
self.import_external_db(ecoinventPath, 'technosphere')
if self.biosphereName not in [x['name'] for x in self.external_databases]:
self.import_external_db(biospherePath, 'biosphere')
# create partial version of io functions
self.add_to_database = partial(add_to_specified_database, database=self.database)
self.get_exchange = partial(get_exchange_from_database, database=self.database)
self.exists_in_database = partial(exists_in_specific_database, database=self.database)
self.get_name = partial(get_exchange_name_from_database, database=self.database)
self.get_unit = partial(get_exchange_unit_from_database, database=self.database)
self.parameter_scan()
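The `partial(...)` calls at the end of `__init__` bind the model's database dict once so each accessor can be called with just a code; a standalone sketch (the accessor body is a simplified stand-in for lcopt's io helper):

```python
from functools import partial

# simplified stand-in for get_exchange_name_from_database from lcopt's io helpers
def get_exchange_name_from_database(code, database):
    for (db_name, item_code), item in database['items'].items():
        if item_code == code:
            return item['name']
    return None

database = {'name': 'Model_Database',
            'items': {('Model_Database', 'c1'): {'name': 'widget'}}}

# bind the database once, mirroring:
#   self.get_name = partial(get_exchange_name_from_database, database=self.database)
get_name = partial(get_exchange_name_from_database, database=database)
```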
def lcopt_setup(self, ei_username, ei_password, write_config, ecoinvent_version, ecoinvent_system_model, autosetup):
if not autosetup:
return False
if storage.project_type == 'single':
if self.useForwast:
forwast_autosetup()
else:
self.base_project_name = storage.single_project_name
#if bw2_project_exists(self.base_project_name):
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=False)
elif not self.useForwast:
self.base_project_name = DEFAULT_PROJECT_STEM + self.ecoinventName
old_default = DEFAULT_PROJECT_STEM[:-1]
is_default = ecoinvent_version == "3.3" and ecoinvent_system_model == "cutoff"
if bw2_project_exists(self.base_project_name):
# make sure the search index file is there too
write_search_index(self.base_project_name, self.ecoinventName)
elif is_default and bw2_project_exists(old_default):
upgrade_old_default()
else:
print("Lcopt needs to be set up to integrate with brightway2 - this only needs to be done once per version/system model combo")
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=True)
else:
forwast_autosetup()
return True
def rename(self, newname):
"""change the name of the model (i.e. what the .lcopt file will be saved as)"""
self.name = newname
#def saveAs(self, filename):
# """save the instance as a pickle"""
# pickle.dump(self, open("{}.pickle".format(filename), "wb"))
def save(self):
"""save the instance as a .lcopt file"""
if self.save_option == 'curdir':
model_path = os.path.join(
os.getcwd(),
'{}.lcopt'.format(self.name)
)
else: # default to appdir
model_path = os.path.join(
storage.model_dir,
'{}.lcopt'.format(self.name)
)
model_path = fix_mac_path_escapes(model_path)
with open(model_path, 'wb') as model_file:
pickle.dump(self, model_file)
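`save()` and `load()` simply pickle the whole model object; the round trip can be sketched with a dict standing in for the model instance and an in-memory buffer standing in for the `.lcopt` file:

```python
import io
import pickle

# a dict stands in for the LcoptModel instance; BytesIO stands in for the .lcopt file
model_state = {'name': 'demo', 'params': {'p_0_1': {'function': None}}}

buf = io.BytesIO()
pickle.dump(model_state, buf)   # save(): pickle.dump(self, model_file)
buf.seek(0)
restored = pickle.load(buf)     # load(): pickle.load(open(filename, "rb"))
```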
def load(self, filename):
"""load data from a saved .lcopt file"""
if filename[-6:] != ".lcopt":
filename += ".lcopt"
try:
savedInstance = pickle.load(open("{}".format(filename), "rb"))
except FileNotFoundError:
savedInstance = pickle.load(open(fix_mac_path_escapes(os.path.join(storage.model_dir, "{}".format(filename))), "rb"))
attributes = ['name',
'database',
'params',
'production_params',
'allocation_params',
'ext_params',
'matrix',
'names',
'parameter_sets',
'model_matrices',
'technosphere_matrices',
'leontif_matrices',
'external_databases',
'parameter_map',
'sandbox_positions',
'ecoinventName',
'biosphereName',
'forwastName',
'analysis_settings',
'technosphere_databases',
'biosphere_databases',
'result_set',
'evaluated_parameter_sets',
'useForwast',
'base_project_name',
'save_option',
'allow_allocation',
'ecoinvent_version',
'ecoinvent_system_model',
]
for attr in attributes:
if hasattr(savedInstance, attr):
setattr(self, attr, getattr(savedInstance, attr))
else:
pass
#print ("can't set {}".format(attr))
# use legacy save option if this is missing from the model
if not hasattr(savedInstance, 'save_option'):
setattr(self, 'save_option', LEGACY_SAVE_OPTION)
# figure out ecoinvent version and system model if these are missing from the model
if not hasattr(savedInstance, 'ecoinvent_version') or not hasattr(savedInstance, 'ecoinvent_system_model'):
parts = savedInstance.ecoinventName.split("_")
main_version = parts[0][-1]
sub_version = parts[1]
system_model = parts[2]
#print(parts)
setattr(self, 'ecoinvent_version', '{}.{}'.format(main_version, sub_version))
setattr(self, 'ecoinvent_system_model', system_model)
def create_product (self, name, location='GLO', unit='kg', **kwargs):
"""
Create a new product in the model database
"""
new_product = item_factory(name=name, location=location, unit=unit, type='product', **kwargs)
if not self.exists_in_database(new_product['code']):
self.add_to_database(new_product)
#print ('{} added to database'.format(name))
return self.get_exchange(name)
else:
#print('{} already exists in this database'.format(name))
return False
def create_process(self, name, exchanges, location='GLO', unit='kg'):
"""
Create a new process, including all new exchanges (in brightway2's exchange format) in the model database.
Exchanges must have at least a name, type and unit field
"""
found_exchanges = []
for e in exchanges:
exc_name = e.pop('name', None)
exc_type = e.pop('type', None)
this_exchange = self.get_exchange(exc_name)
if this_exchange is False:
my_unit = e.pop('unit', unit)
this_exchange = self.create_product(exc_name, location=location, unit=my_unit, **e)
found_exchanges.append(exchange_factory(this_exchange, exc_type, 1, 1, '{} exchange of {}'.format(exc_type, exc_name)))
new_process = item_factory(name=name, location=location, unit=unit, type='process', exchanges=found_exchanges)
self.add_to_database(new_process)
self.parameter_scan()
return True
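`create_process` pops `name` and `type` off each exchange spec and falls back to the process unit when none is given; a simplified stand-in for the `exchange_factory` step (field names are illustrative):

```python
# simplified stand-in for create_process's exchange assembly
def build_exchange(spec, default_unit='kg'):
    spec = dict(spec)                       # don't mutate the caller's dict
    exc_name = spec.pop('name')
    exc_type = spec.pop('type')
    unit = spec.pop('unit', default_unit)   # fall back to the process unit
    return {'input': exc_name, 'type': exc_type, 'unit': unit, 'amount': 1,
            'comment': '{} exchange of {}'.format(exc_type, exc_name)}

exchanges = [{'name': 'Widget', 'type': 'production'},
             {'name': 'Electricity', 'type': 'technosphere', 'unit': 'kWh'}]
built = [build_exchange(e) for e in exchanges]
```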
def check_param_function_use(self, param_id):
current_functions = {k: x['function'] for k, x in self.params.items() if x['function'] is not None}
problem_list = []
for k, f in current_functions.items():
if param_id in f:
problem_list.append((k, f))
return problem_list
def remove_input_link(self, process_code, input_code):
"""
Remove an input (technosphere or biosphere exchange) from a process, resolving all parameter issues
"""
# 1. find correct process
# 2. find correct exchange
# 3. remove that exchange
# 4. check for parameter conflicts?
# 4. run parameter scan to rebuild matrices?
#print(process_code, input_code)
process = self.database['items'][process_code]
exchanges = process['exchanges']
initial_count = len(exchanges)
new_exchanges = [e for e in exchanges if e['input'] != input_code]
product_code = [e['input'] for e in exchanges if e['type'] == 'production'][0]
#print(product_code)
param_id = [k for k, v in self.params.items() if (v['from'] == input_code[1] and v['to'] == product_code[1])][0]
#print (param_id)
problem_functions = self.check_param_function_use(param_id)
if len(problem_functions) != 0:
#print('the following functions have been removed:')
for p in problem_functions:
self.params[p[0]]['function'] = None
#print(p)
process['exchanges'] = new_exchanges
del self.params[param_id]
self.parameter_scan()
return initial_count - len(new_exchanges)
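The cleanup in `remove_input_link` uses `check_param_function_use` to null out any formula that references the deleted parameter before removing it; standalone:

```python
# formulas referencing a deleted parameter id must be dropped before the delete
params = {
    'p_0_2': {'function': None},
    'p_1_2': {'function': 'p_0_2 * 2'},    # depends on the doomed parameter
    'p_3_2': {'function': 'e_rate * 10'},  # unrelated, must survive
}

def check_param_function_use(params, param_id):
    return [(k, v['function']) for k, v in params.items()
            if v['function'] is not None and param_id in v['function']]

doomed = 'p_0_2'
for k, _ in check_param_function_use(params, doomed):
    params[k]['function'] = None  # formula referenced the deleted input
del params[doomed]
```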
def unlink_intermediate(self, sourceId, targetId):
"""
Remove a link between two processes
"""
source = self.database['items'][(self.database.get('name'), sourceId)]
target = self.database['items'][(self.database.get('name'), targetId)]
production_exchange = [x['input'] for x in source['exchanges'] if x['type'] == 'production'][0]
new_exchanges = [x for x in target['exchanges'] if x['input'] != production_exchange]
target['exchanges'] = new_exchanges
self.parameter_scan()
return True
def parameter_scan(self):
"""
Scan the database of the model instance to generate and expose parameters.
This is called by other functions when items are added/removed from the model, but can be run by itself if you like
"""
#self.parameter_map = {}
#self.params = OrderedDict()
cr_list = []
items = self.database['items']
#print(items)
for key in items.keys():
i = items[key]
#print(i['name'], i['type'])
if i['type'] == 'product':
cr_list.append(i['code'])
no_products = len(cr_list)
self.names = [self.get_name(x) for x in cr_list]
self.matrix = np.zeros((no_products, no_products))
for key in items.keys():
i = items[key]
if i['type'] == 'process':
inputs = []
#print(i['name'])
#print([(e['comment'], e['type']) for e in i['exchanges']])
for e in i['exchanges']:
if e['type'] == 'production':
col_code = cr_list.index(e['input'][1])
if not 'p_{}_production'.format(col_code) in self.production_params:
self.production_params['p_{}_production'.format(col_code)] = {
'function': None,
'description': 'Production parameter for {}'.format(self.get_name(e['input'][1])),
'unit': self.get_unit(e['input'][1]),
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'production',
}
if not 'p_{}_allocation'.format(col_code) in self.allocation_params:
self.allocation_params['p_{}_allocation'.format(col_code)] = {
'function': None,
'description': 'Allocation parameter for {}'.format(self.get_name(e['input'][1])),
'unit': "% (as decimal)",
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'allocation',
}
elif e['type'] == 'technosphere':
#print(cr_list)
row_code = cr_list.index(e['input'][1])
inputs.append((row_code, e['amount']))
for ip in inputs:
self.matrix[(ip[0], col_code)] = ip[1]
param_check_list = []
for c, column in enumerate(self.matrix.T):
for r, i in enumerate(column):
if i > 0:
p_from = cr_list[r]
p_to = cr_list[c]
coords = (r, c)
from_item_type = self.database['items'][(self.database['name'], p_from)]['lcopt_type']
#print('{}\t| {} --> {}'.format(coords, self.get_name(p_from), self.get_name(p_to)))
param_check_list.append('p_{}_{}'.format(coords[0], coords[1]))
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.params:
self.params['p_{}_{}'.format(coords[0], coords[1])] = {
'function': None,
'normalisation_parameter': '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1]),
'description': 'Input of {} to create {}'.format(self.get_name(p_from), self.get_name(p_to)),
'coords': coords,
'unit': self.get_unit(p_from),
'from': p_from,
'from_name': self.get_name(p_from),
'to': p_to,
'to_name': self.get_name(p_to),
'type': from_item_type,
}
#elif 'normalisation_parameter' not in self.params['p_{}_{}'.format(coords[0], coords[1])].keys():
#print("Adding normalisation_parameter to {}".format('p_{}_{}'.format(coords[0], coords[1])))
#self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
#print('p_{}_{} already exists'.format(coords[0],coords[1]))
else:
pass # print("SOMETHING WRONG HERE\n{}\n".format(self.params['p_{}_{}'.format(coords[0], coords[1])]))
# make sure the parameter is being normalised and allocated properly
self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.parameter_map:
self.parameter_map[(p_from, p_to)] = 'p_{}_{}'.format(coords[0], coords[1])
kill_list = []
for k in self.params.keys():
if k not in param_check_list:
#print("{} may be obsolete".format(k))
kill_list.append(k)
for p in kill_list:
#print("deleting parameter {}".format(p))
del self.params[p]
return True
def generate_parameter_set_excel_file(self):
"""
Generate an Excel file containing the parameter sets in a format you can import into SimaPro Developer.
The file will be called "ParameterSet_<ModelName>_input_file.xlsx"
"""
parameter_sets = self.parameter_sets
p_set = []
filename = "ParameterSet_{}_input_file.xlsx".format(self.name)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
p_set_name = os.path.join(base_dir, filename)
p = self.params
for k in p.keys():
if p[k]['function'] is None:
base_dict = {'id': k, 'name': p[k]['description'], 'unit': p[k]['unit']}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][k]
p_set.append(base_dict)
else:
pass
#print("{} is determined by a function".format(p[k]['description']))
for e in self.ext_params:
base_dict = {'id': '{}'.format(e['name']), 'type': 'external', 'name': e['description'], 'unit': ''}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][e['name']]
p_set.append(base_dict)
df = pd.DataFrame(p_set)
with pd.ExcelWriter(p_set_name, engine='xlsxwriter') as writer:
ps_columns = [k for k in parameter_sets.keys()]
#print (ps_columns)
my_columns = ['name', 'unit', 'id']
my_columns.extend(ps_columns)
#print (my_columns)
#print(df)
df.to_excel(writer, sheet_name=self.name, columns=my_columns, index=False, merge_cells=False)
return p_set_name
def add_parameter(self, param_name, description=None, default=0, unit=None):
"""
Add a global parameter to the database that can be accessed by functions
"""
if description is None:
description = "Parameter called {}".format(param_name)
if unit is None:
unit = "-"
name_check = lambda x: x['name'] == param_name
name_check_list = list(filter(name_check, self.ext_params))
if len(name_check_list) == 0:
self.ext_params.append({'name': param_name, 'description': description, 'default': default, 'unit': unit})
else:
print('{} already exists - choose a different name'.format(param_name))
def list_parameters_as_df(self):
"""
Only really useful when running from a Jupyter notebook.
Lists the parameters in the model in a pandas dataframe
Columns: id, matrix coordinates, description, function
"""
to_df = []
for i, e in enumerate(self.ext_params):
row = {}
row['id'] = e['name']
row['coords'] = "n/a"
row['description'] = e['description']
row['function'] = "n/a"
to_df.append(row)
for pk in self.params:
p = self.params[pk]
row = {}
row['id'] = pk
row['coords'] = p['coords']
row['description'] = p['description']
row['function'] = p['function']
to_df.append(row)
df = pd.DataFrame(to_df)
return df
def import_external_db(self, db_file, db_type=None):
"""
Import an external database for use in lcopt
db_type must be one of ``technosphere`` or ``biosphere``
The best way to 'obtain' an external database is to 'export' it from brightway as a pickle file
e.g.::
import brightway2 as bw
bw.projects.set_current('MyModel')
db = bw.Database('MyDatabase')
db_as_dict = db.load()
import pickle
with open('MyExport.pickle', 'wb') as f:
pickle.dump(db_as_dict, f)
NOTE: The Ecoinvent cutoff 3.3 database and the full biosphere database are included in the lcopt model as standard - no need to import those
This can be useful if you have your own methods which require new biosphere flows that you want to analyse using lcopt
"""
db = pickle.load(open("{}.pickle".format(db_file), "rb"))
name = list(db.keys())[0][0]
new_db = {'items': db, 'name': name}
self.external_databases.append(new_db)
if db_type is None: # Assume it's a technosphere database
db_type = 'technosphere'
if db_type == 'technosphere':
self.technosphere_databases.append(name)
elif db_type == 'biosphere':
self.biosphere_databases.append(name)
else:
raise ValueError("Database type must be 'technosphere' or 'biosphere'")
def search_databases(self, search_term, location=None, markets_only=False, databases_to_search=None, allow_internal=False):
"""
Search external databases linked to your lcopt model.
To restrict the search to particular databases (e.g. technosphere or biosphere only) pass a list of database names in the ``databases_to_search`` argument
"""
dict_list = []
if allow_internal:
internal_dict = {}
for k, v in self.database['items'].items():
if v.get('lcopt_type') == 'intermediate':
internal_dict[k] = v
dict_list.append(internal_dict)
if databases_to_search is None:
#Search all of the databases available
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases])
dict_list += [x['items'] for x in self.external_databases]
else:
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
dict_list += [x['items'] for x in self.external_databases if x['name'] in databases_to_search]
data = Dictionaries(*dict_list)
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
query = Query()
if markets_only:
market_filter = Filter("name", "has", "market for")
query.add(market_filter)
if location is not None:
location_filter = Filter("location", "is", location)
query.add(location_filter)
query.add(Filter("name", "ihas", search_term))
result = query(data)
return result
def database_to_SimaPro_csv(self):
"""
Export the lcopt model as a SimaPro csv file.
The file will be called "<ModelName>_database_export.csv"
"""
self.parameter_scan()
csv_args = {}
csv_args['processes'] = []
db = self.database['items']
product_filter = lambda x: db[x]['type'] == 'product'
process_filter = lambda x: db[x]['type'] == 'process'
processes = list(filter(process_filter, db))
products = list(filter(product_filter, db))
created_exchanges = []
project_input_params = []
project_calc_params = []
for k in processes:
item = db[k]
current = {}
current['name'] = item['name']
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = item['unit']
current['exchanges'] = []
process_params = []
production_filter = lambda x: x['type'] == 'production'
output_code = list(filter(production_filter, item['exchanges']))[0]['input'][1]
for e in item['exchanges']:
if e['type'] == 'technosphere':
this_exchange = {}
this_code = e['input'][1]
formatted_name = self.get_name(this_code)
this_exchange['formatted_name'] = formatted_name
param_key = (this_code, output_code)
#print(param_key)
#param_check = (formatted_name, item['name'])
this_param = self.parameter_map[param_key]
process_params.append(this_param)
this_exchange['amount'] = this_param
this_exchange['unit'] = self.get_unit(this_code)
current['exchanges'].append(this_exchange)
elif e['type'] == 'production':
this_code = e['input'][1]
name = self.get_name(this_code)
current['output_name'] = name
created_exchanges.append(name)
# process parameters
for p in process_params:
if self.params[p]['function'] is None:
project_input_params.append({'name': p, 'comment': self.params[p]['description']})
else:
project_calc_params.append({'name': p, 'comment': self.params[p]['description'], 'formula': self.params[p]['function']})
csv_args['processes'].append(current)
for k in products:
this_item = db[k]
this_name = this_item['name']
if this_item['name'] in created_exchanges:
#print ('{} already created'.format(this_name))
pass
else:
#print ('Need to create {}'.format(this_name))
current = {}
current['name'] = this_name
current['output_name'] = this_name
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = this_item['unit']
#current['exchanges'] = []
if 'ext_link' in this_item.keys():
ext_link = this_item['ext_link']
if ext_link[0] != self.database['name']:
db_filter = lambda x: x['name'] == ext_link[0]
extdb = list(filter(db_filter, self.external_databases))[0]['items']
ext_item = extdb[ext_link]
if ext_link[0] != self.biosphereName:
ref_prod = ext_item['reference product']
name = ext_item['name'].replace(" " + ref_prod, "")
location = ext_item['location']
system_model = "Alloc Def"
process_type = "U"
unit = unnormalise_unit(ext_item['unit'])
simaPro_name = "{} {{{}}}| {} | {}, {}".format(ref_prod.capitalize(), location, name, system_model, process_type)
#print ('{} has an external link to {}'.format(this_name, simaPro_name))
current['exchanges'] = [{'formatted_name': simaPro_name, 'unit': unit, 'amount': 1}]
else:
#print('{} has a biosphere exchange - need to sort this out'.format(this_name))
#print(ext_item)
unit = unnormalise_unit(ext_item['unit'])
formatted_name = ext_item['name']
if 'air' in ext_item['categories']:
current['air_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to air'.format(formatted_name)}]
elif 'water' in ext_item['categories']:
current['water_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to water'.format(formatted_name)}]
elif 'soil' in ext_item['categories']:
current['soil_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to soil'.format(formatted_name)}]
else:
print("{} has a biosphere exchange that isn't to air, water or soil".format(this_name))
print(ext_item)
else:
warnings.warn('{} has NO internal or external link - it is burden free'.format(this_name))
csv_args['processes'].append(current)
created_exchanges.append(this_name)
#print(csv_args)
#print(created_exchanges)
csv_args['project'] = {}
#NOTE - currently external parameters can only be constants
csv_args['project']['calculated_parameters'] = project_calc_params
#add the external parameters to the input parameter list
for p in self.ext_params:
project_input_params.append({'name': p['name'], 'comment': p['description'], 'default': p['default']})
csv_args['project']['input_parameters'] = project_input_params
#print (csv_args)
env = Environment(
loader=PackageLoader('lcopt', 'templates'),
)
filename = "{}_database_export.csv".format(self.name.replace(" ", "_"))
csv_template = env.get_template('export.csv')
output = csv_template.render(**csv_args)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
efn = os.path.join(base_dir, filename)
with open(efn, "w") as f:
f.write(output)
return efn
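The process ids assembled in `database_to_SimaPro_csv` pad or truncate the model name to 8 characters and append an 11-digit zero-padded random number. The expression can be sketched in isolation; `make_simapro_id` is a hypothetical helper, not part of lcopt's API:

```python
from random import randint

def make_simapro_id(model_name, rand=None):
    # hypothetical helper mirroring the id expression used in database_to_SimaPro_csv:
    # 8 characters from the model name (padded with 'X'), then an 11-digit number
    if rand is None:
        rand = randint(1, 99999999999)
    prefix = (model_name.replace(" ", "") + "XXXXXXXX")[:8]
    suffix = ("00000000000" + str(rand))[-11:]
    return prefix + suffix

print(make_simapro_id("My Model", rand=42))  # MyModelX00000000042
```

The concatenate-then-slice trick guarantees a fixed 19-character id regardless of name length or the number of digits drawn.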
# << Flask >> #
def launch_interact(self): # pragma: no cover
"""
This is probably the most important method in the model - you use it to launch the GUI
"""
my_flask = FlaskSandbox(self)
my_flask.run()
# << Brightway2 >> #
def export_to_bw2(self):
"""
Export the lcopt model in the native brightway 2 format
returns name, database
to use it to export, then import to brightway::
name, db = model.export_to_bw2()
import brightway2 as bw
bw.projects.set_current('MyProject')
new_db = bw.Database(name)
new_db.write(db)
new_db.process()
"""
my_exporter = Bw2Exporter(self)
name, bw2db = my_exporter.export_to_bw2()
return name, bw2db
def analyse(self, demand_item, demand_item_code):
""" Run the analyis of the model
Doesn't return anything, but creates a new item ``LcoptModel.result_set`` containing the results
"""
my_analysis = Bw2Analysis(self)
self.result_set = my_analysis.run_analyses(demand_item, demand_item_code, **self.analysis_settings)
return True
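The `**self.analysis_settings` expansion above forwards the settings dictionary as keyword arguments to the analysis runner. A minimal sketch, using a hypothetical stand-in for `Bw2Analysis.run_analyses`:

```python
def run_analyses(demand_item, demand_item_code, amount=1, methods=(), pie_cutoff=0.05):
    # hypothetical stand-in: just echo the settings it received
    return {"item": demand_item, "code": demand_item_code,
            "amount": amount, "pie_cutoff": pie_cutoff}

analysis_settings = {
    "amount": 3,
    "methods": [("IPCC 2013", "climate change", "GWP 100a")],
    "pie_cutoff": 0.1,
}

# the dict's keys must match the function's keyword parameters
result = run_analyses("Widget", "abc123", **analysis_settings)
print(result["amount"], result["pie_cutoff"])  # 3 0.1
```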
# << Disclosures >> #
def export_disclosure(self, parameter_set=None, folder_path=None):
return export_disclosure(self, parameter_set, folder_path)
|
pjamesjoyce/lcopt | lcopt/model.py | LcoptModel.search_databases | python | def search_databases(self, search_term, location=None, markets_only=False, databases_to_search=None, allow_internal=False):
dict_list = []
if allow_internal:
internal_dict = {}
for k, v in self.database['items'].items():
if v.get('lcopt_type') == 'intermediate':
internal_dict[k] = v
dict_list.append(internal_dict)
if databases_to_search is None:
#Search all of the databases available
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases])
dict_list += [x['items'] for x in self.external_databases]
else:
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
dict_list += [x['items'] for x in self.external_databases if x['name'] in databases_to_search]
data = Dictionaries(*dict_list)
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
query = Query()
if markets_only:
market_filter = Filter("name", "has", "market for")
query.add(market_filter)
if location is not None:
location_filter = Filter("location", "is", location)
query.add(location_filter)
query.add(Filter("name", "ihas", search_term))
result = query(data)
return result | Search external databases linked to your lcopt model.
To restrict the search to particular databases (e.g. technosphere or biosphere only), pass a list of database names as the ``databases_to_search`` argument | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/model.py#L727-L770 | null | class LcoptModel(object):
"""
This is the base model class.
To create a new model, enter a name e.g. ``model = LcoptModel('My_Model')``
To load an existing model use the ``load`` option e.g. ``model = LcoptModel(load = 'My_Model')``
"""
def __init__(self, name=None, load=None, useForwast=False, ecoinvent_version=None, ecoinvent_system_model=None, ei_username = None, ei_password = None, write_config=None, autosetup=True):
super(LcoptModel, self).__init__()
# name the instance; the random fallback is generated here rather than in the signature, where it would be evaluated only once at import time and shared by every default-named model
self.name = name if name is not None else hex(random.getrandbits(128))[2:-1]
# set up the database, parameter dictionaries, the matrix and the names of the exchanges
self.database = {'items': OrderedDict(), 'name': '{}_Database'.format(self.name)}
self.external_databases = []
self.params = OrderedDict()
self.production_params = OrderedDict()
self.allocation_params = OrderedDict()
self.ext_params = []
self.matrix = None
self.names = None
self.parameter_sets = OrderedDict()
self.model_matrices = OrderedDict()
self.technosphere_matrices = OrderedDict()
self.leontif_matrices = OrderedDict()
self.parameter_map = {}
self.sandbox_positions = {}
# If ecoinvent isn't specified in the setup, look for a default in the config and fall back on default set in constants
if ecoinvent_version is None:
self.ecoinvent_version = str(storage.ecoinvent_version)
else:
self.ecoinvent_version = ecoinvent_version
if ecoinvent_system_model is None:
self.ecoinvent_system_model = storage.ecoinvent_system_model
else:
self.ecoinvent_system_model = ecoinvent_system_model
ei_name = "Ecoinvent{}_{}_{}".format(*self.ecoinvent_version.split("."), self.ecoinvent_system_model) #"Ecoinvent3_3_cutoff"
self.ecoinventName = ei_name # "Ecoinvent3_3_cutoff"
self.biosphereName = "biosphere3"
self.ecoinventFilename = ei_name # "ecoinvent3_3"
self.biosphereFilename = "biosphere3"
self.forwastName = "forwast"
self.forwastFilename = "forwast"
self.useForwast = useForwast
self.technosphere_databases = []
#if self.useForwast:
# self.technosphere_databases = [self.forwastName]
#else:
# self.technosphere_databases = [self.ecoinventName]
#self.biosphere_databases = [self.biosphereName]
self.biosphere_databases = []
# default settings for bw2 analysis
self.analysis_settings = {'amount': 1,
'methods': [('IPCC 2013', 'climate change', 'GWP 100a'), ('USEtox', 'human toxicity', 'total')],
#'top_processes': 10,
#'gt_cutoff': 0.01,
'pie_cutoff': 0.05
}
self.allow_allocation = False
# initialise with a blank result set
self.result_set = None
# set the save option, this defaults to the config value but should be overwritten on load for existing models
self.save_option = storage.save_option
if load is not None:
self.load(load)
# check if lcopt is set up, and if not, set it up
is_setup = self.lcopt_setup(ei_username=ei_username, ei_password=ei_password, write_config=write_config,
ecoinvent_version=self.ecoinvent_version, ecoinvent_system_model = self.ecoinvent_system_model, autosetup=autosetup)
if not is_setup:
warnings.warn('lcopt autosetup did not run')
asset_path = fix_mac_path_escapes(storage.search_index_dir) #os.path.join(os.path.dirname(os.path.realpath(__file__)), 'assets')
ecoinventPath = os.path.join(asset_path, self.ecoinventFilename)
biospherePath = os.path.join(asset_path, self.biosphereFilename)
forwastPath = os.path.join(asset_path, self.forwastFilename)
# Try and initialise the external databases if they're not there already
if self.useForwast:
if self.forwastName not in [x['name'] for x in self.external_databases]:
self.import_external_db(forwastPath, 'technosphere')
else:
if self.ecoinventName not in [x['name'] for x in self.external_databases]:
self.import_external_db(ecoinventPath, 'technosphere')
if self.biosphereName not in [x['name'] for x in self.external_databases]:
self.import_external_db(biospherePath, 'biosphere')
# create partial version of io functions
self.add_to_database = partial(add_to_specified_database, database=self.database)
self.get_exchange = partial(get_exchange_from_database, database=self.database)
self.exists_in_database = partial(exists_in_specific_database, database=self.database)
self.get_name = partial(get_exchange_name_from_database, database=self.database)
self.get_unit = partial(get_exchange_unit_from_database, database=self.database)
self.parameter_scan()
def lcopt_setup(self, ei_username, ei_password, write_config, ecoinvent_version, ecoinvent_system_model, autosetup):
if not autosetup:
return False
if storage.project_type == 'single':
if self.useForwast:
forwast_autosetup()
else:
self.base_project_name = storage.single_project_name
#if bw2_project_exists(self.base_project_name):
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=False)
elif not self.useForwast:
self.base_project_name = DEFAULT_PROJECT_STEM + self.ecoinventName
old_default = DEFAULT_PROJECT_STEM[:-1]
is_default = ecoinvent_version == "3.3" and ecoinvent_system_model == "cutoff"
if bw2_project_exists(self.base_project_name):
# make sure the search index file is there too
write_search_index(self.base_project_name, self.ecoinventName)
elif is_default and bw2_project_exists(old_default):
upgrade_old_default()
else:
print("Lcopt needs to be set up to integrate with brightway2 - this only needs to be done once per version/system model combo")
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=True)
else:
forwast_autosetup()
return True
def rename(self, newname):
"""change the name of the model (i.e. what the .lcopt file will be saved as)"""
self.name = newname
#def saveAs(self, filename):
# """save the instance as a pickle"""
# pickle.dump(self, open("{}.pickle".format(filename), "wb"))
def save(self):
"""save the instance as a .lcopt file"""
if self.save_option == 'curdir':
model_path = os.path.join(
os.getcwd(),
'{}.lcopt'.format(self.name)
)
else: # default to appdir
model_path = os.path.join(
storage.model_dir,
'{}.lcopt'.format(self.name)
)
model_path = fix_mac_path_escapes(model_path)
with open(model_path, 'wb') as model_file:
pickle.dump(self, model_file)
def load(self, filename):
"""load data from a saved .lcopt file"""
if filename[-6:] != ".lcopt":
filename += ".lcopt"
try:
savedInstance = pickle.load(open("{}".format(filename), "rb"))
except FileNotFoundError:
savedInstance = pickle.load(open(fix_mac_path_escapes(os.path.join(storage.model_dir, "{}".format(filename))), "rb"))
attributes = ['name',
'database',
'params',
'production_params',
'allocation_params',
'ext_params',
'matrix',
'names',
'parameter_sets',
'model_matrices',
'technosphere_matrices',
'leontif_matrices',
'external_databases',
'parameter_map',
'sandbox_positions',
'ecoinventName',
'biosphereName',
'forwastName',
'analysis_settings',
'technosphere_databases',
'biosphere_databases',
'result_set',
'evaluated_parameter_sets',
'useForwast',
'base_project_name',
'save_option',
'allow_allocation',
'ecoinvent_version',
'ecoinvent_system_model',
]
for attr in attributes:
if hasattr(savedInstance, attr):
setattr(self, attr, getattr(savedInstance, attr))
else:
pass
#print ("can't set {}".format(attr))
# use legacy save option if this is missing from the model
if not hasattr(savedInstance, 'save_option'):
setattr(self, 'save_option', LEGACY_SAVE_OPTION)
# figure out ecoinvent version and system model if these are missing from the model
if not hasattr(savedInstance, 'ecoinvent_version') or not hasattr(savedInstance, 'ecoinvent_system_model'):
parts = savedInstance.ecoinventName.split("_")
main_version = parts[0][-1]
sub_version = parts[1]
system_model = parts[2]
#print(parts)
setattr(self, 'ecoinvent_version', '{}.{}'.format(main_version, sub_version))
setattr(self, 'ecoinvent_system_model', system_model)
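The attribute-copying loop in `load` above (copy each listed attribute from the unpickled instance if it exists, falling back to a legacy default otherwise) can be sketched with plain objects; `Saved` and `Model` here are hypothetical stand-ins, not lcopt classes:

```python
class Saved:
    # simulates an unpickled older model file
    name = "old model"
    params = {"p_0_1": 1.5}
    # note: no 'save_option' attribute, as in pre-legacy saves

class Model:
    def load_from(self, saved, attributes, legacy_save_option="curdir"):
        for attr in attributes:
            # silently skip attributes the saved file predates
            if hasattr(saved, attr):
                setattr(self, attr, getattr(saved, attr))
        # fall back to a legacy default when the saved file lacks the attribute
        if not hasattr(saved, "save_option"):
            self.save_option = legacy_save_option

m = Model()
m.load_from(Saved(), ["name", "params", "save_option"])
print(m.name, m.save_option)  # old model curdir
```

Skipping missing attributes (rather than failing) is what keeps older `.lcopt` files loadable after new attributes are added to the class.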
def create_product (self, name, location='GLO', unit='kg', **kwargs):
"""
Create a new product in the model database
"""
new_product = item_factory(name=name, location=location, unit=unit, type='product', **kwargs)
if not self.exists_in_database(new_product['code']):
self.add_to_database(new_product)
#print ('{} added to database'.format(name))
return self.get_exchange(name)
else:
#print('{} already exists in this database'.format(name))
return False
def create_process(self, name, exchanges, location='GLO', unit='kg'):
"""
Create a new process, including all new exchanges (in brightway2's exchange format) in the model database.
Exchanges must have at least a name, type and unit field
"""
found_exchanges = []
for e in exchanges:
exc_name = e.pop('name', None)
exc_type = e.pop('type', None)
this_exchange = self.get_exchange(exc_name)
if this_exchange is False:
my_unit = e.pop('unit', unit)
this_exchange = self.create_product(exc_name, location=location, unit=my_unit, **e)
found_exchanges.append(exchange_factory(this_exchange, exc_type, 1, 1, '{} exchange of {}'.format(exc_type, exc_name)))
new_process = item_factory(name=name, location=location, unit=unit, type='process', exchanges=found_exchanges)
self.add_to_database(new_process)
self.parameter_scan()
return True
def check_param_function_use(self, param_id):
current_functions = {k: x['function'] for k, x in self.params.items() if x['function'] is not None}
problem_list = []
for k, f in current_functions.items():
if param_id in f:
problem_list.append((k, f))
return problem_list
def remove_input_link(self, process_code, input_code):
"""
Remove an input (technosphere or biosphere exchange) from a process, resolving all parameter issues
"""
# 1. find correct process
# 2. find correct exchange
# 3. remove that exchange
# 4. check for parameter conflicts?
# 5. run parameter scan to rebuild matrices?
#print(process_code, input_code)
process = self.database['items'][process_code]
exchanges = process['exchanges']
initial_count = len(exchanges)
new_exchanges = [e for e in exchanges if e['input'] != input_code]
product_code = [e['input'] for e in exchanges if e['type'] == 'production'][0]
#print(product_code)
param_id = [k for k, v in self.params.items() if (v['from'] == input_code[1] and v['to'] == product_code[1])][0]
#print (param_id)
problem_functions = self.check_param_function_use(param_id)
if len(problem_functions) != 0:
#print('the following functions have been removed:')
for p in problem_functions:
self.params[p[0]]['function'] = None
#print(p)
process['exchanges'] = new_exchanges
del self.params[param_id]
self.parameter_scan()
return initial_count - len(new_exchanges)
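The core of `remove_input_link` is a list comprehension that drops every exchange whose `input` key matches the code being removed, then reports how many were dropped. A self-contained sketch with made-up exchange data:

```python
# brightway-style exchanges: 'input' is a (database, code) tuple
exchanges = [
    {"input": ("db", "prod1"), "type": "production", "amount": 1},
    {"input": ("db", "in_a"), "type": "technosphere", "amount": 2},
    {"input": ("db", "in_b"), "type": "technosphere", "amount": 3},
]

input_code = ("db", "in_a")
new_exchanges = [e for e in exchanges if e["input"] != input_code]

removed = len(exchanges) - len(new_exchanges)
print(removed)  # 1
```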
def unlink_intermediate(self, sourceId, targetId):
"""
Remove a link between two processes
"""
source = self.database['items'][(self.database.get('name'), sourceId)]
target = self.database['items'][(self.database.get('name'), targetId)]
production_exchange = [x['input'] for x in source['exchanges'] if x['type'] == 'production'][0]
new_exchanges = [x for x in target['exchanges'] if x['input'] != production_exchange]
target['exchanges'] = new_exchanges
self.parameter_scan()
return True
def parameter_scan(self):
"""
Scan the database of the model instance to generate and expose parameters.
This is called by other functions when items are added/removed from the model, but can be run by itself if you like
"""
#self.parameter_map = {}
#self.params = OrderedDict()
cr_list = []
items = self.database['items']
#print(items)
for key in items.keys():
i = items[key]
#print(i['name'], i['type'])
if i['type'] == 'product':
cr_list.append(i['code'])
no_products = len(cr_list)
self.names = [self.get_name(x) for x in cr_list]
self.matrix = np.zeros((no_products, no_products))
for key in items.keys():
i = items[key]
if i['type'] == 'process':
inputs = []
#print(i['name'])
#print([(e['comment'], e['type']) for e in i['exchanges']])
for e in i['exchanges']:
if e['type'] == 'production':
col_code = cr_list.index(e['input'][1])
if not 'p_{}_production'.format(col_code) in self.production_params:
self.production_params['p_{}_production'.format(col_code)] = {
'function': None,
'description': 'Production parameter for {}'.format(self.get_name(e['input'][1])),
'unit': self.get_unit(e['input'][1]),
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'production',
}
if not 'p_{}_allocation'.format(col_code) in self.allocation_params:
self.allocation_params['p_{}_allocation'.format(col_code)] = {
'function': None,
'description': 'Allocation parameter for {}'.format(self.get_name(e['input'][1])),
'unit': "% (as decimal)",
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'allocation',
}
elif e['type'] == 'technosphere':
#print(cr_list)
row_code = cr_list.index(e['input'][1])
inputs.append((row_code, e['amount']))
for ip in inputs:
self.matrix[(ip[0], col_code)] = ip[1]
param_check_list = []
for c, column in enumerate(self.matrix.T):
for r, i in enumerate(column):
if i > 0:
p_from = cr_list[r]
p_to = cr_list[c]
coords = (r, c)
from_item_type = self.database['items'][(self.database['name'], p_from)]['lcopt_type']
#print('{}\t| {} --> {}'.format(coords, self.get_name(p_from), self.get_name(p_to)))
param_check_list.append('p_{}_{}'.format(coords[0], coords[1]))
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.params:
self.params['p_{}_{}'.format(coords[0], coords[1])] = {
'function': None,
'normalisation_parameter': '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1]),
'description': 'Input of {} to create {}'.format(self.get_name(p_from), self.get_name(p_to)),
'coords': coords,
'unit': self.get_unit(p_from),
'from': p_from,
'from_name': self.get_name(p_from),
'to': p_to,
'to_name': self.get_name(p_to),
'type': from_item_type,
}
#elif 'normalisation_parameter' not in self.params['p_{}_{}'.format(coords[0], coords[1])].keys():
#print("Adding normalisation_parameter to {}".format('p_{}_{}'.format(coords[0], coords[1])))
#self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
#print('p_{}_{} already exists'.format(coords[0],coords[1]))
else:
pass # print("SOMETHING WRONG HERE\n{}\n".format(self.params['p_{}_{}'.format(coords[0], coords[1])]))
# make sure the parameter is being normalised and allocated properly
self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.parameter_map:
self.parameter_map[(p_from, p_to)] = 'p_{}_{}'.format(coords[0], coords[1])
kill_list = []
for k in self.params.keys():
if k not in param_check_list:
#print("{} may be obsolete".format(k))
kill_list.append(k)
for p in kill_list:
#print("deleting parameter {}".format(p))
del self.params[p]
return True
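The matrix built in `parameter_scan` has one row and one column per product, and each process writes its technosphere inputs at (input_row, product_column), with the column determined by the process's production exchange. A numpy-free sketch with hypothetical product codes:

```python
cr_list = ["steel", "bolts", "widget"]  # product codes, in index order
n = len(cr_list)
matrix = [[0.0] * n for _ in range(n)]

# a process producing 'widget' from 2 kg of steel and 4 bolts
production_code = "widget"
inputs = [("steel", 2.0), ("bolts", 4.0)]

col = cr_list.index(production_code)      # column = the product being made
for code, amount in inputs:
    row = cr_list.index(code)             # row = the product being consumed
    matrix[row][col] = amount

print(matrix[0][2], matrix[1][2])  # 2.0 4.0
```

A non-zero cell at `(r, c)` is exactly the condition that triggers creation of parameter `p_r_c` in the scan above.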
def generate_parameter_set_excel_file(self):
"""
Generate an excel file containing the parameter sets in a format you can import into SimaPro Developer.
The file will be called "ParameterSet_<ModelName>_input_file.xlsx"
"""
parameter_sets = self.parameter_sets
p_set = []
filename = "ParameterSet_{}_input_file.xlsx".format(self.name)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
p_set_name = os.path.join(base_dir, filename)
p = self.params
for k in p.keys():
if p[k]['function'] is None:
base_dict = {'id': k, 'name': p[k]['description'], 'unit': p[k]['unit']}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][k]
p_set.append(base_dict)
else:
pass
#print("{} is determined by a function".format(p[k]['description']))
for e in self.ext_params:
base_dict = {'id': '{}'.format(e['name']), 'type': 'external', 'name': e['description'], 'unit': ''}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][e['name']]
p_set.append(base_dict)
df = pd.DataFrame(p_set)
with pd.ExcelWriter(p_set_name, engine='xlsxwriter') as writer:
ps_columns = [k for k in parameter_sets.keys()]
#print (ps_columns)
my_columns = ['name', 'unit', 'id']
my_columns.extend(ps_columns)
#print (my_columns)
#print(df)
df.to_excel(writer, sheet_name=self.name, columns=my_columns, index=False, merge_cells=False)
return p_set_name
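Each exported row in `generate_parameter_set_excel_file` pairs a free parameter's metadata with one column per parameter set; parameters driven by a function are skipped. The flattening can be sketched without pandas (all names here are illustrative):

```python
params = {
    "p_0_1": {"description": "Input of steel to create widget", "unit": "kg", "function": None},
    "p_1_1": {"description": "Derived input", "unit": "kg", "function": "p_0_1 * 2"},
}
parameter_sets = {"base": {"p_0_1": 2.0}, "alt": {"p_0_1": 3.0}}

rows = []
for k, p in params.items():
    if p["function"] is None:  # only free parameters are exported
        row = {"id": k, "name": p["description"], "unit": p["unit"]}
        for s, values in parameter_sets.items():
            row[s] = values[k]  # one column per parameter set
        rows.append(row)

print(rows[0]["base"], rows[0]["alt"])  # 2.0 3.0
```

In the real method these rows become a `pd.DataFrame` and are written with `pd.ExcelWriter`.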
def add_parameter(self, param_name, description=None, default=0, unit=None):
"""
Add a global parameter to the database that can be accessed by functions
"""
if description is None:
description = "Parameter called {}".format(param_name)
if unit is None:
unit = "-"
name_check = lambda x: x['name'] == param_name
name_check_list = list(filter(name_check, self.ext_params))
if len(name_check_list) == 0:
self.ext_params.append({'name': param_name, 'description': description, 'default': default, 'unit': unit})
else:
print('{} already exists - choose a different name'.format(param_name))
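The duplicate check in `add_parameter` filters the existing parameter list by name before appending; the same guard in isolation, with illustrative data:

```python
ext_params = [{"name": "efficiency", "default": 0.9}]

def add_parameter(ext_params, param_name, default=0):
    # append only if no existing parameter shares the name
    if not any(p["name"] == param_name for p in ext_params):
        ext_params.append({"name": param_name, "default": default})
        return True
    return False

print(add_parameter(ext_params, "efficiency"))    # False (already exists)
print(add_parameter(ext_params, "lifetime", 10))  # True
```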
def list_parameters_as_df(self):
"""
Only really useful when running from a jupyter notebook.
Lists the parameters in the model in a pandas dataframe
Columns: id, matrix coordinates, description, function
"""
to_df = []
for i, e in enumerate(self.ext_params):
row = {}
row['id'] = e['name']
row['coords'] = "n/a"
row['description'] = e['description']
row['function'] = "n/a"
to_df.append(row)
for pk in self.params:
p = self.params[pk]
row = {}
row['id'] = pk
row['coords'] = p['coords']
row['description'] = p['description']
row['function'] = p['function']
to_df.append(row)
df = pd.DataFrame(to_df)
return df
def import_external_db(self, db_file, db_type=None):
"""
Import an external database for use in lcopt
db_type must be one of ``technosphere`` or ``biosphere``
The best way to 'obtain' an external database is to 'export' it from brightway as a pickle file
e.g.::
import brightway2 as bw
bw.projects.set_current('MyModel')
db = bw.Database('MyDatabase')
db_as_dict = db.load()
import pickle
with open('MyExport.pickle', 'wb') as f:
pickle.dump(db_as_dict, f)
NOTE: The Ecoinvent cutoff 3.3 database and the full biosphere database are included in the lcopt model as standard - no need to import those
This can be useful if you have your own methods which require new biosphere flows that you want to analyse using lcopt
"""
db = pickle.load(open("{}.pickle".format(db_file), "rb"))
name = list(db.keys())[0][0]
new_db = {'items': db, 'name': name}
self.external_databases.append(new_db)
if db_type is None: # Assume it's a technosphere database
db_type = 'technosphere'
if db_type == 'technosphere':
self.technosphere_databases.append(name)
elif db_type == 'biosphere':
self.biosphere_databases.append(name)
else:
raise Exception("Database type must be 'technosphere' or 'biosphere'")
def search_databases(self, search_term, location=None, markets_only=False, databases_to_search=None, allow_internal=False):
"""
Search external databases linked to your lcopt model.
To restrict the search to particular databases (e.g. technosphere or biosphere only), pass a list of database names as the ``databases_to_search`` argument
"""
dict_list = []
if allow_internal:
internal_dict = {}
for k, v in self.database['items'].items():
if v.get('lcopt_type') == 'intermediate':
internal_dict[k] = v
dict_list.append(internal_dict)
if databases_to_search is None:
#Search all of the databases available
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases])
dict_list += [x['items'] for x in self.external_databases]
else:
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
dict_list += [x['items'] for x in self.external_databases if x['name'] in databases_to_search]
data = Dictionaries(*dict_list)
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
query = Query()
if markets_only:
market_filter = Filter("name", "has", "market for")
query.add(market_filter)
if location is not None:
location_filter = Filter("location", "is", location)
query.add(location_filter)
query.add(Filter("name", "ihas", search_term))
result = query(data)
return result
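`Dictionaries`, `Query` and `Filter` above come from brightway's query machinery; the same chained filtering can be sketched with plain functions over merged dicts (illustrative data, not the real API):

```python
from itertools import chain

databases = [
    {("ei", "1"): {"name": "market for electricity", "location": "GB"}},
    {("ei", "2"): {"name": "electricity production", "location": "DE"}},
]

# merge the databases' items, as Dictionaries(*dict_list) effectively does
data = dict(chain.from_iterable(d.items() for d in databases))

def search(data, term, location=None, markets_only=False):
    results = {}
    for key, ds in data.items():
        if term.lower() not in ds["name"].lower():
            continue  # case-insensitive name match, like Filter("name", "ihas", ...)
        if markets_only and "market for" not in ds["name"]:
            continue  # like Filter("name", "has", "market for")
        if location is not None and ds["location"] != location:
            continue  # like Filter("location", "is", ...)
        results[key] = ds
    return results

print(sorted(search(data, "electricity", markets_only=True)))  # [('ei', '1')]
```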
def database_to_SimaPro_csv(self):
"""
Export the lcopt model as a SimaPro csv file.
The file will be called "<ModelName>_database_export.csv"
"""
self.parameter_scan()
csv_args = {}
csv_args['processes'] = []
db = self.database['items']
product_filter = lambda x: db[x]['type'] == 'product'
process_filter = lambda x: db[x]['type'] == 'process'
processes = list(filter(process_filter, db))
products = list(filter(product_filter, db))
created_exchanges = []
project_input_params = []
project_calc_params = []
for k in processes:
item = db[k]
current = {}
current['name'] = item['name']
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = item['unit']
current['exchanges'] = []
process_params = []
production_filter = lambda x: x['type'] == 'production'
output_code = list(filter(production_filter, item['exchanges']))[0]['input'][1]
for e in item['exchanges']:
if e['type'] == 'technosphere':
# << Flask >> #
def launch_interact(self): # pragma: no cover
"""
This is probably the most important method in the model - you use it to launch the GUI
"""
my_flask = FlaskSandbox(self)
my_flask.run()
# << Brightway2 >> #
def export_to_bw2(self):
"""
Export the lcopt model in the native brightway 2 format
returns name, database
to use it to export, then import to brightway::
name, db = model.export_to_bw2()
import brightway2 as bw
bw.projects.set_current('MyProject')
new_db = bw.Database(name)
new_db.write(db)
new_db.process()
"""
my_exporter = Bw2Exporter(self)
name, bw2db = my_exporter.export_to_bw2()
return name, bw2db
def analyse(self, demand_item, demand_item_code):
""" Run the analyis of the model
Doesn't return anything, but creates a new item ``LcoptModel.result_set`` containing the results
"""
my_analysis = Bw2Analysis(self)
self.result_set = my_analysis.run_analyses(demand_item, demand_item_code, **self.analysis_settings)
return True
# << Disclosures >> #
def export_disclosure(self, parameter_set=None, folder_path=None):
return export_disclosure(self, parameter_set, folder_path)
# pjamesjoyce/lcopt | lcopt/model.py | LcoptModel.database_to_SimaPro_csv | python
def database_to_SimaPro_csv(self):
self.parameter_scan()
csv_args = {}
csv_args['processes'] = []
db = self.database['items']
product_filter = lambda x: db[x]['type'] == 'product'
process_filter = lambda x: db[x]['type'] == 'process'
processes = list(filter(process_filter, db))
products = list(filter(product_filter, db))
created_exchanges = []
project_input_params = []
project_calc_params = []
for k in processes:
item = db[k]
current = {}
current['name'] = item['name']
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = item['unit']
current['exchanges'] = []
process_params = []
production_filter = lambda x: x['type'] == 'production'
output_code = list(filter(production_filter, item['exchanges']))[0]['input'][1]
for e in item['exchanges']:
if e['type'] == 'technosphere':
this_exchange = {}
this_code = e['input'][1]
formatted_name = self.get_name(this_code)
this_exchange['formatted_name'] = formatted_name
param_key = (this_code, output_code)
#print(param_key)
#param_check = (formatted_name, item['name'])
this_param = self.parameter_map[param_key]
process_params.append(this_param)
this_exchange['amount'] = this_param
this_exchange['unit'] = self.get_unit(this_code)
current['exchanges'].append(this_exchange)
elif e['type'] == 'production':
this_code = e['input'][1]
name = self.get_name(this_code)
current['output_name'] = name
created_exchanges.append(name)
# process parameters
for p in process_params:
if self.params[p]['function'] is None:
project_input_params.append({'name': p, 'comment': self.params[p]['description']})
else:
project_calc_params.append({'name': p, 'comment': self.params[p]['description'], 'formula': self.params[p]['function']})
csv_args['processes'].append(current)
for k in products:
this_item = db[k]
this_name = this_item['name']
if this_item['name'] in created_exchanges:
#print ('{} already created'.format(this_name))
pass
else:
#print ('Need to create {}'.format(this_name))
current = {}
current['name'] = this_name
current['output_name'] = this_name
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = this_item['unit']
#current['exchanges'] = []
if 'ext_link' in this_item.keys():
ext_link = this_item['ext_link']
if ext_link[0] != self.database['name']:
db_filter = lambda x: x['name'] == ext_link[0]
extdb = list(filter(db_filter, self.external_databases))[0]['items']
ext_item = extdb[ext_link]
if ext_link[0] != self.biosphereName:
ref_prod = ext_item['reference product']
name = ext_item['name'].replace(" " + ref_prod, "")
location = ext_item['location']
system_model = "Alloc Def"
process_type = "U"
unit = unnormalise_unit(ext_item['unit'])
simaPro_name = "{} {{{}}}| {} | {}, {}".format(ref_prod.capitalize(), location, name, system_model, process_type)
#print ('{} has an external link to {}'.format(this_name, simaPro_name))
current['exchanges'] = [{'formatted_name': simaPro_name, 'unit': unit, 'amount': 1}]
else:
#print('{} has a biosphere exchange - need to sort this out'.format(this_name))
#print(ext_item)
unit = unnormalise_unit(ext_item['unit'])
formatted_name = ext_item['name']
if 'air' in ext_item['categories']:
current['air_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to air'.format(formatted_name)}]
elif 'water' in ext_item['categories']:
current['water_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to water'.format(formatted_name)}]
elif 'soil' in ext_item['categories']:
current['soil_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to soil'.format(formatted_name)}]
else:
print("{} has a biosphere exchange that isn't to air, water or soil".format(this_name))
print(ext_item)
else:
warnings.warn('{} has NO internal or external link - it is burden free'.format(this_name))
csv_args['processes'].append(current)
created_exchanges.append(this_name)
#print(csv_args)
#print(created_exchanges)
csv_args['project'] = {}
#NOTE - currently external parameters can only be constants
csv_args['project']['calculated_parameters'] = project_calc_params
#add the external parameters to the input parameter list
for p in self.ext_params:
project_input_params.append({'name': p['name'], 'comment': p['description'], 'default': p['default']})
csv_args['project']['input_parameters'] = project_input_params
#print (csv_args)
env = Environment(
loader=PackageLoader('lcopt', 'templates'),
)
filename = "{}_database_export.csv".format(self.name.replace(" ", "_"))
csv_template = env.get_template('export.csv')
output = csv_template.render(**csv_args)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
efn = os.path.join(base_dir, filename)
with open(efn, "w") as f:
f.write(output)
return efn

# Documentation: Export the lcopt model as a SimaPro csv file.
# The file will be called "<ModelName>_database_export.csv"
# Split: train
# Source: https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/model.py#L772-L954
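The 19-character SimaPro process id built in the function above (an 8-character model-name prefix padded with `X`, plus an 11-digit zero-padded random number) can be sketched in isolation; the helper name `simapro_id` is illustrative, not part of the library:

```python
from random import randint

def simapro_id(model_name):
    # First 8 characters of the model name (spaces removed, padded with 'X'),
    # followed by an 11-digit zero-padded random integer, as in the source.
    prefix = (model_name.replace(" ", "") + "XXXXXXXX")[:8]
    suffix = ("00000000000" + str(randint(1, 99999999999)))[-11:]
    return prefix + suffix
```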
"def unnormalise_unit(unit):\n if unit in UNITS_NORMALIZATION.keys():\n return unit\n else:\n\n un_units = list(filter(lambda x: UNITS_NORMALIZATION[x] == unit, UNITS_NORMALIZATION))\n #print (un_units)\n return un_units[0]\n",
"def parameter_scan(self):\n \"\"\"\n Scan the... | class LcoptModel(object):
"""
This is the base model class.
To create a new model, enter a name e.g. ``model = LcoptModel('My_Model')``
To load an existing model use the ``load`` option e.g. ``model = LcoptModel(load = 'My_Model')``
"""
def __init__(self, name=hex(random.getrandbits(128))[2:-1], load=None, useForwast=False, ecoinvent_version=None, ecoinvent_system_model=None, ei_username = None, ei_password = None, write_config=None, autosetup=True):
super(LcoptModel, self).__init__()
# name the instance
self.name = name
# set up the database, parameter dictionaries, the matrix and the names of the exchanges
self.database = {'items': OrderedDict(), 'name': '{}_Database'.format(self.name)}
self.external_databases = []
self.params = OrderedDict()
self.production_params = OrderedDict()
self.allocation_params = OrderedDict()
self.ext_params = []
self.matrix = None
self.names = None
self.parameter_sets = OrderedDict()
self.model_matrices = OrderedDict()
self.technosphere_matrices = OrderedDict()
self.leontif_matrices = OrderedDict()
self.parameter_map = {}
self.sandbox_positions = {}
# If ecoinvent isn't specified in the setup, look for a default in the config and fall back on default set in constants
if ecoinvent_version is None:
self.ecoinvent_version = str(storage.ecoinvent_version)
else:
self.ecoinvent_version = ecoinvent_version
if ecoinvent_system_model is None:
self.ecoinvent_system_model = storage.ecoinvent_system_model
else:
self.ecoinvent_system_model = ecoinvent_system_model
ei_name = "Ecoinvent{}_{}_{}".format(*self.ecoinvent_version.split("."), self.ecoinvent_system_model) #"Ecoinvent3_3_cutoff"
self.ecoinventName = ei_name # "Ecoinvent3_3_cutoff"
self.biosphereName = "biosphere3"
self.ecoinventFilename = ei_name # "ecoinvent3_3"
self.biosphereFilename = "biosphere3"
self.forwastName = "forwast"
self.forwastFilename = "forwast"
self.useForwast = useForwast
self.technosphere_databases = []
#if self.useForwast:
# self.technosphere_databases = [self.forwastName]
#else:
# self.technosphere_databases = [self.ecoinventName]
#self.biosphere_databases = [self.biosphereName]
self.biosphere_databases = []
# default settings for bw2 analysis
self.analysis_settings = {'amount': 1,
'methods': [('IPCC 2013', 'climate change', 'GWP 100a'), ('USEtox', 'human toxicity', 'total')],
#'top_processes': 10,
#'gt_cutoff': 0.01,
'pie_cutoff': 0.05
}
self.allow_allocation = False
# initialise with a blank result set
self.result_set = None
# set the save option, this defaults to the config value but should be overwritten on load for existing models
self.save_option = storage.save_option
if load is not None:
self.load(load)
# check if lcopt is set up, and if not, set it up
is_setup = self.lcopt_setup(ei_username=ei_username, ei_password=ei_password, write_config=write_config,
ecoinvent_version=self.ecoinvent_version, ecoinvent_system_model = self.ecoinvent_system_model, autosetup=autosetup)
if not is_setup:
warnings.warn('lcopt autosetup did not run')
asset_path = fix_mac_path_escapes(storage.search_index_dir) #os.path.join(os.path.dirname(os.path.realpath(__file__)), 'assets')
ecoinventPath = os.path.join(asset_path, self.ecoinventFilename)
biospherePath = os.path.join(asset_path, self.biosphereFilename)
forwastPath = os.path.join(asset_path, self.forwastFilename)
# Try and initialise the external databases if they're not there already
if self.useForwast:
if self.forwastName not in [x['name'] for x in self.external_databases]:
self.import_external_db(forwastPath, 'technosphere')
else:
if self.ecoinventName not in [x['name'] for x in self.external_databases]:
self.import_external_db(ecoinventPath, 'technosphere')
if self.biosphereName not in [x['name'] for x in self.external_databases]:
self.import_external_db(biospherePath, 'biosphere')
# create partial version of io functions
self.add_to_database = partial(add_to_specified_database, database=self.database)
self.get_exchange = partial(get_exchange_from_database, database=self.database)
self.exists_in_database = partial(exists_in_specific_database, database=self.database)
self.get_name = partial(get_exchange_name_from_database, database=self.database)
self.get_unit = partial(get_exchange_unit_from_database, database=self.database)
self.parameter_scan()
def lcopt_setup(self, ei_username, ei_password, write_config, ecoinvent_version, ecoinvent_system_model, autosetup):
if not autosetup:
return False
if storage.project_type == 'single':
if self.useForwast:
forwast_autosetup()
else:
self.base_project_name = storage.single_project_name
#if bw2_project_exists(self.base_project_name):
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=False)
elif not self.useForwast:
self.base_project_name = DEFAULT_PROJECT_STEM + self.ecoinventName
old_default = DEFAULT_PROJECT_STEM[:-1]
is_default = ecoinvent_version == "3.3" and ecoinvent_system_model == "cutoff"
if bw2_project_exists(self.base_project_name):
# make sure the search index file is there too
write_search_index(self.base_project_name, self.ecoinventName)
elif is_default and bw2_project_exists(old_default):
upgrade_old_default()
else:
print("Lcopt needs to be set up to integrate with brightway2 - this only needs to be done once per version/system model combo")
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=True)
else:
forwast_autosetup()
return True
def rename(self, newname):
"""change the name of the model (i.e. what the .lcopt file will be saved as)"""
self.name = newname
#def saveAs(self, filename):
# """save the instance as a pickle"""
# pickle.dump(self, open("{}.pickle".format(filename), "wb"))
def save(self):
"""save the instance as a .lcopt file"""
if self.save_option == 'curdir':
model_path = os.path.join(
os.getcwd(),
'{}.lcopt'.format(self.name)
)
else: # default to appdir
model_path = os.path.join(
storage.model_dir,
'{}.lcopt'.format(self.name)
)
model_path = fix_mac_path_escapes(model_path)
with open(model_path, 'wb') as model_file:
pickle.dump(self, model_file)
def load(self, filename):
"""load data from a saved .lcopt file"""
if filename[-6:] != ".lcopt":
filename += ".lcopt"
try:
savedInstance = pickle.load(open("{}".format(filename), "rb"))
except FileNotFoundError:
savedInstance = pickle.load(open(fix_mac_path_escapes(os.path.join(storage.model_dir, "{}".format(filename))), "rb"))
attributes = ['name',
'database',
'params',
'production_params',
'allocation_params',
'ext_params',
'matrix',
'names',
'parameter_sets',
'model_matrices',
'technosphere_matrices',
'leontif_matrices',
'external_databases',
'parameter_map',
'sandbox_positions',
'ecoinventName',
'biosphereName',
'forwastName',
'analysis_settings',
'technosphere_databases',
'biosphere_databases',
'result_set',
'evaluated_parameter_sets',
'useForwast',
'base_project_name',
'save_option',
'allow_allocation',
'ecoinvent_version',
'ecoinvent_system_model',
]
for attr in attributes:
if hasattr(savedInstance, attr):
setattr(self, attr, getattr(savedInstance, attr))
else:
pass
#print ("can't set {}".format(attr))
# use legacy save option if this is missing from the model
if not hasattr(savedInstance, 'save_option'):
setattr(self, 'save_option', LEGACY_SAVE_OPTION)
# figure out ecoinvent version and system model if these are missing from the model
if not hasattr(savedInstance, 'ecoinvent_version') or not hasattr(savedInstance, 'ecoinvent_system_model'):
parts = savedInstance.ecoinventName.split("_")
main_version = parts[0][-1]
sub_version = parts[1]
system_model = parts[2]
#print(parts)
setattr(self, 'ecoinvent_version', '{}.{}'.format(main_version, sub_version))
setattr(self, 'ecoinvent_system_model', system_model)
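The filename handling at the top of `load` (append the `.lcopt` extension only when it is missing) is equivalent to this small helper; the helper name is illustrative, not part of the class:

```python
def ensure_lcopt_extension(filename):
    # Mirror of the check in load(): only append the suffix when it is absent.
    if filename[-6:] != ".lcopt":
        filename += ".lcopt"
    return filename
```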
def create_product(self, name, location='GLO', unit='kg', **kwargs):
"""
Create a new product in the model database
"""
new_product = item_factory(name=name, location=location, unit=unit, type='product', **kwargs)
if not self.exists_in_database(new_product['code']):
self.add_to_database(new_product)
#print ('{} added to database'.format(name))
return self.get_exchange(name)
else:
#print('{} already exists in this database'.format(name))
return False
def create_process(self, name, exchanges, location='GLO', unit='kg'):
"""
Create a new process, including all new exchanges (in brightway2's exchange format) in the model database.
Exchanges must have at least a name, type and unit field
"""
found_exchanges = []
for e in exchanges:
exc_name = e.pop('name', None)
exc_type = e.pop('type', None)
this_exchange = self.get_exchange(exc_name)
if this_exchange is False:
my_unit = e.pop('unit', unit)
this_exchange = self.create_product(exc_name, location=location, unit=my_unit, **e)
found_exchanges.append(exchange_factory(this_exchange, exc_type, 1, 1, '{} exchange of {}'.format(exc_type, exc_name)))
new_process = item_factory(name=name, location=location, unit=unit, type='process', exchanges=found_exchanges)
self.add_to_database(new_process)
self.parameter_scan()
return True
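A hypothetical call to `create_process` (the process and exchange names below are made up): per the docstring, each exchange dict carries at least `name`, `type` and `unit`, and exactly one exchange should be of type `production`:

```python
exchanges = [
    {"name": "Electricity input", "type": "technosphere", "unit": "kWh"},
    {"name": "Widget", "type": "production", "unit": "kg"},
]
# model.create_process("Widget production", exchanges, location="GLO", unit="kg")

# Sanity checks matching what create_process expects:
assert all({"name", "type", "unit"} <= set(e) for e in exchanges)
assert [e["type"] for e in exchanges].count("production") == 1
```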
def check_param_function_use(self, param_id):
current_functions = {k: x['function'] for k, x in self.params.items() if x['function'] is not None}
problem_list = []
for k, f in current_functions.items():
if param_id in f:
problem_list.append((k, f))
return problem_list
def remove_input_link(self, process_code, input_code):
"""
Remove an input (technosphere or biosphere exchange) from a process, resolving all parameter issues
"""
# 1. find correct process
# 2. find correct exchange
# 3. remove that exchange
# 4. check for parameter conflicts?
# 4. run parameter scan to rebuild matrices?
#print(process_code, input_code)
process = self.database['items'][process_code]
exchanges = process['exchanges']
initial_count = len(exchanges)
new_exchanges = [e for e in exchanges if e['input'] != input_code]
product_code = [e['input'] for e in exchanges if e['type'] == 'production'][0]
#print(product_code)
param_id = [k for k, v in self.params.items() if (v['from'] == input_code[1] and v['to'] == product_code[1])][0]
#print (param_id)
problem_functions = self.check_param_function_use(param_id)
if len(problem_functions) != 0:
#print('the following functions have been removed:')
for p in problem_functions:
self.params[p[0]]['function'] = None
#print(p)
process['exchanges'] = new_exchanges
del self.params[param_id]
self.parameter_scan()
return initial_count - len(new_exchanges)
def unlink_intermediate(self, sourceId, targetId):
"""
Remove a link between two processes
"""
source = self.database['items'][(self.database.get('name'), sourceId)]
target = self.database['items'][(self.database.get('name'), targetId)]
production_exchange = [x['input'] for x in source['exchanges'] if x['type'] == 'production'][0]
new_exchanges = [x for x in target['exchanges'] if x['input'] != production_exchange]
target['exchanges'] = new_exchanges
self.parameter_scan()
return True
def parameter_scan(self):
"""
Scan the database of the model instance to generate and expose parameters.
This is called by other functions when items are added/removed from the model, but can be run by itself if you like
"""
#self.parameter_map = {}
#self.params = OrderedDict()
cr_list = []
items = self.database['items']
#print(items)
for key in items.keys():
i = items[key]
#print(i['name'], i['type'])
if i['type'] == 'product':
cr_list.append(i['code'])
no_products = len(cr_list)
self.names = [self.get_name(x) for x in cr_list]
self.matrix = np.zeros((no_products, no_products))
for key in items.keys():
i = items[key]
if i['type'] == 'process':
inputs = []
#print(i['name'])
#print([(e['comment'], e['type']) for e in i['exchanges']])
for e in i['exchanges']:
if e['type'] == 'production':
col_code = cr_list.index(e['input'][1])
if not 'p_{}_production'.format(col_code) in self.production_params:
self.production_params['p_{}_production'.format(col_code)] = {
'function': None,
'description': 'Production parameter for {}'.format(self.get_name(e['input'][1])),
'unit': self.get_unit(e['input'][1]),
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'production',
}
if not 'p_{}_allocation'.format(col_code) in self.allocation_params:
self.allocation_params['p_{}_allocation'.format(col_code)] = {
'function': None,
'description': 'Allocation parameter for {}'.format(self.get_name(e['input'][1])),
'unit': "% (as decimal)",
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'allocation',
}
elif e['type'] == 'technosphere':
#print(cr_list)
row_code = cr_list.index(e['input'][1])
inputs.append((row_code, e['amount']))
for ip in inputs:
self.matrix[(ip[0], col_code)] = ip[1]
param_check_list = []
for c, column in enumerate(self.matrix.T):
for r, i in enumerate(column):
if i > 0:
p_from = cr_list[r]
p_to = cr_list[c]
coords = (r, c)
from_item_type = self.database['items'][(self.database['name'], p_from)]['lcopt_type']
#print('{}\t| {} --> {}'.format(coords, self.get_name(p_from), self.get_name(p_to)))
param_check_list.append('p_{}_{}'.format(coords[0], coords[1]))
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.params:
self.params['p_{}_{}'.format(coords[0], coords[1])] = {
'function': None,
'normalisation_parameter': '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1]),
'description': 'Input of {} to create {}'.format(self.get_name(p_from), self.get_name(p_to)),
'coords': coords,
'unit': self.get_unit(p_from),
'from': p_from,
'from_name': self.get_name(p_from),
'to': p_to,
'to_name': self.get_name(p_to),
'type': from_item_type,
}
#elif 'normalisation_parameter' not in self.params['p_{}_{}'.format(coords[0], coords[1])].keys():
#print("Adding normalisation_parameter to {}".format('p_{}_{}'.format(coords[0], coords[1])))
#self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
#print('p_{}_{} already exists'.format(coords[0],coords[1]))
else:
pass # print("SOMETHING WRONG HERE\n{}\n".format(self.params['p_{}_{}'.format(coords[0], coords[1])]))
# make sure the parameter is being normalised and allocated properly
self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
if (p_from, p_to) not in self.parameter_map:
self.parameter_map[(p_from, p_to)] = 'p_{}_{}'.format(coords[0], coords[1])
kill_list = []
for k in self.params.keys():
if k not in param_check_list:
#print("{} may be obsolete".format(k))
kill_list.append(k)
for p in kill_list:
#print("deleting parameter {}".format(p))
del self.params[p]
return True
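`parameter_scan` names each technosphere parameter `p_<row>_<col>` (row is the input product's index, col the output product's index in the product list) and normalises it by the output's production and allocation parameters. A minimal sketch of that naming scheme, using a hypothetical helper:

```python
def technosphere_param(row, col):
    # Parameter id and its normalisation expression, as built in parameter_scan.
    pid = "p_{}_{}".format(row, col)
    normalisation = "(p_{0}_production / p_{0}_allocation)".format(col)
    return pid, normalisation
```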
def generate_parameter_set_excel_file(self):
"""
Generate an excel file containing the parameter sets in a format you can import into SimaPro Developer.
The file will be called "ParameterSet_<ModelName>_input_file.xlsx"
"""
parameter_sets = self.parameter_sets
p_set = []
filename = "ParameterSet_{}_input_file.xlsx".format(self.name)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
p_set_name = os.path.join(base_dir, filename)
p = self.params
for k in p.keys():
if p[k]['function'] is None:
base_dict = {'id': k, 'name': p[k]['description'], 'unit': p[k]['unit']}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][k]
p_set.append(base_dict)
else:
pass
#print("{} is determined by a function".format(p[k]['description']))
for e in self.ext_params:
base_dict = {'id': '{}'.format(e['name']), 'type': 'external', 'name': e['description'], 'unit': ''}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][e['name']]
p_set.append(base_dict)
df = pd.DataFrame(p_set)
with pd.ExcelWriter(p_set_name, engine='xlsxwriter') as writer:
ps_columns = [k for k in parameter_sets.keys()]
#print (ps_columns)
my_columns = ['name', 'unit', 'id']
my_columns.extend(ps_columns)
#print (my_columns)
#print(df)
df.to_excel(writer, sheet_name=self.name, columns=my_columns, index=False, merge_cells=False)
return p_set_name
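The rows written to the spreadsheet above are flat dicts: one row per parameter that is not driven by a function, with one column per parameter set. A self-contained sketch with made-up parameter names and values:

```python
parameter_sets = {"base": {"p_0_1": 2.0}, "alt": {"p_0_1": 3.5}}
params = {"p_0_1": {"function": None, "description": "Input of A to create B", "unit": "kg"}}

p_set = []
for k, p in params.items():
    if p["function"] is None:  # parameters determined by a function are skipped
        row = {"id": k, "name": p["description"], "unit": p["unit"]}
        for s in parameter_sets:
            row[s] = parameter_sets[s][k]
        p_set.append(row)
```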
def add_parameter(self, param_name, description=None, default=0, unit=None):
"""
Add a global parameter to the database that can be accessed by functions
"""
if description is None:
description = "Parameter called {}".format(param_name)
if unit is None:
unit = "-"
name_check = lambda x: x['name'] == param_name
name_check_list = list(filter(name_check, self.ext_params))
if len(name_check_list) == 0:
self.ext_params.append({'name': param_name, 'description': description, 'default': default, 'unit': unit})
else:
print('{} already exists - choose a different name'.format(param_name))
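The duplicate-name guard in `add_parameter` boils down to a membership test over `ext_params`; a sketch with a made-up parameter:

```python
ext_params = [{"name": "efficiency", "description": "Motor efficiency", "default": 0.9, "unit": "-"}]

def name_taken(params, name):
    # Equivalent to the filter-based check in add_parameter.
    return any(p["name"] == name for p in params)
```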
def list_parameters_as_df(self):
"""
Only really useful when running from a jupyter notebook.
Lists the parameters in the model in a pandas dataframe
Columns: id, matrix coordinates, description, function
"""
to_df = []
for i, e in enumerate(self.ext_params):
row = {}
row['id'] = e['name']
row['coords'] = "n/a"
row['description'] = e['description']
row['function'] = "n/a"
to_df.append(row)
for pk in self.params:
p = self.params[pk]
row = {}
row['id'] = pk
row['coords'] = p['coords']
row['description'] = p['description']
row['function'] = p['function']
to_df.append(row)
df = pd.DataFrame(to_df)
return df
def import_external_db(self, db_file, db_type=None):
"""
Import an external database for use in lcopt
db_type must be one of ``technosphere`` or ``biosphere``
The best way to 'obtain' an external database is to 'export' it from brightway as a pickle file
e.g.::
import brightway2 as bw
bw.projects.set_current('MyModel')
db = bw.Database('MyDatabase')
db_as_dict = db.load()
import pickle
with open('MyExport.pickle', 'wb') as f:
pickle.dump(db_as_dict, f)
NOTE: The Ecoinvent cutoff 3.3 database and the full biosphere database are included in the lcopt model as standard - no need to import those
This can be useful if you have your own methods which require new biosphere flows that you want to analyse using lcopt
"""
db = pickle.load(open("{}.pickle".format(db_file), "rb"))
name = list(db.keys())[0][0]
new_db = {'items': db, 'name': name}
self.external_databases.append(new_db)
if db_type is None: # Assume its a technosphere database
db_type = 'technosphere'
if db_type == 'technosphere':
self.technosphere_databases.append(name)
elif db_type == 'biosphere':
self.biosphere_databases.append(name)
else:
raise Exception("Database type must be 'technosphere' or 'biosphere'")
def search_databases(self, search_term, location=None, markets_only=False, databases_to_search=None, allow_internal=False):
"""
Search external databases linked to your lcopt model.
To restrict the search to particular databases (e.g. technosphere or biosphere only) use a list of database names in the ``database_to_search`` variable
"""
dict_list = []
if allow_internal:
internal_dict = {}
for k, v in self.database['items'].items():
if v.get('lcopt_type') == 'intermediate':
internal_dict[k] = v
dict_list.append(internal_dict)
if databases_to_search is None:
#Search all of the databases available
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases])
dict_list += [x['items'] for x in self.external_databases]
else:
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
dict_list += [x['items'] for x in self.external_databases if x['name'] in databases_to_search]
data = Dictionaries(*dict_list)
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
query = Query()
if markets_only:
market_filter = Filter("name", "has", "market for")
query.add(market_filter)
if location is not None:
location_filter = Filter("location", "is", location)
query.add(location_filter)
query.add(Filter("name", "ihas", search_term))
result = query(data)
return result
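The `ihas` filter used above performs a case-insensitive substring match on a field. The real behaviour lives in the query classes, so this is only an illustrative sketch over a toy database:

```python
def ihas(items, field, term):
    # Case-insensitive "contains" filter, mimicking the 'ihas' operator.
    term = term.lower()
    return {k: v for k, v in items.items() if term in str(v.get(field, "")).lower()}

toy_db = {
    ("db", "a"): {"name": "market for electricity", "location": "GLO"},
    ("db", "b"): {"name": "steel production", "location": "DE"},
}
hits = ihas(toy_db, "name", "Electricity")
```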
def database_to_SimaPro_csv(self):
"""
Export the lcopt model as a SimaPro csv file.
The file will be called "<ModelName>_database_export.csv"
"""
self.parameter_scan()
csv_args = {}
csv_args['processes'] = []
db = self.database['items']
product_filter = lambda x: db[x]['type'] == 'product'
process_filter = lambda x: db[x]['type'] == 'process'
processes = list(filter(process_filter, db))
products = list(filter(product_filter, db))
created_exchanges = []
project_input_params = []
project_calc_params = []
for k in processes:
item = db[k]
current = {}
current['name'] = item['name']
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = item['unit']
current['exchanges'] = []
process_params = []
production_filter = lambda x: x['type'] == 'production'
output_code = list(filter(production_filter, item['exchanges']))[0]['input'][1]
for e in item['exchanges']:
if e['type'] == 'technosphere':
this_exchange = {}
this_code = e['input'][1]
formatted_name = self.get_name(this_code)
this_exchange['formatted_name'] = formatted_name
param_key = (this_code, output_code)
#print(param_key)
#param_check = (formatted_name, item['name'])
this_param = self.parameter_map[param_key]
process_params.append(this_param)
this_exchange['amount'] = this_param
this_exchange['unit'] = self.get_unit(this_code)
current['exchanges'].append(this_exchange)
elif e['type'] == 'production':
this_code = e['input'][1]
name = self.get_name(this_code)
current['output_name'] = name
created_exchanges.append(name)
# process parameters
for p in process_params:
if self.params[p]['function'] is None:
project_input_params.append({'name': p, 'comment': self.params[p]['description']})
else:
project_calc_params.append({'name': p, 'comment': self.params[p]['description'], 'formula': self.params[p]['function']})
csv_args['processes'].append(current)
for k in products:
this_item = db[k]
this_name = this_item['name']
if this_item['name'] in created_exchanges:
#print ('{} already created'.format(this_name))
pass
else:
#print ('Need to create {}'.format(this_name))
current = {}
current['name'] = this_name
current['output_name'] = this_name
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = this_item['unit']
#current['exchanges'] = []
if 'ext_link' in this_item.keys():
ext_link = this_item['ext_link']
if ext_link[0] != self.database['name']:
db_filter = lambda x: x['name'] == ext_link[0]
extdb = list(filter(db_filter, self.external_databases))[0]['items']
ext_item = extdb[ext_link]
if ext_link[0] != self.biosphereName:
ref_prod = ext_item['reference product']
name = ext_item['name'].replace(" " + ref_prod, "")
location = ext_item['location']
system_model = "Alloc Def"
process_type = "U"
unit = unnormalise_unit(ext_item['unit'])
simaPro_name = "{} {{{}}}| {} | {}, {}".format(ref_prod.capitalize(), location, name, system_model, process_type)
#print ('{} has an external link to {}'.format(this_name, simaPro_name))
current['exchanges'] = [{'formatted_name': simaPro_name, 'unit': unit, 'amount': 1}]
else:
#print('{} has a biosphere exchange - need to sort this out'.format(this_name))
#print(ext_item)
unit = unnormalise_unit(ext_item['unit'])
formatted_name = ext_item['name']
if 'air' in ext_item['categories']:
current['air_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to air'.format(formatted_name)}]
elif 'water' in ext_item['categories']:
current['water_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to water'.format(formatted_name)}]
elif 'soil' in ext_item['categories']:
current['soil_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to soil'.format(formatted_name)}]
else:
print("{} has a biosphere exchange that isn't to air, water or soil".format(this_name))
print(ext_item)
else:
warnings.warn('{} has NO internal or external link - it is burden free'.format(this_name))
csv_args['processes'].append(current)
created_exchanges.append(this_name)
#print(csv_args)
#print(created_exchanges)
csv_args['project'] = {}
#NOTE - currently external parameters can only be constants
csv_args['project']['calculated_parameters'] = project_calc_params
#add the external parameters to the input parameter list
for p in self.ext_params:
project_input_params.append({'name': p['name'], 'comment': p['description'], 'default': p['default']})
csv_args['project']['input_parameters'] = project_input_params
#print (csv_args)
env = Environment(
loader=PackageLoader('lcopt', 'templates'),
)
filename = "{}_database_export.csv".format(self.name.replace(" ", "_"))
csv_template = env.get_template('export.csv')
output = csv_template.render(**csv_args)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
efn = os.path.join(base_dir, filename)
with open(efn, "w") as f:
f.write(output)
return efn
# << Flask >> #
def launch_interact(self): # pragma: no cover
"""
This is probably the most important method in the model - you use it to launch the GUI
"""
my_flask = FlaskSandbox(self)
my_flask.run()
# << Brightway2 >> #
def export_to_bw2(self):
"""
Export the lcopt model in the native brightway 2 format
returns name, database
To export the model and then import it into Brightway2::
name, db = model.export_to_bw2()
import brightway2 as bw
bw.projects.set_current('MyProject')
new_db = bw.Database(name)
new_db.write(db)
new_db.process()
"""
my_exporter = Bw2Exporter(self)
name, bw2db = my_exporter.export_to_bw2()
return name, bw2db
def analyse(self, demand_item, demand_item_code):
""" Run the analyis of the model
Doesn't return anything, but creates a new item ``LcoptModel.result_set`` containing the results
"""
my_analysis = Bw2Analysis(self)
self.result_set = my_analysis.run_analyses(demand_item, demand_item_code, **self.analysis_settings)
return True
# << Disclosures >> #
def export_disclosure(self, parameter_set=None, folder_path=None):
return export_disclosure(self, parameter_set, folder_path)
|
pjamesjoyce/lcopt | lcopt/model.py | LcoptModel.export_to_bw2 | python | def export_to_bw2(self):
my_exporter = Bw2Exporter(self)
name, bw2db = my_exporter.export_to_bw2()
return name, bw2db | Export the lcopt model in the native brightway 2 format
returns name, database
to use it to export, then import to brightway::
name, db = model.export_to_bw2()
import brightway2 as bw
bw.projects.set_current('MyProject')
new_db = bw.Database(name)
new_db.write(db)
new_db.process() | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/model.py#L969-L987 | [
"def export_to_bw2(self):\n\n db = self.modelInstance.database['items']\n name = self.modelInstance.database['name']\n ext_db_names = [x['name'] for x in self.modelInstance.external_databases]\n\n altbw2database = deepcopy(db)\n\n products = list(filter(lambda x: altbw2database[x]['type'] == 'product... | class LcoptModel(object):
"""
This is the base model class.
To create a new model, enter a name e.g. ``model = LcoptModel('My_Model')``
To load an existing model use the ``load`` option e.g. ``model = LcoptModel(load = 'My_Model')``
"""
def __init__(self, name=None, load=None, useForwast=False, ecoinvent_version=None, ecoinvent_system_model=None, ei_username=None, ei_password=None, write_config=None, autosetup=True):
super(LcoptModel, self).__init__()
# name the instance; the random default is generated here because a default
# argument in the signature would only be evaluated once, at import time
self.name = name if name is not None else hex(random.getrandbits(128))[2:]
# set up the database, parameter dictionaries, the matrix and the names of the exchanges
self.database = {'items': OrderedDict(), 'name': '{}_Database'.format(self.name)}
self.external_databases = []
self.params = OrderedDict()
self.production_params = OrderedDict()
self.allocation_params = OrderedDict()
self.ext_params = []
self.matrix = None
self.names = None
self.parameter_sets = OrderedDict()
self.model_matrices = OrderedDict()
self.technosphere_matrices = OrderedDict()
self.leontif_matrices = OrderedDict()
self.parameter_map = {}
self.sandbox_positions = {}
# If ecoinvent isn't specified in the setup, look for a default in the config and fall back on default set in constants
if ecoinvent_version is None:
self.ecoinvent_version = str(storage.ecoinvent_version)
else:
self.ecoinvent_version = ecoinvent_version
if ecoinvent_system_model is None:
self.ecoinvent_system_model = storage.ecoinvent_system_model
else:
self.ecoinvent_system_model = ecoinvent_system_model
ei_name = "Ecoinvent{}_{}_{}".format(*self.ecoinvent_version.split("."), self.ecoinvent_system_model) #"Ecoinvent3_3_cutoff"
self.ecoinventName = ei_name # "Ecoinvent3_3_cutoff"
self.biosphereName = "biosphere3"
self.ecoinventFilename = ei_name # "ecoinvent3_3"
self.biosphereFilename = "biosphere3"
self.forwastName = "forwast"
self.forwastFilename = "forwast"
self.useForwast = useForwast
self.technosphere_databases = []
#if self.useForwast:
# self.technosphere_databases = [self.forwastName]
#else:
# self.technosphere_databases = [self.ecoinventName]
#self.biosphere_databases = [self.biosphereName]
self.biosphere_databases = []
# default settings for bw2 analysis
self.analysis_settings = {'amount': 1,
'methods': [('IPCC 2013', 'climate change', 'GWP 100a'), ('USEtox', 'human toxicity', 'total')],
#'top_processes': 10,
#'gt_cutoff': 0.01,
'pie_cutoff': 0.05
}
self.allow_allocation = False
# initialise with a blank result set
self.result_set = None
# set the save option, this defaults to the config value but should be overwritten on load for existing models
self.save_option = storage.save_option
if load is not None:
self.load(load)
# check if lcopt is set up, and if not, set it up
is_setup = self.lcopt_setup(ei_username=ei_username, ei_password=ei_password, write_config=write_config,
ecoinvent_version=self.ecoinvent_version, ecoinvent_system_model = self.ecoinvent_system_model, autosetup=autosetup)
if not is_setup:
warnings.warn('lcopt autosetup did not run')
asset_path = fix_mac_path_escapes(storage.search_index_dir) #os.path.join(os.path.dirname(os.path.realpath(__file__)), 'assets')
ecoinventPath = os.path.join(asset_path, self.ecoinventFilename)
biospherePath = os.path.join(asset_path, self.biosphereFilename)
forwastPath = os.path.join(asset_path, self.forwastFilename)
# Try and initialise the external databases if they're not there already
if self.useForwast:
if self.forwastName not in [x['name'] for x in self.external_databases]:
self.import_external_db(forwastPath, 'technosphere')
else:
if self.ecoinventName not in [x['name'] for x in self.external_databases]:
self.import_external_db(ecoinventPath, 'technosphere')
if self.biosphereName not in [x['name'] for x in self.external_databases]:
self.import_external_db(biospherePath, 'biosphere')
# create partial version of io functions
self.add_to_database = partial(add_to_specified_database, database=self.database)
self.get_exchange = partial(get_exchange_from_database, database=self.database)
self.exists_in_database = partial(exists_in_specific_database, database=self.database)
self.get_name = partial(get_exchange_name_from_database, database=self.database)
self.get_unit = partial(get_exchange_unit_from_database, database=self.database)
self.parameter_scan()
def lcopt_setup(self, ei_username, ei_password, write_config, ecoinvent_version, ecoinvent_system_model, autosetup):
if not autosetup:
return False
if storage.project_type == 'single':
if self.useForwast:
forwast_autosetup()
else:
self.base_project_name = storage.single_project_name
#if bw2_project_exists(self.base_project_name):
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=False)
elif not self.useForwast:
self.base_project_name = DEFAULT_PROJECT_STEM + self.ecoinventName
old_default = DEFAULT_PROJECT_STEM[:-1]
is_default = ecoinvent_version == "3.3" and ecoinvent_system_model == "cutoff"
if bw2_project_exists(self.base_project_name):
# make sure the search index file is there too
write_search_index(self.base_project_name, self.ecoinventName)
elif is_default and bw2_project_exists(old_default):
upgrade_old_default()
else:
print("Lcopt needs to be set up to integrate with brightway2 - this only needs to be done once per version/system model combo")
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=True)
else:
forwast_autosetup()
return True
def rename(self, newname):
"""change the name of the model (i.e. what the .lcopt file will be saved as)"""
self.name = newname
#def saveAs(self, filename):
# """save the instance as a pickle"""
# pickle.dump(self, open("{}.pickle".format(filename), "wb"))
def save(self):
"""save the instance as a .lcopt file"""
if self.save_option == 'curdir':
model_path = os.path.join(
os.getcwd(),
'{}.lcopt'.format(self.name)
)
else: # default to appdir
model_path = os.path.join(
storage.model_dir,
'{}.lcopt'.format(self.name)
)
model_path = fix_mac_path_escapes(model_path)
with open(model_path, 'wb') as model_file:
pickle.dump(self, model_file)
def load(self, filename):
"""load data from a saved .lcopt file"""
if filename[-6:] != ".lcopt":
filename += ".lcopt"
try:
savedInstance = pickle.load(open("{}".format(filename), "rb"))
except FileNotFoundError:
savedInstance = pickle.load(open(fix_mac_path_escapes(os.path.join(storage.model_dir, "{}".format(filename))), "rb"))
attributes = ['name',
'database',
'params',
'production_params',
'allocation_params',
'ext_params',
'matrix',
'names',
'parameter_sets',
'model_matrices',
'technosphere_matrices',
'leontif_matrices',
'external_databases',
'parameter_map',
'sandbox_positions',
'ecoinventName',
'biosphereName',
'forwastName',
'analysis_settings',
'technosphere_databases',
'biosphere_databases',
'result_set',
'evaluated_parameter_sets',
'useForwast',
'base_project_name',
'save_option',
'allow_allocation',
'ecoinvent_version',
'ecoinvent_system_model',
]
for attr in attributes:
if hasattr(savedInstance, attr):
setattr(self, attr, getattr(savedInstance, attr))
else:
pass
#print ("can't set {}".format(attr))
# use legacy save option if this is missing from the model
if not hasattr(savedInstance, 'save_option'):
setattr(self, 'save_option', LEGACY_SAVE_OPTION)
# figure out ecoinvent version and system model if these are missing from the model
if not hasattr(savedInstance, 'ecoinvent_version') or not hasattr(savedInstance, 'ecoinvent_system_model'):
parts = savedInstance.ecoinventName.split("_")
main_version = parts[0][-1]
sub_version = parts[1]
system_model = parts[2]
#print(parts)
setattr(self, 'ecoinvent_version', '{}.{}'.format(main_version, sub_version))
setattr(self, 'ecoinvent_system_model', system_model)
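The fallback above reconstructs the ecoinvent version and system model from a database name such as ``Ecoinvent3_3_cutoff``. A standalone sketch of that parsing (function name is mine; the name format is taken from the code above):

```python
def parse_ecoinvent_name(ecoinvent_name):
    # "Ecoinvent3_3_cutoff" -> major version is the last character of the
    # first chunk, sub-version and system model are the remaining chunks
    parts = ecoinvent_name.split("_")
    main_version = parts[0][-1]      # "3" from "Ecoinvent3"
    sub_version = parts[1]           # "3"
    system_model = parts[2]          # "cutoff"
    return "{}.{}".format(main_version, sub_version), system_model

version, model = parse_ecoinvent_name("Ecoinvent3_3_cutoff")
# version is "3.3", model is "cutoff"
```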
def create_product (self, name, location='GLO', unit='kg', **kwargs):
"""
Create a new product in the model database
"""
new_product = item_factory(name=name, location=location, unit=unit, type='product', **kwargs)
if not self.exists_in_database(new_product['code']):
self.add_to_database(new_product)
#print ('{} added to database'.format(name))
return self.get_exchange(name)
else:
#print('{} already exists in this database'.format(name))
return False
def create_process(self, name, exchanges, location='GLO', unit='kg'):
"""
Create a new process, including all new exchanges (in brightway2's exchange format) in the model database.
Exchanges must have at least a name, type and unit field
"""
found_exchanges = []
for e in exchanges:
exc_name = e.pop('name', None)
exc_type = e.pop('type', None)
this_exchange = self.get_exchange(exc_name)
if this_exchange is False:
my_unit = e.pop('unit', unit)
this_exchange = self.create_product(exc_name, location=location, unit=my_unit, **e)
found_exchanges.append(exchange_factory(this_exchange, exc_type, 1, 1, '{} exchange of {}'.format(exc_type, exc_name)))
new_process = item_factory(name=name, location=location, unit=unit, type='process', exchanges=found_exchanges)
self.add_to_database(new_process)
self.parameter_scan()
return True
def check_param_function_use(self, param_id):
current_functions = {k: x['function'] for k, x in self.params.items() if x['function'] is not None}
problem_list = []
for k, f in current_functions.items():
if param_id in f:
problem_list.append((k, f))
return problem_list  # list of (param_id, formula) pairs that reference param_id
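``check_param_function_use`` works by plain substring matching on the stored formula strings. A minimal standalone sketch of the same idea (toy data; note that pure substring matching would also flag ids that merely contain the target as a prefix, e.g. ``p_0_10``):

```python
def find_dependent_functions(params, param_id):
    """Return (key, formula) pairs whose formula mentions param_id."""
    current_functions = {k: v['function'] for k, v in params.items()
                         if v['function'] is not None}
    return [(k, f) for k, f in current_functions.items() if param_id in f]

params = {
    'p_0_1': {'function': None},
    'p_1_2': {'function': 'p_0_1 * 2'},
    'p_2_3': {'function': 'e_density + 1'},
}
problems = find_dependent_functions(params, 'p_0_1')
# only 'p_1_2' references 'p_0_1'
```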
def remove_input_link(self, process_code, input_code):
"""
Remove an input (technosphere or biosphere exchange) from a process, resolving all parameter issues
"""
# 1. find correct process
# 2. find correct exchange
# 3. remove that exchange
# 4. check for parameter conflicts?
# 4. run parameter scan to rebuild matrices?
#print(process_code, input_code)
process = self.database['items'][process_code]
exchanges = process['exchanges']
initial_count = len(exchanges)
new_exchanges = [e for e in exchanges if e['input'] != input_code]
product_code = [e['input'] for e in exchanges if e['type'] == 'production'][0]
#print(product_code)
param_id = [k for k, v in self.params.items() if (v['from'] == input_code[1] and v['to'] == product_code[1])][0]
#print (param_id)
problem_functions = self.check_param_function_use(param_id)
if len(problem_functions) != 0:
#print('the following functions have been removed:')
for p in problem_functions:
self.params[p[0]]['function'] = None
#print(p)
process['exchanges'] = new_exchanges
del self.params[param_id]
self.parameter_scan()
return initial_count - len(new_exchanges)
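The exchange-filtering step above can be sketched in isolation: keep every exchange whose ``input`` differs from the code being removed, and report how many were dropped (structure of the exchange dicts assumed from the code above, data made up):

```python
def drop_input(exchanges, input_code):
    """Remove every exchange pointing at input_code; return (kept, n_removed)."""
    kept = [e for e in exchanges if e['input'] != input_code]
    return kept, len(exchanges) - len(kept)

exchanges = [
    {'input': ('db', 'prod_a'), 'type': 'production', 'amount': 1},
    {'input': ('db', 'input_b'), 'type': 'technosphere', 'amount': 2},
    {'input': ('db', 'input_c'), 'type': 'technosphere', 'amount': 3},
]
kept, removed = drop_input(exchanges, ('db', 'input_b'))
# one technosphere exchange removed, two exchanges kept
```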
def unlink_intermediate(self, sourceId, targetId):
"""
Remove a link between two processes
"""
source = self.database['items'][(self.database.get('name'), sourceId)]
target = self.database['items'][(self.database.get('name'), targetId)]
production_exchange = [x['input'] for x in source['exchanges'] if x['type'] == 'production'][0]
new_exchanges = [x for x in target['exchanges'] if x['input'] != production_exchange]
target['exchanges'] = new_exchanges
self.parameter_scan()
return True
def parameter_scan(self):
"""
Scan the database of the model instance to generate and expose parameters.
This is called by other functions when items are added/removed from the model, but can be run by itself if you like
"""
#self.parameter_map = {}
#self.params = OrderedDict()
cr_list = []
items = self.database['items']
#print(items)
for key in items.keys():
i = items[key]
#print(i['name'], i['type'])
if i['type'] == 'product':
cr_list.append(i['code'])
no_products = len(cr_list)
self.names = [self.get_name(x) for x in cr_list]
self.matrix = np.zeros((no_products, no_products))
for key in items.keys():
i = items[key]
if i['type'] == 'process':
inputs = []
#print(i['name'])
#print([(e['comment'], e['type']) for e in i['exchanges']])
for e in i['exchanges']:
if e['type'] == 'production':
col_code = cr_list.index(e['input'][1])
if not 'p_{}_production'.format(col_code) in self.production_params:
self.production_params['p_{}_production'.format(col_code)] = {
'function': None,
'description': 'Production parameter for {}'.format(self.get_name(e['input'][1])),
'unit': self.get_unit(e['input'][1]),
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'production',
}
if not 'p_{}_allocation'.format(col_code) in self.allocation_params:
self.allocation_params['p_{}_allocation'.format(col_code)] = {
'function': None,
'description': 'Allocation parameter for {}'.format(self.get_name(e['input'][1])),
'unit': "% (as decimal)",
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'allocation',
}
elif e['type'] == 'technosphere':
#print(cr_list)
row_code = cr_list.index(e['input'][1])
inputs.append((row_code, e['amount']))
for ip in inputs:
self.matrix[(ip[0], col_code)] = ip[1]
param_check_list = []
for c, column in enumerate(self.matrix.T):
for r, i in enumerate(column):
if i > 0:
p_from = cr_list[r]
p_to = cr_list[c]
coords = (r, c)
from_item_type = self.database['items'][(self.database['name'], p_from)]['lcopt_type']
#print('{}\t| {} --> {}'.format(coords, self.get_name(p_from), self.get_name(p_to)))
param_check_list.append('p_{}_{}'.format(coords[0], coords[1]))
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.params:
self.params['p_{}_{}'.format(coords[0], coords[1])] = {
'function': None,
'normalisation_parameter': '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1]),
'description': 'Input of {} to create {}'.format(self.get_name(p_from), self.get_name(p_to)),
'coords': coords,
'unit': self.get_unit(p_from),
'from': p_from,
'from_name': self.get_name(p_from),
'to': p_to,
'to_name': self.get_name(p_to),
'type': from_item_type,
}
#elif 'normalisation_parameter' not in self.params['p_{}_{}'.format(coords[0], coords[1])].keys():
#print("Adding normalisation_parameter to {}".format('p_{}_{}'.format(coords[0], coords[1])))
#self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
#print('p_{}_{} already exists'.format(coords[0],coords[1]))
else:
pass # print("SOMETHING WRONG HERE\n{}\n".format(self.params['p_{}_{}'.format(coords[0], coords[1])]))
# make sure the parameter is being normalised and allocated properly
self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.parameter_map:
self.parameter_map[(p_from, p_to)] = 'p_{}_{}'.format(coords[0], coords[1])
kill_list = []
for k in self.params.keys():
if k not in param_check_list:
#print("{} may be obsolete".format(k))
kill_list.append(k)
for p in kill_list:
#print("deleting parameter {}".format(p))
del self.params[p]
return True
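``parameter_scan`` builds a square product-by-product matrix from the technosphere exchanges and names each non-zero entry ``p_<row>_<col>``. A pure-Python sketch of that core loop (no numpy, toy process data, function name is mine):

```python
def scan(products, processes):
    """products: ordered list of product codes.
    processes: list of (output_code, [(input_code, amount), ...])."""
    n = len(products)
    matrix = [[0.0] * n for _ in range(n)]
    param_ids = []
    for output_code, inputs in processes:
        col = products.index(output_code)          # column = what is produced
        for input_code, amount in inputs:
            row = products.index(input_code)       # row = what is consumed
            matrix[row][col] = amount
            param_ids.append('p_{}_{}'.format(row, col))
    return matrix, param_ids

products = ['electricity', 'steel', 'widget']
processes = [('widget', [('electricity', 0.5), ('steel', 2.0)])]
matrix, param_ids = scan(products, processes)
```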
def generate_parameter_set_excel_file(self):
"""
Generate an excel file containing the parameter sets in a format you can import into SimaPro Developer.
The file will be called "ParameterSet_<ModelName>_input_file.xlsx"
"""
parameter_sets = self.parameter_sets
p_set = []
filename = "ParameterSet_{}_input_file.xlsx".format(self.name)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
p_set_name = os.path.join(base_dir, filename)
p = self.params
for k in p.keys():
if p[k]['function'] is None:
base_dict = {'id': k, 'name': p[k]['description'], 'unit': p[k]['unit']}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][k]
p_set.append(base_dict)
else:
pass
#print("{} is determined by a function".format(p[k]['description']))
for e in self.ext_params:
base_dict = {'id': '{}'.format(e['name']), 'type': 'external', 'name': e['description'], 'unit': ''}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][e['name']]
p_set.append(base_dict)
df = pd.DataFrame(p_set)
with pd.ExcelWriter(p_set_name, engine='xlsxwriter') as writer:
ps_columns = [k for k in parameter_sets.keys()]
#print (ps_columns)
my_columns = ['name', 'unit', 'id']
my_columns.extend(ps_columns)
#print (my_columns)
#print(df)
df.to_excel(writer, sheet_name=self.name, columns=my_columns, index=False, merge_cells=False)
return p_set_name
def add_parameter(self, param_name, description=None, default=0, unit=None):
"""
Add a global parameter to the database that can be accessed by functions
"""
if description is None:
description = "Parameter called {}".format(param_name)
if unit is None:
unit = "-"
name_check = lambda x: x['name'] == param_name
name_check_list = list(filter(name_check, self.ext_params))
if len(name_check_list) == 0:
self.ext_params.append({'name': param_name, 'description': description, 'default': default, 'unit': unit})
else:
print('{} already exists - choose a different name'.format(param_name))
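The duplicate-name guard in ``add_parameter`` can be sketched on its own, using the same list-of-dicts shape as ``self.ext_params`` (return values are my addition for testability; the method above prints instead):

```python
def add_global_parameter(ext_params, name, description=None, default=0, unit='-'):
    if description is None:
        description = "Parameter called {}".format(name)
    if any(p['name'] == name for p in ext_params):
        return False  # name already taken, nothing appended
    ext_params.append({'name': name, 'description': description,
                       'default': default, 'unit': unit})
    return True

ext_params = []
first = add_global_parameter(ext_params, 'lifetime', default=20, unit='years')
second = add_global_parameter(ext_params, 'lifetime')  # rejected duplicate
```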
def list_parameters_as_df(self):
"""
Only really useful when running from a jupyter notebook.
Lists the parameters in the model in a pandas dataframe
Columns: id, matrix coordinates, description, function
"""
to_df = []
for i, e in enumerate(self.ext_params):
row = {}
row['id'] = e['name']
row['coords'] = "n/a"
row['description'] = e['description']
row['function'] = "n/a"
to_df.append(row)
for pk in self.params:
p = self.params[pk]
row = {}
row['id'] = pk
row['coords'] = p['coords']
row['description'] = p['description']
row['function'] = p['function']
to_df.append(row)
df = pd.DataFrame(to_df)
return df
def import_external_db(self, db_file, db_type=None):
"""
Import an external database for use in lcopt
db_type must be one of ``technosphere`` or ``biosphere``
The best way to 'obtain' an external database is to 'export' it from brightway as a pickle file
e.g.::
import brightway2 as bw
bw.projects.set_current('MyModel')
db = bw.Database('MyDatabase')
db_as_dict = db.load()
import pickle
with open('MyExport.pickle', 'wb') as f:
pickle.dump(db_as_dict, f)
NOTE: The Ecoinvent cutoff 3.3 database and the full biosphere database are included in the lcopt model as standard - no need to import those
This can be useful if you have your own methods which require new biosphere flows that you want to analyse using lcopt
"""
db = pickle.load(open("{}.pickle".format(db_file), "rb"))
name = list(db.keys())[0][0]
new_db = {'items': db, 'name': name}
self.external_databases.append(new_db)
if db_type is None: # Assume its a technosphere database
db_type = 'technosphere'
if db_type == 'technosphere':
self.technosphere_databases.append(name)
elif db_type == 'biosphere':
self.biosphere_databases.append(name)
else:
raise Exception("Database type must be 'technosphere' or 'biosphere'")
def search_databases(self, search_term, location=None, markets_only=False, databases_to_search=None, allow_internal=False):
"""
Search external databases linked to your lcopt model.
To restrict the search to particular databases (e.g. technosphere or biosphere only) use a list of database names in the ``database_to_search`` variable
"""
dict_list = []
if allow_internal:
internal_dict = {}
for k, v in self.database['items'].items():
if v.get('lcopt_type') == 'intermediate':
internal_dict[k] = v
dict_list.append(internal_dict)
if databases_to_search is None:
#Search all of the databases available
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases])
dict_list += [x['items'] for x in self.external_databases]
else:
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
dict_list += [x['items'] for x in self.external_databases if x['name'] in databases_to_search]
data = Dictionaries(*dict_list)
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
query = Query()
if markets_only:
market_filter = Filter("name", "has", "market for")
query.add(market_filter)
if location is not None:
location_filter = Filter("location", "is", location)
query.add(location_filter)
query.add(Filter("name", "ihas", search_term))
result = query(data)
return result
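``search_databases`` delegates to bw2data's ``Query``/``Filter`` machinery; combined, the filters amount to the predicate sketched below (a hand-rolled stand-in with made-up items, not the real bw2data API):

```python
def search_items(items, term, location=None, markets_only=False):
    results = {}
    for key, item in items.items():
        name = item.get('name', '')
        if term.lower() not in name.lower():
            continue                      # "ihas": case-insensitive contains
        if markets_only and 'market for' not in name:
            continue                      # "has market for"
        if location is not None and item.get('location') != location:
            continue                      # "location is"
        results[key] = item
    return results

items = {
    ('db', '1'): {'name': 'market for steel', 'location': 'GLO'},
    ('db', '2'): {'name': 'steel production', 'location': 'DE'},
    ('db', '3'): {'name': 'aluminium production', 'location': 'DE'},
}
hits = search_items(items, 'steel', markets_only=True)
# only the market activity survives all three filters
```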
def database_to_SimaPro_csv(self):
"""
Export the lcopt model as a SimaPro csv file.
The file will be called "<ModelName>_database_export.csv"
"""
self.parameter_scan()
csv_args = {}
csv_args['processes'] = []
db = self.database['items']
product_filter = lambda x: db[x]['type'] == 'product'
process_filter = lambda x: db[x]['type'] == 'process'
processes = list(filter(process_filter, db))
products = list(filter(product_filter, db))
created_exchanges = []
project_input_params = []
project_calc_params = []
for k in processes:
item = db[k]
current = {}
current['name'] = item['name']
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = item['unit']
current['exchanges'] = []
process_params = []
production_filter = lambda x: x['type'] == 'production'
output_code = list(filter(production_filter, item['exchanges']))[0]['input'][1]
for e in item['exchanges']:
if e['type'] == 'technosphere':
this_exchange = {}
this_code = e['input'][1]
formatted_name = self.get_name(this_code)
this_exchange['formatted_name'] = formatted_name
param_key = (this_code, output_code)
#print(param_key)
#param_check = (formatted_name, item['name'])
this_param = self.parameter_map[param_key]
process_params.append(this_param)
this_exchange['amount'] = this_param
this_exchange['unit'] = self.get_unit(this_code)
current['exchanges'].append(this_exchange)
elif e['type'] == 'production':
this_code = e['input'][1]
name = self.get_name(this_code)
current['output_name'] = name
created_exchanges.append(name)
# process parameters
for p in process_params:
if self.params[p]['function'] is None:
project_input_params.append({'name': p, 'comment': self.params[p]['description']})
else:
project_calc_params.append({'name': p, 'comment': self.params[p]['description'], 'formula': self.params[p]['function']})
csv_args['processes'].append(current)
for k in products:
this_item = db[k]
this_name = this_item['name']
if this_item['name'] in created_exchanges:
#print ('{} already created'.format(this_name))
pass
else:
#print ('Need to create {}'.format(this_name))
current = {}
current['name'] = this_name
current['output_name'] = this_name
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = this_item['unit']
#current['exchanges'] = []
if 'ext_link' in this_item.keys():
ext_link = this_item['ext_link']
if ext_link[0] != self.database['name']:
db_filter = lambda x: x['name'] == ext_link[0]
extdb = list(filter(db_filter, self.external_databases))[0]['items']
ext_item = extdb[ext_link]
if ext_link[0] != self.biosphereName:
ref_prod = ext_item['reference product']
name = ext_item['name'].replace(" " + ref_prod, "")
location = ext_item['location']
system_model = "Alloc Def"
process_type = "U"
unit = unnormalise_unit(ext_item['unit'])
simaPro_name = "{} {{{}}}| {} | {}, {}".format(ref_prod.capitalize(), location, name, system_model, process_type)
#print ('{} has an external link to {}'.format(this_name, simaPro_name))
current['exchanges'] = [{'formatted_name': simaPro_name, 'unit': unit, 'amount': 1}]
else:
#print('{} has a biosphere exchange - need to sort this out'.format(this_name))
#print(ext_item)
unit = unnormalise_unit(ext_item['unit'])
formatted_name = ext_item['name']
if 'air' in ext_item['categories']:
current['air_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to air'.format(formatted_name)}]
elif 'water' in ext_item['categories']:
current['water_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to water'.format(formatted_name)}]
elif 'soil' in ext_item['categories']:
current['soil_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to soil'.format(formatted_name)}]
else:
print("{} has a biosphere exchange that isn't to air, water or soil".format(this_name))
print(ext_item)
else:
warnings.warn('{} has NO internal or external link - it is burden free'.format(this_name))
csv_args['processes'].append(current)
created_exchanges.append(this_name)
#print(csv_args)
#print(created_exchanges)
csv_args['project'] = {}
#NOTE - currently external parameters can only be constants
csv_args['project']['calculated_parameters'] = project_calc_params
#add the external parameters to the input parameter list
for p in self.ext_params:
project_input_params.append({'name': p['name'], 'comment': p['description'], 'default': p['default']})
csv_args['project']['input_parameters'] = project_input_params
#print (csv_args)
env = Environment(
loader=PackageLoader('lcopt', 'templates'),
)
filename = "{}_database_export.csv".format(self.name.replace(" ", "_"))
csv_template = env.get_template('export.csv')
output = csv_template.render(**csv_args)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
efn = os.path.join(base_dir, filename)
with open(efn, "w") as f:
f.write(output)
return efn
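Two small formatting conventions in the export above are easy to check in isolation: the SimaPro process name (``Reference product {LOCATION}| activity | system model, type``) and the 19-character process id (first 8 characters of the padded model name plus an 11-digit zero-padded number). A sketch with a fixed number standing in for ``randint``:

```python
def simapro_name(ref_prod, location, activity,
                 system_model="Alloc Def", process_type="U"):
    # mirrors the "{} {{{}}}| {} | {}, {}" template used in the export
    return "{} {{{}}}| {} | {}, {}".format(
        ref_prod.capitalize(), location, activity, system_model, process_type)

def process_id(model_name, number):
    # pad/truncate the model name to 8 chars, append an 11-digit number
    return (model_name.replace(" ", "") + "XXXXXXXX")[:8] + \
           ('00000000000' + str(number))[-11:]

name = simapro_name("steel", "GLO", "steel production")
pid = process_id("My Model", 42)
# pid is always 8 + 11 = 19 characters long
```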
# << Flask >> #
def launch_interact(self): # pragma: no cover
"""
This is probably the most important method in the model - you use it to launch the GUI
"""
my_flask = FlaskSandbox(self)
my_flask.run()
# << Brightway2 >> #
def analyse(self, demand_item, demand_item_code):
""" Run the analyis of the model
Doesn't return anything, but creates a new item ``LcoptModel.result_set`` containing the results
"""
my_analysis = Bw2Analysis(self)
self.result_set = my_analysis.run_analyses(demand_item, demand_item_code, **self.analysis_settings)
return True
# << Disclosures >> #
def export_disclosure(self, parameter_set=None, folder_path=None):
return export_disclosure(self, parameter_set, folder_path)
|
pjamesjoyce/lcopt | lcopt/model.py | LcoptModel.analyse | python | def analyse(self, demand_item, demand_item_code):
my_analysis = Bw2Analysis(self)
self.result_set = my_analysis.run_analyses(demand_item, demand_item_code, **self.analysis_settings)
return True | Run the analysis of the model
Returns True on completion and stores the results in a new attribute ``LcoptModel.result_set`` | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/model.py#L989-L996 | [
" def run_analyses(self, demand_item, demand_item_code, amount=1, methods=[('IPCC 2013', 'climate change', 'GWP 100a')], pie_cutoff=0.05, **kwargs):\n\n ready = self.setup_bw2()\n name = self.bw2_database_name\n bw2_ps_name = \"{}_all\".format(name)\n\n if ready:\n if name ... | class LcoptModel(object):
"""
This is the base model class.
To create a new model, enter a name e.g. ``model = LcoptModel('My_Model')``
To load an existing model use the ``load`` option e.g. ``model = LcoptModel(load = 'My_Model')``
"""
ecoinventPath = os.path.join(asset_path, self.ecoinventFilename)
biospherePath = os.path.join(asset_path, self.biosphereFilename)
forwastPath = os.path.join(asset_path, self.forwastFilename)
# Try and initialise the external databases if they're not there already
if self.useForwast:
if self.forwastName not in [x['name'] for x in self.external_databases]:
self.import_external_db(forwastPath, 'technosphere')
else:
if self.ecoinventName not in [x['name'] for x in self.external_databases]:
self.import_external_db(ecoinventPath, 'technosphere')
if self.biosphereName not in [x['name'] for x in self.external_databases]:
self.import_external_db(biospherePath, 'biosphere')
# create partial version of io functions
self.add_to_database = partial(add_to_specified_database, database=self.database)
self.get_exchange = partial(get_exchange_from_database, database=self.database)
self.exists_in_database = partial(exists_in_specific_database, database=self.database)
self.get_name = partial(get_exchange_name_from_database, database=self.database)
self.get_unit = partial(get_exchange_unit_from_database, database=self.database)
self.parameter_scan()
def lcopt_setup(self, ei_username, ei_password, write_config, ecoinvent_version, ecoinvent_system_model, autosetup):
if not autosetup:
return False
if storage.project_type == 'single':
if self.useForwast:
forwast_autosetup()
else:
self.base_project_name = storage.single_project_name
#if bw2_project_exists(self.base_project_name):
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=False)
elif not self.useForwast:
self.base_project_name = DEFAULT_PROJECT_STEM + self.ecoinventName
old_default = DEFAULT_PROJECT_STEM[:-1]
is_default = ecoinvent_version == "3.3" and ecoinvent_system_model == "cutoff"
if bw2_project_exists(self.base_project_name):
# make sure the search index file is there too
write_search_index(self.base_project_name, self.ecoinventName)
elif is_default and bw2_project_exists(old_default):
upgrade_old_default()
else:
print("Lcopt needs to be set up to integrate with brightway2 - this only needs to be done once per version/system model combo")
lcopt_bw2_autosetup(ei_username = ei_username, ei_password = ei_password, write_config=write_config, ecoinvent_version=ecoinvent_version, ecoinvent_system_model = ecoinvent_system_model, overwrite=True)
else:
forwast_autosetup()
return True
def rename(self, newname):
"""change the name of the model (i.e. what the .lcopt file will be saved as)"""
self.name = newname
#def saveAs(self, filename):
# """save the instance as a pickle"""
# pickle.dump(self, open("{}.pickle".format(filename), "wb"))
def save(self):
"""save the instance as a .lcopt file"""
if self.save_option == 'curdir':
model_path = os.path.join(
os.getcwd(),
'{}.lcopt'.format(self.name)
)
else: # default to appdir
model_path = os.path.join(
storage.model_dir,
'{}.lcopt'.format(self.name)
)
model_path = fix_mac_path_escapes(model_path)
with open(model_path, 'wb') as model_file:
pickle.dump(self, model_file)
def load(self, filename):
"""load data from a saved .lcopt file"""
if filename[-6:] != ".lcopt":
filename += ".lcopt"
try:
savedInstance = pickle.load(open("{}".format(filename), "rb"))
except FileNotFoundError:
savedInstance = pickle.load(open(fix_mac_path_escapes(os.path.join(storage.model_dir, "{}".format(filename))), "rb"))
attributes = ['name',
'database',
'params',
'production_params',
'allocation_params',
'ext_params',
'matrix',
'names',
'parameter_sets',
'model_matrices',
'technosphere_matrices',
'leontif_matrices',
'external_databases',
'parameter_map',
'sandbox_positions',
'ecoinventName',
'biosphereName',
'forwastName',
'analysis_settings',
'technosphere_databases',
'biosphere_databases',
'result_set',
'evaluated_parameter_sets',
'useForwast',
'base_project_name',
'save_option',
'allow_allocation',
'ecoinvent_version',
'ecoinvent_system_model',
]
for attr in attributes:
if hasattr(savedInstance, attr):
setattr(self, attr, getattr(savedInstance, attr))
else:
pass
#print ("can't set {}".format(attr))
# use legacy save option if this is missing from the model
if not hasattr(savedInstance, 'save_option'):
setattr(self, 'save_option', LEGACY_SAVE_OPTION)
# figure out ecoinvent version and system model if these are missing from the model
if not hasattr(savedInstance, 'ecoinvent_version') or not hasattr(savedInstance, 'ecoinvent_system_model'):
parts = savedInstance.ecoinventName.split("_")
main_version = parts[0][-1]
sub_version = parts[1]
system_model = parts[2]
#print(parts)
setattr(self, 'ecoinvent_version', '{}.{}'.format(main_version, sub_version))
setattr(self, 'ecoinvent_system_model', system_model)
def create_product (self, name, location='GLO', unit='kg', **kwargs):
"""
Create a new product in the model database
"""
new_product = item_factory(name=name, location=location, unit=unit, type='product', **kwargs)
if not self.exists_in_database(new_product['code']):
self.add_to_database(new_product)
#print ('{} added to database'.format(name))
return self.get_exchange(name)
else:
#print('{} already exists in this database'.format(name))
return False
def create_process(self, name, exchanges, location='GLO', unit='kg'):
"""
Create a new process, including all new exchanges (in brightway2's exchange format) in the model database.
Exchanges must have at least a name, type and unit field
"""
found_exchanges = []
for e in exchanges:
exc_name = e.pop('name', None)
exc_type = e.pop('type', None)
this_exchange = self.get_exchange(exc_name)
if this_exchange is False:
my_unit = e.pop('unit', unit)
this_exchange = self.create_product(exc_name, location=location, unit=my_unit, **e)
found_exchanges.append(exchange_factory(this_exchange, exc_type, 1, 1, '{} exchange of {}'.format(exc_type, exc_name)))
new_process = item_factory(name=name, location=location, unit=unit, type='process', exchanges=found_exchanges)
self.add_to_database(new_process)
self.parameter_scan()
return True
def check_param_function_use(self, param_id):
current_functions = {k: x['function'] for k, x in self.params.items() if x['function'] is not None}
problem_list = []
for k, f in current_functions.items():
if param_id in f:
problem_list.append((k, f))
return problem_list
def remove_input_link(self, process_code, input_code):
"""
Remove an input (technosphere or biosphere exchange) from a process, resolving all parameter issues
"""
# 1. find correct process
# 2. find correct exchange
# 3. remove that exchange
# 4. check for parameter conflicts?
# 4. run parameter scan to rebuild matrices?
#print(process_code, input_code)
process = self.database['items'][process_code]
exchanges = process['exchanges']
initial_count = len(exchanges)
new_exchanges = [e for e in exchanges if e['input'] != input_code]
product_code = [e['input'] for e in exchanges if e['type'] == 'production'][0]
#print(product_code)
param_id = [k for k, v in self.params.items() if (v['from'] == input_code[1] and v['to'] == product_code[1])][0]
#print (param_id)
problem_functions = self.check_param_function_use(param_id)
if len(problem_functions) != 0:
#print('the following functions have been removed:')
for p in problem_functions:
self.params[p[0]]['function'] = None
#print(p)
process['exchanges'] = new_exchanges
del self.params[param_id]
self.parameter_scan()
return initial_count - len(new_exchanges)
def unlink_intermediate(self, sourceId, targetId):
"""
Remove a link between two processes
"""
source = self.database['items'][(self.database.get('name'), sourceId)]
target = self.database['items'][(self.database.get('name'), targetId)]
production_exchange = [x['input'] for x in source['exchanges'] if x['type'] == 'production'][0]
new_exchanges = [x for x in target['exchanges'] if x['input'] != production_exchange]
target['exchanges'] = new_exchanges
self.parameter_scan()
return True
def parameter_scan(self):
"""
Scan the database of the model instance to generate and expose parameters.
This is called by other functions when items are added/removed from the model, but can be run by itself if you like
"""
#self.parameter_map = {}
#self.params = OrderedDict()
cr_list = []
items = self.database['items']
#print(items)
for key in items.keys():
i = items[key]
#print(i['name'], i['type'])
if i['type'] == 'product':
cr_list.append(i['code'])
no_products = len(cr_list)
self.names = [self.get_name(x) for x in cr_list]
self.matrix = np.zeros((no_products, no_products))
for key in items.keys():
i = items[key]
if i['type'] == 'process':
inputs = []
#print(i['name'])
#print([(e['comment'], e['type']) for e in i['exchanges']])
for e in i['exchanges']:
if e['type'] == 'production':
col_code = cr_list.index(e['input'][1])
if not 'p_{}_production'.format(col_code) in self.production_params:
self.production_params['p_{}_production'.format(col_code)] = {
'function': None,
'description': 'Production parameter for {}'.format(self.get_name(e['input'][1])),
'unit': self.get_unit(e['input'][1]),
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'production',
}
if not 'p_{}_allocation'.format(col_code) in self.allocation_params:
self.allocation_params['p_{}_allocation'.format(col_code)] = {
'function': None,
'description': 'Allocation parameter for {}'.format(self.get_name(e['input'][1])),
'unit': "% (as decimal)",
'from': e['input'],
'from_name': self.get_name(e['input'][1]),
'type': 'allocation',
}
elif e['type'] == 'technosphere':
#print(cr_list)
row_code = cr_list.index(e['input'][1])
inputs.append((row_code, e['amount']))
for ip in inputs:
self.matrix[(ip[0], col_code)] = ip[1]
param_check_list = []
for c, column in enumerate(self.matrix.T):
for r, i in enumerate(column):
if i > 0:
p_from = cr_list[r]
p_to = cr_list[c]
coords = (r, c)
from_item_type = self.database['items'][(self.database['name'], p_from)]['lcopt_type']
#print('{}\t| {} --> {}'.format(coords, self.get_name(p_from), self.get_name(p_to)))
param_check_list.append('p_{}_{}'.format(coords[0], coords[1]))
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.params:
self.params['p_{}_{}'.format(coords[0], coords[1])] = {
'function': None,
'normalisation_parameter': '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1]),
'description': 'Input of {} to create {}'.format(self.get_name(p_from), self.get_name(p_to)),
'coords': coords,
'unit': self.get_unit(p_from),
'from': p_from,
'from_name': self.get_name(p_from),
'to': p_to,
'to_name': self.get_name(p_to),
'type': from_item_type,
}
#elif 'normalisation_parameter' not in self.params['p_{}_{}'.format(coords[0], coords[1])].keys():
#print("Adding normalisation_parameter to {}".format('p_{}_{}'.format(coords[0], coords[1])))
#self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
#print('p_{}_{} already exists'.format(coords[0],coords[1]))
else:
pass # print("SOMETHING WRONG HERE\n{}\n".format(self.params['p_{}_{}'.format(coords[0], coords[1])]))
# make sure the parameter is being normalised and allocated properly
self.params['p_{}_{}'.format(coords[0], coords[1])]['normalisation_parameter'] = '(p_{}_production / p_{}_allocation)'.format(coords[1], coords[1])
if not 'p_{}_{}'.format(coords[0], coords[1]) in self.parameter_map:
self.parameter_map[(p_from, p_to)] = 'p_{}_{}'.format(coords[0], coords[1])
kill_list = []
for k in self.params.keys():
if k not in param_check_list:
#print("{} may be obsolete".format(k))
kill_list.append(k)
for p in kill_list:
#print("deleting parameter {}".format(p))
del self.params[p]
return True
def generate_parameter_set_excel_file(self):
"""
Generate an Excel file containing the parameter sets in a format you can import into SimaPro Developer.
The file will be called "ParameterSet_<ModelName>_input_file.xlsx"
"""
parameter_sets = self.parameter_sets
p_set = []
filename = "ParameterSet_{}_input_file.xlsx".format(self.name)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
p_set_name = os.path.join(base_dir, filename)
p = self.params
for k in p.keys():
if p[k]['function'] is None:
base_dict = {'id': k, 'name': p[k]['description'], 'unit': p[k]['unit']}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][k]
p_set.append(base_dict)
else:
pass
#print("{} is determined by a function".format(p[k]['description']))
for e in self.ext_params:
base_dict = {'id': '{}'.format(e['name']), 'type': 'external', 'name': e['description'], 'unit': ''}
for s in parameter_sets.keys():
base_dict[s] = parameter_sets[s][e['name']]
p_set.append(base_dict)
df = pd.DataFrame(p_set)
with pd.ExcelWriter(p_set_name, engine='xlsxwriter') as writer:
ps_columns = [k for k in parameter_sets.keys()]
#print (ps_columns)
my_columns = ['name', 'unit', 'id']
my_columns.extend(ps_columns)
#print (my_columns)
#print(df)
df.to_excel(writer, sheet_name=self.name, columns=my_columns, index=False, merge_cells=False)
return p_set_name
def add_parameter(self, param_name, description=None, default=0, unit=None):
"""
Add a global parameter to the database that can be accessed by functions
"""
if description is None:
description = "Parameter called {}".format(param_name)
if unit is None:
unit = "-"
name_check = lambda x: x['name'] == param_name
name_check_list = list(filter(name_check, self.ext_params))
if len(name_check_list) == 0:
self.ext_params.append({'name': param_name, 'description': description, 'default': default, 'unit': unit})
else:
print('{} already exists - choose a different name'.format(param_name))
def list_parameters_as_df(self):
"""
Only really useful when running from a Jupyter notebook.
Lists the parameters in the model in a pandas DataFrame.
Columns: id, matrix coordinates, description, function
"""
to_df = []
for i, e in enumerate(self.ext_params):
row = {}
row['id'] = e['name']
row['coords'] = "n/a"
row['description'] = e['description']
row['function'] = "n/a"
to_df.append(row)
for pk in self.params:
p = self.params[pk]
row = {}
row['id'] = pk
row['coords'] = p['coords']
row['description'] = p['description']
row['function'] = p['function']
to_df.append(row)
df = pd.DataFrame(to_df)
return df
def import_external_db(self, db_file, db_type=None):
"""
Import an external database for use in lcopt
db_type must be one of ``technosphere`` or ``biosphere``
The best way to 'obtain' an external database is to 'export' it from brightway as a pickle file
e.g.::
import brightway2 as bw
bw.projects.set_current('MyModel')
db = bw.Database('MyDatabase')
db_as_dict = db.load()
import pickle
with open('MyExport.pickle', 'wb') as f:
pickle.dump(db_as_dict, f)
NOTE: The Ecoinvent cutoff 3.3 database and the full biosphere database are included in the lcopt model as standard - no need to import those
This can be useful if you have your own methods which require new biosphere flows that you want to analyse using lcopt
"""
db = pickle.load(open("{}.pickle".format(db_file), "rb"))
name = list(db.keys())[0][0]
new_db = {'items': db, 'name': name}
self.external_databases.append(new_db)
if db_type is None: # Assume its a technosphere database
db_type = 'technosphere'
if db_type == 'technosphere':
self.technosphere_databases.append(name)
elif db_type == 'biosphere':
self.biosphere_databases.append(name)
else:
raise Exception("Database type must be 'technosphere' or 'biosphere'")
def search_databases(self, search_term, location=None, markets_only=False, databases_to_search=None, allow_internal=False):
"""
Search external databases linked to your lcopt model.
To restrict the search to particular databases (e.g. technosphere or biosphere only) use a list of database names in the ``databases_to_search`` variable
"""
dict_list = []
if allow_internal:
internal_dict = {}
for k, v in self.database['items'].items():
if v.get('lcopt_type') == 'intermediate':
internal_dict[k] = v
dict_list.append(internal_dict)
if databases_to_search is None:
#Search all of the databases available
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases])
dict_list += [x['items'] for x in self.external_databases]
else:
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
dict_list += [x['items'] for x in self.external_databases if x['name'] in databases_to_search]
data = Dictionaries(*dict_list)
#data = Dictionaries(self.database['items'], *[x['items'] for x in self.external_databases if x['name'] in databases_to_search])
query = Query()
if markets_only:
market_filter = Filter("name", "has", "market for")
query.add(market_filter)
if location is not None:
location_filter = Filter("location", "is", location)
query.add(location_filter)
query.add(Filter("name", "ihas", search_term))
result = query(data)
return result
def database_to_SimaPro_csv(self):
"""
Export the lcopt model as a SimaPro csv file.
The file will be called "<ModelName>_database_export.csv"
"""
self.parameter_scan()
csv_args = {}
csv_args['processes'] = []
db = self.database['items']
product_filter = lambda x: db[x]['type'] == 'product'
process_filter = lambda x: db[x]['type'] == 'process'
processes = list(filter(process_filter, db))
products = list(filter(product_filter, db))
created_exchanges = []
project_input_params = []
project_calc_params = []
for k in processes:
item = db[k]
current = {}
current['name'] = item['name']
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = item['unit']
current['exchanges'] = []
process_params = []
production_filter = lambda x: x['type'] == 'production'
output_code = list(filter(production_filter, item['exchanges']))[0]['input'][1]
for e in item['exchanges']:
if e['type'] == 'technosphere':
this_exchange = {}
this_code = e['input'][1]
formatted_name = self.get_name(this_code)
this_exchange['formatted_name'] = formatted_name
param_key = (this_code, output_code)
#print(param_key)
#param_check = (formatted_name, item['name'])
this_param = self.parameter_map[param_key]
process_params.append(this_param)
this_exchange['amount'] = this_param
this_exchange['unit'] = self.get_unit(this_code)
current['exchanges'].append(this_exchange)
elif e['type'] == 'production':
this_code = e['input'][1]
name = self.get_name(this_code)
current['output_name'] = name
created_exchanges.append(name)
# process parameters
for p in process_params:
if self.params[p]['function'] is None:
project_input_params.append({'name': p, 'comment': self.params[p]['description']})
else:
project_calc_params.append({'name': p, 'comment': self.params[p]['description'], 'formula': self.params[p]['function']})
csv_args['processes'].append(current)
for k in products:
this_item = db[k]
this_name = this_item['name']
if this_item['name'] in created_exchanges:
#print ('{} already created'.format(this_name))
pass
else:
#print ('Need to create {}'.format(this_name))
current = {}
current['name'] = this_name
current['output_name'] = this_name
current['id'] = (self.name.replace(" ", "") + "XXXXXXXX")[:8] + ('00000000000' + str(randint(1, 99999999999)))[-11:]
current['unit'] = this_item['unit']
#current['exchanges'] = []
if 'ext_link' in this_item.keys():
ext_link = this_item['ext_link']
if ext_link[0] != self.database['name']:
db_filter = lambda x: x['name'] == ext_link[0]
extdb = list(filter(db_filter, self.external_databases))[0]['items']
ext_item = extdb[ext_link]
if ext_link[0] != self.biosphereName:
ref_prod = ext_item['reference product']
name = ext_item['name'].replace(" " + ref_prod, "")
location = ext_item['location']
system_model = "Alloc Def"
process_type = "U"
unit = unnormalise_unit(ext_item['unit'])
simaPro_name = "{} {{{}}}| {} | {}, {}".format(ref_prod.capitalize(), location, name, system_model, process_type)
#print ('{} has an external link to {}'.format(this_name, simaPro_name))
current['exchanges'] = [{'formatted_name': simaPro_name, 'unit': unit, 'amount': 1}]
else:
#print('{} has a biosphere exchange - need to sort this out'.format(this_name))
#print(ext_item)
unit = unnormalise_unit(ext_item['unit'])
formatted_name = ext_item['name']
if 'air' in ext_item['categories']:
current['air_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to air'.format(formatted_name)}]
elif 'water' in ext_item['categories']:
current['water_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to water'.format(formatted_name)}]
elif 'soil' in ext_item['categories']:
current['soil_emissions'] = [{'formatted_name': formatted_name, 'subcompartment': '', 'unit': unit, 'amount': 1, 'comment': 'emission of {} to soil'.format(formatted_name)}]
else:
print("{} has a biosphere exchange that isn't to air, water or soil".format(this_name))
print(ext_item)
else:
warnings.warn('{} has NO internal or external link - it is burden free'.format(this_name))
csv_args['processes'].append(current)
created_exchanges.append(this_name)
#print(csv_args)
#print(created_exchanges)
csv_args['project'] = {}
#NOTE - currently external parameters can only be constants
csv_args['project']['calculated_parameters'] = project_calc_params
#add the external parameters to the input parameter list
for p in self.ext_params:
project_input_params.append({'name': p['name'], 'comment': p['description'], 'default': p['default']})
csv_args['project']['input_parameters'] = project_input_params
#print (csv_args)
env = Environment(
loader=PackageLoader('lcopt', 'templates'),
)
filename = "{}_database_export.csv".format(self.name.replace(" ", "_"))
csv_template = env.get_template('export.csv')
output = csv_template.render(**csv_args)
if self.save_option == 'curdir':
base_dir = os.getcwd()
else:
base_dir = os.path.join(storage.simapro_dir, self.name.replace(" ", "_"))
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
efn = os.path.join(base_dir, filename)
with open(efn, "w") as f:
f.write(output)
return efn
# << Flask >> #
def launch_interact(self): # pragma: no cover
"""
This is probably the most important method in the model - you use it to launch the GUI
"""
my_flask = FlaskSandbox(self)
my_flask.run()
# << Brightway2 >> #
def export_to_bw2(self):
"""
Export the lcopt model in the native brightway 2 format
returns name, database
to use it to export, then import to brightway::
name, db = model.export_to_bw2()
import brightway2 as bw
bw.projects.set_current('MyProject')
new_db = bw.Database(name)
new_db.write(db)
new_db.process()
"""
my_exporter = Bw2Exporter(self)
name, bw2db = my_exporter.export_to_bw2()
return name, bw2db
# << Disclosures >> #
def export_disclosure(self, parameter_set=None, folder_path=None):
return export_disclosure(self, parameter_set, folder_path)
|
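The `LcoptModel.parameter_scan` method in the record above derives parameter ids directly from matrix coordinates: every non-zero technosphere input at `(r, c)` becomes a parameter named `p_{r}_{c}`, normalised by the production and allocation parameters of the output column `c`. A minimal standalone sketch of that naming scheme (the 3x3 matrix below is invented for illustration and stands in for the real database-driven matrix):

```python
# Hypothetical sketch of the p_{r}_{c} naming convention from parameter_scan.
# matrix[r][c] = amount of product r consumed to make one unit of product c.
from collections import OrderedDict

matrix = [
    [0.0, 2.0, 0.0],
    [0.0, 0.0, 0.5],
    [0.0, 0.0, 0.0],
]

params = OrderedDict()
for c in range(len(matrix)):        # iterate over output columns, like matrix.T
    for r in range(len(matrix)):    # iterate over input rows
        if matrix[r][c] > 0:
            params['p_{}_{}'.format(r, c)] = {
                'coords': (r, c),
                'normalisation_parameter':
                    '(p_{}_production / p_{}_allocation)'.format(c, c),
            }

print(list(params))  # -> ['p_0_1', 'p_1_2']
```

In the real method the same loop runs over `self.matrix.T` built from the model database, and each parameter additionally records names, units and the from/to exchange codes.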
pjamesjoyce/lcopt | lcopt/bw2_export.py | Bw2Exporter.evaluate_parameter_sets | python | def evaluate_parameter_sets(self):
#parameter_interpreter = ParameterInterpreter(self.modelInstance)
#parameter_interpreter.evaluate_parameter_sets()
self.parameter_interpreter = LcoptParameterSet(self.modelInstance)
self.modelInstance.evaluated_parameter_sets = self.parameter_interpreter.evaluated_parameter_sets
self.modelInstance.bw2_export_params = self.parameter_interpreter.bw2_export_params | This takes the parameter sets of the model instance and evaluates any formulas using the parameter values to create a
fixed, full set of parameters for each parameter set in the model | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/bw2_export.py#L18-L28 | null | class Bw2Exporter():
def __init__(self, modelInstance):
self.modelInstance = modelInstance
# set up the parameter hook dictionary
self.evaluate_parameter_sets()
self.create_parameter_map()
self.modelInstance.parameter_map = self.parameter_map
def create_parameter_map(self):
"""
Creates a parameter map which takes a tuple of the exchange 'from' and exchange 'to' codes
and returns the parameter name for that exchange
"""
names = self.modelInstance.names
db = self.modelInstance.database['items']
parameter_map = {}
def get_names_index(my_thing):
return[i for i, x in enumerate(names) if x == my_thing][0]
for k, this_item in db.items():
if this_item['type'] == 'process':
production_id = [x['input'] for x in this_item['exchanges'] if x['type'] == 'production'][0]
input_ids = [x['input'] for x in this_item['exchanges'] if x['type'] == 'technosphere']
production_index = get_names_index(db[production_id]['name'])
input_indexes = [get_names_index(db[x]['name']) for x in input_ids]
parameter_ids = ['n_p_{}_{}'.format(x, production_index) for x in input_indexes]
parameter_map_items = {(input_ids[n], k): parameter_ids[n] for n, x in enumerate(input_ids)}
#check = [self.modelInstance.params[x]['description'] for x in parameter_ids]
#print(check)
#print(parameter_map_items)
parameter_map.update(parameter_map_items)
self.parameter_map = parameter_map
#return parameter_map
def get_output(self, process_id):
exchanges = self.modelInstance.database['items'][process_id]['exchanges']
production_filter = lambda x: x['type'] == 'production'
code = list(filter(production_filter, exchanges))[0]['input']
# name = self.modelInstance.database['items'][code]['name']
return code
def export_to_bw2(self):
db = self.modelInstance.database['items']
name = self.modelInstance.database['name']
ext_db_names = [x['name'] for x in self.modelInstance.external_databases]
altbw2database = deepcopy(db)
products = list(filter(lambda x: altbw2database[x]['type'] == 'product', altbw2database))
processes = list(filter(lambda x: altbw2database[x]['type'] == 'process', altbw2database))
#for i in processes:
# print(self.get_output(i))
#intermediates = [self.output_code(x) for x in processes]
intermediate_map = {self.get_output(x): x for x in processes}
#print (intermediate_map)
for p in products:
product = altbw2database[p]
product['type'] = 'process'
new_exchanges = [x for x in product['exchanges'] if x['type'] != 'production']
#print([x for x in product['exchanges'] if x['type'] == 'production'])
product['exchanges'] = new_exchanges
#link to intermediate generator
if p in intermediate_map.keys():
#print (db[p]['name'])
product['exchanges'].append(exchange_factory(intermediate_map[p], 'technosphere', 1, 1, 'intermediate link', name=db[p]['name'], unit=db[p]['unit']))
#add external links
if 'ext_link' in product.keys():
if product['ext_link'][0] == self.modelInstance.database['name']:
print('internal external link...')
ex_name = self.modelInstance.database['items'][product['ext_link']]['name']
ex_unit = self.modelInstance.database['items'][product['ext_link']]['unit']
product['exchanges'].append(exchange_factory(product['ext_link'], 'technosphere', 1, 1, 'external link to {}'.format(product['ext_link'][0]), name=ex_name, unit=ex_unit))
elif product['ext_link'][0] in self.modelInstance.biosphere_databases:
ed_ix = ext_db_names.index(product['ext_link'][0])
ex_name = self.modelInstance.external_databases[ed_ix]['items'][product['ext_link']]['name']
ex_unit = self.modelInstance.external_databases[ed_ix]['items'][product['ext_link']]['unit']
product['exchanges'].append(exchange_factory(product['ext_link'], 'biosphere', 1, 1, 'external link to {}'.format(product['ext_link'][0]), name=ex_name, unit=ex_unit))
else:
ed_ix = ext_db_names.index(product['ext_link'][0])
ex_name = self.modelInstance.external_databases[ed_ix]['items'][product['ext_link']]['name']
ex_unit = self.modelInstance.external_databases[ed_ix]['items'][product['ext_link']]['unit']
product['exchanges'].append(exchange_factory(product['ext_link'], 'technosphere', 1, 1, 'external link to {}'.format(product['ext_link'][0]), name=ex_name, unit=ex_unit))
for p in processes:
process = altbw2database[p]
new_exchanges = [x for x in process['exchanges']]  # if x['type'] != 'production'
#print([x for x in process['exchanges'] if x['type'] == 'production'])
# add parameter hooks
for e in new_exchanges:
ex_name = self.modelInstance.get_name(e['input'][1])
ex_unit = self.modelInstance.get_unit(e['input'][1])
#print (self.parameter_map[(e['input'], p)])
if e['type'] != 'production':
#e['parameter_hook'] = self.parameter_map[(e['input'], p)]
e['formula'] = self.parameter_map[(e['input'], p)]
e['name'] = ex_name
e['unit'] = ex_unit
#print (new_exchanges)
process['exchanges'] = new_exchanges
return name, altbw2database |
pjamesjoyce/lcopt | lcopt/bw2_export.py | Bw2Exporter.create_parameter_map | python | def create_parameter_map(self):
names = self.modelInstance.names
db = self.modelInstance.database['items']
parameter_map = {}
def get_names_index(my_thing):
return[i for i, x in enumerate(names) if x == my_thing][0]
for k, this_item in db.items():
if this_item['type'] == 'process':
production_id = [x['input'] for x in this_item['exchanges'] if x['type'] == 'production'][0]
input_ids = [x['input'] for x in this_item['exchanges'] if x['type'] == 'technosphere']
production_index = get_names_index(db[production_id]['name'])
input_indexes = [get_names_index(db[x]['name']) for x in input_ids]
parameter_ids = ['n_p_{}_{}'.format(x, production_index) for x in input_indexes]
parameter_map_items = {(input_ids[n], k): parameter_ids[n] for n, x in enumerate(input_ids)}
#check = [self.modelInstance.params[x]['description'] for x in parameter_ids]
#print(check)
#print(parameter_map_items)
parameter_map.update(parameter_map_items)
self.parameter_map = parameter_map | Creates a parameter map which takes a tuple of the exchange 'from' and exchange 'to' codes
and returns the parameter name for that exchange | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/bw2_export.py#L30-L55 | [
"def get_names_index(my_thing):\n return[i for i, x in enumerate(names) if x == my_thing][0]\n"
] | class Bw2Exporter():
def __init__(self, modelInstance):
self.modelInstance = modelInstance
# set up the parameter hook dictionary
self.evaluate_parameter_sets()
self.create_parameter_map()
self.modelInstance.parameter_map = self.parameter_map
def evaluate_parameter_sets(self):
"""
This takes the parameter sets of the model instance and evaluates any formulas using the parameter values to create a
fixed, full set of parameters for each parameter set in the model
"""
#parameter_interpreter = ParameterInterpreter(self.modelInstance)
#parameter_interpreter.evaluate_parameter_sets()
self.parameter_interpreter = LcoptParameterSet(self.modelInstance)
self.modelInstance.evaluated_parameter_sets = self.parameter_interpreter.evaluated_parameter_sets
self.modelInstance.bw2_export_params = self.parameter_interpreter.bw2_export_params
def create_parameter_map(self):
"""
Creates a parameter map which takes a tuple of the exchange 'from' and exchange 'to' codes
and returns the parameter name for that exchange
"""
names = self.modelInstance.names
db = self.modelInstance.database['items']
parameter_map = {}
def get_names_index(my_thing):
return[i for i, x in enumerate(names) if x == my_thing][0]
for k, this_item in db.items():
if this_item['type'] == 'process':
production_id = [x['input'] for x in this_item['exchanges'] if x['type'] == 'production'][0]
input_ids = [x['input'] for x in this_item['exchanges'] if x['type'] == 'technosphere']
production_index = get_names_index(db[production_id]['name'])
input_indexes = [get_names_index(db[x]['name']) for x in input_ids]
parameter_ids = ['n_p_{}_{}'.format(x, production_index) for x in input_indexes]
parameter_map_items = {(input_ids[n], k): parameter_ids[n] for n, x in enumerate(input_ids)}
#check = [self.modelInstance.params[x]['description'] for x in parameter_ids]
#print(check)
#print(parameter_map_items)
parameter_map.update(parameter_map_items)
self.parameter_map = parameter_map
#return parameter_map
def get_output(self, process_id):
exchanges = self.modelInstance.database['items'][process_id]['exchanges']
production_filter = lambda x: x['type'] == 'production'
code = list(filter(production_filter, exchanges))[0]['input']
# name = self.modelInstance.database['items'][code]['name']
return code
def export_to_bw2(self):
db = self.modelInstance.database['items']
name = self.modelInstance.database['name']
ext_db_names = [x['name'] for x in self.modelInstance.external_databases]
altbw2database = deepcopy(db)
products = list(filter(lambda x: altbw2database[x]['type'] == 'product', altbw2database))
processes = list(filter(lambda x: altbw2database[x]['type'] == 'process', altbw2database))
#for i in processes:
# print(self.get_output(i))
#intermediates = [self.output_code(x) for x in processes]
intermediate_map = {self.get_output(x): x for x in processes}
#print (intermediate_map)
for p in products:
product = altbw2database[p]
product['type'] = 'process'
new_exchanges = [x for x in product['exchanges'] if x['type'] != 'production']
#print([x for x in product['exchanges'] if x['type'] == 'production'])
product['exchanges'] = new_exchanges
#link to intermediate generator
if p in intermediate_map.keys():
#print (db[p]['name'])
product['exchanges'].append(exchange_factory(intermediate_map[p], 'technosphere', 1, 1, 'intermediate link', name=db[p]['name'], unit=db[p]['unit']))
#add external links
if 'ext_link' in product.keys():
if product['ext_link'][0] == self.modelInstance.database['name']:
print('internal external link...')
ex_name = self.modelInstance.database['items'][product['ext_link']]['name']
ex_unit = self.modelInstance.database['items'][product['ext_link']]['unit']
product['exchanges'].append(exchange_factory(product['ext_link'], 'technosphere', 1, 1, 'external link to {}'.format(product['ext_link'][0]), name=ex_name, unit=ex_unit))
elif product['ext_link'][0] in self.modelInstance.biosphere_databases:
ed_ix = ext_db_names.index(product['ext_link'][0])
ex_name = self.modelInstance.external_databases[ed_ix]['items'][product['ext_link']]['name']
ex_unit = self.modelInstance.external_databases[ed_ix]['items'][product['ext_link']]['unit']
product['exchanges'].append(exchange_factory(product['ext_link'], 'biosphere', 1, 1, 'external link to {}'.format(product['ext_link'][0]), name=ex_name, unit=ex_unit))
else:
ed_ix = ext_db_names.index(product['ext_link'][0])
ex_name = self.modelInstance.external_databases[ed_ix]['items'][product['ext_link']]['name']
ex_unit = self.modelInstance.external_databases[ed_ix]['items'][product['ext_link']]['unit']
product['exchanges'].append(exchange_factory(product['ext_link'], 'technosphere', 1, 1, 'external link to {}'.format(product['ext_link'][0]), name=ex_name, unit=ex_unit))
for p in processes:
process = altbw2database[p]
new_exchanges = [x for x in process['exchanges'] ]#if x['type'] != 'production']
#print([x for x in process['exchanges'] if x['type'] == 'production'])
# add parameter hooks
for e in new_exchanges:
ex_name = self.modelInstance.get_name(e['input'][1])
ex_unit = self.modelInstance.get_unit(e['input'][1])
#print (self.parameter_map[(e['input'], p)])
if e['type'] != 'production':
#e['parameter_hook'] = self.parameter_map[(e['input'], p)]
e['formula'] = self.parameter_map[(e['input'], p)]
e['name'] = ex_name
e['unit'] = ex_unit
#print (new_exchanges)
process['exchanges'] = new_exchanges
return name, altbw2database |
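The `n_p_{input}_{output}` parameter ids built by `create_parameter_map` come from positions in the shared `names` list. A condensed, self-contained sketch of the same mapping (names and process codes here are illustrative):

```python
# Build {(input_name, process_code): parameter_id}, deriving the id from
# the positions of the input and the production output in a name list.
def build_parameter_map(names, processes):
    """processes: {process_code: (production_name, [input_names])}."""
    parameter_map = {}
    for code, (production, inputs) in processes.items():
        out_ix = names.index(production)
        for inp in inputs:
            parameter_map[(inp, code)] = 'n_p_{}_{}'.format(names.index(inp), out_ix)
    return parameter_map

pmap = build_parameter_map(['steel', 'widget'], {'p1': ('widget', ['steel'])})
```

`list.index` replaces the original's enumerate-based `get_names_index`, and assumes names are unique, which the original's `[0]` lookup effectively does as well.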
pjamesjoyce/lcopt | lcopt/parameters.py | LcoptParameterSet.check_production_parameters_exist | python | def check_production_parameters_exist(self):
for k, v in self.modelInstance.parameter_sets.items():
for p_id in self.modelInstance.production_params.keys():
if v.get(p_id):
#print('{} already exists'.format(p_id))
pass
else:
#print('No production parameter called {} - setting it to 1'.format(p_id))
v[p_id] = 1.0
for p_id in self.modelInstance.allocation_params.keys():
if v.get(p_id):
#print('{} already exists'.format(p_id))
pass
else:
#print('No production parameter called {} - setting it to 1'.format(p_id))
                    v[p_id] = 1.0 | old versions of models won't have production parameters, leading to ZeroDivisionError and breaking things | train | https://github.com/pjamesjoyce/lcopt/blob/3f1caca31fece4a3068a384900707e6d21d04597/lcopt/parameters.py#L100-L117 | null | class LcoptParameterSet(ParameterSet):
"""
Subclass of `bw2parameters.parameter_set.ParameterSet` that takes a `lcopt.LcoptModel` and delegates parameter ordering and evaluation to `bw2parameters`
TODO: Add more documentation and write tests
"""
def __init__(self, modelInstance):
self.modelInstance = modelInstance
self.norm_params = self.normalise_parameters()
self.check_production_parameters_exist()
self.all_params = {**self.modelInstance.params, **self.modelInstance.production_params, **self.norm_params, **self.modelInstance.allocation_params}
self.bw2_params, self.bw2_global_params, self.bw2_export_params = self.lcopt_to_bw2_params(0)
super().__init__(self.bw2_params, self.bw2_global_params)
self.evaluated_parameter_sets = self.preevaluate_exchange_params()
def lcopt_to_bw2_params(self, ps_key):
k0 = list(self.modelInstance.parameter_sets.keys())[ps_key]
ps1 = self.modelInstance.parameter_sets[k0]
bw2_params = {k:{(x if x != 'function' else 'formula'):y for x, y in v.items()} for k,v in self.all_params.items()}
for k in bw2_params.keys():
bw2_params[k]['amount'] = ps1.get(k,0)
bw2_global_params = {x['name']: ps1.get(x['name'],x['default']) for x in self.modelInstance.ext_params}
bw2_export_params = []
for k, v in bw2_params.items():
to_append = {'name': k}
if v.get('formula'):
to_append['formula'] = v['formula']
else:
to_append['amount'] = v['amount']
bw2_export_params.append(to_append)
for k, v in bw2_global_params.items():
bw2_export_params.append({'name':k, 'amount':v})
return bw2_params, bw2_global_params, bw2_export_params
def normalise_parameters(self):
param_copy = deepcopy(self.modelInstance.params)
#production_params = deepcopy(self.modelInstance.production_params)
#allocation_params = deepcopy(self.modelInstance.allocation_params)
norm_params = OrderedDict()
for k, v in param_copy.items():
norm_params['n_{}'.format(k)] = {}
for key, item in v.items():
if key == 'function':
if not item:
norm_function = '{} / {}'.format(k, v['normalisation_parameter'])
else:
norm_function = '({}) / {}'.format(item, v['normalisation_parameter'])
norm_params['n_{}'.format(k)][key] = norm_function
else:
norm_params['n_{}'.format(k)][key] = item
return norm_params
def preevaluate_exchange_params(self):
evaluated_params = OrderedDict()
for n, k in enumerate(self.modelInstance.parameter_sets.keys()):
self.params, self.global_params, _ = self.lcopt_to_bw2_params(n)
self.evaluate_and_set_amount_field()
this_set = {}
for j, v in self.params.items():
this_set[j] = v['amount']
evaluated_params[k] = this_set
self.params, self.global_params , _ = self.lcopt_to_bw2_params(0)
self.evaluate_and_set_amount_field()
return evaluated_params
|
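`check_production_parameters_exist` backfills missing production and allocation parameters with `1.0` so that older models don't hit `ZeroDivisionError`. The same backfill can be expressed compactly with `dict.setdefault` (parameter ids here are illustrative):

```python
# Give every expected parameter id a default value unless the parameter
# set already carries one.
def backfill_defaults(parameter_sets, expected_ids, default=1.0):
    for params in parameter_sets.values():
        for p_id in expected_ids:
            params.setdefault(p_id, default)

sets = {'ps1': {'p_0_0': 2.5}}
backfill_defaults(sets, ['p_0_0', 'p_1_1'])
```

One subtlety: the original tests `v.get(p_id)` for truthiness, so a stored `0` would be overwritten with `1.0`; `setdefault` keeps any existing value, falsy or not.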
jedie/PyHardLinkBackup | PyHardLinkBackup/phlb/human.py | human_filesize | python | def human_filesize(i):
bytes = float(i)
if bytes < 1024:
return u"%d Byte%s" % (bytes, bytes != 1 and u"s" or u"")
if bytes < 1024 * 1024:
return u"%.1f KB" % (bytes / 1024)
if bytes < 1024 * 1024 * 1024:
return u"%.1f MB" % (bytes / (1024 * 1024))
    return u"%.1f GB" % (bytes / (1024 * 1024 * 1024)) | 'human-readable' file size (e.g. 13 KB, 4.1 MB, 102 bytes, etc). | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/PyHardLinkBackup/phlb/human.py#L23-L34 | null | import datetime
import django
from django.template import defaultfilters
from django.utils.translation import ugettext as _
from django.contrib.humanize.templatetags import humanize
def human_time(t):
if t > 3600:
divisor = 3600.0
unit = "h"
elif t > 60:
divisor = 60.0
unit = "min"
else:
divisor = 1
unit = "sec"
return "%.1f%s" % (round(t / divisor, 2), unit)
def to_percent(part, total):
try:
return part / total * 100
except ZeroDivisionError:
# e.g.: Backup only 0-Bytes files ;)
return 0
def dt2naturaltimesince(dt):
"""
    datetime to a human-readable representation that includes how old this entry is
e.g.:
Jan. 27, 2016, 9:04 p.m. (31 minutes ago)
"""
date = defaultfilters.date(dt, _("DATETIME_FORMAT"))
nt = humanize.naturaltime(dt)
return "%s (%s)" % (date, nt)
def ns2naturaltimesince(ns):
"""
    nanoseconds to a human-readable representation that includes how old this entry is
e.g.:
Jan. 27, 2016, 9:04 p.m. (31 minutes ago)
"""
timestamp = ns / 1000000000
dt = datetime.datetime.utcfromtimestamp(timestamp)
return dt2naturaltimesince(dt)
|
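`human_filesize` buckets by powers of 1024, and only the byte case drops the decimal place. A standalone restatement of the same logic:

```python
# Same bucketing as human_filesize above, without the Python-2 u'' prefixes.
def human_filesize(i):
    n = float(i)
    if n < 1024:
        return '%d Byte%s' % (n, '' if n == 1 else 's')
    if n < 1024 ** 2:
        return '%.1f KB' % (n / 1024)
    if n < 1024 ** 3:
        return '%.1f MB' % (n / 1024 ** 2)
    return '%.1f GB' % (n / 1024 ** 3)

print(human_filesize(1))              # → 1 Byte
print(human_filesize(1536))           # → 1.5 KB
print(human_filesize(5 * 1024 ** 2))  # → 5.0 MB
```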
jedie/PyHardLinkBackup | PyHardLinkBackup/phlb/human.py | dt2naturaltimesince | python | def dt2naturaltimesince(dt):
date = defaultfilters.date(dt, _("DATETIME_FORMAT"))
nt = humanize.naturaltime(dt)
    return "%s (%s)" % (date, nt) | datetime to a human-readable representation that includes how old this entry is
e.g.:
Jan. 27, 2016, 9:04 p.m. (31 minutes ago) | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/PyHardLinkBackup/phlb/human.py#L45-L53 | null | import datetime
import django
from django.template import defaultfilters
from django.utils.translation import ugettext as _
from django.contrib.humanize.templatetags import humanize
def human_time(t):
if t > 3600:
divisor = 3600.0
unit = "h"
elif t > 60:
divisor = 60.0
unit = "min"
else:
divisor = 1
unit = "sec"
return "%.1f%s" % (round(t / divisor, 2), unit)
def human_filesize(i):
"""
    'human-readable' file size (e.g. 13 KB, 4.1 MB, 102 bytes, etc).
"""
bytes = float(i)
if bytes < 1024:
return u"%d Byte%s" % (bytes, bytes != 1 and u"s" or u"")
if bytes < 1024 * 1024:
return u"%.1f KB" % (bytes / 1024)
if bytes < 1024 * 1024 * 1024:
return u"%.1f MB" % (bytes / (1024 * 1024))
return u"%.1f GB" % (bytes / (1024 * 1024 * 1024))
def to_percent(part, total):
try:
return part / total * 100
except ZeroDivisionError:
# e.g.: Backup only 0-Bytes files ;)
return 0
def ns2naturaltimesince(ns):
"""
    nanoseconds to a human-readable representation that includes how old this entry is
e.g.:
Jan. 27, 2016, 9:04 p.m. (31 minutes ago)
"""
timestamp = ns / 1000000000
dt = datetime.datetime.utcfromtimestamp(timestamp)
return dt2naturaltimesince(dt)
|
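`dt2naturaltimesince` delegates the "(31 minutes ago)" part to Django's `humanize.naturaltime`. A dependency-free approximation of that suffix (the thresholds are simplified assumptions, not Django's exact rules):

```python
import datetime

# Render roughly how long ago `dt` was, relative to `now`.
# Caller should pass datetimes in the same timezone convention.
def natural_age(dt, now=None):
    now = now or datetime.datetime.now()
    seconds = int((now - dt).total_seconds())
    if seconds < 60:
        return '%d seconds ago' % seconds
    if seconds < 3600:
        return '%d minutes ago' % (seconds // 60)
    if seconds < 86400:
        return '%d hours ago' % (seconds // 3600)
    return '%d days ago' % (seconds // 86400)

# Matches the docstring example: 21:04 seen from 21:35 is 31 minutes ago.
age = natural_age(datetime.datetime(2016, 1, 27, 21, 4),
                  now=datetime.datetime(2016, 1, 27, 21, 35))
```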
jedie/PyHardLinkBackup | PyHardLinkBackup/phlb/human.py | ns2naturaltimesince | python | def ns2naturaltimesince(ns):
timestamp = ns / 1000000000
dt = datetime.datetime.utcfromtimestamp(timestamp)
    return dt2naturaltimesince(dt) | nanoseconds to a human-readable representation that includes how old this entry is
e.g.:
Jan. 27, 2016, 9:04 p.m. (31 minutes ago) | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/PyHardLinkBackup/phlb/human.py#L56-L64 | [
"def dt2naturaltimesince(dt):\n \"\"\"\n datetime to a human readable representation with how old this entry is information\n e.g.:\n Jan. 27, 2016, 9:04 p.m. (31 minutes ago)\n \"\"\"\n date = defaultfilters.date(dt, _(\"DATETIME_FORMAT\"))\n nt = humanize.naturaltime(dt)\n return \"%s ... | import datetime
import django
from django.template import defaultfilters
from django.utils.translation import ugettext as _
from django.contrib.humanize.templatetags import humanize
def human_time(t):
if t > 3600:
divisor = 3600.0
unit = "h"
elif t > 60:
divisor = 60.0
unit = "min"
else:
divisor = 1
unit = "sec"
return "%.1f%s" % (round(t / divisor, 2), unit)
def human_filesize(i):
"""
    'human-readable' file size (e.g. 13 KB, 4.1 MB, 102 bytes, etc).
"""
bytes = float(i)
if bytes < 1024:
return u"%d Byte%s" % (bytes, bytes != 1 and u"s" or u"")
if bytes < 1024 * 1024:
return u"%.1f KB" % (bytes / 1024)
if bytes < 1024 * 1024 * 1024:
return u"%.1f MB" % (bytes / (1024 * 1024))
return u"%.1f GB" % (bytes / (1024 * 1024 * 1024))
def to_percent(part, total):
try:
return part / total * 100
except ZeroDivisionError:
# e.g.: Backup only 0-Bytes files ;)
return 0
def dt2naturaltimesince(dt):
"""
    datetime to a human-readable representation that includes how old this entry is
e.g.:
Jan. 27, 2016, 9:04 p.m. (31 minutes ago)
"""
date = defaultfilters.date(dt, _("DATETIME_FORMAT"))
nt = humanize.naturaltime(dt)
return "%s (%s)" % (date, nt)
|
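The first step of `ns2naturaltimesince` divides nanoseconds down to epoch seconds. A sketch of that conversion using the timezone-aware equivalent of `utcfromtimestamp` (which is deprecated in recent Python versions):

```python
import datetime

# Nanoseconds since the epoch -> aware UTC datetime. Integer floor
# division avoids float rounding error for large timestamps.
def ns_to_datetime(ns):
    return datetime.datetime.fromtimestamp(ns // 10 ** 9, tz=datetime.timezone.utc)

dt = ns_to_datetime(1453930000 * 10 ** 9)  # a moment on 2016-01-27 (UTC)
```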
jedie/PyHardLinkBackup | PyHardLinkBackup/phlb_cli.py | cli | python | def cli(ctx):
click.secho("\nPyHardLinkBackup v%s\n" % PyHardLinkBackup.__version__, bg="blue", fg="white", bold=True) | PyHardLinkBackup | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/PyHardLinkBackup/phlb_cli.py#L36-L38 | null | #!/usr/bin/env python3
"""
PyHardLinkBackup cli using click
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
import os
import sys
# Use the built-in version of scandir/walk if possible, otherwise
# use the scandir module version
try:
from os import scandir # new in Python 3.5
except ImportError:
# use https://pypi.python.org/pypi/scandir
try:
from scandir import scandir
except ImportError:
raise ImportError("For Python <2.5: Please install 'scandir' !")
import click
import PyHardLinkBackup
from PyHardLinkBackup.phlb.config import phlb_config
PHLB_BASE_DIR = os.path.abspath(os.path.dirname(PyHardLinkBackup.__file__))
@click.group()
@click.version_option(version=PyHardLinkBackup.__version__)
@click.pass_context
@cli.command()
@click.argument("path", type=click.Path(exists=True, file_okay=False, dir_okay=True, writable=True, resolve_path=True))
def helper(path):
"""
    Link helper files to the given path
"""
if sys.platform.startswith("win"):
# link batch files
src_path = os.path.join(PHLB_BASE_DIR, "helper_cmd")
elif sys.platform.startswith("linux"):
# link shell scripts
src_path = os.path.join(PHLB_BASE_DIR, "helper_sh")
else:
print("TODO: %s" % sys.platform)
return
if not os.path.isdir(src_path):
raise RuntimeError("Helper script path not found here: '%s'" % src_path)
for entry in scandir(src_path):
print("_" * 79)
print("Link file: '%s'" % entry.name)
src = entry.path
dst = os.path.join(path, entry.name)
if os.path.exists(dst):
print("Remove old file '%s'" % dst)
try:
os.remove(dst)
except OSError as err:
print("\nERROR:\n%s\n" % err)
continue
print("source.....: '%s'" % src)
print("destination: '%s'" % dst)
try:
os.link(src, dst)
except OSError as err:
print("\nERROR:\n%s\n" % err)
continue
cli.add_command(helper)
@click.command()
@click.option("--debug", is_flag=True, default=False, help="Display used config and exit.")
def config(debug):
"""Create/edit .ini config file"""
if debug:
phlb_config.print_config()
else:
phlb_config.open_editor()
cli.add_command(config)
@click.command()
@click.argument(
"path",
type=click.Path(exists=True, file_okay=False, dir_okay=True, writable=False, readable=True, resolve_path=True),
)
@click.option("--name", help="Force a backup name (If not set: Use parent directory name)")
def backup(path, name=None):
"""Start a Backup run"""
from PyHardLinkBackup.phlb.phlb_main import backup
backup(path, name)
cli.add_command(backup)
@click.command()
@click.argument(
"backup_path",
type=click.Path(exists=True, file_okay=False, dir_okay=True, writable=False, readable=True, resolve_path=True),
)
@click.option("--fast", is_flag=True, default=False, help="Don't compare real file content (skip calculating the hash)")
def verify(backup_path, fast):
    """Verify an existing backup"""
from PyHardLinkBackup.phlb.verify import verify_backup
verify_backup(backup_path, fast)
cli.add_command(verify)
@click.command()
def add():
    """Scan all existing backups and add missing ones to the database."""
from PyHardLinkBackup.phlb.add import add_backups
add_backups()
cli.add_command(add)
if __name__ == "__main__":
cli()
|
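`cli` is a click group, and each subcommand registers itself through `cli.add_command(...)`. The registration idea can be sketched without click as a plain name-to-callable registry (command names and return values here are illustrative):

```python
# Dependency-free sketch of the add_command pattern used above.
COMMANDS = {}

def add_command(func):
    COMMANDS[func.__name__] = func
    return func

@add_command
def config():
    return 'edit config'

@add_command
def backup():
    return 'run backup'

def dispatch(name):
    try:
        return COMMANDS[name]()
    except KeyError:
        raise SystemExit('unknown command: %s' % name)
```

click layers argument parsing, help text, and option validation on top of exactly this kind of registry.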
jedie/PyHardLinkBackup | PyHardLinkBackup/phlb_cli.py | helper | python | def helper(path):
if sys.platform.startswith("win"):
# link batch files
src_path = os.path.join(PHLB_BASE_DIR, "helper_cmd")
elif sys.platform.startswith("linux"):
# link shell scripts
src_path = os.path.join(PHLB_BASE_DIR, "helper_sh")
else:
print("TODO: %s" % sys.platform)
return
if not os.path.isdir(src_path):
raise RuntimeError("Helper script path not found here: '%s'" % src_path)
for entry in scandir(src_path):
print("_" * 79)
print("Link file: '%s'" % entry.name)
src = entry.path
dst = os.path.join(path, entry.name)
if os.path.exists(dst):
print("Remove old file '%s'" % dst)
try:
os.remove(dst)
except OSError as err:
print("\nERROR:\n%s\n" % err)
continue
print("source.....: '%s'" % src)
print("destination: '%s'" % dst)
try:
os.link(src, dst)
except OSError as err:
print("\nERROR:\n%s\n" % err)
            continue | Link helper files to the given path | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/PyHardLinkBackup/phlb_cli.py#L43-L79 | null | #!/usr/bin/env python3
"""
PyHardLinkBackup cli using click
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
import os
import sys
# Use the built-in version of scandir/walk if possible, otherwise
# use the scandir module version
try:
from os import scandir # new in Python 3.5
except ImportError:
# use https://pypi.python.org/pypi/scandir
try:
from scandir import scandir
except ImportError:
raise ImportError("For Python <2.5: Please install 'scandir' !")
import click
import PyHardLinkBackup
from PyHardLinkBackup.phlb.config import phlb_config
PHLB_BASE_DIR = os.path.abspath(os.path.dirname(PyHardLinkBackup.__file__))
@click.group()
@click.version_option(version=PyHardLinkBackup.__version__)
@click.pass_context
def cli(ctx):
"""PyHardLinkBackup"""
click.secho("\nPyHardLinkBackup v%s\n" % PyHardLinkBackup.__version__, bg="blue", fg="white", bold=True)
@cli.command()
@click.argument("path", type=click.Path(exists=True, file_okay=False, dir_okay=True, writable=True, resolve_path=True))
cli.add_command(helper)
@click.command()
@click.option("--debug", is_flag=True, default=False, help="Display used config and exit.")
def config(debug):
"""Create/edit .ini config file"""
if debug:
phlb_config.print_config()
else:
phlb_config.open_editor()
cli.add_command(config)
@click.command()
@click.argument(
"path",
type=click.Path(exists=True, file_okay=False, dir_okay=True, writable=False, readable=True, resolve_path=True),
)
@click.option("--name", help="Force a backup name (If not set: Use parent directory name)")
def backup(path, name=None):
"""Start a Backup run"""
from PyHardLinkBackup.phlb.phlb_main import backup
backup(path, name)
cli.add_command(backup)
@click.command()
@click.argument(
"backup_path",
type=click.Path(exists=True, file_okay=False, dir_okay=True, writable=False, readable=True, resolve_path=True),
)
@click.option("--fast", is_flag=True, default=False, help="Don't compare real file content (skip calculating the hash)")
def verify(backup_path, fast):
    """Verify an existing backup"""
from PyHardLinkBackup.phlb.verify import verify_backup
verify_backup(backup_path, fast)
cli.add_command(verify)
@click.command()
def add():
    """Scan all existing backups and add missing ones to the database."""
from PyHardLinkBackup.phlb.add import add_backups
add_backups()
cli.add_command(add)
if __name__ == "__main__":
cli()
|
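`helper()` removes any stale destination file and then hard-links the source, reporting (rather than raising) `OSError`. The core of that step, exercised on a temporary directory:

```python
import os
import tempfile

# Replace dst if present, then hard-link src -> dst like helper() does;
# report failure instead of raising.
def relink(src, dst):
    if os.path.exists(dst):
        os.remove(dst)
    try:
        os.link(src, dst)
    except OSError as err:
        print('ERROR: %s' % err)
        return False
    return True

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, 'a.txt')
    with open(src, 'w') as f:
        f.write('x')
    ok = relink(src, os.path.join(tmp, 'b.txt'))
    # a hard link shares the inode, so the link count becomes 2
    nlink = os.stat(src).st_nlink
```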
jedie/PyHardLinkBackup | PyHardLinkBackup/phlb_cli.py | backup | python | def backup(path, name=None):
from PyHardLinkBackup.phlb.phlb_main import backup
backup(path, name) | Start a Backup run | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/PyHardLinkBackup/phlb_cli.py#L104-L108 | [
"def backup(path, name):\n django.setup()\n\n path_helper = PathHelper(path, name)\n\n # create backup destination to create summary file in there\n path_helper.summary_filepath.parent.makedirs( # calls os.makedirs()\n mode=phlb_config.default_new_path_mode, exist_ok=True\n )\n with path_h... | #!/usr/bin/env python3
"""
PyHardLinkBackup cli using click
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
import os
import sys
# Use the built-in version of scandir/walk if possible, otherwise
# use the scandir module version
try:
from os import scandir # new in Python 3.5
except ImportError:
# use https://pypi.python.org/pypi/scandir
try:
from scandir import scandir
except ImportError:
raise ImportError("For Python <2.5: Please install 'scandir' !")
import click
import PyHardLinkBackup
from PyHardLinkBackup.phlb.config import phlb_config
PHLB_BASE_DIR = os.path.abspath(os.path.dirname(PyHardLinkBackup.__file__))
@click.group()
@click.version_option(version=PyHardLinkBackup.__version__)
@click.pass_context
def cli(ctx):
"""PyHardLinkBackup"""
click.secho("\nPyHardLinkBackup v%s\n" % PyHardLinkBackup.__version__, bg="blue", fg="white", bold=True)
@cli.command()
@click.argument("path", type=click.Path(exists=True, file_okay=False, dir_okay=True, writable=True, resolve_path=True))
def helper(path):
"""
link helper files to given path
"""
if sys.platform.startswith("win"):
# link batch files
src_path = os.path.join(PHLB_BASE_DIR, "helper_cmd")
elif sys.platform.startswith("linux"):
# link shell scripts
src_path = os.path.join(PHLB_BASE_DIR, "helper_sh")
else:
print("TODO: %s" % sys.platform)
return
if not os.path.isdir(src_path):
raise RuntimeError("Helper script path not found here: '%s'" % src_path)
for entry in scandir(src_path):
print("_" * 79)
print("Link file: '%s'" % entry.name)
src = entry.path
dst = os.path.join(path, entry.name)
if os.path.exists(dst):
print("Remove old file '%s'" % dst)
try:
os.remove(dst)
except OSError as err:
print("\nERROR:\n%s\n" % err)
continue
print("source.....: '%s'" % src)
print("destination: '%s'" % dst)
try:
os.link(src, dst)
except OSError as err:
print("\nERROR:\n%s\n" % err)
continue
cli.add_command(helper)
@click.command()
@click.option("--debug", is_flag=True, default=False, help="Display used config and exit.")
def config(debug):
"""Create/edit .ini config file"""
if debug:
phlb_config.print_config()
else:
phlb_config.open_editor()
cli.add_command(config)
@click.command()
@click.argument(
"path",
type=click.Path(exists=True, file_okay=False, dir_okay=True, writable=False, readable=True, resolve_path=True),
)
@click.option("--name", help="Force a backup name (If not set: Use parent directory name)")
cli.add_command(backup)
@click.command()
@click.argument(
"backup_path",
type=click.Path(exists=True, file_okay=False, dir_okay=True, writable=False, readable=True, resolve_path=True),
)
@click.option("--fast", is_flag=True, default=False, help="Don't compare real file content (Skip calculate hash)")
def verify(backup_path, fast):
"""Verify a existing backup"""
from PyHardLinkBackup.phlb.verify import verify_backup
verify_backup(backup_path, fast)
cli.add_command(verify)
@click.command()
def add():
"""Scan all existing backup and add missing ones to database."""
from PyHardLinkBackup.phlb.add import add_backups
add_backups()
cli.add_command(add)
if __name__ == "__main__":
cli()
|
jedie/PyHardLinkBackup | PyHardLinkBackup/phlb_cli.py | verify | python | def verify(backup_path, fast):
from PyHardLinkBackup.phlb.verify import verify_backup
    verify_backup(backup_path, fast) | Verify an existing backup | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/PyHardLinkBackup/phlb_cli.py#L120-L124 | [
"def verify_backup(backup_path, fast):\n django.setup()\n\n backup_path = Path2(backup_path).resolve()\n print(\"\\nVerify: %s\" % backup_path)\n\n backup_run = BackupRun.objects.get_from_config_file(backup_path)\n print(\"\\nBackup run:\\n%s\\n\" % backup_run)\n\n backup_entries = BackupEntry.obj... | #!/usr/bin/env python3
"""
PyHardLinkBackup cli using click
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
import os
import sys
# Use the built-in version of scandir/walk if possible, otherwise
# use the scandir module version
try:
from os import scandir # new in Python 3.5
except ImportError:
# use https://pypi.python.org/pypi/scandir
try:
from scandir import scandir
except ImportError:
raise ImportError("For Python <2.5: Please install 'scandir' !")
import click
import PyHardLinkBackup
from PyHardLinkBackup.phlb.config import phlb_config
PHLB_BASE_DIR = os.path.abspath(os.path.dirname(PyHardLinkBackup.__file__))
@click.group()
@click.version_option(version=PyHardLinkBackup.__version__)
@click.pass_context
def cli(ctx):
"""PyHardLinkBackup"""
click.secho("\nPyHardLinkBackup v%s\n" % PyHardLinkBackup.__version__, bg="blue", fg="white", bold=True)
@cli.command()
@click.argument("path", type=click.Path(exists=True, file_okay=False, dir_okay=True, writable=True, resolve_path=True))
def helper(path):
"""
    Link helper files to the given path
"""
if sys.platform.startswith("win"):
# link batch files
src_path = os.path.join(PHLB_BASE_DIR, "helper_cmd")
elif sys.platform.startswith("linux"):
# link shell scripts
src_path = os.path.join(PHLB_BASE_DIR, "helper_sh")
else:
print("TODO: %s" % sys.platform)
return
if not os.path.isdir(src_path):
raise RuntimeError("Helper script path not found here: '%s'" % src_path)
for entry in scandir(src_path):
print("_" * 79)
print("Link file: '%s'" % entry.name)
src = entry.path
dst = os.path.join(path, entry.name)
if os.path.exists(dst):
print("Remove old file '%s'" % dst)
try:
os.remove(dst)
except OSError as err:
print("\nERROR:\n%s\n" % err)
continue
print("source.....: '%s'" % src)
print("destination: '%s'" % dst)
try:
os.link(src, dst)
except OSError as err:
print("\nERROR:\n%s\n" % err)
continue
cli.add_command(helper)
@click.command()
@click.option("--debug", is_flag=True, default=False, help="Display used config and exit.")
def config(debug):
"""Create/edit .ini config file"""
if debug:
phlb_config.print_config()
else:
phlb_config.open_editor()
cli.add_command(config)
@click.command()
@click.argument(
"path",
type=click.Path(exists=True, file_okay=False, dir_okay=True, writable=False, readable=True, resolve_path=True),
)
@click.option("--name", help="Force a backup name (If not set: Use parent directory name)")
def backup(path, name=None):
"""Start a Backup run"""
from PyHardLinkBackup.phlb.phlb_main import backup
backup(path, name)
cli.add_command(backup)
@click.command()
@click.argument(
"backup_path",
type=click.Path(exists=True, file_okay=False, dir_okay=True, writable=False, readable=True, resolve_path=True),
)
@click.option("--fast", is_flag=True, default=False, help="Don't compare real file content (skip calculating the hash)")
cli.add_command(verify)
@click.command()
def add():
    """Scan all existing backups and add missing ones to the database."""
from PyHardLinkBackup.phlb.add import add_backups
add_backups()
cli.add_command(add)
if __name__ == "__main__":
cli()
|
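Each backed-up file gets a sidecar named `<file>.<hash_name>` holding the hex digest, and a verify run re-hashes the file and compares. A sketch of that check (`sha512` is an assumed hash_name standing in for the value from the .ini config):

```python
import hashlib
import os
import tempfile

# Recompute the file's hash in chunks and compare it with the stored
# '<file>.sha512' sidecar, as a verify run would.
def verify_file(path, hash_name='sha512'):
    h = hashlib.new(hash_name)
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(64 * 1024), b''):
            h.update(chunk)
    with open(path + os.extsep + hash_name) as f:
        return f.read().strip() == h.hexdigest()

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'data.bin')
    with open(path, 'wb') as f:
        f.write(b'backup me')
    with open(path + '.sha512', 'w') as f:
        f.write(hashlib.sha512(b'backup me').hexdigest())
    ok = verify_file(path)
```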
jedie/PyHardLinkBackup | PyHardLinkBackup/phlb/path_helper.py | PathHelper.set_src_filepath | python | def set_src_filepath(self, src_dir_path):
log.debug("set_src_filepath() with: '%s'", src_dir_path)
self.abs_src_filepath = src_dir_path.resolved_path
log.debug(" * abs_src_filepath: %s" % self.abs_src_filepath)
if self.abs_src_filepath is None:
log.info("Can't resolve source path: %s", src_dir_path)
return
self.sub_filepath = self.abs_src_filepath.relative_to(self.abs_src_root)
log.debug(" * sub_filepath: %s" % self.sub_filepath)
self.sub_path = self.sub_filepath.parent
log.debug(" * sub_path: %s" % self.sub_path)
self.filename = self.sub_filepath.name
log.debug(" * filename: %s" % self.filename)
self.abs_dst_path = Path2(self.abs_dst_root, self.sub_path)
log.debug(" * abs_dst_path: %s" % self.abs_dst_path)
self.abs_dst_filepath = Path2(self.abs_dst_root, self.sub_filepath)
log.debug(" * abs_dst_filepath: %s" % self.abs_dst_filepath)
self.abs_dst_hash_filepath = Path2("%s%s%s" % (self.abs_dst_filepath, os.extsep, phlb_config.hash_name))
        log.debug(" * abs_dst_hash_filepath: %s" % self.abs_dst_hash_filepath) | Set one filepath to back up this file.
Called for every file in the source directory.
:argument src_dir_path: filesystem_walk.DirEntryPath() instance | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/PyHardLinkBackup/phlb/path_helper.py#L109-L140 | null | class PathHelper(object):
"""
e.g.: backup run called with: /abs/source/path/source_root
|<---------self.abs_src_filepath------------->|
| |
|<--self.abs_src_root-->|<-self.sub_filepath->|
| | |
/abs/source/path/source_root/sub/path/filename
| | | | | | | |
+-------------' +--------' +-----' +-----'
| | | |
| | | `-> self.filename
| | `-> self.sub_path
| `-> self.backup_name (root dir to backup)
`-> self.src_prefix_path
|<---------self.abs_dst_filepath------------------>|
| |
|<----self.abs_dst_root----->|<-self.sub_filepath->|
| | |
|<---------self.abs_dst_path-+------->| .---'
| | | |
/abs/destination/name/datetime/sub/path/filename
|-------------' |-' |-----' |-----' |-----'
| | | | `-> self.filename
| | | `-> self.sub_path
| | `-> self.time_string (Start time of the backup run)
| `<- self.backup_name
`- phlb_config.backup_path (root dir storage for all backups runs)
"""
def __init__(self, src_path, force_name=None):
"""
:param src_path: Path2() instance of the source directory
:param force_name: Force this name for the backup
"""
self.abs_src_root = Path2(src_path).resolve()
log.debug(" * abs_src_root: '%s'", self.abs_src_root)
if not self.abs_src_root.is_dir():
raise OSError("Source path '%s' doesn't exists!" % self.abs_src_root)
self.src_prefix_path = self.abs_src_root.parent
log.debug(" * src_prefix_path: '%s'", self.src_prefix_path)
self.backup_name = self.abs_src_root.name
if force_name is not None:
self.backup_name = force_name
elif not self.backup_name:
print("\nError get name for this backup!", file=sys.stderr)
print("\nPlease use '--name' for force a backup name!\n", file=sys.stderr)
sys.exit(-1)
log.debug(" * backup_name: '%s'", self.backup_name)
self.backup_datetime = datetime.datetime.now()
self.time_string = self.backup_datetime.strftime(phlb_config.sub_dir_formatter)
log.debug(" * time_string: %r", self.time_string)
self.abs_dst_root = Path2(phlb_config.backup_path, self.backup_name, self.time_string)
log.debug(" * abs_dst_root: '%s'", self.abs_dst_root)
self.log_filepath = Path2(phlb_config.backup_path, self.backup_name, self.time_string + ".log")
self.summary_filepath = Path2(phlb_config.backup_path, self.backup_name, self.time_string + " summary.txt")
# set in set_src_filepath():
self.abs_src_filepath = None
self.sub_filepath = None
self.sub_path = None
self.filename = None
self.abs_dst_path = None
self.abs_dst_filepath = None
self.abs_dst_hash_filepath = None
|
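The path layout that PathHelper manages (see the ASCII diagram above) can be sketched with plain `pathlib` — a minimal illustration, not the real class, and all concrete path strings here are made up:

```python
from pathlib import PurePosixPath

def split_source(src):
    """Split an absolute source path into (src_prefix_path, backup_name)."""
    root = PurePosixPath(src)
    return root.parent, root.name

def build_destination(backup_path, backup_name, time_string, sub_path, filename):
    """Compose backup_path/backup_name/time_string/sub_path/filename."""
    return PurePosixPath(backup_path, backup_name, time_string, sub_path, filename)

prefix, name = split_source("/abs/source/path/source_root")
dst = build_destination("/abs/destination", name, "2016-01-01-120000",
                        "sub/path", "filename")
print(prefix)  # /abs/source/path
print(name)    # source_root
print(dst)     # /abs/destination/source_root/2016-01-01-120000/sub/path/filename
```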
jedie/PyHardLinkBackup | PyHardLinkBackup/phlb/phlb_main.py | FileBackup.fast_deduplication_backup | python | def fast_deduplication_backup(self, old_backup_entry, process_bar):
# TODO: merge code with parts from deduplication_backup()
src_path = self.dir_path.resolved_path
log.debug("*** fast deduplication backup: '%s'", src_path)
old_file_path = old_backup_entry.get_backup_path()
if not self.path_helper.abs_dst_path.is_dir():
try:
self.path_helper.abs_dst_path.makedirs(mode=phlb_config.default_new_path_mode)
except OSError as err:
raise BackupFileError("Error creating out path: %s" % err)
else:
assert not self.path_helper.abs_dst_filepath.is_file(), (
"Out file already exists: %r" % self.path_helper.abs_dst_filepath
)
with self.path_helper.abs_dst_hash_filepath.open("w") as hash_file:
try:
old_file_path.link(self.path_helper.abs_dst_filepath) # call os.link()
except OSError as err:
log.error("Can't link '%s' to '%s': %s" % (old_file_path, self.path_helper.abs_dst_filepath, err))
log.info("Mark %r with 'no link source'.", old_backup_entry)
old_backup_entry.no_link_source = True
old_backup_entry.save()
# do a normal copy backup
self.deduplication_backup(process_bar)
return
hash_hexdigest = old_backup_entry.content_info.hash_hexdigest
hash_file.write(hash_hexdigest)
file_size = self.dir_path.stat.st_size
if file_size > 0:
# tqdm will not accept 0 bytes files ;)
process_bar.update(file_size)
BackupEntry.objects.create(
backup_run=self.backup_run,
backup_entry_path=self.path_helper.abs_dst_filepath,
hash_hexdigest=hash_hexdigest,
)
if self._SIMULATE_SLOW_SPEED:
log.error("Slow down speed for tests!")
time.sleep(self._SIMULATE_SLOW_SPEED)
self.fast_backup = True # Was a fast backup used?
self.file_linked = True | We can just link an old backup entry
:param old_backup_entry: old BackupEntry model instance
:param process_bar: tqdm process bar | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/PyHardLinkBackup/phlb/phlb_main.py#L100-L154 | null | class FileBackup(object):
"""
backup one file
"""
# TODO: remove with Mock solution:
_SIMULATE_SLOW_SPEED = False # for unittests only!
def __init__(self, dir_path, path_helper, backup_run):
"""
:param dir_path: DirEntryPath() instance of the source file
:param path_helper: PathHelper(backup_root) instance
"""
self.dir_path = dir_path
self.path_helper = path_helper
self.backup_run = backup_run
self.fast_backup = None # Was a fast backup used?
self.file_linked = None # Was a hardlink used?
if self._SIMULATE_SLOW_SPEED:
log.error("Slow down speed for tests activated!")
def _deduplication_backup(self, file_entry, in_file, out_file, process_bar):
hash = hashlib.new(phlb_config.hash_name)
while True:
data = in_file.read(phlb_config.chunk_size)
if not data:
break
if self._SIMULATE_SLOW_SPEED:
log.error("Slow down speed for tests!")
time.sleep(self._SIMULATE_SLOW_SPEED)
out_file.write(data)
hash.update(data)
process_bar.update(len(data))
return hash
def deduplication_backup(self, process_bar):
"""
Backup the current file and compare the content.
:param process_bar: tqdm process bar
"""
self.fast_backup = False # Was a fast backup used?
src_path = self.dir_path.resolved_path
log.debug("*** deduplication backup: '%s'", src_path)
log.debug("abs_src_filepath: '%s'", self.path_helper.abs_src_filepath)
log.debug("abs_dst_filepath: '%s'", self.path_helper.abs_dst_filepath)
log.debug("abs_dst_hash_filepath: '%s'", self.path_helper.abs_dst_hash_filepath)
log.debug("abs_dst_dir: '%s'", self.path_helper.abs_dst_path)
if not self.path_helper.abs_dst_path.is_dir():
try:
self.path_helper.abs_dst_path.makedirs(mode=phlb_config.default_new_path_mode)
except OSError as err:
raise BackupFileError("Error creating out path: %s" % err)
else:
assert not self.path_helper.abs_dst_filepath.is_file(), (
"Out file already exists: %r" % self.path_helper.abs_dst_filepath
)
try:
try:
with self.path_helper.abs_src_filepath.open("rb") as in_file:
with self.path_helper.abs_dst_hash_filepath.open("w") as hash_file:
with self.path_helper.abs_dst_filepath.open("wb") as out_file:
hash = self._deduplication_backup(self.dir_path, in_file, out_file, process_bar)
hash_hexdigest = hash.hexdigest()
hash_file.write(hash_hexdigest)
except OSError as err:
# FIXME: Better error message
raise BackupFileError("Skip file %s error: %s" % (self.path_helper.abs_src_filepath, err))
except KeyboardInterrupt:
# Try to remove created files
try:
self.path_helper.abs_dst_filepath.unlink()
except OSError:
pass
try:
self.path_helper.abs_dst_hash_filepath.unlink()
except OSError:
pass
raise KeyboardInterrupt
old_backup_entry = deduplicate(self.path_helper.abs_dst_filepath, hash_hexdigest)
if old_backup_entry is None:
log.debug("File is unique.")
self.file_linked = False # Was a hardlink used?
else:
log.debug("File was deduplicated via hardlink to: %s" % old_backup_entry)
self.file_linked = True # Was a hardlink used?
# set the original access/modified times on the newly created backup file
atime_ns = self.dir_path.stat.st_atime_ns
mtime_ns = self.dir_path.stat.st_mtime_ns
self.path_helper.abs_dst_filepath.utime(ns=(atime_ns, mtime_ns)) # call os.utime()
log.debug("Set mtime to: %s" % mtime_ns)
BackupEntry.objects.create(
backup_run=self.backup_run,
backup_entry_path=self.path_helper.abs_dst_filepath,
hash_hexdigest=hash_hexdigest,
)
self.fast_backup = False # Was a fast backup used?
|
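fast_deduplication_backup() above hinges on one pattern: try os.link() first and fall back to a full copy when linking fails (e.g. across filesystems). A self-contained sketch of just that fallback, with made-up file names:

```python
import os
import shutil
import tempfile

def link_or_copy(src, dst):
    """Hardlink dst to src if possible; otherwise fall back to a normal copy."""
    try:
        os.link(src, dst)  # same underlying call the backup uses via Path2.link()
        return "linked"
    except OSError:
        # e.g. EXDEV: src and dst live on different filesystems
        shutil.copyfile(src, dst)
        return "copied"

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "old_backup.bin")
dst = os.path.join(tmp, "new_backup.bin")
with open(src, "wb") as f:
    f.write(b"payload")

result = link_or_copy(src, dst)
print(result)
```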
jedie/PyHardLinkBackup | PyHardLinkBackup/phlb/phlb_main.py | FileBackup.deduplication_backup | python | def deduplication_backup(self, process_bar):
self.fast_backup = False # Was a fast backup used?
src_path = self.dir_path.resolved_path
log.debug("*** deduplication backup: '%s'", src_path)
log.debug("abs_src_filepath: '%s'", self.path_helper.abs_src_filepath)
log.debug("abs_dst_filepath: '%s'", self.path_helper.abs_dst_filepath)
log.debug("abs_dst_hash_filepath: '%s'", self.path_helper.abs_dst_hash_filepath)
log.debug("abs_dst_dir: '%s'", self.path_helper.abs_dst_path)
if not self.path_helper.abs_dst_path.is_dir():
try:
self.path_helper.abs_dst_path.makedirs(mode=phlb_config.default_new_path_mode)
except OSError as err:
raise BackupFileError("Error creating out path: %s" % err)
else:
assert not self.path_helper.abs_dst_filepath.is_file(), (
"Out file already exists: %r" % self.path_helper.abs_dst_filepath
)
try:
try:
with self.path_helper.abs_src_filepath.open("rb") as in_file:
with self.path_helper.abs_dst_hash_filepath.open("w") as hash_file:
with self.path_helper.abs_dst_filepath.open("wb") as out_file:
hash = self._deduplication_backup(self.dir_path, in_file, out_file, process_bar)
hash_hexdigest = hash.hexdigest()
hash_file.write(hash_hexdigest)
except OSError as err:
# FIXME: Better error message
raise BackupFileError("Skip file %s error: %s" % (self.path_helper.abs_src_filepath, err))
except KeyboardInterrupt:
# Try to remove created files
try:
self.path_helper.abs_dst_filepath.unlink()
except OSError:
pass
try:
self.path_helper.abs_dst_hash_filepath.unlink()
except OSError:
pass
raise KeyboardInterrupt
old_backup_entry = deduplicate(self.path_helper.abs_dst_filepath, hash_hexdigest)
if old_backup_entry is None:
log.debug("File is unique.")
self.file_linked = False # Was a hardlink used?
else:
log.debug("File was deduplicated via hardlink to: %s" % old_backup_entry)
self.file_linked = True # Was a hardlink used?
# set the original access/modified times on the newly created backup file
atime_ns = self.dir_path.stat.st_atime_ns
mtime_ns = self.dir_path.stat.st_mtime_ns
self.path_helper.abs_dst_filepath.utime(ns=(atime_ns, mtime_ns)) # call os.utime()
log.debug("Set mtime to: %s" % mtime_ns)
BackupEntry.objects.create(
backup_run=self.backup_run,
backup_entry_path=self.path_helper.abs_dst_filepath,
hash_hexdigest=hash_hexdigest,
)
self.fast_backup = False | Backup the current file and compare the content.
:param process_bar: tqdm process bar | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/PyHardLinkBackup/phlb/phlb_main.py#L156-L225 | [
"def deduplicate(backup_entry, hash_hexdigest):\n abs_dst_root = Path2(phlb_config.backup_path)\n\n try:\n backup_entry.relative_to(abs_dst_root)\n except ValueError as err:\n raise ValueError(\"Backup entry not in backup root path: %s\" % err)\n\n assert backup_entry.is_file(), \"Is not a... | class FileBackup(object):
"""
backup one file
"""
# TODO: remove with Mock solution:
_SIMULATE_SLOW_SPEED = False # for unittests only!
def __init__(self, dir_path, path_helper, backup_run):
"""
:param dir_path: DirEntryPath() instance of the source file
:param path_helper: PathHelper(backup_root) instance
"""
self.dir_path = dir_path
self.path_helper = path_helper
self.backup_run = backup_run
self.fast_backup = None # Was a fast backup used?
self.file_linked = None # Was a hardlink used?
if self._SIMULATE_SLOW_SPEED:
log.error("Slow down speed for tests activated!")
def _deduplication_backup(self, file_entry, in_file, out_file, process_bar):
hash = hashlib.new(phlb_config.hash_name)
while True:
data = in_file.read(phlb_config.chunk_size)
if not data:
break
if self._SIMULATE_SLOW_SPEED:
log.error("Slow down speed for tests!")
time.sleep(self._SIMULATE_SLOW_SPEED)
out_file.write(data)
hash.update(data)
process_bar.update(len(data))
return hash
def fast_deduplication_backup(self, old_backup_entry, process_bar):
"""
We can just link an old backup entry
:param old_backup_entry: old BackupEntry model instance
:param process_bar: tqdm process bar
"""
# TODO: merge code with parts from deduplication_backup()
src_path = self.dir_path.resolved_path
log.debug("*** fast deduplication backup: '%s'", src_path)
old_file_path = old_backup_entry.get_backup_path()
if not self.path_helper.abs_dst_path.is_dir():
try:
self.path_helper.abs_dst_path.makedirs(mode=phlb_config.default_new_path_mode)
except OSError as err:
raise BackupFileError("Error creating out path: %s" % err)
else:
assert not self.path_helper.abs_dst_filepath.is_file(), (
"Out file already exists: %r" % self.path_helper.abs_dst_filepath
)
with self.path_helper.abs_dst_hash_filepath.open("w") as hash_file:
try:
old_file_path.link(self.path_helper.abs_dst_filepath) # call os.link()
except OSError as err:
log.error("Can't link '%s' to '%s': %s" % (old_file_path, self.path_helper.abs_dst_filepath, err))
log.info("Mark %r with 'no link source'.", old_backup_entry)
old_backup_entry.no_link_source = True
old_backup_entry.save()
# do a normal copy backup
self.deduplication_backup(process_bar)
return
hash_hexdigest = old_backup_entry.content_info.hash_hexdigest
hash_file.write(hash_hexdigest)
file_size = self.dir_path.stat.st_size
if file_size > 0:
# tqdm will not accept 0 bytes files ;)
process_bar.update(file_size)
BackupEntry.objects.create(
backup_run=self.backup_run,
backup_entry_path=self.path_helper.abs_dst_filepath,
hash_hexdigest=hash_hexdigest,
)
if self._SIMULATE_SLOW_SPEED:
log.error("Slow down speed for tests!")
time.sleep(self._SIMULATE_SLOW_SPEED)
self.fast_backup = True # Was a fast backup used?
self.file_linked = True # Was a hardlink used?
|
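The chunked read/hash/write loop in _deduplication_backup() is the heart of the deduplication scheme: the file is copied and hashed in a single pass. Stripped of the surrounding file handling it reduces to the sketch below (the chunk size and `sha512` are illustrative stand-ins for the `phlb_config` values):

```python
import hashlib
import io

def copy_and_hash(in_file, out_file, hash_name="sha512", chunk_size=64 * 1024):
    """Copy in_file to out_file chunk by chunk, hashing each chunk in the same pass."""
    h = hashlib.new(hash_name)
    while True:
        data = in_file.read(chunk_size)
        if not data:
            break
        out_file.write(data)
        h.update(data)
    return h.hexdigest()

payload = b"some file content " * 1000
dst = io.BytesIO()
digest = copy_and_hash(io.BytesIO(payload), dst)
print(digest[:16], len(dst.getvalue()))
```

The resulting hexdigest is what gets written to the sidecar hash file and stored in the BackupEntry record, so identical content always yields the same lookup key.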
jedie/PyHardLinkBackup | PyHardLinkBackup/phlb/phlb_main.py | HardLinkBackup.fast_compare | python | def fast_compare(self, dir_path):
if self.latest_backup is None:
# No old backup run was found
return
if self.latest_mtime_ns is None:
# No timestamp from old backup run was found
return
# There was a completed old backup run
# Check if we can do a 'fast compare'
mtime_ns = dir_path.stat.st_mtime_ns
if mtime_ns > self.latest_mtime_ns:
# The current source file is newer than
# the latest file from last completed backup
log.info("Fast compare: source file is newer than the latest backed-up file.")
return
# Look into database and compare mtime and size
try:
old_backup_entry = BackupEntry.objects.get(
backup_run=self.latest_backup,
directory__directory=self.path_helper.sub_path,
filename__filename=self.path_helper.filename,
no_link_source=False,
)
except BackupEntry.DoesNotExist:
log.debug("No old backup entry found")
return
content_info = old_backup_entry.content_info
file_size = content_info.file_size
if file_size != dir_path.stat.st_size:
log.info("Fast compare: File size is different: %i != %i" % (file_size, dir_path.stat.st_size))
return
old_backup_filepath = old_backup_entry.get_backup_path()
try:
old_file_mtime_ns = old_backup_filepath.stat().st_mtime_ns
except FileNotFoundError as err:
log.error("Old backup file not found: %s" % err)
old_backup_entry.no_link_source = True
old_backup_entry.save()
return
if old_file_mtime_ns != old_backup_entry.file_mtime_ns:
log.error("ERROR: mtime from database is different to the file!")
log.error(" * File: %s" % old_backup_filepath)
log.error(" * Database mtime: %s" % old_backup_entry.file_mtime_ns)
log.error(" * File mtime: %s" % old_file_mtime_ns)
if old_file_mtime_ns != dir_path.stat.st_mtime_ns:
log.info("Fast compare mtime is different between:")
log.info(" * %s" % old_backup_entry)
log.info(" * %s" % dir_path)
log.info(" * mtime: %i != %i" % (old_file_mtime_ns, dir_path.stat.st_mtime_ns))
return
# We found an old entry with the same size and mtime
return old_backup_entry | :param dir_path: filesystem_walk.DirEntryPath() instance | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/PyHardLinkBackup/phlb/phlb_main.py#L390-L454 | null | class HardLinkBackup(object):
def __init__(self, path_helper, summary):
"""
:param path_helper: PathHelper() instance for this backup run
:param summary: summary collector (callable) for human-readable output
"""
self.start_time = default_timer()
self.path_helper = path_helper
self.summary = summary
self.duration = 0
self.total_file_link_count = 0
self.total_stined_bytes = 0
self.total_new_file_count = 0
self.total_new_bytes = 0
self.total_errored_items = 0
self.total_fast_backup = 0
old_backups = BackupRun.objects.filter(name=self.path_helper.backup_name)
self.summary("%r was backed up %i time(s)" % (self.path_helper.backup_name, old_backups.count()))
old_backups = old_backups.filter(completed=True)
completed_count = old_backups.count()
self.summary("There are %i completed backups." % completed_count)
self.latest_backup = None
self.latest_mtime_ns = None
try:
self.latest_backup = old_backups.latest()
except BackupRun.DoesNotExist:
self.summary("No old backup found with name %r" % self.path_helper.backup_name)
else:
latest_backup_datetime = self.latest_backup.backup_datetime
self.summary("Latest backup from:", dt2naturaltimesince(latest_backup_datetime))
backup_entries = BackupEntry.objects.filter(backup_run=self.latest_backup)
try:
latest_entry = backup_entries.latest()
except BackupEntry.DoesNotExist:
log.warn("Latest backup run contains no files?!?")
else:
self.latest_mtime_ns = latest_entry.file_mtime_ns
self.summary("Latest backup entry modified time: %s" % ns2naturaltimesince(self.latest_mtime_ns))
self.summary("Backup to: '%s'" % self.path_helper.abs_dst_root)
self.path_helper.abs_dst_root.makedirs( # call os.makedirs()
mode=phlb_config.default_new_path_mode, exist_ok=True
)
if not self.path_helper.abs_dst_root.is_dir():
raise NotADirectoryError("Backup path '%s' doesn't exists!" % self.path_helper.abs_dst_root)
self.backup_run = BackupRun.objects.create(
name=self.path_helper.backup_name, backup_datetime=self.path_helper.backup_datetime, completed=False
)
log.debug(" * backup_run: %s" % self.backup_run)
def backup(self):
# make temp file available in destination via link ;)
temp_log_path = Path2(settings.LOG_FILEPATH)
assert temp_log_path.is_file(), "%s doesn't exist?!?" % settings.LOG_FILEPATH
try:
temp_log_path.link(self.path_helper.log_filepath) # call os.link()
except OSError as err:
# e.g.:
# temp is on another drive than the destination
log.error("Can't link log file: %s" % err)
copy_log = True
else:
copy_log = False
try:
self._backup()
finally:
if copy_log:
log.warn("copy log file from '%s' to '%s'" % (settings.LOG_FILEPATH, self.path_helper.log_filepath))
temp_log_path.copyfile(self.path_helper.log_filepath) # call shutil.copyfile()
self.backup_run.completed = True
self.backup_run.save()
def _evaluate_skip_pattern_info(self, skip_pattern_info, name):
if not skip_pattern_info.has_hits():
self.summary("%s doesn't match on any dir entry." % name)
else:
self.summary("%s match information:" % name)
for line in skip_pattern_info.long_info():
log.info(line)
for line in skip_pattern_info.short_info():
self.summary("%s\n" % line)
def _scandir(self, path):
start_time = default_timer()
self.summary("\nScan '%s'...\n" % path)
skip_pattern_info = SkipPatternInformation()
skip_dirs = phlb_config.skip_dirs # TODO: add tests for it!
self.summary("Scan filesystem with SKIP_DIRS: %s" % repr(skip_dirs))
tqdm_iterator = tqdm(
scandir_walk(path.path, skip_dirs, on_skip=skip_pattern_info), unit=" dir entries", leave=True
)
dir_entries = [entry for entry in tqdm_iterator]
self.summary("\n * %i dir entries" % len(dir_entries))
self._evaluate_skip_pattern_info(skip_pattern_info, name="SKIP_DIRS")
self.total_size = 0
self.file_count = 0
filtered_dir_entries = []
skip_patterns = phlb_config.skip_patterns # TODO: add tests for it!
self.summary("Filter with SKIP_PATTERNS: %s" % repr(skip_patterns))
skip_pattern_info = SkipPatternInformation()
tqdm_iterator = tqdm(
iter_filtered_dir_entry(dir_entries, skip_patterns, on_skip=skip_pattern_info),
total=len(dir_entries),
unit=" dir entries",
leave=True,
)
for entry in tqdm_iterator:
if entry is None:
# filtered out by skip_patterns
continue
if entry.is_file:
filtered_dir_entries.append(entry)
self.file_count += 1
self.total_size += entry.stat.st_size
self.summary("\n * %i filtered dir entries" % len(filtered_dir_entries))
self._evaluate_skip_pattern_info(skip_pattern_info, name="SKIP_PATTERNS")
self.summary("\nscan/filter source directory in %s\n" % (human_time(default_timer() - start_time)))
return filtered_dir_entries
def _backup_dir_item(self, dir_path, process_bar):
"""
Backup one dir item
:param dir_path: filesystem_walk.DirEntryPath() instance
"""
self.path_helper.set_src_filepath(dir_path)
if self.path_helper.abs_src_filepath is None:
self.total_errored_items += 1
log.info("Can't backup %r", dir_path)
# self.summary(no, dir_path.stat.st_mtime, end=" ")
if dir_path.is_symlink:
self.summary("TODO Symlink: %s" % dir_path)
return
if dir_path.resolve_error is not None:
self.summary("TODO resolve error: %s" % dir_path.resolve_error)
pprint_path(dir_path)
return
if dir_path.different_path:
self.summary("TODO different path:")
pprint_path(dir_path)
return
if dir_path.is_dir:
self.summary("TODO dir: %s" % dir_path)
elif dir_path.is_file:
# self.summary("Normal file: %s", dir_path)
file_backup = FileBackup(dir_path, self.path_helper, self.backup_run)
old_backup_entry = self.fast_compare(dir_path)
if old_backup_entry is not None:
# We can just link the file from an old backup
file_backup.fast_deduplication_backup(old_backup_entry, process_bar)
else:
file_backup.deduplication_backup(process_bar)
assert file_backup.fast_backup is not None, dir_path.path
assert file_backup.file_linked is not None, dir_path.path
file_size = dir_path.stat.st_size
if file_backup.file_linked:
# os.link() was used
self.total_file_link_count += 1
self.total_stined_bytes += file_size
else:
self.total_new_file_count += 1
self.total_new_bytes += file_size
if file_backup.fast_backup:
self.total_fast_backup += 1
else:
self.summary("TODO: %s" % dir_path)
pprint_path(dir_path)
def _backup(self):
dir_entries = self._scandir(self.path_helper.abs_src_root)
msg = "%s in %i files to backup." % (human_filesize(self.total_size), self.file_count)
self.summary(msg)
log.info(msg)
next_update_print = default_timer() + phlb_config.print_update_interval
path_iterator = enumerate(
sorted(
dir_entries,
key=lambda x: x.stat.st_mtime, # sort by last modification time
reverse=True, # sort from newest to oldest
)
)
with tqdm(total=self.total_size, unit="B", unit_scale=True) as process_bar:
for no, dir_path in path_iterator:
try:
self._backup_dir_item(dir_path, process_bar)
except BackupFileError as err:
# A known error with a good error message occurred,
# e.g: PermissionError to read source file.
log.error(err)
self.total_errored_items += 1
except Exception as err:
# An unexpected error occurred.
# Print and add traceback to summary
log.error("Can't backup %s: %s" % (dir_path, err))
self.summary.handle_low_level_error()
self.total_errored_items += 1
if default_timer() > next_update_print:
self.print_update()
next_update_print = default_timer() + phlb_config.print_update_interval
self.duration = default_timer() - self.start_time
def print_update(self):
"""
print some status information in between.
"""
print("\r\n")
now = datetime.datetime.now()
print("Update info: (from: %s)" % now.strftime("%c"))
current_total_size = self.total_stined_bytes + self.total_new_bytes
if self.total_errored_items:
print(" * WARNING: %i omitted files!" % self.total_errored_items)
print(" * fast backup: %i files" % self.total_fast_backup)
print(
" * new content saved: %i files (%s %.1f%%)"
% (
self.total_new_file_count,
human_filesize(self.total_new_bytes),
to_percent(self.total_new_bytes, current_total_size),
)
)
print(
" * space saved via hardlinks: %i files (%s %.1f%%)"
% (
self.total_file_link_count,
human_filesize(self.total_stined_bytes),
to_percent(self.total_stined_bytes, current_total_size),
)
)
duration = default_timer() - self.start_time
performance = current_total_size / duration / 1024.0 / 1024.0
print(" * current performance: %.1fMB/s\n" % performance)
def get_summary(self):
summary = ["Backup done:"]
summary.append(" * Files to backup: %i files" % self.file_count)
if self.total_errored_items:
summary.append(" * WARNING: %i omitted files!" % self.total_errored_items)
summary.append(" * Source file sizes: %s" % human_filesize(self.total_size))
summary.append(" * fast backup: %i files" % self.total_fast_backup)
summary.append(
" * new content saved: %i files (%s %.1f%%)"
% (
self.total_new_file_count,
human_filesize(self.total_new_bytes),
to_percent(self.total_new_bytes, self.total_size),
)
)
summary.append(
" * space saved via hardlinks: %i files (%s %.1f%%)"
% (
self.total_file_link_count,
human_filesize(self.total_stined_bytes),
to_percent(self.total_stined_bytes, self.total_size),
)
)
if self.duration:
performance = self.total_size / self.duration / 1024.0 / 1024.0
else:
performance = 0
summary.append(" * duration: %s %.1fMB/s\n" % (human_time(self.duration), performance))
return summary
def print_summary(self):
self.summary("\n%s\n" % "\n".join(self.get_summary()))
|
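fast_compare() above reduces to a cheap predicate: the old backup entry may be reused only when file size and mtime both match the current source file. A standalone sketch, with the database entry replaced by plain values:

```python
def can_reuse(src_size, src_mtime_ns, old_size, old_mtime_ns):
    """True if the old backup entry matches the source file by size and mtime."""
    if old_size != src_size:
        return False  # content length changed -> full backup needed
    if old_mtime_ns != src_mtime_ns:
        return False  # file was touched since the recorded backup
    return True

print(can_reuse(1024, 1_500_000_000_000, 1024, 1_500_000_000_000))  # True
print(can_reuse(1024, 1_500_000_000_000, 2048, 1_500_000_000_000))  # False
```

Only when this check succeeds does the run take the hardlink shortcut; any mismatch falls back to the full hash-and-copy path.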
jedie/PyHardLinkBackup | PyHardLinkBackup/phlb/phlb_main.py | HardLinkBackup._backup_dir_item | python | def _backup_dir_item(self, dir_path, process_bar):
self.path_helper.set_src_filepath(dir_path)
if self.path_helper.abs_src_filepath is None:
self.total_errored_items += 1
log.info("Can't backup %r", dir_path)
# self.summary(no, dir_path.stat.st_mtime, end=" ")
if dir_path.is_symlink:
self.summary("TODO Symlink: %s" % dir_path)
return
if dir_path.resolve_error is not None:
self.summary("TODO resolve error: %s" % dir_path.resolve_error)
pprint_path(dir_path)
return
if dir_path.different_path:
self.summary("TODO different path:")
pprint_path(dir_path)
return
if dir_path.is_dir:
self.summary("TODO dir: %s" % dir_path)
elif dir_path.is_file:
# self.summary("Normal file: %s", dir_path)
file_backup = FileBackup(dir_path, self.path_helper, self.backup_run)
old_backup_entry = self.fast_compare(dir_path)
if old_backup_entry is not None:
# We can just link the file from an old backup
file_backup.fast_deduplication_backup(old_backup_entry, process_bar)
else:
file_backup.deduplication_backup(process_bar)
assert file_backup.fast_backup is not None, dir_path.path
assert file_backup.file_linked is not None, dir_path.path
file_size = dir_path.stat.st_size
if file_backup.file_linked:
# os.link() was used
self.total_file_link_count += 1
self.total_stined_bytes += file_size
else:
self.total_new_file_count += 1
self.total_new_bytes += file_size
if file_backup.fast_backup:
self.total_fast_backup += 1
else:
self.summary("TODO: %s" % dir_path)
pprint_path(dir_path) | Backup one dir item
:param dir_path: filesystem_walk.DirEntryPath() instance | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/PyHardLinkBackup/phlb/phlb_main.py#L456-L511 | null | class HardLinkBackup(object):
def __init__(self, path_helper, summary):
"""
:param path_helper: PathHelper() instance for this backup run
:param summary: summary collector (callable) for human-readable output
"""
self.start_time = default_timer()
self.path_helper = path_helper
self.summary = summary
self.duration = 0
self.total_file_link_count = 0
self.total_stined_bytes = 0
self.total_new_file_count = 0
self.total_new_bytes = 0
self.total_errored_items = 0
self.total_fast_backup = 0
old_backups = BackupRun.objects.filter(name=self.path_helper.backup_name)
self.summary("%r was backed up %i time(s)" % (self.path_helper.backup_name, old_backups.count()))
old_backups = old_backups.filter(completed=True)
completed_count = old_backups.count()
self.summary("There are %i completed backups." % completed_count)
self.latest_backup = None
self.latest_mtime_ns = None
try:
self.latest_backup = old_backups.latest()
except BackupRun.DoesNotExist:
self.summary("No old backup found with name %r" % self.path_helper.backup_name)
else:
latest_backup_datetime = self.latest_backup.backup_datetime
self.summary("Latest backup from:", dt2naturaltimesince(latest_backup_datetime))
backup_entries = BackupEntry.objects.filter(backup_run=self.latest_backup)
try:
latest_entry = backup_entries.latest()
except BackupEntry.DoesNotExist:
log.warn("Latest backup run contains no files?!?")
else:
self.latest_mtime_ns = latest_entry.file_mtime_ns
self.summary("Latest backup entry modified time: %s" % ns2naturaltimesince(self.latest_mtime_ns))
self.summary("Backup to: '%s'" % self.path_helper.abs_dst_root)
self.path_helper.abs_dst_root.makedirs( # call os.makedirs()
mode=phlb_config.default_new_path_mode, exist_ok=True
)
if not self.path_helper.abs_dst_root.is_dir():
raise NotADirectoryError("Backup path '%s' doesn't exists!" % self.path_helper.abs_dst_root)
self.backup_run = BackupRun.objects.create(
name=self.path_helper.backup_name, backup_datetime=self.path_helper.backup_datetime, completed=False
)
log.debug(" * backup_run: %s" % self.backup_run)
def backup(self):
# make temp file available in destination via link ;)
temp_log_path = Path2(settings.LOG_FILEPATH)
assert temp_log_path.is_file(), "%s doesn't exist?!?" % settings.LOG_FILEPATH
try:
temp_log_path.link(self.path_helper.log_filepath) # call os.link()
except OSError as err:
# e.g.:
# temp is on another drive than the destination
log.error("Can't link log file: %s" % err)
copy_log = True
else:
copy_log = False
try:
self._backup()
finally:
if copy_log:
log.warn("copy log file from '%s' to '%s'" % (settings.LOG_FILEPATH, self.path_helper.log_filepath))
temp_log_path.copyfile(self.path_helper.log_filepath) # call shutil.copyfile()
self.backup_run.completed = True
self.backup_run.save()
def _evaluate_skip_pattern_info(self, skip_pattern_info, name):
if not skip_pattern_info.has_hits():
self.summary("%s doesn't match on any dir entry." % name)
else:
self.summary("%s match information:" % name)
for line in skip_pattern_info.long_info():
log.info(line)
for line in skip_pattern_info.short_info():
self.summary("%s\n" % line)
def _scandir(self, path):
start_time = default_timer()
self.summary("\nScan '%s'...\n" % path)
skip_pattern_info = SkipPatternInformation()
skip_dirs = phlb_config.skip_dirs # TODO: add tests for it!
self.summary("Scan filesystem with SKIP_DIRS: %s" % repr(skip_dirs))
tqdm_iterator = tqdm(
scandir_walk(path.path, skip_dirs, on_skip=skip_pattern_info), unit=" dir entries", leave=True
)
dir_entries = [entry for entry in tqdm_iterator]
self.summary("\n * %i dir entries" % len(dir_entries))
self._evaluate_skip_pattern_info(skip_pattern_info, name="SKIP_DIRS")
self.total_size = 0
self.file_count = 0
filtered_dir_entries = []
skip_patterns = phlb_config.skip_patterns # TODO: add tests for it!
self.summary("Filter with SKIP_PATTERNS: %s" % repr(skip_patterns))
skip_pattern_info = SkipPatternInformation()
tqdm_iterator = tqdm(
iter_filtered_dir_entry(dir_entries, skip_patterns, on_skip=skip_pattern_info),
total=len(dir_entries),
unit=" dir entries",
leave=True,
)
for entry in tqdm_iterator:
if entry is None:
# filtered out by skip_patterns
continue
if entry.is_file:
filtered_dir_entries.append(entry)
self.file_count += 1
self.total_size += entry.stat.st_size
self.summary("\n * %i filtered dir entries" % len(filtered_dir_entries))
self._evaluate_skip_pattern_info(skip_pattern_info, name="SKIP_PATTERNS")
self.summary("\nscan/filter source directory in %s\n" % (human_time(default_timer() - start_time)))
return filtered_dir_entries
def fast_compare(self, dir_path):
"""
:param dir_path: filesystem_walk.DirEntryPath() instance
"""
if self.latest_backup is None:
# No old backup run was found
return
if self.latest_mtime_ns is None:
# No timestamp from old backup run was found
return
# There was a completed old backup run
# Check if we can do a 'fast compare'
mtime_ns = dir_path.stat.st_mtime_ns
if mtime_ns > self.latest_mtime_ns:
# The current source file is newer than
# the latest file from last completed backup
log.info("Fast compare: source file is newer than the latest backed-up file.")
return
# Look into database and compare mtime and size
try:
old_backup_entry = BackupEntry.objects.get(
backup_run=self.latest_backup,
directory__directory=self.path_helper.sub_path,
filename__filename=self.path_helper.filename,
no_link_source=False,
)
except BackupEntry.DoesNotExist:
log.debug("No old backup entry found")
return
content_info = old_backup_entry.content_info
file_size = content_info.file_size
if file_size != dir_path.stat.st_size:
log.info("Fast compare: File size is different: %i != %i" % (file_size, dir_path.stat.st_size))
return
old_backup_filepath = old_backup_entry.get_backup_path()
try:
old_file_mtime_ns = old_backup_filepath.stat().st_mtime_ns
except FileNotFoundError as err:
log.error("Old backup file not found: %s" % err)
old_backup_entry.no_link_source = True
old_backup_entry.save()
return
if old_file_mtime_ns != old_backup_entry.file_mtime_ns:
log.error("ERROR: mtime from database is different to the file!")
log.error(" * File: %s" % old_backup_filepath)
log.error(" * Database mtime: %s" % old_backup_entry.file_mtime_ns)
log.error(" * File mtime: %s" % old_file_mtime_ns)
if old_file_mtime_ns != dir_path.stat.st_mtime_ns:
log.info("Fast compare mtime is different between:")
log.info(" * %s" % old_backup_entry)
log.info(" * %s" % dir_path)
log.info(" * mtime: %i != %i" % (old_file_mtime_ns, dir_path.stat.st_mtime_ns))
return
# We found an old entry with the same size and mtime
return old_backup_entry
def _backup(self):
dir_entries = self._scandir(self.path_helper.abs_src_root)
msg = "%s in %i files to backup." % (human_filesize(self.total_size), self.file_count)
self.summary(msg)
log.info(msg)
next_update_print = default_timer() + phlb_config.print_update_interval
path_iterator = enumerate(
sorted(
dir_entries,
key=lambda x: x.stat.st_mtime, # sort by last modification time
reverse=True, # sort from newest to oldest
)
)
with tqdm(total=self.total_size, unit="B", unit_scale=True) as process_bar:
for no, dir_path in path_iterator:
try:
self._backup_dir_item(dir_path, process_bar)
except BackupFileError as err:
# A known error with a good error message occurred,
# e.g: PermissionError to read source file.
log.error(err)
self.total_errored_items += 1
except Exception as err:
# An unexpected error occurred.
# Print and add traceback to summary
log.error("Can't backup %s: %s" % (dir_path, err))
self.summary.handle_low_level_error()
self.total_errored_items += 1
if default_timer() > next_update_print:
self.print_update()
next_update_print = default_timer() + phlb_config.print_update_interval
self.duration = default_timer() - self.start_time
def print_update(self):
"""
print some status information in between.
"""
print("\r\n")
now = datetime.datetime.now()
print("Update info: (from: %s)" % now.strftime("%c"))
current_total_size = self.total_stined_bytes + self.total_new_bytes
if self.total_errored_items:
print(" * WARNING: %i omitted files!" % self.total_errored_items)
print(" * fast backup: %i files" % self.total_fast_backup)
print(
" * new content saved: %i files (%s %.1f%%)"
% (
self.total_new_file_count,
human_filesize(self.total_new_bytes),
to_percent(self.total_new_bytes, current_total_size),
)
)
print(
" * stint space via hardlinks: %i files (%s %.1f%%)"
% (
self.total_file_link_count,
human_filesize(self.total_stined_bytes),
to_percent(self.total_stined_bytes, current_total_size),
)
)
duration = default_timer() - self.start_time
performance = current_total_size / duration / 1024.0 / 1024.0
print(" * present performance: %.1fMB/s\n" % performance)
def get_summary(self):
summary = ["Backup done:"]
summary.append(" * Files to backup: %i files" % self.file_count)
if self.total_errored_items:
summary.append(" * WARNING: %i omitted files!" % self.total_errored_items)
summary.append(" * Source file sizes: %s" % human_filesize(self.total_size))
summary.append(" * fast backup: %i files" % self.total_fast_backup)
summary.append(
" * new content saved: %i files (%s %.1f%%)"
% (
self.total_new_file_count,
human_filesize(self.total_new_bytes),
to_percent(self.total_new_bytes, self.total_size),
)
)
summary.append(
" * stint space via hardlinks: %i files (%s %.1f%%)"
% (
self.total_file_link_count,
human_filesize(self.total_stined_bytes),
to_percent(self.total_stined_bytes, self.total_size),
)
)
if self.duration:
performance = self.total_size / self.duration / 1024.0 / 1024.0
else:
performance = 0
summary.append(" * duration: %s %.1fMB/s\n" % (human_time(self.duration), performance))
return summary
def print_summary(self):
self.summary("\n%s\n" % "\n".join(self.get_summary()))
|
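The `fast_compare()` tail at the top of this scope only accepts an old backup entry when both the file size and the nanosecond mtime match; any mismatch forces a full deduplication backup. The decision reduces to this check (illustrative names, not the project's API):

```python
def is_unchanged(src_size, src_mtime_ns, old_size, old_mtime_ns):
    """Fast compare: unchanged only when size AND nanosecond mtime match."""
    if src_size != old_size:
        return False  # content length changed -> cannot reuse old entry
    if src_mtime_ns != old_mtime_ns:
        return False  # modification time changed -> cannot reuse old entry
    return True  # same size and mtime: safe to hardlink against the old backup
```

Comparing `st_mtime_ns` rather than the float `st_mtime` avoids the sub-second rounding that would otherwise make the database comparison unreliable.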
jedie/PyHardLinkBackup | PyHardLinkBackup/phlb/phlb_main.py | HardLinkBackup.print_update | python | def print_update(self):
print("\r\n")
now = datetime.datetime.now()
print("Update info: (from: %s)" % now.strftime("%c"))
current_total_size = self.total_stined_bytes + self.total_new_bytes
if self.total_errored_items:
print(" * WARNING: %i omitted files!" % self.total_errored_items)
print(" * fast backup: %i files" % self.total_fast_backup)
print(
" * new content saved: %i files (%s %.1f%%)"
% (
self.total_new_file_count,
human_filesize(self.total_new_bytes),
to_percent(self.total_new_bytes, current_total_size),
)
)
print(
" * stint space via hardlinks: %i files (%s %.1f%%)"
% (
self.total_file_link_count,
human_filesize(self.total_stined_bytes),
to_percent(self.total_stined_bytes, current_total_size),
)
)
duration = default_timer() - self.start_time
performance = current_total_size / duration / 1024.0 / 1024.0
print(" * present performance: %.1fMB/s\n" % performance) | print some status information in between. | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/PyHardLinkBackup/phlb/phlb_main.py#L551-L586 | [
"def human_filesize(i):\n \"\"\"\n 'human-readable' file size (i.e. 13 KB, 4.1 MB, 102 bytes, etc).\n \"\"\"\n bytes = float(i)\n if bytes < 1024:\n return u\"%d Byte%s\" % (bytes, bytes != 1 and u\"s\" or u\"\")\n if bytes < 1024 * 1024:\n return u\"%.1f KB\" % (bytes / 1024)\n i... | class HardLinkBackup(object):
def __init__(self, path_helper, summary):
"""
:param src_path: Path2() instance of the source directory
:param force_name: Force this name for the backup
"""
self.start_time = default_timer()
self.path_helper = path_helper
self.summary = summary
self.duration = 0
self.total_file_link_count = 0
self.total_stined_bytes = 0
self.total_new_file_count = 0
self.total_new_bytes = 0
self.total_errored_items = 0
self.total_fast_backup = 0
old_backups = BackupRun.objects.filter(name=self.path_helper.backup_name)
self.summary("%r was backed up %i time(s)" % (self.path_helper.backup_name, old_backups.count()))
old_backups = old_backups.filter(completed=True)
completed_count = old_backups.count()
self.summary("There are %i completed backups." % completed_count)
self.latest_backup = None
self.latest_mtime_ns = None
try:
self.latest_backup = old_backups.latest()
except BackupRun.DoesNotExist:
self.summary("No old backup found with name %r" % self.path_helper.backup_name)
else:
latest_backup_datetime = self.latest_backup.backup_datetime
self.summary("Latest backup from:", dt2naturaltimesince(latest_backup_datetime))
backup_entries = BackupEntry.objects.filter(backup_run=self.latest_backup)
try:
latest_entry = backup_entries.latest()
except BackupEntry.DoesNotExist:
log.warn("Latest backup run contains no files?!?")
else:
self.latest_mtime_ns = latest_entry.file_mtime_ns
self.summary("Latest backup entry modified time: %s" % ns2naturaltimesince(self.latest_mtime_ns))
self.summary("Backup to: '%s'" % self.path_helper.abs_dst_root)
self.path_helper.abs_dst_root.makedirs( # call os.makedirs()
mode=phlb_config.default_new_path_mode, exist_ok=True
)
if not self.path_helper.abs_dst_root.is_dir():
raise NotADirectoryError("Backup path '%s' doesn't exists!" % self.path_helper.abs_dst_root)
self.backup_run = BackupRun.objects.create(
name=self.path_helper.backup_name, backup_datetime=self.path_helper.backup_datetime, completed=False
)
log.debug(" * backup_run: %s" % self.backup_run)
def backup(self):
# make temp file available in destination via link ;)
temp_log_path = Path2(settings.LOG_FILEPATH)
assert temp_log_path.is_file(), "%s doesn't exists?!?" % settings.LOG_FILEPATH
try:
temp_log_path.link(self.path_helper.log_filepath) # call os.link()
except OSError as err:
# e.g.:
# temp is on another drive than the destination
log.error("Can't link log file: %s" % err)
copy_log = True
else:
copy_log = False
try:
self._backup()
finally:
if copy_log:
log.warn("copy log file from '%s' to '%s'" % (settings.LOG_FILEPATH, self.path_helper.log_filepath))
temp_log_path.copyfile(self.path_helper.log_filepath) # call shutil.copyfile()
self.backup_run.completed = True
self.backup_run.save()
def _evaluate_skip_pattern_info(self, skip_pattern_info, name):
if not skip_pattern_info.has_hits():
self.summary("%s doesn't match on any dir entry." % name)
else:
self.summary("%s match information:" % name)
for line in skip_pattern_info.long_info():
log.info(line)
for line in skip_pattern_info.short_info():
self.summary("%s\n" % line)
def _scandir(self, path):
start_time = default_timer()
self.summary("\nScan '%s'...\n" % path)
skip_pattern_info = SkipPatternInformation()
skip_dirs = phlb_config.skip_dirs # TODO: add tests for it!
self.summary("Scan filesystem with SKIP_DIRS: %s" % repr(skip_dirs))
tqdm_iterator = tqdm(
scandir_walk(path.path, skip_dirs, on_skip=skip_pattern_info), unit=" dir entries", leave=True
)
dir_entries = [entry for entry in tqdm_iterator]
self.summary("\n * %i dir entries" % len(dir_entries))
self._evaluate_skip_pattern_info(skip_pattern_info, name="SKIP_DIRS")
self.total_size = 0
self.file_count = 0
filtered_dir_entries = []
skip_patterns = phlb_config.skip_patterns # TODO: add tests for it!
self.summary("Filter with SKIP_PATTERNS: %s" % repr(skip_patterns))
skip_pattern_info = SkipPatternInformation()
tqdm_iterator = tqdm(
iter_filtered_dir_entry(dir_entries, skip_patterns, on_skip=skip_pattern_info),
total=len(dir_entries),
unit=" dir entries",
leave=True,
)
for entry in tqdm_iterator:
if entry is None:
# filtered out by skip_patterns
continue
if entry.is_file:
filtered_dir_entries.append(entry)
self.file_count += 1
self.total_size += entry.stat.st_size
self.summary("\n * %i filtered dir entries" % len(filtered_dir_entries))
self._evaluate_skip_pattern_info(skip_pattern_info, name="SKIP_PATTERNS")
self.summary("\nscan/filter source directory in %s\n" % (human_time(default_timer() - start_time)))
return filtered_dir_entries
def fast_compare(self, dir_path):
"""
:param dir_path: filesystem_walk.DirEntryPath() instance
"""
if self.latest_backup is None:
# No old backup run was found
return
if self.latest_mtime_ns is None:
# No timestamp from old backup run was found
return
# There was a completed old backup run
# Check if we can make a 'fast compare'
mtime_ns = dir_path.stat.st_mtime_ns
if mtime_ns > self.latest_mtime_ns:
# The current source file is newer than
# the latest file from last completed backup
log.info("Fast compare: source file is newer than latest backed-up file.")
return
# Look into database and compare mtime and size
try:
old_backup_entry = BackupEntry.objects.get(
backup_run=self.latest_backup,
directory__directory=self.path_helper.sub_path,
filename__filename=self.path_helper.filename,
no_link_source=False,
)
except BackupEntry.DoesNotExist:
log.debug("No old backup entry found")
return
content_info = old_backup_entry.content_info
file_size = content_info.file_size
if file_size != dir_path.stat.st_size:
log.info("Fast compare: File size is different: %i != %i" % (file_size, dir_path.stat.st_size))
return
old_backup_filepath = old_backup_entry.get_backup_path()
try:
old_file_mtime_ns = old_backup_filepath.stat().st_mtime_ns
except FileNotFoundError as err:
log.error("Old backup file not found: %s" % err)
old_backup_entry.no_link_source = True
old_backup_entry.save()
return
if old_file_mtime_ns != old_backup_entry.file_mtime_ns:
log.error("ERROR: mtime from database is different to the file!")
log.error(" * File: %s" % old_backup_filepath)
log.error(" * Database mtime: %s" % old_backup_entry.file_mtime_ns)
log.error(" * File mtime: %s" % old_file_mtime_ns)
if old_file_mtime_ns != dir_path.stat.st_mtime_ns:
log.info("Fast compare mtime is different between:")
log.info(" * %s" % old_backup_entry)
log.info(" * %s" % dir_path)
log.info(" * mtime: %i != %i" % (old_file_mtime_ns, dir_path.stat.st_mtime_ns))
return
# We found an old entry with the same size and mtime
return old_backup_entry
def _backup_dir_item(self, dir_path, process_bar):
"""
Backup one dir item
:param dir_path: filesystem_walk.DirEntryPath() instance
"""
self.path_helper.set_src_filepath(dir_path)
if self.path_helper.abs_src_filepath is None:
self.total_errored_items += 1
log.info("Can't backup %r", dir_path)
# self.summary(no, dir_path.stat.st_mtime, end=" ")
if dir_path.is_symlink:
self.summary("TODO Symlink: %s" % dir_path)
return
if dir_path.resolve_error is not None:
self.summary("TODO resolve error: %s" % dir_path.resolve_error)
pprint_path(dir_path)
return
if dir_path.different_path:
self.summary("TODO different path:")
pprint_path(dir_path)
return
if dir_path.is_dir:
self.summary("TODO dir: %s" % dir_path)
elif dir_path.is_file:
# self.summary("Normal file: %s", dir_path)
file_backup = FileBackup(dir_path, self.path_helper, self.backup_run)
old_backup_entry = self.fast_compare(dir_path)
if old_backup_entry is not None:
# We can just link the file from a old backup
file_backup.fast_deduplication_backup(old_backup_entry, process_bar)
else:
file_backup.deduplication_backup(process_bar)
assert file_backup.fast_backup is not None, dir_path.path
assert file_backup.file_linked is not None, dir_path.path
file_size = dir_path.stat.st_size
if file_backup.file_linked:
# os.link() was used
self.total_file_link_count += 1
self.total_stined_bytes += file_size
else:
self.total_new_file_count += 1
self.total_new_bytes += file_size
if file_backup.fast_backup:
self.total_fast_backup += 1
else:
self.summary("TODO: %s" % dir_path)
pprint_path(dir_path)
def _backup(self):
dir_entries = self._scandir(self.path_helper.abs_src_root)
msg = "%s in %i files to backup." % (human_filesize(self.total_size), self.file_count)
self.summary(msg)
log.info(msg)
next_update_print = default_timer() + phlb_config.print_update_interval
path_iterator = enumerate(
sorted(
dir_entries,
key=lambda x: x.stat.st_mtime, # sort by last modified time
reverse=True, # sort from newest to oldest
)
)
with tqdm(total=self.total_size, unit="B", unit_scale=True) as process_bar:
for no, dir_path in path_iterator:
try:
self._backup_dir_item(dir_path, process_bar)
except BackupFileError as err:
# A known error with a good error message occurred,
# e.g: PermissionError to read source file.
log.error(err)
self.total_errored_items += 1
except Exception as err:
# An unexpected error occurred.
# Print and add traceback to summary
log.error("Can't backup %s: %s" % (dir_path, err))
self.summary.handle_low_level_error()
self.total_errored_items += 1
if default_timer() > next_update_print:
self.print_update()
next_update_print = default_timer() + phlb_config.print_update_interval
self.duration = default_timer() - self.start_time
def get_summary(self):
summary = ["Backup done:"]
summary.append(" * Files to backup: %i files" % self.file_count)
if self.total_errored_items:
summary.append(" * WARNING: %i omitted files!" % self.total_errored_items)
summary.append(" * Source file sizes: %s" % human_filesize(self.total_size))
summary.append(" * fast backup: %i files" % self.total_fast_backup)
summary.append(
" * new content saved: %i files (%s %.1f%%)"
% (
self.total_new_file_count,
human_filesize(self.total_new_bytes),
to_percent(self.total_new_bytes, self.total_size),
)
)
summary.append(
" * stint space via hardlinks: %i files (%s %.1f%%)"
% (
self.total_file_link_count,
human_filesize(self.total_stined_bytes),
to_percent(self.total_stined_bytes, self.total_size),
)
)
if self.duration:
performance = self.total_size / self.duration / 1024.0 / 1024.0
else:
performance = 0
summary.append(" * duration: %s %.1fMB/s\n" % (human_time(self.duration), performance))
return summary
def print_summary(self):
self.summary("\n%s\n" % "\n".join(self.get_summary()))
|
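`print_update()` and `get_summary()` lean on two small helpers, `human_filesize()` and `to_percent()` (the former is shown in the called-functions field above). A rough, self-contained reimplementation of both, matching that behavior — illustrative only, not the project's exact code:

```python
def human_filesize(i):
    """'human-readable' file size, e.g. '13.0 KB' or '4.1 MB'."""
    num = float(i)
    if num < 1024:
        return "%d Byte%s" % (num, "" if num == 1 else "s")
    if num < 1024 ** 2:
        return "%.1f KB" % (num / 1024)
    if num < 1024 ** 3:
        return "%.1f MB" % (num / 1024 ** 2)
    return "%.1f GB" % (num / 1024 ** 3)

def to_percent(part, total):
    """Percentage of `part` in `total`; 0 when total is zero/unknown."""
    if not total:
        return 0
    return part / total * 100
```

Guarding `to_percent()` against a zero total mirrors the `if self.duration: ... else: performance = 0` pattern in `get_summary()`.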
jedie/PyHardLinkBackup | PyHardLinkBackup/phlb/config.py | edit_ini | python | def edit_ini(ini_filepath=None):
if ini_filepath is None:
ini_filepath = get_ini_filepath()
try:
click.edit(filename=ini_filepath)
except click.exceptions.ClickException as err:
print("Click err: %s" % err)
webbrowser.open(ini_filepath) | Open the .ini file with the operating system’s associated editor. | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/PyHardLinkBackup/phlb/config.py#L116-L127 | [
"def get_ini_filepath():\n search_paths = get_ini_search_paths()\n for filepath in search_paths:\n # print(\"Look for .ini: %r\" % filepath)\n if filepath.is_file():\n # print(\"Use .ini from: %r\" % filepath)\n return filepath\n"
] | import inspect
import traceback
import shutil
import os
import sys
import logging
import pprint
import hashlib
import datetime
import webbrowser
import configparser
# https://github.com/mitsuhiko/click
import click
from pathlib_revised import Path2 # https://github.com/jedie/pathlib_revised/
log = logging.getLogger("phlb.%s" % __name__)
CONFIG_FILENAME = "PyHardLinkBackup.ini"
DEAFULT_CONFIG_FILENAME = "config_defaults.ini"
def strftime(value):
# Just test the formatter
datetime.datetime.now().strftime(value)
return value
def commalist(value):
values = [v.strip() for v in value.split(",")] # split and strip
values = [v for v in values if v != ""] # remove empty strings
return tuple(values)
def int8(value):
return int(value, 8)
def hashname(value):
# test if exist
hashlib.new(value)
return value
def expand_abs_path(value):
value = value.strip()
if value == ":memory:":
return value
return os.path.normpath(os.path.abspath(os.path.expanduser(value)))
def logging_level(value):
level = getattr(logging, value)
return level
INI_CONVERTER_DICT = {
"database_name": expand_abs_path,
"enable_auto_login": bool,
"backup_path": expand_abs_path,
"sub_dir_formatter": strftime,
"language_code": str, # FIXME: validate
"skip_dirs": commalist,
"skip_patterns": commalist,
"print_update_interval": int,
"logging_console_level": logging_level,
"logging_file_level": logging_level,
"default_new_path_mode": int8,
"hash_name": hashname,
"chunk_size": int,
}
def get_dict_from_ini(filepath):
log.debug("Read config '%s'" % filepath)
parser = configparser.ConfigParser(interpolation=None)
parser.read_file(filepath.open("r"))
config = {}
for section in parser.sections():
config.update(dict(parser.items(section)))
log.debug("read config:")
log.debug(pprint.pformat(config))
return config
def get_user_ini_filepath():
return Path2("~/%s" % CONFIG_FILENAME).expanduser()
def get_ini_search_paths():
p = Path2.cwd()
search_paths = [p]
search_paths += [path for path in p.parents]
search_paths = [path.joinpath(CONFIG_FILENAME) for path in search_paths]
search_paths.append(get_user_ini_filepath())
# print("Search paths:\n%s" % "\n".join(search_paths))
log.debug("Search paths: '%s'" % [path.path for path in search_paths])
return search_paths
def get_ini_filepath():
search_paths = get_ini_search_paths()
for filepath in search_paths:
# print("Look for .ini: %r" % filepath)
if filepath.is_file():
# print("Use .ini from: %r" % filepath)
return filepath
class PyHardLinkBackupConfig(object):
ini_filepath = None
_config = None
def __init__(self, ini_converter_dict):
super(PyHardLinkBackupConfig, self).__init__()
self.ini_converter_dict = ini_converter_dict
def _load(self, force=False):
if force or self._config is None:
self._config = self._read_config()
def __getattr__(self, item):
self._load()
try:
return self._config[item]
except KeyError:
raise AttributeError(
"%s missing in '%s'\nExisting keys:\n * %s"
% (item.upper(), self.ini_filepath, "\n * ".join(sorted(self._config.keys())))
)
def __repr__(self):
self._load()
return "'%s' with '%s'" % (self.ini_filepath, self._config)
def open_editor(self):
self._load()
edit_ini(self.ini_filepath)
def _read_and_convert(self, filepath, all_values):
"""
if all_values==True: the readed ini file must contain all values
"""
d = get_dict_from_ini(filepath)
result = {}
for key, func in self.ini_converter_dict.items():
if not all_values and key not in d:
continue
try:
value = d[key]
except KeyError as err:
traceback.print_exc()
print("_" * 79)
print("ERROR: %r is missing in your config!" % err)
print("Debug '%s':" % filepath)
try:
print(pprint.pformat(d))
except KeyError:
pass
print("\n")
if click.confirm("Open the editor?"):
self.open_editor()
sys.exit(-1)
if func:
try:
value = func(value)
except (KeyError, ValueError) as err:
edit_ini(self.ini_filepath)
raise Exception("%s - .ini file: '%s'" % (err, self.ini_filepath))
result[key] = value
return result
def _read_config(self):
"""
returns the config as a dict.
"""
default_config_filepath = Path2(os.path.dirname(__file__), DEAFULT_CONFIG_FILENAME)
log.debug("Read defaults from: '%s'" % default_config_filepath)
if not default_config_filepath.is_file():
raise RuntimeError(
"Internal error: Can't locate the default .ini file here: '%s'" % default_config_filepath
)
config = self._read_and_convert(default_config_filepath, all_values=True)
log.debug("Defaults: %s", pprint.pformat(config))
self.ini_filepath = get_ini_filepath()
if not self.ini_filepath:
# No .ini file made by user found
# -> Create one into user home
self.ini_filepath = get_user_ini_filepath()
# We don't use shutil.copyfile here, so the line endings will
# be converted e.g. under windows from \n to \r\n
with default_config_filepath.open("r") as infile:
with self.ini_filepath.open("w") as outfile:
outfile.write(infile.read())
print("\n*************************************************************")
print("Default config file was created into your home:")
print("\t%s" % self.ini_filepath)
print("Change it for your needs ;)")
print("*************************************************************\n")
else:
print("\nread user configuration from:")
print("\t%s\n" % self.ini_filepath)
config.update(self._read_and_convert(self.ini_filepath, all_values=False))
log.debug("RawConfig changed to: %s", pprint.pformat(config))
return config
def print_config(self):
self._load()
print("Debug config '%s':" % self.ini_filepath)
pprint.pprint(self._config)
phlb_config = PyHardLinkBackupConfig(INI_CONVERTER_DICT)
if __name__ == "__main__":
import sys
sys.stdout = sys.stderr # work-a-round for PyCharm to sync output
logging.basicConfig(level=logging.DEBUG)
phlb_config = PyHardLinkBackupConfig(INI_CONVERTER_DICT)
print("INI filepath: '%s'" % phlb_config.ini_filepath)
pprint.pprint(phlb_config)
print()
for k in phlb_config._config.keys():
print(k, getattr(phlb_config, k))
try:
phlb_config.doesntexist
except AttributeError:
print("OK")
else:
print("ERROR!")
|
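`get_ini_search_paths()` collects the current working directory, all of its parents, and finally the user's home, and `get_ini_filepath()` returns the first candidate that exists. The same upward lookup can be sketched with plain `pathlib` (hypothetical helper name, not the project's API):

```python
from pathlib import Path

def find_config_upward(start, filename):
    """Return the first `filename` found in `start` or any parent, else None."""
    start = Path(start).resolve()
    for directory in [start, *start.parents]:
        candidate = directory / filename
        if candidate.is_file():
            return candidate
    return None
```

Like the original, the nearest match wins, so a project-local `PyHardLinkBackup.ini` shadows the one in the user's home directory.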
jedie/PyHardLinkBackup | PyHardLinkBackup/phlb/config.py | PyHardLinkBackupConfig._read_and_convert | python | def _read_and_convert(self, filepath, all_values):
d = get_dict_from_ini(filepath)
result = {}
for key, func in self.ini_converter_dict.items():
if not all_values and key not in d:
continue
try:
value = d[key]
except KeyError as err:
traceback.print_exc()
print("_" * 79)
print("ERROR: %r is missing in your config!" % err)
print("Debug '%s':" % filepath)
try:
print(pprint.pformat(d))
except KeyError:
pass
print("\n")
if click.confirm("Open the editor?"):
self.open_editor()
sys.exit(-1)
if func:
try:
value = func(value)
except (KeyError, ValueError) as err:
edit_ini(self.ini_filepath)
raise Exception("%s - .ini file: '%s'" % (err, self.ini_filepath))
result[key] = value
return result | if all_values==True: the read .ini file must contain all values | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/PyHardLinkBackup/phlb/config.py#L161-L195 | null | class PyHardLinkBackupConfig(object):
ini_filepath = None
_config = None
def __init__(self, ini_converter_dict):
super(PyHardLinkBackupConfig, self).__init__()
self.ini_converter_dict = ini_converter_dict
def _load(self, force=False):
if force or self._config is None:
self._config = self._read_config()
def __getattr__(self, item):
self._load()
try:
return self._config[item]
except KeyError:
raise AttributeError(
"%s missing in '%s'\nExisting keys:\n * %s"
% (item.upper(), self.ini_filepath, "\n * ".join(sorted(self._config.keys())))
)
def __repr__(self):
self._load()
return "'%s' with '%s'" % (self.ini_filepath, self._config)
def open_editor(self):
self._load()
edit_ini(self.ini_filepath)
def _read_config(self):
"""
returns the config as a dict.
"""
default_config_filepath = Path2(os.path.dirname(__file__), DEAFULT_CONFIG_FILENAME)
log.debug("Read defaults from: '%s'" % default_config_filepath)
if not default_config_filepath.is_file():
raise RuntimeError(
"Internal error: Can't locate the default .ini file here: '%s'" % default_config_filepath
)
config = self._read_and_convert(default_config_filepath, all_values=True)
log.debug("Defaults: %s", pprint.pformat(config))
self.ini_filepath = get_ini_filepath()
if not self.ini_filepath:
# No .ini file made by user found
# -> Create one into user home
self.ini_filepath = get_user_ini_filepath()
# We don't use shutil.copyfile here, so the line endings will
# be converted e.g. under windows from \n to \r\n
with default_config_filepath.open("r") as infile:
with self.ini_filepath.open("w") as outfile:
outfile.write(infile.read())
print("\n*************************************************************")
print("Default config file was created into your home:")
print("\t%s" % self.ini_filepath)
print("Change it for your needs ;)")
print("*************************************************************\n")
else:
print("\nread user configuration from:")
print("\t%s\n" % self.ini_filepath)
config.update(self._read_and_convert(self.ini_filepath, all_values=False))
log.debug("RawConfig changed to: %s", pprint.pformat(config))
return config
def print_config(self):
self._load()
print("Debug config '%s':" % self.ini_filepath)
pprint.pprint(self._config)
|
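`_read_and_convert()` pushes every raw string from the `.ini` through a per-key converter function (`INI_CONVERTER_DICT` maps e.g. `default_new_path_mode` to `int8` and `skip_dirs` to `commalist`). The core of that pattern, stripped of the error-handling and editor prompts:

```python
def int8(value):
    return int(value, 8)  # parse octal, e.g. a file mode like "0700"

def commalist(value):
    # split on commas, strip whitespace, drop empty items
    return tuple(v.strip() for v in value.split(",") if v.strip())

def convert_values(raw, converters):
    """Apply converters[key] to each raw string value that has one."""
    result = {}
    for key, value in raw.items():
        func = converters.get(key)
        result[key] = func(value) if func else value
    return result
```

Keys without a registered converter pass through unchanged, which is why the real code only iterates over `self.ini_converter_dict` and skips unknown keys.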
jedie/PyHardLinkBackup | PyHardLinkBackup/phlb/config.py | PyHardLinkBackupConfig._read_config | python | def _read_config(self):
default_config_filepath = Path2(os.path.dirname(__file__), DEAFULT_CONFIG_FILENAME)
log.debug("Read defaults from: '%s'" % default_config_filepath)
if not default_config_filepath.is_file():
raise RuntimeError(
"Internal error: Can't locate the default .ini file here: '%s'" % default_config_filepath
)
config = self._read_and_convert(default_config_filepath, all_values=True)
log.debug("Defaults: %s", pprint.pformat(config))
self.ini_filepath = get_ini_filepath()
if not self.ini_filepath:
# No .ini file made by user found
# -> Create one into user home
self.ini_filepath = get_user_ini_filepath()
# We don't use shutil.copyfile here, so the line endings will
# be converted e.g. under windows from \n to \r\n
with default_config_filepath.open("r") as infile:
with self.ini_filepath.open("w") as outfile:
outfile.write(infile.read())
print("\n*************************************************************")
print("Default config file was created into your home:")
print("\t%s" % self.ini_filepath)
print("Change it for your needs ;)")
print("*************************************************************\n")
else:
print("\nread user configuration from:")
print("\t%s\n" % self.ini_filepath)
config.update(self._read_and_convert(self.ini_filepath, all_values=False))
log.debug("RawConfig changed to: %s", pprint.pformat(config))
return config | returns the config as a dict. | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/PyHardLinkBackup/phlb/config.py#L197-L233 | [
"def get_user_ini_filepath():\n return Path2(\"~/%s\" % CONFIG_FILENAME).expanduser()\n",
"def get_ini_filepath():\n search_paths = get_ini_search_paths()\n for filepath in search_paths:\n # print(\"Look for .ini: %r\" % filepath)\n if filepath.is_file():\n # print(\"Use .ini fro... | class PyHardLinkBackupConfig(object):
ini_filepath = None
_config = None
def __init__(self, ini_converter_dict):
super(PyHardLinkBackupConfig, self).__init__()
self.ini_converter_dict = ini_converter_dict
def _load(self, force=False):
if force or self._config is None:
self._config = self._read_config()
def __getattr__(self, item):
self._load()
try:
return self._config[item]
except KeyError:
raise AttributeError(
"%s missing in '%s'\nExisting keys:\n * %s"
% (item.upper(), self.ini_filepath, "\n * ".join(sorted(self._config.keys())))
)
def __repr__(self):
self._load()
return "'%s' with '%s'" % (self.ini_filepath, self._config)
def open_editor(self):
self._load()
edit_ini(self.ini_filepath)
def _read_and_convert(self, filepath, all_values):
"""
if all_values==True: the readed ini file must contain all values
"""
d = get_dict_from_ini(filepath)
result = {}
for key, func in self.ini_converter_dict.items():
if not all_values and key not in d:
continue
try:
value = d[key]
except KeyError as err:
traceback.print_exc()
print("_" * 79)
print("ERROR: %r is missing in your config!" % err)
print("Debug '%s':" % filepath)
try:
print(pprint.pformat(d))
except KeyError:
pass
print("\n")
if click.confirm("Open the editor?"):
self.open_editor()
sys.exit(-1)
if func:
try:
value = func(value)
except (KeyError, ValueError) as err:
edit_ini(self.ini_filepath)
raise Exception("%s - .ini file: '%s'" % (err, self.ini_filepath))
result[key] = value
return result
def print_config(self):
self._load()
print("Debug config '%s':" % self.ini_filepath)
pprint.pprint(self._config)
|
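`_read_config()` reads the shipped defaults first (which must define every key, hence `all_values=True`) and then overlays whatever subset the user's `.ini` provides. The merge itself is a flat `dict.update()`; a condensed sketch using `configparser` directly, assuming the same flat section layout:

```python
import configparser

def merge_config(defaults_text, user_text):
    """Read two flat .ini strings; user values override defaults."""
    def to_dict(text):
        parser = configparser.ConfigParser(interpolation=None)
        parser.read_string(text)
        flat = {}
        for section in parser.sections():
            flat.update(dict(parser.items(section)))
        return flat

    config = to_dict(defaults_text)
    config.update(to_dict(user_text))  # user subset wins
    return config
```

Any key the user omits keeps its default, which is what lets the user's file stay a partial override.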
jedie/PyHardLinkBackup | dev/patch_cmd.py | patch | python | def patch(filepath, debug):
print("patch 'pause' in %r" % filepath)
with open(filepath, "r") as infile:
origin_content = infile.read()
new_content = origin_content.replace("pause\n", """echo "'pause'"\n""")
assert new_content!=origin_content, "not changed: %s" % origin_content
with open(filepath, "w") as outfile:
outfile.write(new_content)
print("%r patched" % filepath)
if debug:
print("-"*79)
print(repr(new_content))
print("-"*79)
print(new_content)
print("-"*79) | replace 'pause' from windows batch.
Needed for ci.appveyor.com
see: https://github.com/appveyor/ci/issues/596 | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/dev/patch_cmd.py#L3-L25 | null | |
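`patch()` rewrites a Windows batch file in place, replacing blocking `pause` lines so an unattended CI run cannot hang, and asserts that the replacement actually changed something. The same round trip without the click/debug plumbing (hypothetical helper name):

```python
def patch_pause(filepath):
    """Replace 'pause' lines in a batch file; raise if nothing changed."""
    with open(filepath, "r") as infile:
        origin = infile.read()
    patched = origin.replace("pause\n", "echo \"'pause'\"\n")
    assert patched != origin, "not changed: %s" % origin
    with open(filepath, "w") as outfile:
        outfile.write(patched)
```

The assertion mirrors the original's intent: if the CI batch file ever stops containing `pause`, the patch step should fail loudly instead of silently doing nothing.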
jedie/PyHardLinkBackup | PyHardLinkBackup/phlb/filesystem_walk.py | scandir_walk | python | def scandir_walk(top, skip_dirs=(), on_skip=None):
# We may not have read permission for top, in which case we can't
# get a list of the files the directory contains. os.walk
# always suppressed the exception then, rather than blow up for a
# minor reason when (say) a thousand readable directories are still
# left to visit. That logic is copied here.
try:
scandir_it = Path2(top).scandir()
except PermissionError as err:
log.error("scandir error: %s" % err)
return
for entry in scandir_it:
if entry.is_dir(follow_symlinks=False):
if entry.name in skip_dirs:
on_skip(entry, entry.name)
else:
yield from scandir_walk(entry.path, skip_dirs, on_skip)
else:
yield entry | Just walk the filesystem tree top-down with os.scandir() and don't follow symlinks.
:param top: path to scan
:param skip_dirs: List of dir names to skip
e.g.: "__pycache__", "temp", "tmp"
:param on_skip: function that will be called if 'skip_dirs' match.
e.g.:
def on_skip(entry, pattern):
log.error("Skip pattern %r hit: %s" % (pattern, entry.path))
:return: yields os.DirEntry() instances | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/PyHardLinkBackup/phlb/filesystem_walk.py#L9-L39 | [
"def scandir_walk(top, skip_dirs=(), on_skip=None):\n \"\"\"\n Just walk the filesystem tree top-down with os.scandir() and don't follow symlinks.\n :param top: path to scan\n :param skip_dirs: List of dir names to skip\n e.g.: \"__pycache__\", \"temp\", \"tmp\"\n :param on_skip: function that... | import logging
from pathlib_revised import Path2, DirEntryPath # https://github.com/jedie/pathlib_revised/
from pathlib_revised.pathlib import pprint_path
log = logging.getLogger("phlb.%s" % __name__)
def scandir_limited(top, limit, deep=0):
"""
yields only directories with the given deep limit
:param top: source path
:param limit: how deep should be scanned?
:param deep: internal deep number
:return: yields os.DirEntry() instances
"""
deep += 1
try:
scandir_it = Path2(top).scandir()
except PermissionError as err:
log.error("scandir error: %s" % err)
return
for entry in scandir_it:
if entry.is_dir(follow_symlinks=False):
if deep < limit:
yield from scandir_limited(entry.path, limit, deep)
else:
yield entry
class PathLibFilter:
def __init__(self, filter):
"""
:param filter: callable to filter in self.iter()
"""
assert callable(filter)
self.filter = filter
def iter(self, dir_entries):
"""
:param dir_entries: list of os.DirEntry() instances
"""
filter = self.filter
for entry in dir_entries:
path = filter(Path2(entry.path))
if path is not False:
yield path
def iter_filtered_dir_entry(dir_entries, match_patterns, on_skip):
"""
Filter a list of DirEntryPath instances with the given pattern
:param dir_entries: list of DirEntryPath instances
:param match_patterns: used with Path.match()
e.g.: "__pycache__/*", "*.tmp", "*.cache"
:param on_skip: function that will be called if 'match_patterns' hits.
e.g.:
def on_skip(entry, pattern):
log.error("Skip pattern %r hit: %s" % (pattern, entry.path))
:return: yields None or DirEntryPath instances
"""
def match(dir_entry_path, match_patterns, on_skip):
for match_pattern in match_patterns:
if dir_entry_path.path_instance.match(match_pattern):
on_skip(dir_entry_path, match_pattern)
return True
return False
for entry in dir_entries:
try:
dir_entry_path = DirEntryPath(entry)
except FileNotFoundError as err:
# e.g.: A file was deleted after the first filesystem scan
# Will be obsolete if we use shadow-copy / snapshot function from filesystem
# see: https://github.com/jedie/PyHardLinkBackup/issues/6
log.error("Can't make DirEntryPath() instance: %s" % err)
continue
if match(dir_entry_path, match_patterns, on_skip):
yield None
else:
yield dir_entry_path
if __name__ == "__main__":
from tqdm import tqdm
# path = Path2("/")
# path = Path2(os.path.expanduser("~")) # .home() new in Python 3.5
path = Path2("../../../../").resolve()
print("Scan: %s..." % path)
def on_skip(entry, pattern):
log.error("Skip pattern %r hit: %s" % (pattern, entry.path))
skip_dirs = ("__pycache__", "temp")
tqdm_iterator = tqdm(scandir_walk(path.path, skip_dirs, on_skip=on_skip), unit=" dir entries", leave=True)
dir_entries = [entry for entry in tqdm_iterator]
print()
print("=" * 79)
print("\n * %i os.DirEntry() instances" % len(dir_entries))
match_patterns = ("*.old", ".*", "__pycache__/*", "temp", "*.pyc", "*.tmp", "*.cache")
tqdm_iterator = tqdm(
iter_filtered_dir_entry(dir_entries, match_patterns, on_skip),
total=len(dir_entries),
unit=" dir entries",
leave=True,
)
filtered_files = [entry for entry in tqdm_iterator if entry]
print()
print("=" * 79)
print("\n * %i filtered Path2() instances" % len(filtered_files))
path_iterator = enumerate(
sorted(
filtered_files,
key=lambda x: x.stat.st_mtime, # sort by last modify time
reverse=True, # sort from newest to oldest
)
)
for no, path in path_iterator:
print(no, path.stat.st_mtime, end=" ")
if path.is_symlink:
print("Symlink: %s" % path)
elif path.is_dir:
print("Normal dir: %s" % path)
elif path.is_file:
print("Normal file: %s" % path)
else:
print("XXX: %s" % path)
pprint_path(path)
if path.different_path or path.resolve_error:
print(path.pformat())
print()
|
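The `scandir_walk()` call driving the `__main__` demo above can be sketched with only the standard library. This is a minimal illustration, not the project's implementation: it assumes plain `os.scandir()` in place of `pathlib_revised.Path2.scandir()`, but keeps the same recursion, symlink handling, and `skip_dirs`/`on_skip` contract.

```python
# Stdlib-only sketch of the scandir_walk() idea: walk top-down with
# os.scandir(), never follow symlinked directories, and skip (while
# reporting) any directory whose name is in skip_dirs.
import os

def scandir_walk(top, skip_dirs=(), on_skip=None):
    """Yield os.DirEntry instances for all files below `top`."""
    try:
        entries = os.scandir(top)
    except PermissionError:
        return  # unreadable directory: suppress, like os.walk() does
    with entries:
        for entry in entries:
            if entry.is_dir(follow_symlinks=False):
                if entry.name in skip_dirs:
                    if on_skip is not None:
                        on_skip(entry, entry.name)
                else:
                    yield from scandir_walk(entry.path, skip_dirs, on_skip)
            else:
                yield entry
```

Because the generator recurses before yielding, files arrive grouped by directory, which is what lets the demo feed the stream straight into `tqdm`.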
jedie/PyHardLinkBackup | PyHardLinkBackup/phlb/filesystem_walk.py | scandir_limited | python | def scandir_limited(top, limit, deep=0):
deep += 1
try:
scandir_it = Path2(top).scandir()
except PermissionError as err:
log.error("scandir error: %s" % err)
return
for entry in scandir_it:
if entry.is_dir(follow_symlinks=False):
if deep < limit:
yield from scandir_limited(entry.path, limit, deep)
else:
yield entry | yields only directories with the given deep limit
:param top: source path
:param limit: how deep should be scanned?
:param deep: internal deep number
:return: yields os.DirEntry() instances | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/PyHardLinkBackup/phlb/filesystem_walk.py#L42-L63 | [
"def scandir_limited(top, limit, deep=0):\n \"\"\"\n yields only directories with the given deep limit\n\n :param top: source path\n :param limit: how deep should be scanned?\n :param deep: internal deep number\n :return: yields os.DirEntry() instances\n \"\"\"\n deep += 1\n try:\n ... | import logging
from pathlib_revised import Path2, DirEntryPath # https://github.com/jedie/pathlib_revised/
from pathlib_revised.pathlib import pprint_path
log = logging.getLogger("phlb.%s" % __name__)
def scandir_walk(top, skip_dirs=(), on_skip=None):
"""
Just walk the filesystem tree top-down with os.scandir() and don't follow symlinks.
:param top: path to scan
:param skip_dirs: List of dir names to skip
e.g.: "__pycache__", "temp", "tmp"
:param on_skip: function that will be called if 'skip_dirs' match.
e.g.:
def on_skip(entry, pattern):
log.error("Skip pattern %r hit: %s" % (pattern, entry.path))
:return: yields os.DirEntry() instances
"""
# We may not have read permission for top, in which case we can't
# get a list of the files the directory contains. os.walk
# always suppressed the exception then, rather than blow up for a
# minor reason when (say) a thousand readable directories are still
# left to visit. That logic is copied here.
try:
scandir_it = Path2(top).scandir()
except PermissionError as err:
log.error("scandir error: %s" % err)
return
for entry in scandir_it:
if entry.is_dir(follow_symlinks=False):
if entry.name in skip_dirs:
on_skip(entry, entry.name)
else:
yield from scandir_walk(entry.path, skip_dirs, on_skip)
else:
yield entry
class PathLibFilter:
def __init__(self, filter):
"""
:param filter: callable to filter in self.iter()
"""
assert callable(filter)
self.filter = filter
def iter(self, dir_entries):
"""
:param dir_entries: list of os.DirEntry() instances
"""
filter = self.filter
for entry in dir_entries:
path = filter(Path2(entry.path))
if path is not False:
yield path
def iter_filtered_dir_entry(dir_entries, match_patterns, on_skip):
"""
Filter a list of DirEntryPath instances with the given patterns
:param dir_entries: list of DirEntryPath instances
:param match_patterns: used with Path.match()
e.g.: "__pycache__/*", "*.tmp", "*.cache"
:param on_skip: function that will be called if 'match_patterns' hits.
e.g.:
def on_skip(entry, pattern):
log.error("Skip pattern %r hit: %s" % (pattern, entry.path))
:return: yields None or DirEntryPath instances
"""
def match(dir_entry_path, match_patterns, on_skip):
for match_pattern in match_patterns:
if dir_entry_path.path_instance.match(match_pattern):
on_skip(dir_entry_path, match_pattern)
return True
return False
for entry in dir_entries:
try:
dir_entry_path = DirEntryPath(entry)
except FileNotFoundError as err:
# e.g.: A file was deleted after the first filesystem scan
# Will be obsolete if we use shadow-copy / snapshot function from filesystem
# see: https://github.com/jedie/PyHardLinkBackup/issues/6
log.error("Can't make DirEntryPath() instance: %s" % err)
continue
if match(dir_entry_path, match_patterns, on_skip):
yield None
else:
yield dir_entry_path
if __name__ == "__main__":
from tqdm import tqdm
# path = Path2("/")
# path = Path2(os.path.expanduser("~")) # .home() new in Python 3.5
path = Path2("../../../../").resolve()
print("Scan: %s..." % path)
def on_skip(entry, pattern):
log.error("Skip pattern %r hit: %s" % (pattern, entry.path))
skip_dirs = ("__pycache__", "temp")
tqdm_iterator = tqdm(scandir_walk(path.path, skip_dirs, on_skip=on_skip), unit=" dir entries", leave=True)
dir_entries = [entry for entry in tqdm_iterator]
print()
print("=" * 79)
print("\n * %i os.DirEntry() instances" % len(dir_entries))
match_patterns = ("*.old", ".*", "__pycache__/*", "temp", "*.pyc", "*.tmp", "*.cache")
tqdm_iterator = tqdm(
iter_filtered_dir_entry(dir_entries, match_patterns, on_skip),
total=len(dir_entries),
unit=" dir entries",
leave=True,
)
filtered_files = [entry for entry in tqdm_iterator if entry]
print()
print("=" * 79)
print("\n * %i filtered Path2() instances" % len(filtered_files))
path_iterator = enumerate(
sorted(
filtered_files,
key=lambda x: x.stat.st_mtime, # sort by last modify time
reverse=True, # sort from newest to oldest
)
)
for no, path in path_iterator:
print(no, path.stat.st_mtime, end=" ")
if path.is_symlink:
print("Symlink: %s" % path)
elif path.is_dir:
print("Normal dir: %s" % path)
elif path.is_file:
print("Normal file: %s" % path)
else:
print("XXX: %s" % path)
pprint_path(path)
if path.different_path or path.resolve_error:
print(path.pformat())
print()
|
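The `scandir_limited()` record above yields only the directories sitting exactly `limit` levels below `top`. A hedged stdlib sketch of the same logic (assuming `os.scandir()` is an acceptable stand-in for `pathlib_revised.Path2.scandir()`):

```python
# Depth-limited directory scan: recurse while deep < limit, and yield the
# directory entries reached at the limit. Files are never yielded.
import os

def scandir_limited(top, limit, deep=0):
    deep += 1
    try:
        entries = os.scandir(top)
    except PermissionError:
        return  # unreadable directory: skip it, as the original does
    with entries:
        for entry in entries:
            if entry.is_dir(follow_symlinks=False):
                if deep < limit:
                    yield from scandir_limited(entry.path, limit, deep)
                else:
                    yield entry
```

With `limit=1` this is just "the subdirectories of `top`"; raising the limit pushes the yielded level one step deeper per increment.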
jedie/PyHardLinkBackup | PyHardLinkBackup/phlb/filesystem_walk.py | iter_filtered_dir_entry | python | def iter_filtered_dir_entry(dir_entries, match_patterns, on_skip):
def match(dir_entry_path, match_patterns, on_skip):
for match_pattern in match_patterns:
if dir_entry_path.path_instance.match(match_pattern):
on_skip(dir_entry_path, match_pattern)
return True
return False
for entry in dir_entries:
try:
dir_entry_path = DirEntryPath(entry)
except FileNotFoundError as err:
# e.g.: A file was deleted after the first filesystem scan
# Will be obsolete if we use shadow-copy / snapshot function from filesystem
# see: https://github.com/jedie/PyHardLinkBackup/issues/6
log.error("Can't make DirEntryPath() instance: %s" % err)
continue
if match(dir_entry_path, match_patterns, on_skip):
yield None
else:
yield dir_entry_path | Filter a list of DirEntryPath instances with the given patterns
:param dir_entries: list of DirEntryPath instances
:param match_patterns: used with Path.match()
e.g.: "__pycache__/*", "*.tmp", "*.cache"
:param on_skip: function that will be called if 'match_patterns' hits.
e.g.:
def on_skip(entry, pattern):
log.error("Skip pattern %r hit: %s" % (pattern, entry.path))
:return: yields None or DirEntryPath instances | train | https://github.com/jedie/PyHardLinkBackup/blob/be28666834d2d9e3d8aac1b661cb2d5bd4056c29/PyHardLinkBackup/phlb/filesystem_walk.py#L85-L118 | [
"def match(dir_entry_path, match_patterns, on_skip):\n for match_pattern in match_patterns:\n if dir_entry_path.path_instance.match(match_pattern):\n on_skip(dir_entry_path, match_pattern)\n return True\n return False\n"
] | import logging
from pathlib_revised import Path2, DirEntryPath # https://github.com/jedie/pathlib_revised/
from pathlib_revised.pathlib import pprint_path
log = logging.getLogger("phlb.%s" % __name__)
def scandir_walk(top, skip_dirs=(), on_skip=None):
"""
Just walk the filesystem tree top-down with os.scandir() and don't follow symlinks.
:param top: path to scan
:param skip_dirs: List of dir names to skip
e.g.: "__pycache__", "temp", "tmp"
:param on_skip: function that will be called if 'skip_dirs' match.
e.g.:
def on_skip(entry, pattern):
log.error("Skip pattern %r hit: %s" % (pattern, entry.path))
:return: yields os.DirEntry() instances
"""
# We may not have read permission for top, in which case we can't
# get a list of the files the directory contains. os.walk
# always suppressed the exception then, rather than blow up for a
# minor reason when (say) a thousand readable directories are still
# left to visit. That logic is copied here.
try:
scandir_it = Path2(top).scandir()
except PermissionError as err:
log.error("scandir error: %s" % err)
return
for entry in scandir_it:
if entry.is_dir(follow_symlinks=False):
if entry.name in skip_dirs:
on_skip(entry, entry.name)
else:
yield from scandir_walk(entry.path, skip_dirs, on_skip)
else:
yield entry
def scandir_limited(top, limit, deep=0):
"""
yields only directories with the given deep limit
:param top: source path
:param limit: how deep should be scanned?
:param deep: internal deep number
:return: yields os.DirEntry() instances
"""
deep += 1
try:
scandir_it = Path2(top).scandir()
except PermissionError as err:
log.error("scandir error: %s" % err)
return
for entry in scandir_it:
if entry.is_dir(follow_symlinks=False):
if deep < limit:
yield from scandir_limited(entry.path, limit, deep)
else:
yield entry
class PathLibFilter:
def __init__(self, filter):
"""
:param filter: callable to filter in self.iter()
"""
assert callable(filter)
self.filter = filter
def iter(self, dir_entries):
"""
:param dir_entries: list of os.DirEntry() instances
"""
filter = self.filter
for entry in dir_entries:
path = filter(Path2(entry.path))
if path is not False:
yield path
if __name__ == "__main__":
from tqdm import tqdm
# path = Path2("/")
# path = Path2(os.path.expanduser("~")) # .home() new in Python 3.5
path = Path2("../../../../").resolve()
print("Scan: %s..." % path)
def on_skip(entry, pattern):
log.error("Skip pattern %r hit: %s" % (pattern, entry.path))
skip_dirs = ("__pycache__", "temp")
tqdm_iterator = tqdm(scandir_walk(path.path, skip_dirs, on_skip=on_skip), unit=" dir entries", leave=True)
dir_entries = [entry for entry in tqdm_iterator]
print()
print("=" * 79)
print("\n * %i os.DirEntry() instances" % len(dir_entries))
match_patterns = ("*.old", ".*", "__pycache__/*", "temp", "*.pyc", "*.tmp", "*.cache")
tqdm_iterator = tqdm(
iter_filtered_dir_entry(dir_entries, match_patterns, on_skip),
total=len(dir_entries),
unit=" dir entries",
leave=True,
)
filtered_files = [entry for entry in tqdm_iterator if entry]
print()
print("=" * 79)
print("\n * %i filtered Path2() instances" % len(filtered_files))
path_iterator = enumerate(
sorted(
filtered_files,
key=lambda x: x.stat.st_mtime, # sort by last modify time
reverse=True, # sort from newest to oldes
)
)
for no, path in path_iterator:
print(no, path.stat.st_mtime, end=" ")
if path.is_symlink:
print("Symlink: %s" % path)
elif path.is_dir:
print("Normal dir: %s" % path)
elif path.is_file:
print("Normal file: %s" % path)
else:
print("XXX: %s" % path)
pprint_path(path)
if path.different_path or path.resolve_error:
print(path.pformat())
print()
|
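The `iter_filtered_dir_entry()` record above yields `None` placeholders for skipped entries so that the output stays position-aligned with the input (which is why the demo filters with `if entry`). A minimal sketch of that pattern test, assuming plain `pathlib.PurePath.match()` in place of `DirEntryPath.path_instance.match()`:

```python
# Position-preserving glob filter: matched paths become None, everything
# else passes through unchanged, and on_skip reports each hit.
from pathlib import PurePath

def iter_filtered(paths, match_patterns, on_skip=lambda path, pattern: None):
    for path in paths:
        # First pattern that matches the path, or None if none do.
        hit = next((pat for pat in match_patterns
                    if PurePath(path).match(pat)), None)
        if hit is not None:
            on_skip(path, hit)
            yield None   # keep positions aligned, like the original generator
        else:
            yield path
```

Yielding `None` instead of dropping the entry keeps `len(list(...))` equal to the input length, so a progress bar driven by `total=len(dir_entries)` still completes.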